
GadgetBond


MIT dropout says AGI will wipe out humans before she finishes her degree

An MIT student dropped out after claiming artificial general intelligence could cause human extinction before she even graduates.

By Shubham Sawarkar, Editor-in-Chief
Aug 16, 2025, 3:15 AM EDT

Illustration by Kasia Bojanowska / Dribbble

We’re all a little twitchy about AI right now. It’s become shorthand for a bunch of anxieties — climate costs, job upheaval, misinformation, surveillance — and lately a new, louder fear has crept into the conversation: that the next leap in artificial intelligence will not just take our jobs, but might someday wipe us out.

That fear pushed one freshman at MIT to walk away from college. Alice Blair, who started at MIT in 2023, told Forbes she quit because she believes an artificial general intelligence (AGI) — a system able to match or surpass human intelligence across the board — could arrive fast enough to threaten human survival. “I was concerned I might not be alive to graduate because of AGI,” she told the outlet. Blair now works as a contract technical writer for the nonprofit Center for AI Safety and says she has no plans to return to campus.

Her decision landed in the headlines not because it’s a tidy argument for or against anything, but because it sharpens a question that’s been quietly spreading through certain corners of tech and campus life: when does legitimate caution cross into existential dread — and how should institutions, regulators and individuals respond?

The personal and the public

Blair’s story has a deeply personal logic. She enrolled expecting to find peers and professors who shared an interest in AI safety. Instead, she says, she found indifference. The Center for AI Safety — one of several organizations that rose to prominence arguing for stricter governance of powerful models — offered a path out of the ivory tower into advocacy and, for Blair, immediate work on the problem she found most worrying. Her move mirrors a wider pattern: some students are leaving academia not only for well-funded AI startups but for safety groups and policy shops that think the clock on AGI is ticking.

That timetable is deeply contested. Some people in the community — entrepreneurs, investors and a subset of researchers — say the pieces are coming together fast. Others call the talk premature, or even irresponsible. The debate is messy because it mixes hard technical disagreement (how do we measure general intelligence? what problems remain unsolved?) with PR, corporate strategy and human psychology.

Where the industry says it’s headed

It’s worth saying plainly what’s fanning this particular brand of fear: major companies themselves have used language that can sound apocalyptic. OpenAI’s recent push with its GPT-5 model — which in some accounts was rolled out awkwardly and met with user complaints — has been framed by some executives as a step toward AGI. OpenAI’s CEO Sam Altman has publicly described recent releases as big advances and has mapped out roadmaps that make the idea of “general” competence feel closer than it once did. News outlets covering the rollout, and the company’s public posts, fed the sense that the industry is sprinting toward something qualitatively different.

But the rollout’s reception also shows why many researchers remain cautious: incremental upgrades, bugs, hallucinations and generalization failures have persisted even as models get larger and more expensive to train. Those everyday failings are what many skeptics point to when they say true AGI is not right around the corner.


Experts, timelines, and disagreement

On timelines, opinion is all over the map. Some technologists argue for aggressive near-term timeframes; others — including vocal critics like Gary Marcus — call such predictions hype. Marcus and other skeptical voices say core problems such as robust reasoning, long-term planning and truthfulness haven’t been solved, so “AGI within five years” claims are unlikely. At the same time, surveys of AI researchers and forecasting groups show a wide spread of expectations — from a few decades to this century — with a small but significant minority predicting much sooner. What matters is not a single number, but that the uncertainty is real and the stakes are high enough that policymakers and industry should prepare for a range of outcomes.

The harms we already know about

Part of the reason the AGI discussion feels urgent is that AI is already causing clear, tangible harm. The technology’s environmental footprint is nontrivial: training and running massive models consume large amounts of electricity and cooling water, and several recent studies have tried to quantify the lifecycle carbon costs of generative AI at scale. Those impacts matter now, even if AGI never arrives.

AI is also reshaping the labor market. Firms are restructuring roles around automation and many CEOs openly point to AI as a reason for organizational change. Whether that becomes mass unemployment or a shift in job content is a separate and contested question, but the anxiety is real — and it colors decisions by students (like Blair) who worry their chosen career paths might evaporate.

And then there’s the quieter erosion: biased decisions baked into models, surveillance tools that scale automated observation, misinformation that spreads with breathtaking speed, and mental-health harms from interacting with systems people anthropomorphize. These are less cinematic than an “AI-kills-everyone” headline, but they’re present and policy-relevant today.

Are the doomsayers helping or hurting?

There’s an odd paradox at play. Tech leaders who talk up existential risk sometimes do so to justify tighter controls, more funding for safety work, or to influence regulation. Critics say that rhetoric can also be a PR lever: if a technology looks like it might be either miraculous or catastrophic, it gives companies leverage to shape policy and investment. In short, invoking catastrophe can fast-track both safety funding and corporate influence — and that ambiguity muddies public conversation.

That’s not to say existential concerns are illegitimate. The field of AI safety exists precisely because a handful of failure modes — from goal misalignment to fast, opaque capability jumps — could, in principle, be catastrophic. The question for most readers and policymakers is how to balance urgent, practical governance (data privacy, labor policy, environmental controls) against low-probability, high-impact scenarios that are hard to study with current tools.

What Blair’s choice signals

Blair’s decision is symbolic rather than singular. It tells us something about the mood in parts of the next generation: they see a world changing fast, they distrust institutions to respond quickly, and some would rather act than wait. Whether that action — walking away from a university degree and into advocacy work — is wise depends on your priors about timelines and on the non-trivial costs of leaving school. But as a public signal, it’s valuable: it forces universities, funders and regulators to reckon with the fact that existential worries are shaping real-life choices today.

Where to look next

If you want to follow this story without getting swept into hype, watch three things: how leading AI companies talk about capabilities (not just marketing language); what independent audits and peer-reviewed studies say about environmental and safety costs; and how governments move on concrete governance — disclosure, red-teaming requirements, and safety certifications. Those are the levers that will shape whether our grandchildren remember this era as the one that stumbled into a disaster, or the one that tightened the brakes in time.

Alice Blair isn’t the only person thinking hard about these questions. Whether you find her choice prudent or extreme, it finally forces a mundane, necessary discussion: what do we do about things we don’t yet fully understand but that already affect our planet, jobs and institutions? That’s the conversation worth having — loudly, skeptically, and with a lot more facts.


Copyright © 2026 GadgetBond. All Rights Reserved.