GadgetBond


MIT dropout says AGI will wipe out humans before she finishes her degree

An MIT student dropped out after claiming artificial general intelligence could cause human extinction before she even graduates.

By Shubham Sawarkar, Editor-in-Chief
Aug 16, 2025, 3:15 AM EDT
Illustration by Kasia Bojanowska / Dribbble

We’re all a little twitchy about AI right now. It’s become shorthand for a bunch of anxieties — climate costs, job upheaval, misinformation, surveillance — and lately a new, louder fear has crept into the conversation: that the next leap in artificial intelligence will not just take our jobs, but might someday wipe us out.

That fear pushed one freshman at MIT to walk away from college. Alice Blair, who started at MIT in 2023, told Forbes she quit because she believes an artificial general intelligence (AGI) — a system able to match or surpass human intelligence across the board — could arrive fast enough to threaten human survival. “I was concerned I might not be alive to graduate because of AGI,” she told the outlet. Blair now works as a contract technical writer for the nonprofit Center for AI Safety and says she has no plans to return to campus.

Her decision landed in the headlines not because it’s a tidy argument for or against anything, but because it sharpens a question that’s been quietly spreading through certain corners of tech and campus life: when does legitimate caution cross into existential dread — and how should institutions, regulators and individuals respond?

The personal and the public

Blair’s story has a deeply personal logic. She enrolled expecting to find peers and professors who shared an interest in AI safety. Instead, she says, she found indifference. The Center for AI Safety — one of several organizations that rose to prominence arguing for stricter governance of powerful models — offered a path out of the ivory tower into advocacy and, for Blair, immediate work on the problem she found most worrying. Her move mirrors a wider pattern: some students are leaving academia not only for well-funded AI startups but for safety groups and policy shops that think the clock on AGI is ticking.

That timetable is deeply contested. Some people in the community — entrepreneurs, investors and a subset of researchers — say the pieces are coming together fast. Others call the talk premature, or even irresponsible. The debate is messy because it mixes hard technical disagreement (how do we measure general intelligence? what problems remain unsolved?) with PR, corporate strategy and human psychology.

Where the industry says it’s headed

It’s worth saying plainly what’s fanning this particular brand of fear: major companies themselves have used language that can sound apocalyptic. OpenAI’s recent push with its GPT-5 model — which in some accounts was rolled out awkwardly and met with user complaints — has been framed by some executives as a step toward AGI. OpenAI’s CEO Sam Altman has publicly described recent releases as big advances and has mapped out roadmaps that make the idea of “general” competence feel closer than it once did. News outlets covering the rollout, and the company’s public posts, fed the sense that the industry is sprinting toward something qualitatively different.

But the rollout’s reception also shows why many researchers remain cautious: incremental upgrades, bugs, hallucinations and generalization failures have persisted even as models get larger and more expensive to train. Those everyday failings are what many skeptics point to when they say true AGI is not right around the corner.


Experts, timelines, and disagreement

On timelines, opinion is all over the map. Some technologists argue for aggressive near-term timeframes; others, including vocal critics like Gary Marcus, call such predictions hype. Marcus and other skeptical voices say core problems such as robust reasoning, long-term planning and truthfulness haven't been solved, which makes "AGI within five years" claims implausible. At the same time, surveys of AI researchers and forecasting groups show a wide spread of expectations, from within a few decades to beyond this century, with a small but significant minority predicting much sooner. What matters is not a single number, but that the uncertainty is real and the stakes are high enough that policymakers and industry should prepare for a range of outcomes.

The harms we already know about

Part of the reason the AGI discussion feels urgent is that AI is already causing clear, tangible harm. The technology’s environmental footprint is nontrivial: training and running massive models consume large amounts of electricity and cooling water, and several recent studies have tried to quantify the lifecycle carbon costs of generative AI at scale. Those impacts matter now, even if AGI never arrives.

AI is also reshaping the labor market. Firms are restructuring roles around automation and many CEOs openly point to AI as a reason for organizational change. Whether that becomes mass unemployment or a shift in job content is a separate and contested question, but the anxiety is real — and it colors decisions by students (like Blair) who worry their chosen career paths might evaporate.

And then there’s the quieter erosion: biased decisions baked into models, surveillance tools that scale automated observation, misinformation that spreads with breathtaking speed, and mental-health harms from interacting with systems people anthropomorphize. These are less cinematic than an “AI-kills-everyone” headline, but they’re present and policy-relevant today.

Are the doomsayers helping or hurting?

There’s an odd paradox at play. Tech leaders who talk up existential risk sometimes do so to justify tighter controls, more funding for safety work, or to influence regulation. Critics say that rhetoric can also be a PR lever: if a technology looks like it might be either miraculous or catastrophic, it gives companies leverage to shape policy and investment. In short, invoking catastrophe can fast-track both safety funding and corporate influence — and that ambiguity muddies public conversation.

That’s not to say existential concerns are illegitimate. The field of AI safety exists precisely because a handful of failure modes — from goal misalignment to fast, opaque capability jumps — could, in principle, be catastrophic. The question for most readers and policymakers is how to balance urgent, practical governance (data privacy, labor policy, environmental controls) against low-probability, high-impact scenarios that are hard to study with current tools.

What Blair’s choice signals

Blair’s decision is symbolic rather than singular. It tells us something about the mood in parts of the next generation: they see a world changing fast, they distrust institutions to respond quickly, and some would rather act than wait. Whether that action — walking away from a university degree and into advocacy work — is wise depends on your priors about timelines and on the non-trivial costs of leaving school. But as a public signal, it’s valuable: it forces universities, funders and regulators to reckon with the fact that existential worries are shaping real-life choices today.

Where to look next

If you want to follow this story without getting swept into hype, watch three things: how leading AI companies talk about capabilities (not just marketing language); what independent audits and peer-reviewed studies say about environmental and safety costs; and how governments move on concrete governance — disclosure, red-teaming requirements, and safety certifications. Those are the levers that will shape whether our grandchildren remember this era as the one that stumbled into a disaster, or the one that tightened the brakes in time.

Alice Blair isn’t the only person thinking hard about these questions. Whether you find her choice prudent or extreme, it finally forces a mundane, necessary discussion: what do we do about things we don’t yet fully understand but that already affect our planet, jobs and institutions? That’s the conversation worth having — loudly, skeptically, and with a lot more facts.
