We’re all a little twitchy about AI right now. It’s become shorthand for a bunch of anxieties — climate costs, job upheaval, misinformation, surveillance — and lately a new, louder fear has crept into the conversation: that the next leap in artificial intelligence will not just take our jobs, but might someday wipe us out.
That fear pushed one freshman at MIT to walk away from college. Alice Blair, who started at MIT in 2023, told Forbes she quit because she believes an artificial general intelligence (AGI) — a system able to match or surpass human intelligence across the board — could arrive fast enough to threaten human survival. “I was concerned I might not be alive to graduate because of AGI,” she told the outlet. Blair now works as a contract technical writer for the nonprofit Center for AI Safety and says she has no plans to return to campus.
Her decision landed in the headlines not because it’s a tidy argument for or against anything, but because it sharpens a question that’s been quietly spreading through certain corners of tech and campus life: when does legitimate caution cross into existential dread — and how should institutions, regulators and individuals respond?
The personal and the public
Blair’s story has a deeply personal logic. She enrolled expecting to find peers and professors who shared an interest in AI safety. Instead, she says, she found indifference. The Center for AI Safety — one of several organizations that rose to prominence arguing for stricter governance of powerful models — offered a path out of the ivory tower into advocacy and, for Blair, immediate work on the problem she found most worrying. Her move mirrors a wider pattern: some students are leaving academia not only for well-funded AI startups but for safety groups and policy shops that think the clock on AGI is ticking.
That timetable is deeply contested. Some people in the community — entrepreneurs, investors and a subset of researchers — say the pieces are coming together fast. Others call the talk premature, or even irresponsible. The debate is messy because it mixes hard technical disagreement (how do we measure general intelligence? what problems remain unsolved?) with PR, corporate strategy and human psychology.
Where the industry says it’s headed
It’s worth saying plainly what’s fanning this particular brand of fear: major companies themselves have used language that can sound apocalyptic. OpenAI’s recent push with its GPT-5 model — which in some accounts was rolled out awkwardly and met with user complaints — has been framed by some executives as a step toward AGI. OpenAI’s CEO Sam Altman has publicly described recent releases as big advances and has mapped out roadmaps that make the idea of “general” competence feel closer than it once did. News outlets covering the rollout, and the company’s public posts, fed the sense that the industry is sprinting toward something qualitatively different.
But the rollout’s reception also shows why many researchers remain cautious: upgrades have felt incremental, and bugs, hallucinations and generalization failures have persisted even as models get larger and more expensive to train. Those everyday failings are what many skeptics point to when they say true AGI is not right around the corner.
Experts, timelines, and disagreement
On timelines, opinion is all over the map. Some technologists argue for aggressive near-term timeframes; others — including vocal critics like Gary Marcus — call such predictions hype. Marcus and other skeptical voices say core problems such as robust reasoning, long-term planning and truthfulness haven’t been solved, so “AGI within five years” predictions are unlikely to pan out. At the same time, surveys of AI researchers and forecasting groups show a wide spread of expectations — from a couple of decades out to beyond the end of the century — with a small but significant minority predicting much sooner. What matters is not a single number, but that the uncertainty is real and the stakes are high enough that policymakers and industry should prepare for a range of outcomes.
The harms we already know about
Part of the reason the AGI discussion feels urgent is that AI is already causing clear, tangible harm. The technology’s environmental footprint is nontrivial: training and running massive models consume large amounts of electricity and cooling water, and several recent studies have tried to quantify the lifecycle carbon costs of generative AI at scale. Those impacts matter now, even if AGI never arrives.
AI is also reshaping the labor market. Firms are restructuring roles around automation and many CEOs openly point to AI as a reason for organizational change. Whether that becomes mass unemployment or a shift in job content is a separate and contested question, but the anxiety is real — and it colors decisions by students (like Blair) who worry their chosen career paths might evaporate.
And then there’s the quieter erosion: biased decisions baked into models, surveillance tools that scale automated observation, misinformation that spreads with breathtaking speed, and mental-health harms from interacting with systems people anthropomorphize. These are less cinematic than an “AI-kills-everyone” headline, but they’re present and policy-relevant today.
Are the doomsayers helping or hurting?
There’s an odd paradox at play. Tech leaders who talk up existential risk sometimes do so to justify tighter controls, more funding for safety work, or to influence regulation. Critics say that rhetoric can also be a PR lever: if a technology looks like it might be either miraculous or catastrophic, it gives companies leverage to shape policy and investment. In short, invoking catastrophe can fast-track both safety funding and corporate influence — and that ambiguity muddies public conversation.
That’s not to say existential concerns are illegitimate. The field of AI safety exists precisely because a handful of failure modes — from goal misalignment to fast, opaque capability jumps — could, in principle, be catastrophic. The question for most readers and policymakers is how to balance urgent, practical governance (data privacy, labor policy, environmental controls) against low-probability, high-impact scenarios that are hard to study with current tools.
What Blair’s choice signals
Blair’s decision is symbolic rather than singular. It tells us something about the mood in parts of the next generation: they see a world changing fast, they distrust institutions to respond quickly, and some would rather act than wait. Whether that action — walking away from a university degree and into advocacy work — is wise depends on your priors about timelines and on the non-trivial costs of leaving school. But as a public signal, it’s valuable: it forces universities, funders and regulators to reckon with the fact that existential worries are shaping real-life choices today.
Where to look next
If you want to follow this story without getting swept into hype, watch three things: how leading AI companies talk about capabilities (not just marketing language); what independent audits and peer-reviewed studies say about environmental and safety costs; and how governments move on concrete governance — disclosure, red-teaming requirements, and safety certifications. Those are the levers that will shape whether our grandchildren remember this era as the one that stumbled into a disaster, or the one that tightened the brakes in time.
Alice Blair isn’t the only person thinking hard about these questions. Whether you find her choice prudent or extreme, it forces a mundane but necessary discussion: what do we do about things we don’t yet fully understand but that already affect our planet, jobs and institutions? That’s the conversation worth having — loudly, skeptically, and with a lot more facts.
