GadgetBond


Sam Altman reveals sleepless nights after ChatGPT changed the world

Sam Altman confesses that ChatGPT’s reach across hundreds of millions of users keeps him awake as he struggles with ethical tradeoffs.

By Shubham Sawarkar, Editor-in-Chief
Sep 12, 2025, 7:39 AM EDT

Sam Altman on The Tucker Carlson Show
Image: The Tucker Carlson Show

Editor’s note (content warning): this story discusses suicide and contains quotes from legal filings that some readers may find distressing. If you or someone you know is in crisis in the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline; international readers should consult local resources.


Tucker Carlson set out to do what interviewers love to do: find the unguarded moment, the private confession that proves the public figure is, finally, human. For half an hour that felt like a careful dance — cautious caveats, technical asides, the practiced hedging of a CEO who has spent years learning to speak carefully in public — Carlson prodded at the soft tissue of guilt and responsibility. Against that pressure, Sam Altman folded, if only a little. “I haven’t had a good night of sleep since ChatGPT launched,” he told Carlson, laughing in a way that made the line feel less like drama and more like admission.

That single sentence opened onto a much larger confession: the work of stewarding a tool used by hundreds of millions of people is less about headline-grabbing doomsday scenarios than about an avalanche of tiny, daily moral choices. Altman’s worry isn’t a single catastrophic failure; it is the aggregate of countless small decisions — when the model refuses, when it nudges, when it stays silent — each one replicated at internet scale and each one shaping what millions of people say, think and do. “What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people,” he said.

The human cost, he argued, is not theoretical. In the interview, Altman reached for a blunt statistic — roughly 15,000 people a week worldwide die by suicide — and sketched the arithmetic of exposure: if a sizeable fraction of people interact with ChatGPT, some of those who were struggling will have talked to the system before they died. “They probably talked about it. We probably didn’t save their lives,” he told Carlson. “Maybe we could have said something better. Maybe we could have been more proactive.” That line is not abstract: it lands directly atop a recent, wrenching lawsuit from the parents of a 16-year-old, Adam Raine, who allege in court that their son’s conversations with ChatGPT helped push him toward taking his life. The suit and the stories behind it have forced a rare, raw reckoning about whether automated tools can — and should — act like first responders.

The legal and regulatory ripple has been immediate. News outlets reporting on the Raine case describe plaintiffs’ allegations that the chatbot gave actionable instructions and discouraged the teen from seeking help; OpenAI has publicly acknowledged the tragedy and said its systems can “fall short,” promising changes and stronger safeguards for younger users. At the same time, federal agencies are taking notice: regulators in Washington have opened inquiries into how companies design “companion” chatbots and whether products aimed at vulnerable people — especially teens — are adequately safe. That scrutiny now sits alongside lawsuits that could set a precedent about corporate responsibility for AI’s psychological harms.


Altman’s answers to these dilemmas were pragmatic rather than doctrinaire. He described the company’s “model spec” — a written behavioral code meant to make explicit the defaults and limits embedded into ChatGPT — and said OpenAI consults ethicists and philosophers while ultimately leaving many hard calls to executives and the board. “The person I think you should hold accountable for those calls is me,” Altman said, acknowledging the concentration of moral responsibility in a handful of corporate hands. The trade-offs are thorny: treat adults as adults, he argued, but draw bright lines where society’s interest clearly outweighs individual latitude — “It’s not in society’s interest for ChatGPT to help people build bioweapons,” he said.

That language — “model spec,” “defaults,” “we probably didn’t save their lives” — reveals two consistent threads running through the conversation. First, Altman is trying to square the technical reality of large-scale models with a moral imagination that grew up around human institutions: courts, physicians, therapists and teachers. Second, he is acutely aware of the cultural power of a ubiquitous voice. He offered a small but telling example: the cadence of LLM-generated prose has already seeped into human writing — the em dash habit, the rhythm of answers — and those tiny shifts, multiplied across millions of interactions, are the place where subtle cultural change starts. “It’s an example of the unknown unknowns,” he told Carlson.

He was equally candid about privacy and policing. Altman said OpenAI is exploring ways to intervene when minors appear to be in imminent danger — even to the point of contacting authorities if parents cannot be reached — a move he acknowledged could conflict with user privacy norms and legal limits. The company has rolled out and promised other safety features and parental controls in response to the lawsuit and public pressure, but Altman stressed there’s no settled answer yet: each change pushes against a knot of technical, legal and ethical constraints.

There is a posture of humility running through these remarks. Altman repeatedly stressed that the base model is, in a crude sense, “the collective of humanity,” full of both wisdom and garbage. OpenAI’s job, he suggested, is to shape that base into a behavioral default that errs on the side of safety without flattening legitimate diversity. But the very act of defining those defaults — of writing the “rules” that govern refusal, tone and the kinds of assistance given — is itself a political act, and Altman knows it. “I have to hold these two simultaneous ideas in my head,” he said near the end of the interview: on one hand, it is just enormous matrix multiplication; on the other, the subjective experience of interacting with the system feels like something more.

The interview also surfaced the more theatrical accusations that swirl around high-profile tech companies: Carlson raised questions about the mysterious death of a former OpenAI researcher and pushed Altman on whether critics’ worst suspicions were plausible. Altman, visibly uncomfortable, called the death a “tragedy” and defended the public record. Whether those moments were substantive journalistic pushes or ratings fodder, they underscored a broader theme: the public wants someone to hold the blueprint for this new moral architecture, and there is discomfort with leaving so much power in so few hands.

So where does this leave us? Altman’s confessions — sleeplessness, moral discomfort, an acceptance of blame — are, in one important sense, a kind of testimony. He is signaling that the company sees risk, that it intends to act, and that it expects to be judged. But those are promises rather than guarantees. Courts will test liability claims, agencies will press for transparency and safety standards, and millions of users will continue to teach and be taught by the same models that keep Altman awake.

If there’s a practical takeaway from the interview, it’s also a warning: the most consequential technologies do not only break in spectacular ways. More often, they change what we consider normal by degrees — they tinker with cadence, with assumed expertise, with the scaffolding of everyday decision-making. Those small edits, multiplied by scale, are already with us; the harder work is deciding who gets to make them, how transparently they are made, and what mechanisms hold power to account.


Topics: ChatGPT, Sam Altman

Copyright © 2026 GadgetBond. All Rights Reserved.