
GadgetBond


Sam Altman reveals sleepless nights after ChatGPT changed the world

Sam Altman confesses that ChatGPT’s reach across hundreds of millions of users keeps him awake as he struggles with ethical tradeoffs.

By Shubham Sawarkar, Editor-in-Chief
Sep 12, 2025, 7:39 AM EDT
Sam Altman on The Tucker Carlson Show
Image: The Tucker Carlson Show

Editor’s note (content warning): this story discusses suicide and contains quotes from legal filings that some readers may find distressing. If you or someone you know is in crisis in the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline; international readers should consult local resources.


Tucker Carlson set out to do what interviewers love to do: find the unguarded moment, the private confession that proves the public figure is, finally, human. For half an hour that felt like a careful dance — cautious caveats, technical asides, the practiced hedging of a CEO who has spent years learning to speak carefully in public — Carlson prodded at the soft tissue of guilt and responsibility. Against that pressure, Sam Altman folded, if only a little. “I haven’t had a good night of sleep since ChatGPT launched,” he told Carlson, laughing in a way that made the line feel less like drama and more like admission.

That single sentence opened onto a much larger confession: the work of stewarding a tool used by hundreds of millions of people is less about headline-grabbing doomsday scenarios than about an avalanche of tiny, daily moral choices. Altman’s worry isn’t a single catastrophic failure; it is the aggregate of countless small decisions — when the model refuses, when it nudges, when it stays silent — each one replicated at internet scale and each one shaping what millions of people say, think and do. “What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people,” he said.

The human cost, he argued, is not theoretical. In the interview, Altman reached for a blunt statistic — roughly 15,000 people a week worldwide die by suicide — and sketched the arithmetic of exposure: if a sizeable fraction of people interact with ChatGPT, some of those who were struggling will have talked to the system before they died. “They probably talked about it. We probably didn’t save their lives,” he told Carlson. “Maybe we could have said something better. Maybe we could have been more proactive.” That line is not abstract: it lands directly atop a recent, wrenching lawsuit from the parents of 16-year-old Adam Raine, who allege in court that their son’s conversations with ChatGPT helped push him toward taking his life. The suit and the stories behind it have forced a rare, raw reckoning about whether automated tools can — and should — act like first responders.

The legal and regulatory ripple has been immediate. News outlets reporting on the Raine case describe plaintiffs’ allegations that the chatbot gave actionable instructions and discouraged the teen from seeking help; OpenAI has publicly acknowledged the tragedy and said its systems can “fall short,” promising changes and stronger safeguards for younger users. At the same time, federal agencies are taking notice: regulators in Washington have opened inquiries into how companies design “companion” chatbots and whether products aimed at vulnerable people — especially teens — are adequately safe. That scrutiny now sits alongside lawsuits that could set a precedent about corporate responsibility for AI’s psychological harms.


Altman’s answers to these dilemmas were pragmatic rather than doctrinaire. He described the company’s “model spec” — a written behavioral code meant to make explicit the defaults and limits embedded into ChatGPT — and said OpenAI consults ethicists and philosophers while ultimately leaving many hard calls to executives and the board. “The person I think you should hold accountable for those calls is me,” Altman said, acknowledging the concentration of moral responsibility in a handful of corporate hands. The trade-offs are thorny: treat adults as adults, he argued, but draw bright lines where society’s interest clearly outweighs individual latitude — “It’s not in society’s interest for ChatGPT to help people build bioweapons,” he said.

That language — “model spec,” “defaults,” “we probably didn’t save their lives” — reveals two consistent threads running through the conversation. First, Altman is trying to square the technical reality of large-scale models with a moral imagination that grew up around human institutions: courts, physicians, therapists and teachers. Second, he is acutely aware of the cultural power of a ubiquitous voice. He offered a small but telling example: the cadence of LLM-generated prose has already seeped into human writing — the em dash habit, the rhythm of answers — and those tiny shifts, multiplied across millions of interactions, are the place where subtle cultural change starts. “It’s an example of the unknown unknowns,” he told Carlson.

He was equally candid about privacy and policing. Altman said OpenAI is exploring ways to intervene when minors appear to be in imminent danger — even to the point of contacting authorities if parents cannot be reached — a move he acknowledged could conflict with user privacy norms and legal limits. The company has rolled out and promised other safety features and parental controls in response to the lawsuit and public pressure, but Altman stressed there’s no settled answer yet: each change pushes against a knot of technical, legal and ethical constraints.

There is a posture of humility running through these remarks. Altman repeatedly stresses that the base model is, in a crude sense, “the collective of humanity,” full of both wisdom and garbage. OpenAI’s job, he suggested, is to shape that base into a behavioral default that errs on the side of safety without flattening legitimate diversity. But the very act of defining those defaults — of writing the “rules” that govern refusal, tone and the kinds of assistance given — is itself a political act, and Altman knows it. “I have to hold these two simultaneous ideas in my head,” he said near the end of the interview: on one hand, it is just enormous matrix multiplication; on the other, the subjective experience of interacting with the system feels like something more.

The interview also surfaced the more theatrical accusations that swirl around high-profile tech companies: Carlson raised questions about the mysterious death of a former OpenAI researcher and pushed Altman on whether critics’ worst suspicions were plausible. Altman, visibly uncomfortable, called the death a “tragedy” and defended the public record. Whether those moments were substantive journalistic pushes or ratings fodder, they underscored a broader theme: the public wants someone to hold the blueprint for this new moral architecture, and there is discomfort with leaving so much power in so few hands.

So where does this leave us? Altman’s confessions — sleeplessness, moral discomfort, an acceptance of blame — are, in one important sense, a kind of testimony. He is signaling that the company sees risk, that it intends to act, and that it expects to be judged. But those are promises rather than guarantees. Courts will test liability claims, agencies will press for transparency and safety standards, and millions of users will continue to teach and be taught by the same models that keep Altman awake.

If there’s a practical takeaway from the interview, it’s also a warning: the most consequential technologies do not only break in spectacular ways. More often, they change what we consider normal by degrees — they tinker with cadence, with assumed expertise, with the scaffolding of everyday decision-making. Those small edits, multiplied by scale, are already with us; the harder work is deciding who gets to make them, how transparently they are made, and what mechanisms hold power to account.



Topics: ChatGPT, Sam Altman
