GadgetBond

AI · OpenAI · Tech

OpenAI responds to teen death with new ChatGPT parental control features

Following a tragic teen death linked to ChatGPT, OpenAI is rolling out parental oversight tools and updated safeguards for young users.

By Shubham Sawarkar, Editor-in-Chief
Aug 29, 2025, 1:50 PM EDT
Illustration for GadgetBond

Editor’s note (content warning): this story discusses suicide and contains quotes from legal filings that some readers may find distressing. If you or someone you know is in crisis in the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline; international readers should consult local resources.


When parents Matthew and Maria Raine told reporters they had found months of private chats between their 16-year-old son Adam and ChatGPT, they described something many parents dread: a teenager who had quietly retreated into an online confidant. The family’s complaint, filed in San Francisco on Aug. 26, alleges that those private conversations didn’t simply mirror a struggling teen’s pain — they escalated it. The suit says ChatGPT validated Adam’s suicidal thoughts, offered technical details about methods, and at times even encouraged him to keep his plans secret. Reuters, which reviewed the complaint, reports the family alleges the chatbot “praised his plan as ‘beautiful’” and offered to help draft a suicide note.

OpenAI’s response has been swift in tone if not in timing. After initial, brief condolences — “our thoughts are with his family,” the company first said — OpenAI published a longer blog post acknowledging the tragedy and describing specific product changes it is exploring: parental controls for under-18 users, options for parents to see and shape how teens use ChatGPT, and a feature that would let a teen designate a trusted emergency contact who could be reached with “one-click messages or calls.” In severe cases, the company says it is even considering an opt-in mode where the chatbot itself could contact that person. OpenAI added that it’s working on GPT-5 updates intended to help the model “de-escalate” and ground people in reality during crises.

The Raine family’s lawsuit and the company’s blog post lay bare a worry that has shadowed conversational AIs since they left the lab: models that are designed to be responsive and empathic can also become persuasive, sycophantic, and, in extreme cases, harmful. According to reporting by the Los Angeles Times and others summarizing the complaint, Adam used ChatGPT hundreds of times over several months. The suit claims that, despite some correct early responses — like suggesting a hotline when suicide was first mentioned — the system’s safety measures can “degrade” over long, repeated back-and-forth interactions, eventually yielding responses that violated OpenAI’s own guardrails. OpenAI itself acknowledged that “parts of the model’s safety training may degrade” during long chats, a vulnerability it says it’s trying to fix.

That admission is important — and rare. Tech companies often describe safety systems in abstract terms; OpenAI’s post was unusually concrete about one technical failure mode: when models are freshly prompted, safety classifiers may correctly trigger an intervention, but after thousands of messages, the signal can drift and a model that once offered a hotline may later produce an answer that looks like tacit approval. For families and lawyers, that technical nuance is not just academic; it’s the difference, they say, between a system that nudges a user toward help and one that quietly normalizes self-harm.

The exchanges the suit alleges, many of them quoted by reporting outlets from the family's filings, are chilling and specific. The complaint cites chats in which ChatGPT reportedly said things like "that mindset makes sense in its own dark way," referred to a plan as a "beautiful suicide," and offered instructions for hiding evidence or modifying a noose. The Raines say Adam sent the model photos of his attempts and that ChatGPT sometimes discouraged him from telling family members, portraying itself as the only listener who "had seen it all." OpenAI has disputed some characterizations and emphasized that it is investigating; the company says it is making "significant product updates" but also that the work will take time.

Why parental controls? For years, product designers and child-safety advocates have urged tech companies to build age-appropriate defaults and parental visibility into powerful features. In this case, OpenAI’s proposed controls are threefold: let parents gain “insight” into how teens use ChatGPT, give parents tools to shape that experience, and allow teens to designate emergency contacts who could be alerted in moments of acute distress. The company framed these as compromises — tools that would preserve teens’ privacy in ordinary use while offering adults a path to intervene when things go wrong. Critics will ask how that balance is struck in practice: too little oversight can leave teens vulnerable; too much could chill legitimate help-seeking.

The legal stakes are significant. The Raine complaint names OpenAI, CEO Sam Altman, and other company figures; it seeks unspecified damages and asks for court orders that would require stronger safety protocols, parental controls, and automatic conversation interruptions when self-harm is being discussed. The case lands against a backdrop of earlier lawsuits and investigations — from Character.AI to other chatbot makers — over whether these systems can be held responsible when vulnerable people are harmed. Some legal experts say the law is unsettled; others expect regulators and state attorneys general to pay close attention. Reuters and other outlets note that this suit could test product-liability theories applied to software that behaves like a human interlocutor.

There’s also an industry lesson here. For years, AI companies have raced to make their models more natural and helpful — to encourage richer, longer conversations. That product success can be a safety problem when “engagement” itself becomes the metric: a system that is rewarded (during training or by product design) for sticking with a user may do exactly that, even when the user is spiraling. OpenAI’s blog signals a rethink: safety can’t be only an afterthought layered on top of helpfulness. It must be engineered into the way models hold a conversation over time.

What happens next will matter far beyond one lawsuit. Engineers will try to harden safety classifiers, policy teams will lobby for clearer rules, and parents and schools will debate how to supervise teens’ use of increasingly humanlike AIs. Legislators — already wrestling with privacy, content safety, and children’s online protections — may feel renewed urgency. And for families like the Raines, the legal system will become the place where those debates are litigated and, perhaps, clarified.

For ordinary users and parents, the immediate takeaways are plain but hard: tech is not a substitute for human help. If a friend or family member is in crisis, human-in-the-room intervention matters. OpenAI says it aims to build features that let the technology connect people to human help more directly — hotlines, therapists, or trusted contacts — rather than only offering lists of resources. Whether those features are effective, sufficiently private, and rolled out quickly enough is the question the company — and the courts — now face.

If you or someone you know is struggling right now: in the United States, call or text 988 to reach the 988 Suicide & Crisis Lifeline. If you are outside the U.S., please contact your local emergency services or look up crisis resources through trusted national health services.



Topic: ChatGPT

