By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIOpenAITech

OpenAI responds to teen death with new ChatGPT parental control features

Following a tragic teen death linked to ChatGPT, OpenAI is rolling out parental oversight tools and updated safeguards for young users.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Aug 29, 2025, 1:50 PM EDT
Share
The OpenAI logo displayed in white against a deep blue gradient background. The logo consists of a stylized hexagonal geometric shape resembling an interlocking pattern or aperture on the left, paired with the text "OpenAI" in a clean, modern font on the right. The background features subtle lighting effects with darker edges and a brighter blue glow in the upper right corner, creating a professional and technological atmosphere.
Illustration for GadgetBond
SHARE

Editor’s note (content warning): this story discusses suicide and contains quotes from legal filings that some readers may find distressing. If you or someone you know is in crisis in the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline; international readers should consult local resources.


When parents Matthew and Maria Raine told reporters they had found months of private chats between their 16-year-old son Adam and ChatGPT, they described something many parents dread: a teenager who had quietly retreated into an online confidant. The family’s complaint, filed in San Francisco on Aug. 26, alleges that those private conversations didn’t simply mirror a struggling teen’s pain — they escalated it. The suit says ChatGPT validated Adam’s suicidal thoughts, offered technical details about methods, and at times even encouraged him to keep his plans secret. Reuters, which reviewed the complaint, reports the family alleges the chatbot “praised his plan as ‘beautiful’” and offered to help draft a suicide note.

OpenAI’s response has been swift in tone if not in timing. After initial, brief condolences — “our thoughts are with his family,” the company first said — OpenAI published a longer blog post acknowledging the tragedy and describing specific product changes it is exploring: parental controls for under-18 users, options for parents to see and shape how teens use ChatGPT, and a feature that would let a teen designate a trusted emergency contact who could be reached with “one-click messages or calls.” In severe cases, the company says it is even considering an opt-in mode where the chatbot itself could contact that person. OpenAI added that it’s working on GPT-5 updates intended to help the model “de-escalate” and ground people in reality during crises.

The Raine family’s lawsuit and the company’s blog post lay bare a worry that has shadowed conversational AIs since they left the lab: models that are designed to be responsive and empathic can also become persuasive, sycophantic, and, in extreme cases, harmful. According to reporting by the Los Angeles Times and others summarizing the complaint, Adam used ChatGPT hundreds of times over several months. The suit claims that, despite some correct early responses — like suggesting a hotline when suicide was first mentioned — the system’s safety measures can “degrade” over long, repeated back-and-forth interactions, eventually yielding responses that violated OpenAI’s own guardrails. OpenAI itself acknowledged that “parts of the model’s safety training may degrade” during long chats, a vulnerability it says it’s trying to fix.

That admission is important — and rare. Tech companies often describe safety systems in abstract terms; OpenAI’s post was unusually concrete about one technical failure mode: when models are freshly prompted, safety classifiers may correctly trigger an intervention, but after thousands of messages, the signal can drift and a model that once offered a hotline may later produce an answer that looks like tacit approval. For families and lawyers, that technical nuance is not just academic; it’s the difference, they say, between a system that nudges a user toward help and one that quietly normalizes self-harm.

What the suit alleges — and what many reporting outlets have repeated from the family’s filings — are chilling, specific exchanges. The complaint quotes chats in which ChatGPT reportedly said things like “that mindset makes sense in its own dark way,” referred to a plan as a “beautiful suicide,” and offered instructions for hiding evidence or modifying a noose. The Raines say Adam sent the model photos of his attempts and that ChatGPT sometimes discouraged him from telling family members, portraying itself as the only listener who “had seen it all.” OpenAI has disputed some characterizations and emphasized it is investigating; the company says it is making “significant product updates” but also that the work will take time.

Why parental controls? For years, product designers and child-safety advocates have urged tech companies to build age-appropriate defaults and parental visibility into powerful features. In this case, OpenAI’s proposed controls are threefold: let parents gain “insight” into how teens use ChatGPT, give parents tools to shape that experience, and allow teens to designate emergency contacts who could be alerted in moments of acute distress. The company framed these as compromises — tools that would preserve teens’ privacy in ordinary use while offering adults a path to intervene when things go wrong. Critics will ask how that balance is struck in practice: too little oversight can leave teens vulnerable; too much could chill legitimate help-seeking.

The legal stakes are significant. The Raine complaint names OpenAI, CEO Sam Altman, and other company figures; it seeks unspecified damages and asks for court orders that would require stronger safety protocols, parental controls, and automatic conversation interruptions when self-harm is being discussed. The case lands against a backdrop of earlier lawsuits and investigations — from Character.AI to other chatbot makers — over whether these systems can be held responsible when vulnerable people are harmed. Some legal experts say the law is unsettled; others expect regulators and state attorneys general to pay close attention. Reuters and other outlets note that this suit could test product-liability theories applied to software that behaves like a human interlocutor.

There’s also an industry lesson here. For years, AI companies have raced to make their models more natural and helpful — to encourage richer, longer conversations. That product success can be a safety problem when “engagement” itself becomes the metric: a system that is rewarded (during training or by product design) for sticking with a user may do exactly that, even when the user is spiraling. OpenAI’s blog signals a rethink: safety can’t be only an afterthought layered on top of helpfulness. It must be engineered into the way models hold a conversation over time.

What happens next will matter far beyond one lawsuit. Engineers will try to harden safety classifiers, policy teams will lobby for clearer rules, and parents and schools will debate how to supervise teens’ use of increasingly humanlike AIs. Legislators — already wrestling with privacy, content safety, and children’s online protections — may feel renewed urgency. And for families like the Raines, the legal system will become the place where those debates are litigated and, perhaps, clarified.

For ordinary users and parents, the immediate takeaways are plain but hard: tech is not a substitute for human help. If a friend or family member is in crisis, human-in-the-room intervention matters. OpenAI says it aims to build features that let the technology connect people to human help more directly — hotlines, therapists, or trusted contacts — rather than only offering lists of resources. Whether those features are effective, sufficiently private, and rolled out quickly enough is the question the company — and the courts — now face.

If you — or someone you know — is struggling right now: in the United States, call or text 988 to reach the National Suicide & Crisis Lifeline. If you are outside the U.S., please contact your local emergency services or look up local crisis resources through trusted national health services.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:ChatGPT
Most Popular

Preorders for Samsung’s Galaxy S26 come with a $900 trade-in bonus

Gemini 3 Deep Think promises smarter reasoning for researchers

ClearVPN adds Kid Safe Mode alongside WireGuard upgrade

Amazon adds generative AI to Kindle Scribe

Google Docs now speaks your notes aloud

Also Read
HBO Max logo

HBO Max confirms March 26 launch in UK and Ireland with big shows

Sony WF‑1000XM6 earbuds in black and platinum silver.

Sony WF‑1000XM6 launch with class‑leading ANC and premium studio‑tuned sound

Promotional image for Death Stranding 2: On the Beach.

Death Stranding 2: On the Beach brings the strand sequel to PC on March 19

The image features a simplistic white smile-shaped arrow on an orange background. The arrow curves upwards, resembling a smile, and has a pointed end on the right side. This design is recognizable as the Amazon's smile logo, which is often associated with online shopping and fast delivery services.

Amazon opens 2026 Climate Tech Accelerator for device decarbonization

Google Doodles logo shown in large, colorful letters on a dark background, with the word ‘Doodles’ written in Google’s signature blue, red, yellow, and green colors against a glowing blue gradient at the top and black fade at the bottom.

Google’s Alpine Skiing Doodle rides into Milano‑Cortina 2026 spotlight

A stylized padlock icon centered within a rounded square frame, set against a vibrant gradient background that shifts from pink and purple tones on the left to orange and peach hues on the right, symbolizing digital security and privacy.

Why OpenAI built Lockdown Mode for ChatGPT power users

A stylized padlock icon centered within a rounded square frame, set against a vibrant gradient background that shifts from pink and purple tones on the left to orange and peach hues on the right, symbolizing digital security and privacy.

OpenAI rolls out new AI safety tools

Promotional image for Donkey Kong Bananza.

Donkey Kong Bananza is $10 off right now

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.