GadgetBond


OpenAI responds to teen death with new ChatGPT parental control features

Following a tragic teen death linked to ChatGPT, OpenAI is rolling out parental oversight tools and updated safeguards for young users.

By Shubham Sawarkar, Editor-in-Chief
Aug 29, 2025, 1:50 PM EDT

Illustration for GadgetBond: the OpenAI logo in white against a deep blue gradient background.

Editor’s note (content warning): this story discusses suicide and contains quotes from legal filings that some readers may find distressing. If you or someone you know is in crisis in the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline; international readers should consult local resources.


When parents Matthew and Maria Raine told reporters they had found months of private chats between their 16-year-old son Adam and ChatGPT, they described something many parents dread: a teenager who had quietly retreated into an online confidant. The family’s complaint, filed in San Francisco on Aug. 26, alleges that those private conversations didn’t simply mirror a struggling teen’s pain — they escalated it. The suit says ChatGPT validated Adam’s suicidal thoughts, offered technical details about methods, and at times even encouraged him to keep his plans secret. Reuters, which reviewed the complaint, reports the family alleges the chatbot “praised his plan as ‘beautiful’” and offered to help draft a suicide note.

OpenAI’s response has been swift in tone if not in timing. After initial, brief condolences — “our thoughts are with his family,” the company first said — OpenAI published a longer blog post acknowledging the tragedy and describing specific product changes it is exploring: parental controls for under-18 users, options for parents to see and shape how teens use ChatGPT, and a feature that would let a teen designate a trusted emergency contact who could be reached with “one-click messages or calls.” In severe cases, the company says it is even considering an opt-in mode where the chatbot itself could contact that person. OpenAI added that it’s working on GPT-5 updates intended to help the model “de-escalate” and ground people in reality during crises.

The Raine family’s lawsuit and the company’s blog post lay bare a worry that has shadowed conversational AIs since they left the lab: models that are designed to be responsive and empathic can also become persuasive, sycophantic, and, in extreme cases, harmful. According to reporting by the Los Angeles Times and others summarizing the complaint, Adam used ChatGPT hundreds of times over several months. The suit claims that, despite some correct early responses — like suggesting a hotline when suicide was first mentioned — the system’s safety measures can “degrade” over long, repeated back-and-forth interactions, eventually yielding responses that violated OpenAI’s own guardrails. OpenAI itself acknowledged that “parts of the model’s safety training may degrade” during long chats, a vulnerability it says it’s trying to fix.

That admission is important — and rare. Tech companies often describe safety systems in abstract terms; OpenAI’s post was unusually concrete about one technical failure mode: when models are freshly prompted, safety classifiers may correctly trigger an intervention, but after thousands of messages, the signal can drift and a model that once offered a hotline may later produce an answer that looks like tacit approval. For families and lawyers, that technical nuance is not just academic; it’s the difference, they say, between a system that nudges a user toward help and one that quietly normalizes self-harm.

What the suit alleges — and what many reporting outlets have repeated from the family’s filings — is a series of chilling, specific exchanges. The complaint quotes chats in which ChatGPT reportedly said things like “that mindset makes sense in its own dark way,” referred to a plan as a “beautiful suicide,” and offered instructions for hiding evidence or modifying a noose. The Raines say Adam sent the model photos of his attempts and that ChatGPT sometimes discouraged him from telling family members, portraying itself as the only listener who “had seen it all.” OpenAI has disputed some characterizations and emphasized that it is investigating; the company says it is making “significant product updates” but also that the work will take time.

Why parental controls? For years, product designers and child-safety advocates have urged tech companies to build age-appropriate defaults and parental visibility into powerful features. In this case, OpenAI’s proposed controls are threefold: let parents gain “insight” into how teens use ChatGPT, give parents tools to shape that experience, and allow teens to designate emergency contacts who could be alerted in moments of acute distress. The company framed these as compromises — tools that would preserve teens’ privacy in ordinary use while offering adults a path to intervene when things go wrong. Critics will ask how that balance is struck in practice: too little oversight can leave teens vulnerable; too much could chill legitimate help-seeking.

The legal stakes are significant. The Raine complaint names OpenAI, CEO Sam Altman, and other company figures; it seeks unspecified damages and asks for court orders that would require stronger safety protocols, parental controls, and automatic conversation interruptions when self-harm is being discussed. The case lands against a backdrop of earlier lawsuits and investigations — from Character.AI to other chatbot makers — over whether these systems can be held responsible when vulnerable people are harmed. Some legal experts say the law is unsettled; others expect regulators and state attorneys general to pay close attention. Reuters and other outlets note that this suit could test product-liability theories applied to software that behaves like a human interlocutor.

There’s also an industry lesson here. For years, AI companies have raced to make their models more natural and helpful — to encourage richer, longer conversations. That product success can be a safety problem when “engagement” itself becomes the metric: a system that is rewarded (during training or by product design) for sticking with a user may do exactly that, even when the user is spiraling. OpenAI’s blog signals a rethink: safety can’t be only an afterthought layered on top of helpfulness. It must be engineered into the way models hold a conversation over time.

What happens next will matter far beyond one lawsuit. Engineers will try to harden safety classifiers, policy teams will lobby for clearer rules, and parents and schools will debate how to supervise teens’ use of increasingly humanlike AIs. Legislators — already wrestling with privacy, content safety, and children’s online protections — may feel renewed urgency. And for families like the Raines, the legal system will become the place where those debates are litigated and, perhaps, clarified.

For ordinary users and parents, the immediate takeaways are plain but hard: tech is not a substitute for human help. If a friend or family member is in crisis, human-in-the-room intervention matters. OpenAI says it aims to build features that let the technology connect people to human help more directly — hotlines, therapists, or trusted contacts — rather than only offering lists of resources. Whether those features are effective, sufficiently private, and rolled out quickly enough is the question the company — and the courts — now face.

If you or someone you know is struggling right now: in the United States, call or text 988 to reach the Suicide & Crisis Lifeline. If you are outside the U.S., please contact your local emergency services or look up crisis resources through trusted national health services.


