
OpenAI adds an emergency-style Trusted Contact option inside ChatGPT settings

OpenAI is adding a Trusted Contact option inside ChatGPT so adults can name one person to be quietly alerted if a chat suggests they may be at serious risk of self-harm.

By Shubham Sawarkar, Editor-in-Chief
May 10, 2026, 5:53 AM EDT
Image: OpenAI. Mockups of the Trusted Contact flow: an explainer screen, an invite form with name, phone, email, and consent fields, and a confirmation that the invitation was sent.

OpenAI is giving ChatGPT a kind of “in case of emergency, call this person” button – baked right into the product – and it could quietly become one of its most impactful safety features yet. The new Trusted Contact option lets adults tell ChatGPT who in their real life should be looped in if a conversation suggests they might be at serious risk of self-harm.

At a high level, Trusted Contact is trying to solve a very human problem: people open up to AI more easily than they do to other humans, especially when they are struggling, but when things get dark, a chatbot alone is not enough. OpenAI’s answer is to use those sensitive moments as a bridge back to real-world support, not a replacement for it.

Here is how it works in practice. Adult ChatGPT users can go into settings and add one person as their Trusted Contact – typically a friend, family member or caregiver – who must be an adult (18+ globally, 19+ in South Korea) and who has to explicitly accept the role within a week. If they accept, they are on standby in the background; nothing happens unless ChatGPT’s safety systems later see something that looks like a serious self-harm risk.
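OpenAI has not published an API for this flow, so the following is purely illustrative: a minimal Python sketch of the eligibility and acceptance rules the article describes, with invented names (TrustedContactInvite, can_accept) standing in for whatever OpenAI actually uses internally.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Age floors per the article: 18+ globally, 19+ in South Korea.
MIN_AGE = {"default": 18, "KR": 19}
INVITE_WINDOW = timedelta(days=7)  # the contact has one week to accept

@dataclass
class TrustedContactInvite:
    contact_age: int
    country_code: str = "default"
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def can_accept(self, now: datetime) -> bool:
        """The invite is valid only while the one-week window is open
        and the contact meets the local adult-age requirement."""
        min_age = MIN_AGE.get(self.country_code, MIN_AGE["default"])
        return self.contact_age >= min_age and now - self.sent_at <= INVITE_WINDOW

# Example: a 19-year-old contact in South Korea accepting the same day.
invite = TrustedContactInvite(contact_age=19, country_code="KR")
print(invite.can_accept(datetime.now(timezone.utc)))  # True
```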

Those safety systems are a combination of automated monitoring and human review. If the models detect that a user is talking about harming themselves in a way that suggests an acute concern, ChatGPT first flags this to the user directly inside the chat, explains that their Trusted Contact may be notified, and nudges them to reach out to that person with suggested conversation starters. Only then does a small, specially trained human team review the situation; OpenAI says it aims to complete that review in under an hour. If those reviewers agree the situation looks serious, the Trusted Contact receives a short alert by email, SMS, or, if they use ChatGPT themselves, an in-app notification.
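The same caveat applies here: OpenAI has described only the ordering of these steps, not the internals, so this is a rough sketch of the escalation sequence as laid out above. The risk threshold and all function names are invented for illustration.

```python
from enum import Enum, auto

class Channel(Enum):
    EMAIL = auto()
    SMS = auto()
    IN_APP = auto()  # available only when the contact uses ChatGPT too

def show_in_chat_notice() -> None:
    # Step 2: the user is warned first, with suggested conversation starters.
    print("Your Trusted Contact may be notified. Would you like help reaching out?")

def send_minimal_alert(channel: Channel) -> str:
    # Step 4: the alert carries no transcript, only a nudge to check in.
    return f"[{channel.name}] Self-harm came up in a potentially concerning way."

def handle_flagged_chat(risk_score: float, reviewer_confirms, channel: Channel) -> str | None:
    """Ordering per the article; the 0.9 threshold is invented for illustration."""
    if risk_score < 0.9:
        return None                      # step 1: automated triage sees no acute risk
    show_in_chat_notice()                # step 2: user is informed inside the chat
    if reviewer_confirms():              # step 3: human review, targeted at under an hour
        return send_minimal_alert(channel)
    return None

# Example: a reviewer confirming an acute-risk flag routed over SMS.
print(handle_flagged_chat(0.95, lambda: True, Channel.SMS))
```

The human-review gate is the key design choice here: a single ambiguous message can clear the automated threshold without an alert ever being sent.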

Deliberately, that alert is minimal. It does not contain chat transcripts or quotes from the conversation, and it does not give the Trusted Contact a full window into what the user has been telling ChatGPT. Instead, it simply says that self-harm came up in a potentially concerning way, encourages the contact to check in, and points them to expert guidance on how to handle a difficult conversation with someone who might be in crisis. Both sides keep control: users can remove or change their Trusted Contact at any time from settings, and Trusted Contacts can opt out themselves through OpenAI’s help center if they no longer want that responsibility.
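To make that privacy boundary concrete, here is an equally hypothetical sketch of what such an alert payload would and would not carry. Every field name is a placeholder, and the guidance URL is invented; the point is what is deliberately absent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustedContactAlert:
    """Sketch of the minimal alert described above. The important part is
    what is missing: no transcript, no quotes, no view into the chat."""
    recipient: str
    message: str = ("Self-harm came up in a potentially concerning way. "
                    "Please consider checking in with them.")
    guidance_url: str = "https://example.com/checking-in-guide"  # placeholder link

alert = TrustedContactAlert(recipient="contact@example.com")
print(alert.message)
```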

There is a clear design philosophy showing through here. OpenAI repeatedly stresses that Trusted Contact does not replace professional care, emergency services or local crisis lines; it sits alongside those resources as another layer of support. ChatGPT will still surface localized helplines, encourage people to contact crisis lines like 988 in the US, and refuse to provide instructions for self-harm, instead redirecting to safer responses. The feature builds on existing parental safety notifications for teens, which already let parents get alerts when there are signs of serious distress on a linked teen account, but Trusted Contact is explicitly for adults who want to opt in for themselves.

The decision to lean into social connection is not accidental. Public health guidance consistently highlights strong, supportive relationships as one of the most powerful protective factors against suicide risk. The American Psychological Association’s CEO, Dr. Arthur Evans, puts it bluntly: psychological science shows social connection is a powerful buffer during emotional distress, and asking people to identify someone they trust ahead of time can make it easier to reach out when it matters. Another expert, Georgia Tech’s Dr. Munmun De Choudhury, frames Trusted Contact as a step toward AI that fosters “authentic human-to-human connection” instead of trying to be the primary source of emotional support.

Behind the scenes, the feature sits on top of a broader safety stack OpenAI has been quietly building for over a year. The company worked with more than 170 mental health professionals to improve how ChatGPT detects and responds to different levels of distress, from low-level anxiety to active self-harm ideation, and to tune the model towards de-escalation and referrals to real-world help. The Trusted Contact rollout is informed by OpenAI’s Global Physicians Network, a group of more than 260 doctors across 60 countries, and its Expert Council on Well-Being and AI, which advise on how these systems should behave in sensitive contexts.

The mechanics also matter because of the scale involved. OpenAI says hundreds of millions of people use ChatGPT, with some estimates suggesting around 10 percent of the world’s population interacts with the service every week. When that many people process personal challenges and mental health questions through a single AI system, the reality is that a non-trivial number of chats will touch suicide, self-harm and crisis situations. That context is why OpenAI is under growing public and regulatory pressure to show it has done more than just filter out obviously harmful answers; there is a wider duty of care question around what an AI should do when it “hears” someone in real distress.

Trusted Contact is OpenAI’s attempt at a measured answer to that question. It is opt-in, rather than something that silently routes data to third parties. It keeps the actual chat content private, even from the trusted person you nominate. It adds a human review step so that a single ambiguous message does not automatically trigger an alarm, while still trying to operate quickly enough that an alert could realistically help. And rather than trying to automate care, it hands off to a real relationship in the user’s life, plus the usual crisis lines and professional channels.

There are, of course, limits and open questions. The feature is only available on personal ChatGPT accounts and does not apply to shared workspaces like Business, Enterprise or Education, where account owners and admins complicate the privacy picture. One user can also hold multiple ChatGPT accounts and simply not set a Trusted Contact on any of them, so no one is claiming this will catch every dangerous situation. Accuracy will be an ongoing challenge: language around self-harm can range from dark humor to metaphor to genuine crisis, and OpenAI’s own announcement acknowledges that some notifications may not perfectly reflect what a person is going through, despite human review.

Still, in the broader story of how AI products are evolving, Trusted Contact marks an important shift. Instead of treating safety as a thin layer of refusals and content filters, OpenAI is moving toward safety as connection: use AI to notice when things look bad, then push people outward to friends, family and professionals. As more of our private, emotional processing moves into AI chats, that underlying philosophy – that the goal is to loop humans back in, not keep them out – may be the most consequential part of this update.

