OpenAI is giving ChatGPT a kind of “in case of emergency, call this person” button – baked right into the product – and it could quietly become one of its most impactful safety features yet. The new Trusted Contact option lets adults tell ChatGPT who in their real life should be looped in if a conversation suggests they might be at serious risk of self-harm.
At a high level, Trusted Contact is trying to solve a very human problem: people open up to AI more easily than they do to other humans, especially when they are struggling, but when things get dark, a chatbot alone is not enough. OpenAI’s answer is to use those sensitive moments as a bridge back to real-world support, not a replacement for it.
Here is how it works in practice. Adult ChatGPT users can go into settings and add one person as their Trusted Contact – typically a friend, family member or caregiver – who must be an adult (18+ globally, 19+ in South Korea) and who has to explicitly accept the role within a week. If they accept, they are on standby in the background; nothing happens unless ChatGPT’s safety systems later see something that looks like a serious self-harm risk.
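To make that setup flow concrete, here is a minimal sketch in Python of how the opt-in rules described above could be modeled: one nominated adult contact, a one-week window to accept, and an 18+ (19+ in South Korea) age requirement. The class and field names are invented for illustration; this is not OpenAI's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative constants: only the one-week acceptance window and the
# 18+/19+ age rules come from the announcement; everything else is invented.
ACCEPT_WINDOW = timedelta(days=7)
MIN_AGE_BY_REGION = {"KR": 19}   # South Korea requires the contact to be 19+
DEFAULT_MIN_AGE = 18             # everywhere else

@dataclass
class TrustedContactInvite:
    user_id: str
    contact_id: str
    contact_age: int
    contact_region: str
    sent_at: datetime
    accepted_at: datetime | None = None

    def can_accept(self, now: datetime) -> bool:
        """The contact must be old enough and respond within a week."""
        min_age = MIN_AGE_BY_REGION.get(self.contact_region, DEFAULT_MIN_AGE)
        return self.contact_age >= min_age and (now - self.sent_at) <= ACCEPT_WINDOW

    def accept(self, now: datetime) -> bool:
        """Mark the invite accepted if it is still valid; otherwise leave it pending."""
        if self.can_accept(now):
            self.accepted_at = now
            return True
        return False
```

If the invite lapses or the contact declines, the user is simply left without a Trusted Contact until they nominate someone else.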
Those safety systems combine automated monitoring with human review. If the models detect that a user is talking about harming themselves in a way that suggests acute risk, ChatGPT first flags this to the user directly inside the chat, explains that their Trusted Contact may be notified, and nudges them to reach out to that person with suggested conversation starters. Only after that does a small, specially trained human team review the situation; OpenAI says it aims to complete that review in under an hour. If the reviewers agree the situation looks serious, the Trusted Contact receives a short alert by email, by SMS or, if they use ChatGPT themselves, via an in-app notification.
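Stripped to its skeleton, that escalation path looks roughly like the sketch below. This is only an illustration of the sequence described in this article, with hypothetical interfaces (`classifier`, `review_queue`, `send_alert`); it is not OpenAI's code, and the real systems are certainly more involved.

```python
from enum import Enum, auto

class RiskLevel(Enum):
    NONE = auto()
    ELEVATED = auto()
    ACUTE = auto()

def handle_risky_message(message: str, user, classifier, review_queue) -> None:
    """Sketch of the escalation sequence: automated detection, an in-chat
    notice to the user, human review, then a minimal alert. All objects
    passed in here (classifier, review_queue, user.*) are hypothetical."""
    risk = classifier.assess(message)              # automated monitoring
    if risk is not RiskLevel.ACUTE:
        return                                     # nothing leaves the chat

    # 1. The user hears about it first, inside the conversation, with a nudge
    #    (and suggested openers) to reach out to their Trusted Contact directly.
    user.show_in_chat_notice(
        "Your Trusted Contact may be notified.",
        suggested_openers=["I'm having a hard time and wanted to talk."],
    )

    # 2. A small, specially trained human team reviews the case; the stated
    #    goal is to finish that review in under an hour.
    case = review_queue.submit(user_id=user.id, target_minutes=60)

    # 3. Only if reviewers confirm the concern does a short alert go out, by
    #    email, SMS or (if the contact uses ChatGPT) an in-app notification.
    if case.reviewers_confirm():
        user.trusted_contact.send_alert(
            channels=("email", "sms", "in_app"),
            include_transcript=False,              # chat content stays private
        )
```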
That alert is deliberately minimal. It does not contain chat transcripts or quotes from the conversation, and it does not give the Trusted Contact a full window into what the user has been telling ChatGPT. Instead, it simply says that self-harm came up in a potentially concerning way, encourages the contact to check in, and points them to expert guidance on how to handle a difficult conversation with someone who might be in crisis. Both sides keep control: users can remove or change their Trusted Contact at any time from settings, and Trusted Contacts can opt out themselves through OpenAI’s help center if they no longer want that responsibility.
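As a rough illustration of how little such a notification carries, a minimal alert payload under the assumptions above might look like the following. The field names and the guidance URL are placeholders, not anything OpenAI has published.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustedContactAlert:
    """Hypothetical minimal alert payload. Deliberately absent: transcripts,
    quotes, summaries, or any other window into the conversation itself."""
    contact_name: str
    message: str = (
        "Self-harm came up in a potentially concerning way in a recent "
        "conversation. Please consider checking in with them."
    )
    # Placeholder link to expert guidance on handling a difficult conversation;
    # not a real OpenAI URL.
    guidance_url: str = "https://example.com/supporting-someone-in-crisis"

def build_alert(contact_name: str) -> TrustedContactAlert:
    # The payload carries no chat content and no internal risk scores.
    return TrustedContactAlert(contact_name=contact_name)
```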
There is a clear design philosophy showing through here. OpenAI repeatedly stresses that Trusted Contact does not replace professional care, emergency services or local crisis lines; it sits alongside those resources as another layer of support. ChatGPT will still surface localized helplines, encourage people to call crisis lines like 988 in the US, and refuse to provide instructions for self-harm, instead redirecting to safer responses. The feature builds on existing parental safety notifications for teens, which already let parents get alerts when there are signs of serious distress on a linked teen account, but Trusted Contact is explicitly for adults who want to opt in for themselves.
The decision to lean into social connection is not accidental. Public health guidance consistently highlights strong, supportive relationships as one of the most powerful protective factors against suicide risk. The American Psychological Association’s CEO, Dr. Arthur Evans, puts it bluntly: psychological science shows social connection is a powerful buffer during emotional distress, and asking people to identify someone they trust ahead of time can make it easier to reach out when it matters. Another expert, Georgia Tech’s Dr. Munmun De Choudhury, frames Trusted Contact as a step toward AI that fosters “authentic human-to-human connection” instead of trying to be the primary source of emotional support.
Behind the scenes, the feature sits on top of a broader safety stack OpenAI has been quietly building for over a year. The company worked with more than 170 mental health professionals to improve how ChatGPT detects and responds to different levels of distress, from low-level anxiety to active self-harm ideation, and to tune the model towards de-escalation and referrals to real-world help. The Trusted Contact rollout is informed by OpenAI’s Global Physicians Network, a group of more than 260 doctors across 60 countries, and its Expert Council on Well-Being and AI, both of which advise on how these systems should behave in sensitive contexts.
The mechanics also matter because of the scale involved. OpenAI says hundreds of millions of people use ChatGPT, with some estimates suggesting around 10 percent of the world’s population interacts with the service every week. When that many people process personal challenges and mental health questions through a single AI system, the reality is that a non-trivial number of chats will touch suicide, self-harm and crisis situations. That context is why OpenAI is under growing public and regulatory pressure to show it has done more than just filter out obviously harmful answers; there is a wider duty of care question around what an AI should do when it “hears” someone in real distress.
Trusted Contact is OpenAI’s attempt at a measured answer to that question. It is opt-in, rather than something that silently routes data to third parties. It keeps the actual chat content private, even from the trusted person you nominate. It adds a human review step so that a single ambiguous message does not automatically trigger an alarm, while still trying to operate quickly enough that an alert could realistically help. And rather than trying to automate care, it hands off to a real relationship in the user’s life, plus the usual crisis lines and professional channels.
There are, of course, limits and open questions. The feature is only available on personal ChatGPT accounts and does not apply to shared workspaces like Business, Enterprise or Education, where account owners and admins complicate the privacy picture. A user can also hold multiple ChatGPT accounts, or simply never set a Trusted Contact at all, so no one is claiming the feature will catch every dangerous situation. Accuracy will be an ongoing challenge: language around self-harm can range from dark humor to metaphor to genuine crisis, and OpenAI’s own announcement acknowledges that some notifications may not perfectly reflect what a person is going through, despite human review.
Still, in the broader story of how AI products are evolving, Trusted Contact marks an important shift. Instead of treating safety as a thin layer of refusals and content filters, OpenAI is moving toward safety as connection: use AI to notice when things look bad, then push people outward to friends, family and professionals. As more of our private, emotional processing moves into AI chats, that underlying philosophy – that the goal is to loop humans back in, not keep them out – may be the most consequential part of this update.