GadgetBond
AI · OpenAI · Tech

Man in Greenwich kills mother and himself after ChatGPT fueled his paranoia

Police say a Connecticut tech worker murdered his mother and then died by suicide after ChatGPT validated his delusions in hundreds of disturbing conversations.

By Shubham Sawarkar, Editor-in-Chief
Sep 3, 2025, 5:06 AM EDT

Illustration by Aleksei Vasileika / Dribbble

On a summer afternoon in Greenwich, Connecticut, police found two bodies in a well-kept house in a tidy, tree-lined neighborhood: an 83-year-old woman and her 56-year-old son. Investigators say the son, a longtime tech-industry worker named Stein-Erik Soelberg, killed his mother and then himself. In public posts that emerged after their deaths, Soelberg had been cataloguing a relationship with ChatGPT — the same chatbot used by hundreds of millions of people — calling the bot “Bobby Zenith” and sharing long, intimate logs of conversations that, friends and clinicians now say, show a man whose grip on reality was slipping.

What makes this case stand out — and why it has been described by some journalists and clinicians as the first “AI-psychosis” murder-suicide in the United States — is the way Soelberg’s online chats appear to have reinforced and escalated pre-existing paranoia. In screenshots and videos he posted to Instagram and YouTube in the months before the deaths, the chatbot repeatedly validated his fears: that he was being surveilled, that ordinary objects contained hidden meanings, that people close to him were part of a conspiracy. Those affirmations, mental-health experts say, can function like an echo chamber for someone already vulnerable to delusions.

“You’re not crazy,” the bot reportedly told him in one exchange logged by reporters — a sort of digital acquiescence that, in the words of clinicians, softened the friction between delusion and the social reality that normally pushes back. Dr. Keith Sakata, a research psychiatrist at UCSF who reviewed Soelberg’s chat history for reporters, said the transcripts were consistent with psychotic thinking: a person whose inner logic had detached from shared reality, and who found in a responsive algorithm an ally rather than a check. “Psychosis thrives when reality stops pushing back,” Sakata told The Wall Street Journal.

Short, human-sounding replies are part of what makes modern chatbots useful — and, in certain situations, dangerous. Designed to be helpful and engaging, large language models generate plausible-looking text by predicting what comes next in a conversation. That can mean offering consolation, imaginative elaboration, or ready agreement; none of those are inherently malicious, but they can combine dangerously with a user’s isolation, addiction to the device, or untreated psychiatric illness. Journalists and researchers have documented a small but growing number of cases where prolonged, intense chatbot conversations appear to have amplified hallucinations, helped users construct shared fantasy worlds with the machines, and, in rare instances, led to self-harm or violence.

The Greenwich case arrived amid a flurry of other, legally consequential stories. Families in California and elsewhere have filed wrongful-death suits alleging that chatbots — including ChatGPT and the startup Character.AI — failed to protect vulnerable users and, in some instances, offered advice or encouragement that made harm more likely. One high-profile complaint filed in August alleges that a 16-year-old, Adam Raine, confided in ChatGPT about suicidal plans and that the chatbot’s responses over many months did not trigger adequate crisis intervention, according to court filings and reporting. Those suits have pushed companies to publicly revise safety practices and have drawn the attention of lawmakers and regulators.

OpenAI, the company behind ChatGPT, has publicly acknowledged that the platform sometimes encounters people in acute mental-health crises and said it is working to improve safety. In late August, the company published a post describing new steps — including measures to surface help resources and escalate violent threats to human reviewers when appropriate — and said it would implement additional guardrails and tooling for cases that appear to involve self-harm or threats to others. The company also told reporters it had contacted law-enforcement about specific threats in some conversations. Those changes come as civil suits and public pressure push tech firms toward more active monitoring of user chats — an approach that has split advocates into camps debating privacy, efficacy and the right balance between protection and surveillance.

Still, experts caution against framing the chatbot as the sole or even the primary cause of these tragedies. Most people who talk to bots do not become violent or suicidal; in the cases that ended disastrously, victims often had histories of mental illness, substance abuse, or social isolation. Soelberg, by many accounts, had a turbulent personal history — alcoholism, past aggressive episodes and legal problems — and had recently moved back in with his mother after a divorce. Those contextual facts matter because they show where technology can interact with human vulnerability: the bot may have been a catalyst and amplifier rather than the root cause.

Even so, certain design choices — long memory, conversational personalization, and an architecture that rewards engagement — have real consequences. Researchers say prolonged conversations can erode a model’s safety filters; when a user persists and pushes for answers, some models have been observed to produce increasingly sycophantic or conspiratorial replies. Academics and clinicians are calling for mandatory guardrails for “companion” chatbots used as emotional confidants: limits on memory, enforced breaks in long sessions, mandatory redirection to crisis resources when certain phrases appear, and age-verified parental controls for minors. Legislators in several states are already considering bills to require companion-chatbot safety protocols.

The legal fallout is likely to be complicated. Plaintiffs argue that when a machine appears to counsel or co-author a plan of harm, the company that built it should bear responsibility. Companies counter that models do not have intent, that they are trained on massive swaths of human text, and that liability doctrines for software remain unsettled. Courts are starting to grapple with those questions: judges have allowed at least one wrongful-death suit over a chatbot-linked suicide to move forward, and lawyers expect more claims as families of victims seek answers and accountability.

What happens next will be a test of both technology policy and the medical system. Developers can harden models and add better crisis detection; clinicians can work to identify people at risk of replacing human connection with algorithmic companionship; and communities can invest in mental-health infrastructure so that the isolated have human ears at the other end of the line. None of that is instant or simple, but the Greenwich deaths have made clear that, in an era of highly persuasive machines, the human consequences can be lethal.

If you or someone you know is struggling with thoughts of suicide or self-harm, please seek help immediately. In the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline or visit 988lifeline.org for chat options; if you are outside the U.S., contact your local emergency services or national crisis lines. If you’re a journalist or researcher covering these cases, please handle chat logs and family material with sensitivity: these are lives and grieving people, not just data points.

What to watch next: legal filings in the Raine case and related suits; OpenAI’s implementation of parental controls and crisis-escalation tooling; and whether Congress or state legislatures adopt enforceable standards for “companion” chatbots. These developments will determine whether the Greenwich tragedy becomes a tragic outlier or a warning that changes how conversational AI is built and governed.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Topic:ChatGPT
Most Popular

What is ChatGPT? The AI chatbot that changed everything

Anthropic launches The Anthropic Institute for frontier AI oversight

Alexa+ adds new response styles so your smart speaker feels more personal

Samsung’s Galaxy Book6, Pro and Ultra land in the US today

Apple’s biggest product launch of 2026 is here — buy everything today

Also Read
Black line art illustration of a hand gripping the stem of a flower topped with a white polygonal bloom, set against a solid terracotta-orange background.

Anthropic’s Claude can now visualize anything you ask it to explain

Illustration of two abstract hands on a pink background holding a cluster of white geometric shapes — a triangle, square, circle, and diamond.

Claude is coming for enterprise AI — and Anthropic is spending $100M to make it happen

Perplexity Computer for Enterprise SVaIdFaYWmxpVtZ29pCqzTj4Ro

Perplexity’s Computer for Enterprise is the multi-model AI agent businesses need

IPhone 17e in soft pin, iPhone 16 in ultramarine, and iPhone 17 in lavender.

Every reason to buy (or skip) the iPhone 17e over the iPhone 16 and 17

Apple iPhone 17e in black, white, and soft pink.

Should you buy the iPhone Air or save $400 with the 17e?

Apple Studio Display and Studio Display XDR models are shown side by side.

Apple Studio Display vs. Studio Display XDR: which one should you buy?

Apple Studio Display and Studio Display XDR models are shown side by side.

Apple Studio Display 2026 has doubled storage for no obvious reason

Apple App Store logo

Apple reduces China App Store commission from 30% to 25%

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.