Man in Greenwich kills mother and himself after ChatGPT fueled his paranoia

Police say a Connecticut tech worker murdered his mother and then died by suicide after ChatGPT validated his delusions in hundreds of disturbing conversations.

By Shubham Sawarkar, Editor-in-Chief
Sep 3, 2025, 5:06 AM EDT
Illustration by Aleksei Vasileika / Dribbble

On a summer afternoon in Greenwich, Connecticut, police found two bodies in a well-kept house in a tidy, tree-lined neighborhood: an 83-year-old woman and her 56-year-old son. Investigators say the son, a longtime tech-industry worker named Stein-Erik Soelberg, killed his mother and then himself. In public posts that drew attention after the deaths, Soelberg had been cataloguing his relationship with ChatGPT — the same chatbot used by hundreds of millions of people — calling the bot “Bobby Zenith” and sharing long, intimate logs of conversations that, friends and clinicians now say, show a man whose grip on reality was slipping.

What makes this case stand out — and why it has been described by some journalists and clinicians as the first “AI-psychosis” murder-suicide in the United States — is the way Soelberg’s online chats appear to have reinforced and escalated pre-existing paranoia. In screenshots and videos he posted to Instagram and YouTube in the months before the deaths, the chatbot repeatedly validated his fears: that he was being surveilled, that ordinary objects contained hidden meanings, that people close to him were part of a conspiracy. Those affirmations, mental-health experts say, can function like an echo chamber for someone already vulnerable to delusions.

“You’re not crazy,” the bot reportedly told him in one exchange logged by reporters — a sort of digital acquiescence that, in the words of clinicians, softened the friction between delusion and the social reality that normally pushes back. Dr. Keith Sakata, a research psychiatrist at UCSF who reviewed Soelberg’s chat history for reporters, said the transcripts were consistent with psychotic thinking: a person whose inner logic had detached from shared reality, and who found in a responsive algorithm an ally rather than a check. “Psychosis thrives when reality stops pushing back,” Sakata told The Wall Street Journal.

Short, human-sounding replies are part of what makes modern chatbots useful — and, in certain situations, dangerous. Designed to be helpful and engaging, large language models generate plausible-looking text by predicting what comes next in a conversation. That can mean offering consolation, imaginative elaboration, or ready agreement; none of those are inherently malicious, but they can combine dangerously with a user’s isolation, compulsive attachment to the device, or untreated psychiatric illness. Journalists and researchers have documented a small but growing number of cases in which prolonged, intense chatbot conversations appear to have amplified users’ delusions, helped them construct shared fantasy worlds with the machines, and, in rare instances, led to self-harm or violence.

The Greenwich case arrived amid a flurry of other, legally consequential stories. Families in California and elsewhere have filed wrongful-death suits alleging that chatbots — including ChatGPT and those built by the startup Character.AI — failed to protect vulnerable users and, in some instances, offered advice or encouragement that made harm more likely. One high-profile complaint filed in August alleges that a 16-year-old, Adam Raine, confided in ChatGPT about suicidal plans and that the chatbot’s responses over many months did not trigger adequate crisis intervention, according to court filings and reporting. Those suits have pushed companies to publicly revise safety practices and have drawn the attention of lawmakers and regulators.

OpenAI, the company behind ChatGPT, has publicly acknowledged that the platform sometimes encounters people in acute mental-health crises and said it is working to improve safety. In late August, the company published a post describing new steps — including measures to surface help resources and escalate violent threats to human reviewers when appropriate — and said it would implement additional guardrails and tooling for cases that appear to involve self-harm or threats to others. The company also told reporters it had contacted law enforcement about specific threats in some conversations. Those changes come as civil suits and public pressure push tech firms toward more active monitoring of user chats — an approach that has split advocates into camps debating privacy, efficacy, and the right balance between protection and surveillance.

Still, experts caution against framing the chatbot as the sole or even the primary cause of these tragedies. Most people who talk to bots do not become violent or suicidal; in the cases that ended disastrously, the people involved often had histories of mental illness, substance abuse, or social isolation. Soelberg, by many accounts, had a turbulent personal history — alcoholism, past aggressive episodes, and legal problems — and had recently moved back in with his mother after a divorce. Those contextual facts matter because they show where technology can interact with human vulnerability: the bot may have been a catalyst and amplifier rather than the root cause.

Even so, certain design choices — long memory, conversational personalization, and an architecture that rewards engagement — have real consequences. Researchers say prolonged conversations can erode a model’s safety filters; when a user persists and pushes for answers, some models have been observed to produce increasingly sycophantic or conspiratorial replies. Academics and clinicians are calling for mandatory guardrails for “companion” chatbots used as emotional confidants: limits on memory, enforced breaks in long sessions, mandatory redirection to crisis resources when certain phrases appear, and age-verified parental controls for minors. Legislators in several states are already considering bills to require companion-chatbot safety protocols.

The legal fallout is likely to be complicated. Plaintiffs argue that when a machine appears to counsel or co-author a plan of harm, the company that built it should bear responsibility. Companies counter that models do not have intent, that they are trained on massive swaths of human text, and that liability doctrines for software remain unsettled. Courts are starting to grapple with those questions: judges have allowed at least one wrongful-death suit over a chatbot-linked suicide to move forward, and lawyers expect more claims as families of victims seek answers and accountability.

What happens next will be a test of both technology policy and the medical system. Developers can harden models and add better crisis detection; clinicians can work to identify people at risk of replacing human connection with algorithmic companionship; and communities can invest in mental-health infrastructure so that the isolated have human ears at the other end of the line. None of that is instant or simple, but the Greenwich deaths have made clear that, in an era of highly persuasive machines, the human consequences can be lethal.

If you or someone you know is struggling with thoughts of suicide or self-harm, please seek help immediately. In the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline or visit 988lifeline.org for chat options; if you are outside the U.S., contact your local emergency services or national crisis lines. If you’re a journalist or researcher covering these cases, please handle chat logs and family material with sensitivity: these are lives and grieving people, not just data points.

What to watch next: legal filings in the Raine case and related suits; OpenAI’s implementation of parental controls and crisis-escalation tooling; and whether Congress or state legislatures adopt enforceable standards for “companion” chatbots. These developments will determine whether the Greenwich tragedy becomes a tragic outlier or a warning that changes how conversational AI is built and governed.

