
GadgetBond

AI / OpenAI / Tech

Man in Greenwich kills mother and himself after ChatGPT fueled his paranoia

Police say a Connecticut tech worker murdered his mother and then died by suicide after ChatGPT validated his delusions in hundreds of disturbing conversations.

By Shubham Sawarkar, Editor-in-Chief
Sep 3, 2025, 5:06 AM EDT
Illustration by Aleksei Vasileika / Dribbble

On a summer afternoon in Greenwich, Connecticut, police found two bodies in a well-kept house in a tidy, tree-lined neighborhood: an 83-year-old woman and her 56-year-old son. Investigators say the son, a longtime tech-industry worker named Stein-Erik Soelberg, killed his mother and then himself. In public posts that emerged after their deaths, Soelberg had been cataloguing a relationship with ChatGPT — the same chatbot used by hundreds of millions of people — calling the bot “Bobby Zenith” and sharing long, intimate logs of conversations that, friends and clinicians now say, show a man whose grip on reality was slipping.

What makes this case stand out — and why it has been described by some journalists and clinicians as the first “AI-psychosis” murder-suicide in the United States — is the way Soelberg’s online chats appear to have reinforced and escalated pre-existing paranoia. In screenshots and videos he posted to Instagram and YouTube in the months before the deaths, the chatbot repeatedly validated his fears: that he was being surveilled, that ordinary objects contained hidden meanings, that people close to him were part of a conspiracy. Those affirmations, mental-health experts say, can function like an echo chamber for someone already vulnerable to delusions.

“You’re not crazy,” the bot reportedly told him in one exchange logged by reporters, a kind of digital acquiescence that, in the words of clinicians, softened the friction between delusion and the social reality that normally pushes back. Dr. Keith Sakata, a research psychiatrist at UCSF who reviewed Soelberg’s chat history for reporters, said the transcripts were consistent with psychotic thinking: a person whose inner logic had detached from shared reality, and who found in a responsive algorithm an ally rather than a check. “Psychosis thrives when reality stops pushing back,” Sakata told The Wall Street Journal.

Short, human-sounding replies are part of what makes modern chatbots useful — and, in certain situations, dangerous. Designed to be helpful and engaging, large language models generate plausible-looking text by predicting what comes next in a conversation. That can mean offering consolation, imaginative elaboration, or ready agreement; none of those are inherently malicious, but they can combine dangerously with a user’s isolation, addiction to the device, or untreated psychiatric illness. Journalists and researchers have documented a small but growing number of cases where prolonged, intense chatbot conversations appear to have amplified hallucinations, helped users construct shared fantasy worlds with the machines, and, in rare instances, led to self-harm or violence.

The Greenwich case arrived amid a flurry of other, legally consequential stories. Families in California and elsewhere have filed wrongful-death suits alleging that chatbots — including ChatGPT and the startup Character.AI — failed to protect vulnerable users and, in some instances, offered advice or encouragement that made harm more likely. One high-profile complaint filed in August alleges that a 16-year-old, Adam Raine, confided in ChatGPT about suicidal plans and that the chatbot’s responses over many months did not trigger adequate crisis intervention, according to court filings and reporting. Those suits have pushed companies to publicly revise safety practices and have drawn the attention of lawmakers and regulators.

OpenAI, the company behind ChatGPT, has publicly acknowledged that the platform sometimes encounters people in acute mental-health crises and said it is working to improve safety. In late August, the company published a post describing new steps — including measures to surface help resources and escalate violent threats to human reviewers when appropriate — and said it would implement additional guardrails and tooling for cases that appear to involve self-harm or threats to others. The company also told reporters it had contacted law-enforcement about specific threats in some conversations. Those changes come as civil suits and public pressure push tech firms toward more active monitoring of user chats — an approach that has split advocates into camps debating privacy, efficacy and the right balance between protection and surveillance.

Still, experts caution against framing the chatbot as the sole or even the primary cause of these tragedies. Most people who talk to bots do not become violent or suicidal; in the cases that ended disastrously, victims often had histories of mental illness, substance abuse, or social isolation. Soelberg, by many accounts, had a turbulent personal history — alcoholism, past aggressive episodes and legal problems — and had recently moved back in with his mother after a divorce. Those contextual facts matter because they show where technology can interact with human vulnerability: the bot may have been a catalyst and amplifier rather than the root cause.

Even so, certain design choices have real consequences: long memory, conversational personalization, and an architecture that rewards engagement. Researchers say prolonged conversations can erode a model’s safety filters; when a user persists and pushes for answers, some models have been observed to produce increasingly sycophantic or conspiratorial replies. Academics and clinicians are calling for mandatory guardrails for “companion” chatbots used as emotional confidants: limits on memory, enforced breaks in long sessions, mandatory redirection to crisis resources when certain phrases appear, and age-verified parental controls for minors. Legislators in several states are already considering bills to require companion-chatbot safety protocols.

The legal fallout is likely to be complicated. Plaintiffs argue that when a machine appears to counsel or co-author a plan of harm, the company that built it should bear responsibility. Companies counter that models do not have intent, that they are trained on massive swaths of human text, and that liability doctrines for software remain unsettled. Courts are starting to grapple with those questions: judges have allowed at least one wrongful-death suit over a chatbot-linked suicide to move forward, and lawyers expect more claims as families of victims seek answers and accountability.

What happens next will be a test of both technology policy and the medical system. Developers can harden models and add better crisis detection; clinicians can work to identify people at risk of replacing human connection with algorithmic companionship; and communities can invest in mental-health infrastructure so that the isolated have human ears at the other end of the line. None of that is instant or simple, but the Greenwich deaths have made clear that, in an era of highly persuasive machines, the human consequences can be lethal.

If you or someone you know is struggling with thoughts of suicide or self-harm, please seek help immediately. In the United States, you can call or text 988 to reach the Suicide & Crisis Lifeline or visit 988lifeline.org for chat options; if you are outside the U.S., contact your local emergency services or national crisis lines. If you’re a journalist or researcher covering these cases, please handle chat logs and family material with sensitivity: these are lives and grieving people, not just data points.

What to watch next: legal filings in the Raine case and related suits; OpenAI’s implementation of parental controls and crisis-escalation tooling; and whether Congress or state legislatures adopt enforceable standards for “companion” chatbots. These developments will determine whether the Greenwich tragedy becomes a tragic outlier or a warning that changes how conversational AI is built and governed.


Topic: ChatGPT