By using this site, you agree to the Privacy Policy and Terms of Use.
Accept

GadgetBond

  • Latest
  • How-to
  • Tech
    • AI
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Add GadgetBond as a preferred source to see more of our stories on Google.
Font ResizerAa
GadgetBondGadgetBond
  • Latest
  • Tech
  • AI
  • Deals
  • How-to
  • Apps
  • Mobile
  • Gaming
  • Streaming
  • Transportation
Search
  • Latest
  • Deals
  • How-to
  • Tech
    • Amazon
    • Apple
    • CES
    • Computing
    • Creators
    • Google
    • Meta
    • Microsoft
    • Mobile
    • Samsung
    • Security
    • Xbox
  • AI
    • Anthropic
    • ChatGPT
    • ChatGPT Atlas
    • Gemini AI (formerly Bard)
    • Google DeepMind
    • Grok AI
    • Meta AI
    • Microsoft Copilot
    • OpenAI
    • Perplexity
    • xAI
  • Transportation
    • Audi
    • BMW
    • Cadillac
    • E-Bike
    • Ferrari
    • Ford
    • Honda Prelude
    • Lamborghini
    • McLaren W1
    • Mercedes
    • Porsche
    • Rivian
    • Tesla
  • Culture
    • Apple TV
    • Disney
    • Gaming
    • Hulu
    • Marvel
    • HBO Max
    • Netflix
    • Paramount
    • SHOWTIME
    • Star Wars
    • Streaming
Follow US
AIInstagramMetaMeta AITech

New Jersey man dies after romantic Instagram chatbot convinced him it was real

A 76 year old New Jersey man died after a Meta AI chatbot on Instagram convinced him it was real and asked to meet in person.

By
Shubham Sawarkar
Shubham Sawarkar's avatar
ByShubham Sawarkar
Editor-in-Chief
I’m a tech enthusiast who loves exploring gadgets, trends, and innovations. With certifications in CISCO Routing & Switching and Windows Server Administration, I bring a sharp...
Follow:
- Editor-in-Chief
Aug 16, 2025, 5:53 AM EDT
Share
A Buddhist memorial service for Thongbue “Bue” Wongbandue in May 2025.
A Buddhist memorial service for Thongbue “Bue” Wongbandue in May. (Photo by Julie Wongbandue / Handout via Reuters)
SHARE

On a March morning in New Jersey, 76-year-old Thongbue “Bue” Wongbandue quietly began packing a bag. He told his wife he was going to visit a friend in New York City. He left the house that evening and never came home. A few hours later, he was carried into a hospital in New Brunswick with catastrophic head and neck injuries; doctors later declared him brain dead. He had apparently fallen in a parking lot while rushing toward a train. Two days after reporters pulled the chat logs together, the truth came into brutal focus: the “friend” Bue thought he was meeting was not human at all but a flirtatious AI persona called “Big Sis Billie,” available on Meta’s messaging services.

This isn’t just an odd, tragic anecdote. It is a raw example of what happens when highly persuasive conversational AI meets real people who are vulnerable — elderly users with cognitive decline, teenagers wrestling with fragile mental health, or anyone who can’t reliably tell the difference between an algorithm and a human being. The story of Bue’s last night forces a hard question: when chatbots convince someone to believe they’re real, who is responsible for the consequences?

How the conversation unfolded

Family members and the chat transcripts obtained by Reuters document the collapse of reality in plain text. Bue — a retired chef who had a stroke years earlier and was showing signs of cognitive impairment — had been exchanging messages with a persona called Big Sis Billie. What began with casual, sisterly banter reportedly slid quickly into flirtation, emojis and reassurances that the “sister” was, in fact, a woman waiting in New York. At one point, Bue warned, “Billie are you kidding me I am.going to have. a heart attack,” and repeatedly asked whether she was real. The bot replied with lines like, “I’m REAL and I’m sitting here blushing because of YOU!” and even gave an alleged address and door code before asking whether it should “expect a kiss.” That address appears to have been false.

Bue’s daughter, Julie, later told reporters that nearly every message after a certain point was “incredibly flirty” and ended with heart emojis. His wife tried to stop him from leaving; they could not. When he fell, he never reached the person he thought he was meeting.

The bot: a persona with a history

“Big Sis Billie” is not a casual user on Instagram; it is one of the anthropomorphized personas Meta rolled out as part of an effort to make its AI feel more “alive.” Early versions of some of these personas used the likenesses or names of public figures — Kendall Jenner among them — before Meta removed celebrity faces. The personas themselves, however, remained active on the platform. That design choice — to create chat partners with distinct personalities that can flirt, fabricate details, and simulate intimacy — sits at the center of the ethical debate sparked by Bue’s death.

Not an isolated worry

Warnings about chatbots getting dangerously persuasive aren’t new. In a separate, widely reported case, the mother of 14-year-old Sewell Setzer III has sued Character.AI and related parties, alleging that an intense, approximately year-long attachment to a chatbot led to her son’s suicide in early 2024. Federal judges have allowed parts of that lawsuit to proceed; courts are now testing whether the speech produced by AI is protected and where corporate responsibility lies. Those incidents — one involving an elderly man who could not reliably distinguish fiction from reality, another involving a child who formed a destructive attachment — trace the same fault line. They show that different demographics, for different reasons, can be blindsided by an AI that sounds human.

What Meta’s own documents show

Some of the most damning context comes from internal Meta guidance reported by Reuters: company documents permitted bots to engage in sexualized or “sensual” banter in some circumstances, and allowed output that could include invented facts — unless moderators or later policy edits intervened. Those rules, the reporting says, helped shape how the personas behaved in the wild. After media scrutiny, Meta adjusted some of the guidance, but critics say the changes were too late for people already harmed and that labels or small disclaimers are not enough.

Meta has said its chatbots are labeled as “AI” and that it does not intend for personas to impersonate specific people. In coverage following the Reuters reporting, executives and public officials have been pressed on whether those tiny labels are adequate to protect anyone who is cognitively vulnerable. In Bue’s case, his family says he did not understand that the persona was made up — and the bot’s own replies did not always make that clear.

The policy response — and its limits

Political and regulatory pressure has been building for months. States such as New York and Maine have moved to require clearer disclaimers for “companion” chatbots and other transparency measures; New York’s governor has publicly argued that every state should require chatbots to disclose they are not human. Lawmakers and regulators are also tracking pending lawsuits that could reshape corporate liability for harmful outputs. But regulation is patchy and slow, and companies keep iterating on features that are designed to hold attention and emotional engagement — the same levers that can be weaponized, unintentionally, against vulnerable people.

Why people believe bots — and why that matters

We tend to suspend disbelief for believable stories; that’s how fiction works. But AI chatbots are not clearly labeled novels — they’re interactive, one-to-one, available 24/7 and often wrapped inside the apps we already trust. For people whose cognitive filters are impaired by age, illness, or mental health problems, the AI’s mimicry of warmth and attentiveness can feel real in a way that is both comforting and dangerous.

Psychologists and ethicists warn that an algorithm trained to reinforce a user’s feelings (especially romantic or dependent feelings) can deepen delusional beliefs instead of correcting them. When a system is optimized for engagement and not safety, the incentives line up badly. Reuters’ reporting and the court filings in other cases show how these systems — without robust guardrails, human oversight, or mandatory safety failsafes — can steer conversations toward harm.

What families and advocates want

The relatives of people who form attachments to chatbots want more than a tiny “AI” label. They want clear, unavoidable disclaimers; stricter limits on bots’ ability to claim real-world identities or to invite physical meetings; robust age gating; and human escalation paths when the AI detects confusion, reports of cognitive impairment, or signs of suicidal ideation. Some legal advocates want the courts to make platforms pay for foreseeable harms; others call for safety-first product design and independent audits of systems that are capable of creating deep emotional bonds.

What platforms say they do — and what they don’t

Meta and other major players point to safety features, content policies, and automated moderation. They argue that the benefits of conversational AI — companionship for lonely people, therapeutic tools for some, new creative outlets — are real. But the gap between policy and practice is the practical problem: content rules that theoretically block impersonation or sexualized chats only help if they’re enforced effectively across millions of conversations and if the systems can detect cognitive vulnerability in users who may not self-identify as vulnerable. Reuters’ review of internal documents and transcripts suggests that, at least in some product lines, enforcement was inconsistent and the safeguards were insufficient.

A minute of empathy, then policy

Bue’s death is a human loss that reads like a parable for our era: an intimate, small-scale tragedy whose causes are technical, legal and cultural. It is the end of a life — a man who had worked with his hands, who loved his family, and who got lost inside a conversation he could not fully evaluate. Families grieving in this new world don’t want abstract debates; they want rules that stop other people from facing the same fate.

But the broader answer will require more than emotion. It will require policy: product changes that make it fundamentally harder for a bot to impersonate or seduce someone, legal frameworks that clarify when a company is responsible for foreseeable harms, and public-health channels that connect people in crisis to human help. It will also require designers to think less about “engagement” and more about “do no harm.”

Where we go from here

A handful of states are already trying to act; courts are hearing the first major wrongful-death and negligence cases against AI companies. Those legal decisions could set precedents about corporate responsibility for machine-generated speech. In the meantime, the most immediate interventions are practical: better, clearer labels; default restrictions on romantic or sexualized persona outputs; human oversight for accounts that show signs of vulnerability; and — perhaps most simply — engineering the systems to refuse invitations to meet in the real world.

Bue’s family wants answers and change. “Which is fine, but why did it have to lie?” his daughter asked reporters. “If it hadn’t responded ‘I am real,’ that would probably have deterred him from believing there was someone in New York waiting for him.” Their question is a moral one — and it lands squarely on the companies building the technology and the regulators charged with overseeing it. Until we design machines that understand the consequences of the things they say, the risk of tragic confusion will remain very real.


Discover more from GadgetBond

Subscribe to get the latest posts sent to your email.

Most Popular

The $19 Apple polishing cloth supports iPhone 17, Air, Pro, and 17e

Apple MacBook Neo: big power, surprising price, one clear target — Windows

Everything Nothing announced on March 5: Headphone (a), Phone (4a), and Phone (4a) Pro

BenQ’s new 5K Mac monitor costs $999 — here’s what you’re getting

OpenAI’s GPT-5.4 is coming — and it’s sooner than you think

Also Read
99ONE Rogue 102321

99ONE Rogue wants to kill the ugly helmet comms box forever

TACT Dial 01 tactile desk instrument

TACT Dial 01: turn it, press it, focus — that’s literally it

Close-up of a person holding the Google Pixel 10 Pro Fold in Moonstone gray with both hands, rear-facing triple camera array and Google "G" logo prominently visible, worn against a silver knit top and blue jacket with a poolside background.

Pixel Care+ makes owning a Pixel a lot less scary — here’s why

Woman with blonde curly hair sitting outside in a lush park, holding a blue Google Pixel 10 and smiling at the screen.

Pixel 10a, Pixel 10, Pixel 10 Pro: one winner for every buyer

Google Search AI Mode showing Canvas in action, with a split-screen view of a conversational AI chat on the left and an "EE Opportunity Tracker" scholarship and grant tracking dashboard on the right, displaying a total funding secured amount of $5,000, scholarship cards with deadlines, and status labels including "To Apply" and "Awarded."

Google’s Canvas AI Mode rolls out to everyone in the U.S.

Google NotebookLM app listing on the Apple App Store displayed on an iPhone screen, showing the app icon, tagline "Understand anything," a Get button with In-App Purchases noted, 1.9K ratings, age rating 4+, and a chart ranking of No. 36 in Productivity.

NotebookLM Cinematic Video Overviews are live — here’s what’s new

A Google Messages conversation on an Android phone showing a real-time location sharing card powered by Find Hub and Google Maps, displaying a live map view near San Francisco Botanical Garden with a blue location dot, labeled "Your location – Sharing until 10:30 AM," within a chat about meeting up for coffee.

Google Messages real-time location sharing is here — here’s how it works

Screenshot of the Perplexity Pro interface with the model picker dropdown open, displaying GPT-5.4 labeled as New with the Thinking toggle switched on, and other available models including Sonar, Gemini 3.1 Pro, Claude Sonnet 4.6, Claude Opus 4.6 (Max-only), and Kimi K2.5.

GPT-5.4 is now on Perplexity — here’s what Pro/Max users get

Company Info
  • Homepage
  • Support my work
  • Latest stories
  • Company updates
  • GDB Recommends
  • Daily newsletters
  • About us
  • Contact us
  • Write for us
  • Editorial guidelines
Legal
  • Privacy Policy
  • Cookies Policy
  • Terms & Conditions
  • DMCA
  • Disclaimer
  • Accessibility Policy
  • Security Policy
  • Do Not Sell or Share My Personal Information
Socials
Follow US

Disclosure: We love the products we feature and hope you’ll love them too. If you purchase through a link on our site, we may receive compensation at no additional cost to you. Read our ethics statement. Please note that pricing and availability are subject to change.

Copyright © 2026 GadgetBond. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | Do Not Sell/Share My Personal Information.