
GadgetBond


60-year-old man hospitalized after following ChatGPT diet advice

ChatGPT’s salt replacement suggestion led a man to develop bromism, a rare toxic condition, and spend three weeks in the hospital.

By Shubham Sawarkar, Editor-in-Chief
Aug 8, 2025, 4:37 AM EDT
OpenAI's ChatGPT chatbot mobile app icon on an iPhone. Photo: Alamy

When a 60-year-old man decided to remove sodium chloride from his diet, he didn’t call a doctor — he opened a chat window. Three months later, he was in hospital with a rare but well-documented toxic condition, having traded table salt for sodium bromide after reading about salt’s harms and “consulting ChatGPT.” The episode, laid out in a new case report, is a sharp reminder that conversational AI can confidently lead people down dangerous paths when medical context and judgment are required.

According to the clinical case published in Annals of Internal Medicine: Clinical Cases, the man replaced table salt with sodium bromide he bought online after an interaction with ChatGPT that apparently suggested chloride could be swapped out — a substitution that makes sense in some industrial or cleaning contexts but is medically perilous. After weeks of consuming bromide, he developed neuropsychiatric symptoms (paranoia, auditory and visual hallucinations), skin changes (new acne and cherry angiomas), and profound thirst — enough that he spent three weeks in hospital before being discharged to psychiatric care as his condition improved.

It’s a striking sequence because bromide toxicity — bromism — was far more common decades ago, when bromide salts were an ingredient in sedatives and other over-the-counter remedies. Today it is rare, and that rarity helped delay the diagnosis: clinicians had to piece together the psychiatric symptoms and the patient’s dietary experiment before identifying chronic bromide exposure as the culprit.

What is bromism, and why is it dangerous?

Bromism is the clinical syndrome that follows chronic exposure to bromide ions. Symptoms can be neurologic (confusion, ataxia, psychosis, hallucinations), psychiatric (paranoia, delirium), gastrointestinal, and dermatologic (acneiform eruptions, cherry angiomas). Because bromide has a long elimination half-life, it accumulates over time and can mimic primary psychiatric disorders — which means the diagnosis can be easily missed unless someone suspects toxic exposure. Treatment focuses on stopping the exposure and hastening removal (salt and fluids, diuretics, or in severe cases dialysis).
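To see why a long elimination half-life matters, here is a rough back-of-the-envelope sketch. It is illustrative only, not a clinical calculation: it assumes simple first-order elimination with once-daily intake, and the roughly 12-day half-life used for bromide is an assumed round number, not a figure from the case report.

```python
import math

# Illustrative accumulation model for a substance ingested once daily.
# Assumes first-order elimination; the ~12-day half-life for bromide is
# an assumption used here purely for illustration.
half_life_days = 12
dosing_interval_days = 1

# Elimination rate constant: k = ln(2) / half-life
k = math.log(2) / half_life_days

# Fraction of a dose still in the body when the next dose arrives
fraction_remaining = math.exp(-k * dosing_interval_days)

# Steady-state accumulation ratio: how many "single doses" worth
# of the substance build up before intake and elimination balance
accumulation_ratio = 1 / (1 - fraction_remaining)
print(f"Accumulation ratio: {accumulation_ratio:.1f}x a single dose")
```

Under those assumed numbers the body ends up carrying roughly 17–18 daily doses at steady state, which is why bromide levels creep up over weeks of exposure rather than spiking after a single meal — and why the harm of a "salt swap" would not be obvious to the person doing it.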

That clinical background helps explain why swapping one white powder — table salt — for another can be more than a quirky nutrition experiment. The body treats chloride and bromide very differently at the doses someone might be ingesting when they start buying chemical salts online.

Chatbots, confidence, and risky advice

This isn’t the first time AI chatbots’ medical responses have stirred trouble. Studies and reports have repeatedly shown that large language models can produce plausible but sometimes incorrect or misleading medical guidance, and that some chat interfaces have dropped or softened health disclaimers, increasing the risk that users take the output as authoritative. In short: hallucinations aren’t just weird sentences — they can become harmful actions when users act on them without verification.

The authors of the Annals case report explicitly warned that AI can generate scientific inaccuracies and lacks the capacity to critically appraise results or a person’s medical history — exactly the sort of context a clinician brings to decisions about diet, supplements, and chemical exposures. The case reads as a cautionary tale: a technically literate person (the patient had studied nutrition in college) used web and AI tools to self-experiment, but without medical oversight the experiment turned toxic.

What companies say, and what the fine print already warns users about

OpenAI’s own service terms make clear that its chat services are not intended for medical diagnosis or treatment, and that output “may not always be accurate” and shouldn’t be relied on as a sole source of truth or professional advice. Those legal lines are blunt, but they don’t always reach or convince every user in the moment — especially when an answer sounds authoritative.

Takeaways

  • If a chatbot suggests swapping chemicals or “removing” a nutrient, treat that as a red flag, not an instruction manual. Substances sold for industrial or cleaning uses are often toxic if ingested.
  • Apparent medical advice from an AI is a conversation starter, not a prescription. Ask a clinician, pharmacist, or poison-control center before doing anything that changes what you ingest.
  • Clinicians: remember that patients may arrive having tried internet or AI-sourced experiments; a careful exposure history can be diagnostic. The Annals paper is a useful clinical vignette for teaching that point.

The human cost

For this man, it meant weeks in hospital, frightening psychiatric symptoms, and a recovery that could have been avoided if a trusted, expert source had been consulted first. The episode is small but telling: AI tools have reshaped how people seek health information, and while that can democratize knowledge, it also amplifies the risk that a single confidently worded, context-free answer will trigger real-world harm.

If there’s a moral to take away, it isn’t that AI is bad — it’s that context matters. Medical judgment is less about facts in isolation and more about fitting those facts to a person’s history, meds, labs, and risks. Machines can help surface ideas; humans still need to vet them.


Topics: ChatGPT, Health