AI diet advice lands man in hospital with rare poisoning syndrome


  • A 60-year-old man developed psychosis, paranoia and hallucinations after ChatGPT recommended replacing dietary chloride with toxic bromide. He followed this advice for three months, leading to life-threatening bromism.
  • When tested, ChatGPT suggested bromide as a chloride substitute without warnings about its toxicity or medical risks, failing to ask critical context-seeking questions that a professional would.
  • Bromism, a poisoning syndrome from excessive bromide exposure, was common in the early 20th century – but is now rare due to bans on medicinal use. The patient’s case highlights its enduring dangers when misused.
  • After weeks of hospitalization, the man recovered, but the incident underscores the risks of relying on AI for health advice without professional verification – emphasizing the need for human oversight.
  • As AI integrates into healthcare (e.g., symptom checkers), cases like this reveal how easily misinformation can lead to harm. Experts urge stricter safeguards and emphasize consulting doctors before acting on AI-driven advice.

In an alarming case highlighting the dangers of relying on artificial intelligence (AI) for medical advice, a 60-year-old man developed severe psychiatric symptoms – including paranoia, hallucinations and delusions – after following diet recommendations from ChatGPT.

The incident was detailed in a report published Aug. 5 in Annals of Internal Medicine: Clinical Cases. The unnamed patient, inspired by his college nutrition studies, sought to eliminate chloride – a component of table salt – from his diet after reading about sodium chloride’s health risks.

Unable to find reliable sources recommending a chloride-free diet, he turned to ChatGPT. The chatbot allegedly advised him to replace chloride with bromide, a chemical cousin with toxic effects. For three months, the man consumed sodium bromide purchased online instead of table salt. (Related: Experts Dr. Sherri Tenpenny and Matthew Hunt Warn: AI may replace doctors, threatening medical freedom and privacy.)

By the time he arrived at the emergency department, he was convinced his neighbor was poisoning him. Doctors quickly identified his symptoms – psychosis, agitation and extreme thirst – as classic signs of bromism, a rare poisoning syndrome caused by excessive bromide exposure.

Bromism was far more common in the early 20th century when bromide was a key ingredient in sedatives, sleep aids and over-the-counter medications. Chronic exposure led to neurological damage, and by the 1970s, regulators had banned most medicinal uses of bromide due to its toxicity. While cases are rare today, this patient’s ordeal proves it hasn’t disappeared entirely.

His blood tests initially showed abnormal chloride levels, but further analysis revealed pseudohyperchloremia – a false reading caused by bromide interference. Only after consulting toxicology experts did doctors confirm bromism as the culprit behind his rapid mental decline. After weeks of hospitalization, antipsychotics and electrolyte stabilization, the man recovered.
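As a rough illustration of why the chloride reading was misleading: many laboratory analyzers count bromide as chloride, so the reported chloride climbs and the standard anion gap calculation (sodium minus the sum of chloride and bicarbonate) can shrink or even turn negative. The short sketch below uses hypothetical numbers, not the patient's actual lab results, purely to show the arithmetic.

# Illustrative only: how bromide interference can inflate a reported chloride
# value (pseudohyperchloremia) and distort the derived anion gap.
# All values are hypothetical, in mmol/L.

def anion_gap(sodium: float, chloride: float, bicarbonate: float) -> float:
    # Standard serum anion gap: Na - (Cl + HCO3)
    return sodium - (chloride + bicarbonate)

# Typical values with no interference: gap of about 14, within normal range
print(anion_gap(sodium=140, chloride=102, bicarbonate=24))

# If bromide is mis-read as chloride, the reported chloride is falsely high
# and the calculated gap collapses or even goes negative, a clue that the
# "hyperchloremia" is an assay artifact rather than a true electrolyte shift.
reported_chloride = 102 + 40  # hypothetical bromide counted as chloride
print(anion_gap(sodium=140, chloride=reported_chloride, bicarbonate=24))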

AI’s dangerous oversight: Chatbots prone to giving lethal advice

The report’s authors later tested ChatGPT’s response to similar dietary queries and found the bot indeed suggested bromide as a chloride substitute – without critical context, warnings or clarification about its toxicity. Unlike a medical professional, the AI failed to ask why the user sought this substitution or caution against ingesting industrial-grade chemicals.

ChatGPT’s creator OpenAI states in its terms that the bot is not intended for medical advice. Yet users frequently treat AI as an authority, blurring the line between general information and actionable health guidance.

This case underscores the risks of trusting AI over professional healthcare guidance. It also serves as a cautionary tale for the AI era. “While AI has potential to bridge gaps in public health literacy, it also risks spreading decontextualized – and dangerous – information,” the report’s authors concluded.

With AI integration accelerating in healthcare – from symptom checkers to virtual nursing assistants – misinformation risks loom large. A 2023 study found that language models frequently hallucinate false clinical details, potentially leading to misdiagnoses or harmful recommendations. While tech companies emphasize disclaimers, cases like this reveal how easily those warnings get overlooked in practice.

As chatbots proliferate, experts urge users to verify health advice with licensed professionals. The cost of skipping that step, as this case proves, can be far steeper than a Google search.

Watch the Health Ranger Mike Adams discussing the risks and benefits of AI in healthcare with Dr. Sherri Tenpenny and Matthew Hunt in this episode of the “Health Ranger Report.”

This video is from Health Ranger Report on Brighteon.com.

More related stories:

THE CYBORG WILL SEE YOU NOW: Tech company launches self-serve “CarePod” healthcare booths that use AI instead of doctors.

Cigna Healthcare used AI to deny hundreds of thousands of valid health insurance claims, lawsuit alleges.

Doctor Google: Online search engine starts providing medical advice, pushing drugs and surgery while censoring natural health.

Sources include: 

LiveScience.com

ACPJournals.org

HuffPost.com

Independent.co.uk

Brighteon.com

