An alarming tale emerged recently, highlighting one of the perils associated with AI technology. A 60-year-old man ended up in the emergency room after heeding a “nutritional tip” from ChatGPT. Allegedly, the chatbot advised him to swap his regular table salt for sodium bromide, a toxic chemical.
A “Personal Experiment” Goes Awry
This incident was detailed in a case study published in the Annals of Internal Medicine. Attempting to cut down on his chloride intake (found in table salt), a 60-year-old man decided to ask ChatGPT for a substitute.
According to doctors, the chatbot recommended sodium bromide. The man purchased this substance online and used it in place of salt for three months. He was eventually hospitalized with severe psychiatric symptoms: paranoia, hallucinations, and confusion.
“Bromism”: A Poisoning from the Past
The medical diagnosis was clear: “bromism.” This is chronic bromide poisoning, a condition that was common in the early 20th century, when the substance was used in many sedatives, but has become exceedingly rare today.
Sodium bromide is indeed toxic; it is found in certain pesticides and swimming pool cleaning agents.
OpenAI’s Response: “Read Our Terms of Use”
When contacted by the American press, OpenAI responded tersely. A spokesperson simply directed journalists to the terms of use for ChatGPT, which state that the service should not be used for medical advice.
This seems like a weak defense, especially since the doctors involved in the case were later able to elicit a similar suggestion from the chatbot, without any safety warnings being triggered.
What Can We Say?
This case is alarming. It shows that, despite numerous warnings, many people still treat these AI tools as reliable sources of information, even for sensitive topics like health.
This incident exposes a fundamental flaw in generative AI: a lack of contextual awareness. The chatbot may have surfaced a chemically correct fact (bromide is a halide, like chloride) that was nutritionally disastrous, with no capacity to discern the difference. It’s an important reminder that behind the conversational interface there is no “understanding,” just an algorithm. Have you ever used ChatGPT for health or nutrition questions?
