A recent medical case has raised concerns about the dangers of relying on artificial intelligence for health advice, after a man developed a rare and largely forgotten condition following guidance from ChatGPT. The report, published in the Annals of Internal Medicine, details how the AI chatbot’s suggestions allegedly contributed to a case of bromide intoxication, or bromism, which can cause severe neuropsychiatric symptoms, including psychosis and hallucinations.
From salt swap to hospitalisation
The incident involved a 60-year-old man who suspected his neighbour was secretly poisoning him. Influenced by reports on the potential harms of sodium chloride – commonly known as table salt – he sought advice from ChatGPT on possible alternatives. Acting on the chatbot’s response, he replaced salt with sodium bromide, a compound whose use in medicine was discontinued decades ago.
Over time, the man developed bromism, a condition rarely seen since the mid-20th century. When he was admitted to hospital, doctors noted that he was extremely thirsty yet refused the water offered to him, preferring instead to distil his own. His restrictive eating and drinking habits, along with mounting paranoia, soon escalated.
“In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability,” the report noted.
Bromism: a condition from another era
Bromism, caused by chronic bromide exposure, was a well-known problem in the early 1900s. Bromide salts were historically prescribed for neurological and mental health disorders, especially epilepsy, and were also used in sleep aids. However, prolonged use was found to trigger nervous system issues, ranging from delusions and poor coordination to fatigue, tremors, and even coma in severe cases.
By 1975, bromide use in over-the-counter medicines was banned in the United States due to these risks. Today, bromism is considered extremely rare, which makes this recent case particularly striking.
The treating medical team could not access the patient’s original ChatGPT conversation, so they ran their own queries on ChatGPT 3.5. They reported receiving similarly problematic suggestions, including bromide as a salt replacement, with no adequate health warning and no follow-up questions.
“When we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do,” the researchers said.
AI in healthcare: promise and pitfalls
This is not the first time ChatGPT has been linked to a notable medical outcome. Earlier this year, a widely shared story described how a mother used the chatbot to help identify her son’s rare neurological disorder after numerous doctors had been unable to diagnose it. In that case, the advice led to effective treatment.
However, experts stress that AI can only offer safe and accurate recommendations when provided with detailed context and should never replace a proper medical evaluation. A study in the journal Genes found that ChatGPT’s ability to diagnose rare conditions was “very weak”, reinforcing the need for caution.
OpenAI, when contacted by LiveScience about the incident, responded: “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.” The company added that its safety teams work to reduce risks and that its models are trained to encourage users to seek professional advice.
With the launch of GPT-5, OpenAI has emphasised improvements in safety and accuracy, aiming to produce “safe completions” that guide users away from harmful recommendations. Nonetheless, the fundamental limitation remains: AI cannot reliably assess a patient’s clinical features without direct medical examination. Experts agree that AI may be a valuable tool in a healthcare setting, but only when deployed under the supervision of certified medical professionals.