
ChatGPT medical advice linked to rare psychosis case, report says

A man developed rare bromism after following ChatGPT's advice, highlighting the dangers of relying on AI for medical decisions.

A recent medical case has raised concerns about the dangers of relying on artificial intelligence for health advice, after a man developed a rare and largely forgotten condition following guidance from ChatGPT. The report, published in the Annals of Internal Medicine, details how the AI chatbot’s suggestions allegedly contributed to a case of bromide intoxication, or bromism, which can cause severe neuropsychiatric symptoms, including psychosis and hallucinations.

From salt swap to hospitalisation

The incident involved a 60-year-old man who suspected his neighbour was secretly poisoning him. Influenced by reports on the potential harms of sodium chloride – commonly known as table salt – he sought advice from ChatGPT on possible alternatives. Acting on the chatbot’s response, he replaced salt with sodium bromide, a compound whose use in medicine was discontinued decades ago.

Over time, the man developed bromism, a condition rarely seen since the mid-20th century. When he was admitted to hospital, doctors noted that he was extremely thirsty yet refused the water offered to him, preferring instead to distil his own. His restrictive eating and drinking habits, along with mounting paranoia, soon escalated.

“In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability,” the report noted.

Bromism: a condition from another era

Bromism, caused by chronic bromide exposure, was a well-recognised problem in the early 1900s. Bromide salts were historically prescribed for neurological and mental health disorders, especially epilepsy, and were also used in sleep aids. However, prolonged use was found to cause nervous system problems, ranging from delusions and poor coordination to fatigue, tremors and, in severe cases, coma.

By 1975, bromide use in over-the-counter medicines was banned in the United States due to these risks. Today, bromism is considered extremely rare, which makes this recent case particularly striking.

The treating medical team could not access the patient’s original ChatGPT conversation, so they ran their own queries on ChatGPT 3.5. They reported receiving similarly problematic suggestions, including bromide as a salt replacement, without adequate health warnings or follow-up questions.

“When we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do,” the researchers said.

AI in healthcare: promise and pitfalls

This is not the first time ChatGPT has been linked to medical outcomes. Earlier this year, a widely shared story described how a mother used the chatbot to help identify her son’s rare neurological disorder after numerous doctors had been unable to diagnose it. In that case, the advice led to effective treatment.

However, experts stress that AI can only offer safe and accurate recommendations when provided with detailed context and should never replace a proper medical evaluation. A study in the journal Genes found that ChatGPT’s ability to diagnose rare conditions was “very weak”, reinforcing the need for caution.

OpenAI, when contacted by LiveScience about the incident, responded: “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.” The company added that its safety teams work to reduce risks and that its systems are trained to encourage users to seek professional advice.

With the launch of GPT-5, OpenAI has emphasised improvements in safety and accuracy, aiming to produce “safe completions” that guide users away from harmful recommendations. Nonetheless, the fundamental limitation remains: AI cannot reliably assess a patient’s clinical features without direct medical examination. Experts agree that AI may be a valuable tool in a healthcare setting, but only when deployed under the supervision of certified medical professionals.
