
ChatGPT medical advice linked to rare psychosis case, report says

A man developed the rare condition bromism after following ChatGPT's advice, highlighting the dangers of relying on AI for medical decisions.

A recent medical case has raised concerns about the dangers of relying on artificial intelligence for health advice, after a man developed a rare and largely forgotten condition following guidance from ChatGPT. The report, published in the Annals of Internal Medicine, details how the AI chatbot’s suggestions allegedly contributed to a case of bromide intoxication, or bromism, which can cause severe neuropsychiatric symptoms, including psychosis and hallucinations.

From salt swap to hospitalisation

The incident involved a 60-year-old man who suspected his neighbour was secretly poisoning him. Influenced by reports on the potential harms of sodium chloride – commonly known as table salt – he sought advice from ChatGPT on possible alternatives. Acting on the chatbot’s response, he replaced salt with sodium bromide, a compound whose use in medicine was discontinued decades ago.

Over time, the man developed bromism, a condition rarely seen since the mid-20th century. When he was admitted to the hospital, doctors noted that he was extremely thirsty yet refused the water offered to him, preferring instead to distil his own. His restrictive eating and drinking habits, along with his mounting paranoia, soon escalated.

“In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability,” the report noted.

Bromism: a condition from another era

Bromism, caused by chronic bromide exposure, was a well-recognised condition in the early 1900s. Bromide salts were historically prescribed for neurological and mental health disorders, especially epilepsy, and were also used in sleep aids. However, prolonged use was found to trigger nervous system problems, ranging from delusions and poor coordination to fatigue, tremors, and even coma in severe cases.

By 1975, bromide use in over-the-counter medicines was banned in the United States due to these risks. Today, bromism is considered extremely rare, which makes this recent case particularly striking.

The treating medical team could not directly access the patient’s ChatGPT conversation; however, they ran their own tests on ChatGPT 3.5. They reported receiving similarly problematic suggestions, including bromide as a salt replacement, without adequate health warnings or follow-up questions.

“When we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do,” the researchers said.

AI in healthcare: promise and pitfalls

This is not the first time ChatGPT has been linked to a significant medical outcome. Earlier this year, a widely shared story described how a mother used the chatbot to help identify her son’s rare neurological disorder after numerous doctors had been unable to diagnose it. In that case, the advice led to effective treatment.

However, experts stress that AI can only offer safe and accurate recommendations when provided with detailed context and should never replace a proper medical evaluation. A study in the journal Genes found that ChatGPT’s ability to diagnose rare conditions was “very weak”, reinforcing the need for caution.

OpenAI, when contacted by LiveScience about the incident, responded: “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.” The company added that its safety teams work to reduce risks and that its models are trained to encourage users to seek professional input.

With the launch of GPT-5, OpenAI has emphasised improvements in safety and accuracy, aiming to produce “safe completions” that guide users away from harmful recommendations. Nonetheless, the fundamental limitation remains: AI cannot reliably assess a patient’s clinical features without direct medical examination. Experts agree that AI may be a valuable tool in a healthcare setting, but only when deployed under the supervision of certified medical professionals.
