Saturday, 29 November 2025

Oxford study warns about relying on chatbots for medical guidance

An Oxford study says many struggle to get useful health advice from AI chatbots, raising concerns over trust and decision-making accuracy.

With long NHS wait times and rising healthcare costs, you might be tempted to turn to AI chatbots like ChatGPT when feeling unwell. You’re not alone—around one in six adults in the United States already ask chatbots for health advice at least once a month. However, a new study by Oxford researchers shows this might not be the best idea.

You may not always know what details to share with a chatbot, which can lead to confusing or risky advice. A recent study shows that people often fail to get clear or helpful health guidance from AI tools, mainly because they don’t know how to ask the right questions or understand the answers.

Study shows chatbots didn’t improve health decision-making

The research team from the Oxford Internet Institute worked with about 1,300 people across the UK. Each participant was given fictional medical situations created by real doctors. Their task was to identify what could be wrong in each case and decide on the best next step—seeing a GP or going to the hospital. Participants could use chatbots, their judgement, or traditional online searches to decide.

Participants used three popular AI tools: ChatGPT’s latest model, GPT-4o, Cohere’s Command R+, and Meta’s Llama 3. Surprisingly, the study found that chatbot users were no better at identifying health problems than those who relied on their own judgement or traditional online searches. They were also more likely to misjudge how serious a condition was.

Dr Adam Mahdi, a co-author of the study and Director of Graduate Studies at the Oxford Internet Institute, explained that there was an apparent breakdown in communication between humans and machines. “Participants often left out key information when asking questions,” he said. “And the answers they got were a mix of useful and misleading advice.”

This mix of good and bad suggestions made it harder for people to make clear choices. Dr Mahdi also warned that the current ways we test chatbots don’t fully show how confusing they can be in real conversations. “The evaluations don’t reflect the messy reality of human interaction,” he said.

Big tech companies still pushing AI health tools

Despite these concerns, tech companies continue to invest in AI health solutions. Apple is reportedly working on a tool to advise you on exercise, food, and sleep. Amazon is exploring ways to use AI to understand how social factors impact your health. Microsoft, meanwhile, is building AI tools to help doctors manage patient messages.

However, many experts, including the American Medical Association, believe AI shouldn’t be used for serious health decisions. OpenAI, the company behind ChatGPT, advises against using its chatbots to diagnose illnesses.

As interest in AI health tools grows, the Oxford team says it is vital to treat these systems like any other medical product. Dr Mahdi recommends testing them in real-life settings, just as you would a new medicine.

Chatbots may help—but not without human care

This new research shows that while chatbots might seem helpful at first glance, they are not ready to replace real medical advice. If you feel unwell or unsure, speaking to a doctor or consulting trusted health sources is still best. AI can support healthcare only when it is used carefully and with full awareness of its limits.

Dr Mahdi says, “You should rely on trusted sources when making healthcare decisions. Chatbot systems need proper testing before they’re safe for public use.”
