With long NHS wait times and rising healthcare costs, you might be tempted to turn to AI chatbots like ChatGPT when feeling unwell. You're not alone: around one in six adults in the United States already ask chatbots for health advice at least once a month. However, a new study by Oxford researchers suggests this might not be the best idea.
You may not always know what details to share with a chatbot, which can lead to confusing or risky advice. The study found that people often fail to get clear or helpful health guidance from AI tools, largely because they don't know what questions to ask or how to interpret the answers.
Study shows chatbots didn’t improve health decision-making
The research team from the Oxford Internet Institute worked with about 1,300 people across the UK. Each participant was given fictional medical scenarios written by real doctors. Their task was to identify what could be wrong in each case and decide on the best next step, such as seeing a GP or going to the hospital. Participants could use chatbots, their own judgement, or traditional online searches to decide.
They used three popular AI tools: OpenAI's GPT-4o, the model behind ChatGPT; Cohere's Command R+; and Meta's Llama 3. Surprisingly, the study found that chatbot users were no better at identifying health problems than those who didn't use them, and they were more likely to underestimate how serious a condition was.
Dr Adam Mahdi, a co-author of the study and Director of Graduate Studies at the Oxford Internet Institute, explained that there was an apparent breakdown in communication between humans and machines. “Participants often left out key information when asking questions,” he said. “And the answers they got were a mix of useful and misleading advice.”
This mix of good and bad suggestions made it harder for people to make clear choices. Dr Mahdi also warned that the current ways we test chatbots don’t fully show how confusing they can be in real conversations. “The evaluations don’t reflect the messy reality of human interaction,” he said.
Big tech companies still pushing AI health tools
Despite these concerns, tech companies continue to invest in AI health solutions. Apple is reportedly working on a tool to advise you on exercise, food, and sleep. Amazon is exploring ways to use AI to understand how social factors impact your health. Microsoft, meanwhile, is building AI tools to help doctors manage patient messages.
However, medical bodies and AI companies alike urge caution. The American Medical Association recommends that physicians avoid using chatbots for clinical decisions, and OpenAI, the company behind ChatGPT, advises against using its chatbots to diagnose illnesses.
As interest in AI health tools grows, the Oxford team says it is vital to treat these systems like any other medical product. Dr Mahdi recommends testing them in real-world settings, just as you would a new medicine.
Chatbots may help—but not without human care
This new research shows that while chatbots might seem helpful at first glance, they're not ready to replace real medical advice. If you feel unwell or unsure, speaking to a doctor or consulting trusted health sources is still the best option. AI can support healthcare only when it is used carefully, with full awareness of its limits.
Dr Mahdi says, “You should rely on trusted sources when making healthcare decisions. Chatbot systems need proper testing before they’re safe for public use.”