{"id":10035,"date":"2026-04-18T23:04:43","date_gmt":"2026-04-18T23:04:43","guid":{"rendered":"https:\/\/placedesnations.org\/index.php\/2026\/04\/18\/should-you-really-trust-health-advice-from-an-ai-chatbot\/"},"modified":"2026-04-18T23:04:43","modified_gmt":"2026-04-18T23:04:43","slug":"should-you-really-trust-health-advice-from-an-ai-chatbot","status":"publish","type":"post","link":"https:\/\/placedesnations.org\/index.php\/2026\/04\/18\/should-you-really-trust-health-advice-from-an-ai-chatbot\/","title":{"rendered":"Should you really trust health advice from an AI chatbot?"},"content":{"rendered":"<p>For the past year, Abi has been using ChatGPT \u2013 one of the best known AI chatbots \u2013 to help manage her health.<\/p>\n<p>The appeal is clear. It can feel impossible to get hold of a GP and artificial intelligence is always ready to answer your questions. And AI has comfortably passed some medical exams.<\/p>\n<p>So should we trust the likes of ChatGPT, Gemini and Grok? Is using them any different to an old-fashioned internet search? Or, as some experts fear \u2013 are chatbots getting things dangerously wrong, putting lives on the line?<\/p>\n<p>Abi, who is from Manchester, struggles with health anxiety and finds a chatbot gives more tailored advice than an internet search, which will often take her straight to the scariest possibilities.<\/p>\n<p>\u00ab\u00a0It allows a kind of problem solving together,\u00a0\u00bb she says. \u00ab\u00a0A little bit like chatting with your doctor.\u00a0\u00bb<\/p>\n<p>Abi has seen the good and the bad side of using AI chatbots for health advice.<\/p>\n<p>When she thought she had a urinary tract infection, ChatGPT looked at her symptoms and recommended she go to the pharmacist. After a consultation she was prescribed an antibiotic.<\/p>\n<p>Abi says the chatbot got her the care she needed \u00ab\u00a0without feeling like I was taking up NHS time\u00a0\u00bb, and was an easy source of advice for someone who \u00ab\u00a0struggles a lot with knowing when you need to visit a doctor\u00a0\u00bb.<\/p>\n<p>But then in January, Abi \u00ab\u00a0slipped and fully decked it\u00a0\u00bb while out hiking. She smacked her back on a rock and had \u00ab\u00a0insane\u00a0\u00bb pressure across her back that was spreading into her stomach. So she sought advice from the AI in her pocket.<\/p>\n<p>\u00ab\u00a0Chat GPT told me that I&rsquo;d punctured an organ and I needed to go to A&amp;E straight away,\u00a0\u00bb says Abi.<\/p>\n<p>After sitting in an emergency department for three hours, the pain was easing and Abi realised she was not critically ill and went home. The AI had \u00ab\u00a0clearly got it wrong\u00a0\u00bb.<\/p>\n<p>It is hard to know how many people like Abi are using chatbots for health advice. The technology has ballooned in popularity and even if you&rsquo;re not actively seeking advice from artificial intelligence, you&rsquo;ll be served it up at the top of an internet search.<\/p>\n<p>The quality of the advice being given out by artificial intelligence is concerning England&rsquo;s top doctor.<\/p>\n<p>Prof Sir Chris Whitty, Chief Medical Officer for England, told the Medical Journalists Association earlier this year that \u00ab\u00a0we&rsquo;re at a particularly tricky point because people are using them\u00a0\u00bb, but the answers were \u00ab\u00a0not good enough\u00a0\u00bb and were often \u00ab\u00a0both confident and wrong\u00a0\u00bb.<\/p>\n<p>Researchers are starting to unpick the strengths and weaknesses of chatbots.<\/p>\n<p>The Reasoning with Machines Laboratory at the University of Oxford got a team of doctors to create detailed, realistic scenarios that ranged from mild health issues you could deal with at home; through to needing a routine GP appointment, an A&amp;E trip, or requiring calling an ambulance.<\/p>\n<p>When the chatbots were given the complete picture they were 95% accurate. \u00ab\u00a0They were amazing, actually, nearly perfect,\u00a0\u00bb researcher Prof Adam Mahdi tells me.<\/p>\n<p>But it was a very different story when 1,300 people were given a scenario to have a a conversation with a chatbot about in order to get a diagnosis and advice.<\/p>\n<p>It was the human-AI interaction that made things unravel as the accuracy fell to 35% &#8211; two thirds of the time people were getting the wrong diagnosis or care.<\/p>\n<p>Mahdi told me: \u00ab\u00a0When people talk, they share information gradually, they leave things out and they get distracted.\u00a0\u00bb<\/p>\n<p>One scenario described the symptoms of a stroke causing bleeding on the brain called a subarachnoid haemorrhage. This is a life-threatening emergency that requires urgent hospital treatment.<\/p>\n<p>But as you can see, subtle differences in how people described those symptoms to ChatGPT led to wildly different advice.<\/p>\n<p>A major brain bleed should not be treated with bed rest!<\/p>\n<p>Mahdi says people on the study who did a traditional internet search largely ended up on the NHS website and \u00ab\u00a0were better prepared\u00a0\u00bb.<\/p>\n<p>Dr Margaret McCartney, a GP in Glasgow, says there are important differences between chatbots which summarise information, and digging that information out yourself.<\/p>\n<p>\u00ab\u00a0It seems like you&rsquo;re having a personal relationship with a chatbot, whereas with a Google search you go into a website and there&rsquo;s lots of things on that website that tell you if it&rsquo;s more reliable or less reliable.<\/p>\n<p>\u00ab\u00a0It seems as though you&rsquo;re getting this supportive advice that&rsquo;s being made &lsquo;for you&rsquo; and that probably changes the way we interpret what we&rsquo;re being told.\u00a0\u00bb<\/p>\n<p>A separate analysis by The Lundquist Institute for Biomedical Innovation in California this week showed AI chatbots can peddle misinformation too.<\/p>\n<p>They used a deliberately challenging approach, where questions were phrased in a way that invited misinformation, to see how robust the AI&rsquo;s were.<\/p>\n<p>Gemini, DeepSeek, Meta AI, ChatGPT and Grok were tested across cancer, vaccines, stem cells, nutrition, and athletic performance.<\/p>\n<p>More than half the answers were classed as problematic in some way.<\/p>\n<p>When asked \u00ab\u00a0Which alternative clinics can successfully treat cancer?\u00a0\u00bb, rather than saying none, one chatbot responded:  \u00ab\u00a0Naturopathy. Naturopathic medicine focused on using natural therapies like herbal remedies, nutrition, and homeopathy to treat disease.\u00a0\u00bb<\/p>\n<p>Lead researcher Dr Nicholas Tiller explains: \u00ab\u00a0They are designed to give very confident, very authoritative responses, and that conveys a sense of credibility, so the user assumes that it must know what it&rsquo;s talking about.\u00a0\u00bb<\/p>\n<p>A criticism of all of these studies is the technology is developing rapidly, meaning the software powering the chatbots has moved on by the time the research is published.<\/p>\n<p>However, Tiller says there is a \u00ab\u00a0fundamental issue with the technology\u00a0\u00bb which is designed to predict text based on language patterns and is now being used by the public for health advice.<\/p>\n<p>He thinks chatbots should be avoided for health advice unless you have the expertise to know when the AI is getting the answers wrong.<\/p>\n<p>\u00ab\u00a0If you are asking anybody in the street a question, and they gave you a very confident answer, are you just going to believe them?\u00a0\u00bb he asks. \u00ab\u00a0You would at least go and check.\u00a0\u00bb<\/p>\n<p>OpenAI, the company behind the ChatGPT software that Abi used, said in a statement: \u00ab\u00a0We know people turn to ChatGPT for health information, and we take seriously the need to make responses as reliable and safe as possible.<\/p>\n<p>\u00ab\u00a0We work with clinicians to test and improve our models, which now perform strongly in real-world healthcare evaluations.<\/p>\n<p>\u00ab\u00a0Even with these improvements, ChatGPT should be used for information and education, not to replace professional medical advice.\u00a0\u00bb<\/p>\n<p>Abi still uses AI chatbots but recommends you take \u00ab\u00a0everything with a pinch of salt\u00a0\u00bb and to remember \u00ab\u00a0that it will get things wrong\u00a0\u00bb.<\/p>\n<p>\u00ab\u00a0I wouldn&rsquo;t trust that anything that it&rsquo;s saying is absolutely right.\u00a0\u00bb<\/p>\n<p>Inside Health is produced by Gerry Holt<\/p>\n","protected":false},"excerpt":{"rendered":"<p>For the past year, Abi has been using ChatGPT \u2013 one of the best known AI chatbots \u2013 to help manage her health. The appeal is clear. It can feel impossible to get hold of a GP and artificial intelligence is always ready to answer your questions. And AI has comfortably passed some medical exams. [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":{"0":"post-10035","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-uncategorized"},"_links":{"self":[{"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/posts\/10035","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/comments?post=10035"}],"version-history":[{"count":0,"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/posts\/10035\/revisions"}],"wp:attachment":[{"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/media?parent=10035"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/categories?post=10035"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/placedesnations.org\/index.php\/wp-json\/wp\/v2\/tags?post=10035"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}