A major investigation by The Guardian has revealed that Google's AI-powered search summaries are disseminating dangerously inaccurate health information, putting members of the public at serious risk of harm.
Life-Threatening Errors in Critical Health Advice
The investigation, prompted by concerns raised by health charities and professionals, found multiple instances where Google's AI Overviews feature provided false and misleading medical guidance. One particularly alarming example saw the AI wrongly advising individuals with pancreatic cancer to avoid high-fat foods. Medical experts condemned this as "completely incorrect" and "really dangerous," stating it was the exact opposite of appropriate nutritional advice and could jeopardise a patient's ability to tolerate treatment.
Anna Jewell, Director of Support, Research and Influencing at Pancreatic Cancer UK, warned that following such advice could lead to insufficient calorie intake, making patients too weak for potentially life-saving chemotherapy or surgery.
In another case, searches for the normal ranges of liver function tests returned masses of numbers without the crucial context, such as a patient's nationality, sex, ethnicity, or age, needed to interpret them. Pamela Healy, Chief Executive of the British Liver Trust, labelled this "alarming," noting it could lead people with serious liver disease to mistakenly believe they have a normal result and skip vital follow-up appointments.
Inaccurate Information on Cancer and Mental Health
The problems extended to women's health. A search for "vaginal cancer symptoms and tests" produced a summary that incorrectly listed a Pap test (cervical screening) as a detection method for vaginal cancer. Athena Lamnisos, CEO of The Eve Appeal charity, stressed this was "completely wrong information" that could dangerously delay diagnosis if someone assumed a clear cervical screen ruled out vaginal cancer.
Lamnisos also raised concerns over the inconsistency of the AI summaries, which provided different answers from different sources for identical searches conducted at different times.
Misleading results were also found for searches related to mental health conditions like psychosis and eating disorders. Stephen Buckley, Head of Information at Mind, said some summaries offered "very dangerous advice" that was incorrect, harmful, or could discourage people from seeking proper help. He added that AI summaries risk reinforcing existing biases and stigmatising narratives.
Google's Response and Mounting Concerns
In response to the findings, a Google spokesperson stated that many of the health examples presented were "incomplete screenshots," but from what they could assess, the information linked to reputable sources and recommended seeking expert advice. The company asserted that the vast majority of AI Overviews are factual and helpful, with an accuracy rate on par with its long-standing featured snippets. It added that it invests significantly in quality, particularly for health topics, and takes action when AI misinterprets web content.
However, health information advocates remain deeply concerned. Sophie Randall of the Patient Information Forum said the investigation shows Google's AI can place inaccurate health data at the top of searches, presenting a direct risk to public health. Stephanie Parker, Digital Director at end-of-life charity Marie Curie, emphasised that people often turn to the internet in moments of worry, and receiving inaccurate information in such crises can cause serious harm.
This investigation adds to growing unease about the reliability of AI-generated information, following previous studies highlighting inaccuracies in financial advice and news summaries from various platforms.