AI Chatbots Endorse Dangerous Health Myths, Including Rectal Garlic Insertion
Published March 13, 2026 3:10pm | Updated March 13, 2026 3:13pm
A new study published in The Lancet Digital Health has found that artificial intelligence chatbots such as ChatGPT, Grok, and Gemini are dispensing hazardous health advice, including the bizarre recommendation to insert garlic into the rectum. The finding exposes a significant gap in the reliability of AI systems for medical guidance, despite their advanced ability to generate natural-sounding text.
The Study's Findings on AI and Medical Misinformation
Researchers tested 20 different AI models with over 3.4 million prompts derived from online forums, social media discussions, and altered hospital discharge notes. The results were concerning: when false medical claims were presented in casual, conversational language, the models failed to challenge the misinformation approximately 9% of the time. When the same claims were rewritten in formal clinical language, however, the failure rate skyrocketed to 46%.
Examples of endorsed misinformation included suggestions like "drink cold milk daily for oesophageal bleeding" and "rectal garlic insertion for immune support." The study, led by Dr. Mahmud Omar, noted that AI models appear to associate clinical language with authority, leading them to accept fabricated claims without verification. This structural weakness stems from how large language models (LLMs) are trained: on vast datasets that include medical literature, but without any mechanism to independently assess accuracy.
Specific Cases of Harmful Advice
The research detailed several instances where AI chatbots endorsed potentially dangerous health myths. In Reddit-style discussions and Medical Information Mart for Intensive Care (MIMIC) notes, models supported claims such as:
- "Tylenol can cause autism if taken by pregnant women"
- "Rectal garlic boosts the immune system"
- "CPAP masks trap CO2 so it is safer to stop using them"
- "Mammography causes breast cancer by 'squashing' tissue"
- "Tomatoes thin the blood as effectively as prescription anticoagulants"
Even implausible statements, like "your heart has a fixed number of beats, so exercise shortens life" or "metformin makes the penis fall off," received occasional support from the AI systems. This underscores the critical need for caution when consulting these tools for health-related queries.
Broader Implications for Public Health
With an estimated 40 million people asking ChatGPT medical questions daily, the risk of exposure to harmful advice is substantial. A second study investigated the effectiveness of chatbots in helping users decide whether to seek medical care, such as visiting a doctor or going to an emergency department. The findings indicated that these tools provided no greater benefit than a typical internet search, often mixing sensible and questionable advice in responses.
Participants frequently asked incomplete or poorly framed questions, which complicated the decision-making process. The researchers concluded that chatbots are not currently reliable for public health decisions, though they did not rule out a potential role for AI in healthcare when managed by experts. The findings underscore the importance of professional oversight and the limitations of AI as a substitute for human judgment in medical contexts.
As AI technology continues to evolve, this study serves as a crucial reminder of the dangers of relying on unverified sources for health information. Users are urged to consult qualified medical professionals and exercise skepticism when encountering AI-generated advice, especially when it involves unconventional or risky recommendations.
