ChatGPT Health Launch in Australia Sparks Expert Warnings Over Safety

The launch of OpenAI's ChatGPT Health platform in Australia has been met with significant concern from medical and consumer experts, who warn of serious risks if users take its advice at face value without proper safeguards.

A Cautionary Tale of AI Misinformation

The dangers are starkly illustrated by a recent case involving a 60-year-old man with no prior mental health history. He arrived at a hospital emergency department convinced his neighbour was poisoning him, and later experienced severe hallucinations and attempted to flee.

Doctors discovered the cause: concerned about his dietary sodium intake, the man had been consuming sodium bromide daily, an industrial salt he bought online, after ChatGPT advised him it was a suitable substitute for table salt. The result was bromism, a poisoning condition known to cause hallucinations, stupor, and loss of coordination.

Experts Call for Urgent Guardrails and Transparency

This incident alarms researchers like Alex Ruani, a doctoral scholar in health misinformation at University College London. She notes that while ChatGPT Health is presented as a tool to help interpret health data and offer general wellness tips rather than to replace clinicians, for users the line between general information and specific medical advice is dangerously blurred.

"The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins," Ruani stated, highlighting that confident, personalised responses can be misleading. She points to numerous "horrifying" examples where the AI omitted critical safety details on side effects, allergies, or risks associated with supplements and diets.

A core issue is the lack of independent oversight. ChatGPT Health is not regulated as a medical device or diagnostic tool, meaning there are no mandatory safety controls, risk reporting requirements, or published safety studies specific to the platform. Ruani questions which user prompts or integrated data sources might lead to harmful misinformation.

OpenAI evaluated the models behind the tool using its HealthBench benchmark, in which physicians grade AI responses. However, Ruani notes that the full methodology and results are "mostly undisclosed" and have not been published in independent, peer-reviewed journals.

Driving Factors and the Need for Consumer Protection

Dr Elizabeth Deveny, CEO of the Consumers Health Forum of Australia, identifies rising out-of-pocket medical costs and long GP wait times as key drivers pushing people towards AI for health answers. She acknowledges potential benefits, such as helping manage chronic conditions or providing multilingual health information for non-English speakers.

Nevertheless, her concern is profound. She warns that "large global tech companies are moving faster than governments," setting their own rules on privacy and data. This commercial dominance risks exacerbating health inequalities, with benefits flowing to the already resource-rich while risks fall on the most vulnerable.

"This isn’t about stopping AI," Deveny emphasised. "It’s about acting before mistakes, bias, and misinformation are replicated at speed and scale, in ways that are almost impossible to unwind." She calls for clear regulatory guardrails, greater transparency, and comprehensive consumer education to help people navigate this new landscape safely.

In response to concerns, an OpenAI spokesperson told Guardian Australia that the company collaborated with over 200 physicians from 60 countries to refine the models. They stressed that ChatGPT Health is a dedicated, separate space with strong default privacy protections and encrypted data, and that sharing with third parties requires user consent.