OpenAI Launches ChatGPT Health: Big Tech Moves Deeper Into Personal Medical Data
ChatGPT Health Accesses Your Medical Records

In a significant move that underscores Big Tech's growing ambitions in healthcare, OpenAI last week launched ChatGPT Health, a tool that allows users to connect their personal medical records and wellness applications directly to the AI chatbot.

Capitalising on Existing Demand

The launch formalises a behaviour already adopted by millions. More than 230 million people worldwide use ChatGPT to ask health-related questions every week, according to the company. Separate research from GWI found that 26% of ChatGPT users sought health advice in the past month alone.

Sam Altman's firm states that the tool is explicitly not intended for diagnosis or treatment. Instead, its stated purpose is to help users better understand their health information and prepare for conversations with medical professionals. Health data is often fragmented across GP portals, hospital letters, PDFs, and apps such as Apple Health or MyFitnessPal. ChatGPT Health aims to bring this information together in one place where users can ask questions about it.

The company emphasises that users must opt in to connect their data and can revoke access at any time. Health conversations live in a dedicated, encrypted space, separate from other chats, and will not be used to train AI models. OpenAI developed the product with input from over 260 physicians across 60 countries, evaluating responses against clinical standards.

The Critical Questions of Trust and Governance

Despite these assurances, the scale of the launch brings serious questions about trust and oversight to the fore. ChatGPT Health is a consumer product, not a regulated medical device. In the United States, it does not fall under HIPAA (Health Insurance Portability and Accountability Act), meaning users rely on company privacy policies rather than statutory protections.

"When a company asks hundreds of millions of people to upload medical records to a centralised platform, the question becomes why the architecture requires that level of trust in the first place," argues Eric Yang, CEO of AI lab Gradient. He stresses that health data is among the most sensitive information people possess.

Accuracy is a further pressing concern. AI systems are known to occasionally generate confident but incorrect responses, a phenomenon known as hallucination, and such errors carry far greater risk in a health context. Alex Ruani, a doctoral researcher in health misinformation at UCL, warns that ChatGPT Health is not subject to mandatory safety testing or post-market surveillance. No published studies specifically test its safety, and the way responses are presented could blur the line between general information and medical advice.

A Broader Big Tech Trend

OpenAI is not operating in a vacuum. Its rival Anthropic launched its own Claude for healthcare offering this week, though it is aimed more squarely at clinicians and organisations, with integrations designed to meet HIPAA requirements. Google, having faced past backlash over health data projects, is proceeding more cautiously.

The commercial incentive for these moves is clear. Healthcare remains a data-heavy, fragmented, and expensive sector. AI tools that simplify administration and explanation have obvious appeal to both consumers and technology firms.

Max Sinclair, founder of consumer AI firm Azoma, sees a strategic shift. "ChatGPT is becoming a trusted intermediary," he said. "Once users rely on it to interpret health information, that trust can extend into lifestyle and purchasing decisions as well."

For now, the tool will likely be most useful for low-stakes tasks like deciphering medical terminology, spotting trends in wellness data, or formulating questions before a doctor's appointment. However, the launch on Friday 16 January 2026 marks another deliberate step by Big Tech into domains traditionally governed by strict regulation. As adoption grows, the balance between convenience, trust, and necessary oversight will face increasing scrutiny.