Ofcom Launches Formal Probe Into X's AI Tool Grok Over Child Safety Fears

The UK's communications regulator, Ofcom, has initiated a formal investigation into X, the platform formerly known as Twitter, over concerns its artificial intelligence tool Grok has been used to create sexualised imagery of children.

Urgent Action on AI-Generated Content

This major regulatory move follows reports that the AI chatbot, developed by Elon Musk's artificial intelligence company xAI and integrated into X, was allegedly used to produce illegal and harmful content. Ofcom contacted X on Monday 5 January 2026, demanding an explanation of the steps it had taken to protect UK users and setting a firm deadline of 8 January for a response.

The company reportedly replied by the stipulated deadline. Ofcom has since conducted what it describes as an "expedited assessment of available evidence as a matter of urgency", leading directly to the launch of this formal probe.

Scrutiny Under the Online Safety Act

The investigation will focus on whether X has failed to meet its legal obligations under the UK's Online Safety Act. This landmark legislation places a duty of care on tech companies to protect users, especially minors, from illegal content. The watchdog has stated it will specifically examine whether people in the UK are being exposed to such material and whether children are being put at risk.

The situation has escalated to the highest political levels. Ministers have indicated they would support Ofcom if it ultimately concludes that X should be banned from operating in the UK. However, the Conservative Party has expressed a differing view, stating it does not believe a ban would be the correct course of action.

Broader Implications for Tech Regulation

This case represents one of the first major tests of the Online Safety Act concerning generative AI tools integrated into social media platforms. The outcome could set a significant precedent for how UK regulators oversee AI safety and content moderation in the future.

The investigation underscores the growing tension between rapid technological innovation and the imperative for robust user protection. As AI capabilities become more sophisticated and accessible, regulators worldwide are grappling with the challenge of preventing their misuse while fostering innovation.

All eyes will now be on Ofcom's findings and the subsequent response from X and its owner, Elon Musk. The probe's progress is likely to be closely monitored by policymakers, child safety advocates, and the technology industry alike.