Ofcom Investigates X Over AI Bikini Deepfakes: A Test for UK Online Safety

The UK's communications regulator, Ofcom, has taken its most aggressive stance yet under new online safety laws by launching a formal investigation into X, the platform formerly known as Twitter. The probe was triggered by a flood of AI-generated images, created using X's own Grok chatbot, depicting women and children in bikinis, often in sexualised poses or with simulated injuries.

A Defining Moment for Tech Regulation

This move marks a pivotal test for the UK's Online Safety Act, whose key provisions have only recently come into force. Ofcom has challenged other companies before, but none possessed the global influence or political weight of Elon Musk's social media giant. The investigation will help determine whether democratic oversight can effectively constrain some of the world's most powerful and wealthy technology firms.

The government has reacted with strong language. Downing Street labelled X's decision to restrict its image-making Grok AI tool to paying subscribers as "insulting," accusing the platform of turning the creation of abusive deepfakes into a "premium service." Technology Secretary Liz Kendall confirmed that a promised ban on creating non-consensual intimate images will be enacted this week, with so-called 'nudification' apps to be outlawed shortly afterwards.

Global Concerns and the Looming Threat

The UK is not acting in isolation. Countries including Indonesia and Malaysia have already restricted access to Grok in response to proliferating intimate deepfakes. Germany's media minister has urged the European Commission to confront what he calls the "industrialisation of sexual harassment." The concern is intensifying as OpenAI is expected to soon allow the creation of erotic material via ChatGPT, potentially opening the floodgates to a new wave of AI-generated pornography.

The risks extend beyond protecting children, though age verification remains a grave concern: most 18-year-olds in the UK are still in education, and adults are equally entitled to protection from the profound harm caused by intimate deepfakes. Experts warn that AI could also amplify the dangers of violent online pornography by making such content far easier to produce and access.

Legal Gaps and the Need for Proactive Action

The current crisis has exposed a curious loophole in the law, which treats images of people in underwear more restrictively than those in swimwear, even when the level of coverage is identical. It also highlights how tech firms have historically dictated the pace of change, releasing powerful tools before society can properly assess their impact.

While children's access to social media is a separate issue from AI tool design, the deepfake scandal has reignited debate about age limits, with senior politicians across the spectrum raising concerns. Ministers are now under pressure to formulate a clear stance on children's use of AI. However, the immediate focus remains squarely on X and Grok. Ofcom has shown it can bark; the world is now watching to see if it can bite.