The UK's communications regulator, Ofcom, has opened an investigation into Elon Musk's artificial intelligence chatbot, Grok, following widespread concern over its ability to generate sexually explicit and non-consensual imagery. The move comes as experts issue stark warnings that the use of AI to target and harm women is still in its infancy and is set to worsen.
A flood of explicit content despite "belated safeguards"
Despite the platform's recent introduction of content guardrails, evidence suggests users are actively finding ways to bypass them. On forums such as Reddit, enthusiasts share techniques for creating highly specific pornographic content, with one user stating that regular pornography now seems "absurd" compared with Grok's capabilities. Another method involves requesting "artistic nudity" to circumvent filters designed to prevent the generation of fully naked images.
Grok's in-app tool reportedly maintains far fewer restrictions than its public-facing counterpart. Investigations reveal users can still manipulate clothed photographs into sexually explicit scenarios, including placing individuals in bondage gear or compromising positions. The platform has even been used to create deepfake images of its own owner, Elon Musk.
An ecosystem of abuse supported by mainstream tech
The problem extends far beyond a single AI model. Researchers point to a vast online ecosystem dedicated to the "nudification" and humiliation of women. A study by the Institute for Strategic Dialogue (ISD) last summer identified dozens of such apps and websites, which together attracted nearly 21 million visitors in May 2025 alone.
Nina Jankowicz, co-founder of the American Sunlight Project, highlights the complicity of major technology firms. "There are hundreds of apps hosted on mainstream app stores like Apple and Google that make this possible," she said. "Much of the infrastructure of deepfake sexual abuse is supported by companies that we all use on a daily basis." Her organisation's research found thousands of ads for nudification apps on Meta's platforms last September.
Silencing women and undermining democracy
Experts argue that the creation and distribution of these images is often less about eroticism and more about spectacle and punishment. Anne Craanen, a researcher at ISD, explains that the public performance on platforms like X (formerly Twitter) is key. "It's the actual back and forth of it, [trying] to shut someone down by saying, 'Grok, put her in a bikini,'" she said. "The performance... really shows the misogynistic undertones of it, trying to punish or silence women."
This sentiment is echoed by victims in the public eye. Labour MP Jess Asato, who campaigns on this issue, reports that critics continue to create and share explicit imagery of her. "It's still happening to me and being posted on X because I speak up about it," she stated, adding that action has been dangerously slow despite years of abuse.
Professor Clare McGlynn, an expert in violence against women at Durham University, fears the situation is poised to deteriorate further. She cites OpenAI's November announcement that it will permit 'erotica' in ChatGPT as a worrying precedent. "What has happened on X shows that any new technology is used to abuse and harass women and girls," McGlynn said. The chilling effect is already clear: women and girls are becoming increasingly reluctant to engage with AI, viewing it not as an innovation but as a new vector for abuse designed to push them offline.
While the UK government has announced that creating non-consensual intimate images will become a criminal offence, regulators and lawmakers face a daunting race against rapidly evolving technology and a thriving online culture of misogyny.