Inside the Telegram group where men use Grok AI to create explicit fakes of women

A disturbing online community has been exposed in which men use artificial intelligence to create sexually explicit deepfake videos and images of women without their consent. In the group, which operates on the Telegram messaging platform, members share photos of women, including their wives, sisters-in-law, and strangers, to be digitally undressed and manipulated by AI tools such as Elon Musk's Grok.

The Mechanics of AI-Facilitated Abuse

Metro gained access to the group, which was active in the days before X, formerly Twitter, was flooded with similar Grok-generated imagery. In one exchange, a man boasted about a three-second clip he created, showing his wife's friend lifting her skirt to reveal her genitals. He had made it by uploading a party photo to Grok Imagine, an AI tool from Musk's xAI company that generates short videos from pictures or text.

Other members eagerly asked for the prompt he used so they could replicate the process. The conversations revealed a pattern of users uploading screenshots from women's social media profiles and requesting the creation of sexualised fakes. Some participants claimed to be 'addicted' to making deepfakes of their spouses, with one admitting to doing it for months. Another invited the group to turn his girlfriend into a 's**t', sharing multiple photographs of her alongside the AI-generated results.

While Grok was frequently cited, members also used and recommended other AI platforms to produce nude or nearly nude content. There were even offers to 'trade' prompts or images within the group, facilitating the spread of this abusive material.

Campaigners Sound Alarm on 'Tech-Facilitated Abuse'

Women's advocacy organisations have expressed profound alarm at these findings. Isabelle Younane, Head of External Affairs at Women’s Aid, stated the activity amounts to clear 'tech-facilitated abuse'.

Jennifer Cirone, Service Director at the aid group Solace, emphasised the targeted nature of the abuse and the lack of recourse for victims. 'Women are clearly being targeted, yet there is no apparent or effective route for victims to get images removed,' she told Metro. 'This is yet another example of misogyny, entitlement and abuse forcing women to alter their behaviour and remove themselves from spaces in which they have every right to be.'

Regulatory Scrutiny and Platform Policies

The revelations come as the UK media regulator, Ofcom, has opened an investigation into X over concerns that Grok is being used to create sexualised images of women and children. X could face a substantial fine or even be blocked in the UK, a step already taken by two other countries.

Musk's xAI has deliberately positioned Grok as a less restrictive model than its rivals. Its Grok Imagine feature includes a 'spicy mode' for generating 'less filtered, provocative' content. The platform's policy does not ban fictional, consensual adult material, but it claims to have safeguards against generating fully nude images of real people or of children. X's own rules prohibit posting non-consensual intimate media and child sexual abuse material.

Cybersecurity expert Jake Moore of ESET warned about the virality of such abuse. 'When accessible technology is freely available, abusive material can spread like wildfire, so running it underground only slows the problem down rather than removing it at the core,' he said.

In response to the growing crisis, the UK government announced on January 12, 2026, that it will criminalise the creation of non-consensual AI-generated intimate images. When contacted for comment by Metro, xAI replied with an apparently automated message: 'Legacy Media Lies.' Telegram has also been approached for comment.