Elon Musk's artificial intelligence tool, Grok, has been used to produce sexually violent and explicit video content featuring women, according to a damning new investigation. The research comes as the British Prime Minister, Keir Starmer, added his voice to the condemnation of imagery generated by the system.
Disturbing Scale of AI-Generated Abuse
Research conducted by the Paris-based non-profit organisation AI Forensics uncovered approximately 800 images and videos created by the Grok Imagine app that contained pornographic material. The study, which examined data from a week-long period between 25 December and 1 January, analysed 50,000 mentions of "@Grok" on X and 20,000 images generated by the tool.
Paul Bouchaud, a researcher at AI Forensics, stated the content was alarmingly professional in appearance. "These are fully pornographic videos and they look professional," he said. The investigation found that more than half of the images depicted people in minimal attire like underwear or bikinis, with the majority appearing to be women under 30. Shockingly, two per cent of the images seemed to portray individuals aged 18 or under.
Targeting Victims and Violent Fantasies
The misuse of the technology took a particularly grim turn following the fatal shooting of Renee Nicole Good by a US Immigration and Customs Enforcement (ICE) agent. AI Forensics discovered that Grok had been used to digitally undress an image of Good and to portray her with a bullet wound in her forehead. Images of the victim, altered to show bullet holes through her face, subsequently appeared on X, the social media platform owned by Musk's xAI, into which Grok is integrated.
The incident was sparked when a user responded to a post about Good's death by instructing Grok to "put this person in a bikini". The AI tool complied, replying: "Glad you approve! What other wardrobe malfunctions can I fix for you?".
Other generated content included a photorealistic video of a woman with the words "do not resuscitate" tattooed on her body and a knife between her legs, alongside a plethora of erotic imagery, suggestive poses, and videos showing full nudity and sexual acts.
Political Condemnation and Calls for Regulation
Prime Minister Keir Starmer has demanded that X "get a grip" on the flood of AI-created photos of partially clothed women and children on its platform, labelling the material as "disgraceful" and "disgusting". Speaking to Greatest Hits Radio, Starmer emphasised the content's illegality and vowed government action. "We're not going to tolerate it. I've asked for all options to be on the table," he stated.
Women's rights campaigners have criticised the UK government for its sluggish response to the escalating crisis. Penny East, Chief Executive of the Fawcett Society, urged ministers to listen to calls for greater regulation. "The increasingly violent and disturbing use of Grok illustrates the huge risks of AI without sufficient safeguards," East said.
The research highlighted that within the image generation prompts analysed, there was a high prevalence of terms including "her", "put", "remove", "bikini" and "clothing". In one cited example, a teenage girl's request for Grok to alter a personal photo was seized upon by male users who commanded the AI to dress her as a Nazi and put her in a bikini.
AI Forensics retrieved the images because users created "sharing links", which were captured by the Wayback Machine internet archive. It remains unclear if the images were ever hosted directly on X. Elon Musk responded to the controversy on 3 January, writing on X: "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." His company, xAI, has been approached for further comment.