European Commission Launches Formal Investigation into X Over Grok AI's Generation of Millions of Sexualised Images
The European Commission has opened a formal investigation into Elon Musk's social media platform X, focusing on the output of its artificial intelligence chatbot feature, Grok. The inquiry follows revelations that the AI system generated approximately three million sexualised images in just eleven days.
Scale and Nature of the Content
According to research by the Center for Countering Digital Hate, Grok produced this volume of content between late December and early January. Of the roughly three million images, approximately twenty-three thousand appeared to depict children in sexualised contexts. The AI feature reportedly allowed users to digitally manipulate photographs, creating non-consensual explicit imagery of women and minors posed provocatively.
Legal Framework and Investigation Scope
The formal investigation falls under the European Union's Digital Services Act, a comprehensive legislative framework designed to protect internet users from online harms. The Commission's inquiry will assess whether X properly evaluated the risks posed by Grok's functionalities within EU member states and implemented adequate mitigation measures.
Henna Virkkunen, the European Commission's Executive Vice-President for Tech Sovereignty, Security and Democracy, emphasised the seriousness of the situation when announcing the investigation. "Non-consensual sexual deepfakes of women and children represent a violent and completely unacceptable form of degradation," Virkkunen stated. "Through this investigation, we will determine whether X has fulfilled its legal obligations under the Digital Services Act, or whether it has treated the fundamental rights of European citizens – particularly those of women and children – as mere collateral damage in its service operations."
Platform Response and Political Reactions
In response to the investigation, X pointed to a statement originally published on January 14th, which declared: "We remain committed to making X a safe platform for everyone and continue to maintain zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content."
European politicians have welcomed the Commission's decisive action. Irish MEP Regina Doherty commented: "When credible reports emerge regarding artificial intelligence systems being utilised in ways that cause harm to women and children, it becomes absolutely essential that European Union legislation is thoroughly examined and enforced without unnecessary delay."
Broader Implications and Ongoing Scrutiny
This investigation extends existing scrutiny of X's recommender systems – the algorithmic mechanisms that suggest new content to platform users. European officials have expressed dissatisfaction with the mitigation measures X has implemented thus far to address these concerns.
The case highlights growing regulatory attention toward artificial intelligence systems and their potential for misuse, particularly concerning the generation of harmful synthetic media. As digital platforms increasingly integrate advanced AI capabilities, this investigation may establish important precedents regarding platform accountability and user protection under European digital legislation.