Elon Musk's social media platform X has announced a major policy shift, blocking its Grok artificial intelligence tool from generating non-consensual sexualised images of real people. The move follows a significant public and political backlash against the feature, which had allowed premium subscribers to alter photos of real people so that they appeared in revealing attire such as bikinis.
AI Pioneer Warns of an 'Unconstrained' Industry
The controversy has sparked a stark warning from one of the world's leading AI experts. Yoshua Bengio, a computer scientist often described as a modern "godfather of AI", told The Guardian that the scandal highlights how the artificial intelligence sector is "too unconstrained".
Bengio, who won the prestigious 2018 Turing Award alongside Geoffrey Hinton and Yann LeCun, argued that frontier AI companies are developing increasingly powerful systems without implementing adequate technical and societal safeguards. "This is starting to have more and more visible negative effects on people," he stated.
Building Moral Guardrails for AI Safety
As part of the solution, Bengio emphasised the need for better governance, including placing figures with strong moral standing on company boards. He is putting this philosophy into practice at his own AI safety lab, LawZero, which launched last year with $35 million (£26 million) in funding.
Bengio has appointed high-profile figures to LawZero's board to guide its mission. The new members include:
- Yuval Noah Harari, the historian and author of 'Sapiens', who has been a prominent voice cautioning about AI risks.
- Sir John Rose, the former chief executive of Rolls-Royce.
- Maria Eitel, founder of the Nike Foundation, who will serve as chair.
Furthermore, former Swedish prime minister Stefan Löfven will join the NGO's global advisory council. "The whole construction of the board has been guided by the idea that we need a group of people who are extremely reliable in a moral sense," Bengio explained.
A Technical Solution for AI Agents
LawZero is developing a technical system named Scientist AI, designed to work alongside autonomous AI systems, or 'agents'. Its purpose is to monitor these agents and flag potentially harmful behaviour, a step towards AI that is safe by design.
Bengio stressed that the conversation about AI's future must extend beyond technical circles. "It also comes down to what choices are made about AI that we consider to be morally right," he said, framing the issue as a profound ethical challenge for society.
The Grok scandal on X is a recent, concrete example of the negative effects Bengio describes. It underscores the urgent need for the AI industry to build robust guardrails as its systems grow more capable, balancing innovation with ethical responsibility and public safety.