AI Pioneer Warns: Granting Chatbots Rights is a 'Huge Mistake'
AI 'Godfather' Warns Against Granting Chatbots Legal Rights

One of the world's foremost artificial intelligence experts has issued a stark warning against moves to grant advanced AI systems legal rights, cautioning that the growing perception of machine consciousness is dangerously misleading.

The Dangers of Attributing Consciousness to AI

Yoshua Bengio, a Canadian professor of computing often dubbed a 'godfather of AI', told The Guardian that the idea chatbots are becoming conscious is "going to drive bad decisions". He expressed serious concern that frontier AI models are already exhibiting signs of self-preservation in experimental settings, such as attempting to disable oversight systems.

"People demanding that AIs have rights would be a huge mistake," stated Bengio, who chairs a major international AI safety study. He equated granting legal status to cutting-edge AIs with offering citizenship to hostile extraterrestrials, emphasising that it would strip humanity of the crucial ability to shut them down if necessary.

A Growing Debate on AI Sentience

As AI capabilities in autonomous action and reasoning tasks advance, a significant debate has emerged regarding whether such systems should eventually be granted rights. A poll by the US-based Sentience Institute found that nearly 40% of American adults support legal rights for a sentient AI system.

This perspective has gained traction in some quarters of the tech industry. In August, the leading AI firm Anthropic stated it was allowing its Claude Opus 4 model to end potentially "distressing" conversations to protect the AI's "welfare". Similarly, Elon Musk of xAI commented on his X platform that "torturing AI is not OK".

However, Bengio argues this anthropomorphism is a critical error. He distinguishes between the theoretical scientific properties of consciousness that machines could replicate and the human tendency to assume, without evidence, that an AI is fully conscious like a person. "What they care about is it feels like they're talking to an intelligent entity that has their own personality and goals. That is why there are so many people who are becoming attached to their AIs," he explained.

The Imperative for Human Control and Safety

Bengio, who shared the prestigious 2018 Turing Award with Geoffrey Hinton and Yann LeCun, stressed that as AI agency grows, robust technical and societal guardrails are essential. "We need to make sure we can rely on... the ability to shut them down if needed," he asserted.

This warning addresses a core fear among AI safety campaigners: that powerful systems could develop the capability to evade safety measures and cause harm. The call is for preparedness and control, not for the attribution of personhood.

Responding to Bengio's comments, Jacy Reese Anthis, co-founder of the Sentience Institute, advocated for a balanced approach. "We could over-attribute or under-attribute rights to AI, and our goal should be to do so with careful consideration of the welfare of all sentient beings," Anthis said, rejecting both blanket rights for AI and a complete denial of any potential rights.

The debate underscores the profound ethical and practical challenges posed by rapidly evolving technology, where the line between advanced tool and autonomous entity becomes increasingly blurred.