The UK's election watchdog is racing to deploy new technology designed to spot AI-generated deepfakes ahead of crucial votes in Scotland and Wales this year. The Electoral Commission is working with the Home Office on a pilot project aimed at protecting candidates from sophisticated digital disinformation.
AI Detection Tools Ahead of Campaigns
Officials have confirmed they are working "at speed" to have software operational before election campaigns begin in late March. This technology is specifically designed to identify AI-manipulated videos and images that could be used to mislead voters or target candidates.
Sarah Mackie, the Electoral Commission's head in Scotland, outlined the planned response. If the system flags a suspected deepfake, officials will immediately contact the police, alert the affected candidate, and inform the public. However, Mackie acknowledged the technology cannot provide complete certainty in its assessments.
The Push for Legal "Takedown" Powers
A critical gap in the current system is the commission's lack of legal authority to remove harmful content. At present, officials can only urge social media platforms to take down hoax material—a voluntary process.
"What we don't have at the moment, and what we want, is called takedown powers," Mackie stated. The commission is urging the UK government to introduce legally enforceable powers that would require platforms to remove confirmed election-related disinformation.
This call comes amid growing criticism of platforms like Elon Musk's X and its AI tool, Grok, for failing to adequately remove fake and harmful content. Senior politicians have demanded government and regulator Ofcom take urgent action.
Protecting Candidates from Abuse
The threat of deepfakes is part of a broader concern about candidate safety. The commission is also running a separate "safety and confidence" project with the Scottish Parliament and police. This initiative focuses on supporting women and Black, Asian and minority ethnic candidates, who often face gender- or race-based abuse.
A 2022 study found that about half of all female election candidates had experienced abuse, with many saying they would not stand again. Candidates from minority-ethnic backgrounds reported similar experiences, undermining diversity in representation.
Mackie highlighted that the rise of AI-driven "undressing" technology and pornographic deepfakes would fall into this category of abuse if deployed during an election and would be reported to police.
While there have been no confirmed cases of deepfakes in UK election campaigns yet, their use has surged in elections abroad. UK votes have previously been targeted by state-sponsored fake accounts from countries like Russia, Iran, and North Korea, aimed at spreading discord.
A Home Office spokesperson said: "Protecting elections from the threat of sophisticated deepfakes is vital to maintaining public trust in our democratic system." They added that the Online Safety Act already requires companies to remove unlawful content.
The pilot project, if successful, could be rolled out for all future UK elections. Mackie described the current regulatory landscape as having "an empty space," one the commission is now stepping into to test what actions are possible to safeguard the integrity of the electoral process.