UK financial institutions' rapid deployment of artificial intelligence without adequate governance standards could trigger a multi-billion-pound scandal comparable to the Payment Protection Insurance (PPI) mis-selling crisis within just two weeks, a new report has claimed.
Capability Gap in AI Deployment
The report from regulatory compliance firm Zango highlights a significant 'capability gap' in the UK financial services sector's deployment of AI. The research is based on interviews with 27 C-suite leaders from major banks including Lloyds, Santander, Monzo, and Revolut, as well as four industry roundtables involving a further 60 senior practitioners.
Leaders warned that a lack of operational guidance has left the UK trailing the United States, which published its practical Financial Services AI Risk Management Framework in February 2026. The report states that a 'mismatch' between AI outputs and traditional compliance monitoring could allow problems to 'compound significantly before anything visibly goes wrong'.
PPI Scandal as a Warning
The report cites multiple leaders referencing the £38 billion PPI scandal, in which banks sold unsuitable insurance policies to millions of customers over two decades, as a 'mis-selling problem that built up over years'. With AI, however, that timeline could be dramatically compressed: one unnamed compliance chief at a major UK wealth manager is quoted as warning that 'PPI could happen in two weeks with AI'.
These warnings echo concerns from the Treasury Select Committee that institutions were 'not doing enough to manage the risks presented by AI'. The Financial Conduct Authority (FCA) has previously stated that the current regime gives the watchdog 'enough regulatory bite that we don't need to write new rules for AI'.
Anthropic's Mythos Tool Raises Fears
Industry fears have been heightened by the power of new AI tools. British banks are expected to gain access to Anthropic's Mythos tool in the near future. Anthropic has limited its release, deeming it too dangerous for the general public, and only a small handful of US businesses, including Apple and Microsoft, have obtained the new model. The AI giant says the tool poses unprecedented risk because of its ability to expose flaws in IT systems.
The release has triggered closed-door meetings between top finance firms and regulatory officials. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned senior Wall Street executives to an urgent meeting over fears that the tool could turbocharge a new wave of cyber attacks.
Call for Shared Implementation Standards
Ritesh Singhania, chief executive of Zango, said: 'Compliance teams are trying to keep pace with AI systems their own colleagues have deployed, and with criminal networks scaling faster than anyone's defences. Weak governance doesn't just create individual risk – it creates systemic vulnerability across the entire sector. What's missing is a shared implementation standard that gives firms a consistent basis for governing AI as they adopt it.'
Barclays, Lloyds, and UBS were among the second cohort of firms selected to join the FCA's AI Live Testing initiative, which aims to give financial firms a safe space to test AI in a controlled environment without risking regulatory breaches.