Tuesday 28 April 2026 1:55 pm | Updated: Wednesday 29 April 2026 11:25 am
Why Mythos could destroy Britain’s banking industry
By: Raj Abrol
AI is compressing the time between discovering a weakness and exploiting it – and that’s a huge threat to the banking industry, says Raj Abrol.
The banking industry has spent the past year framing artificial intelligence as a productivity tool: faster coding, documentation, customer service and analysis. Anthropic's Claude Mythos Preview, and Project Glasswing (the controlled defensive coalition built around it), should change the conversation.
The issue is not that one model will bring down the banking system; it is that AI is compressing the time between discovering a weakness and exploiting it. Work that once required specialist teams, rare knowledge and long reconnaissance cycles can now be accelerated, automated and handed to many more actors.

This matters more for banks than for almost any other industry. Modern finance runs on vast, interconnected technology estates. Critical workflows depend on core platforms, third-party software, outsourced providers, open-source components, internal tools and interfaces that are not always visible from the boardroom. The attack surface has grown faster than the industry's ability to map and control it.
Mythos exposes a stark asymmetry. A bank may need months to identify, approve, test, and deploy a fix, while an AI-augmented attacker may need hours to find the weak link. Some Mythos claims will be contested; smaller open-weight models may reproduce parts of the analysis. That reinforces the point. The trendline, not the headline, should drive board behaviour.
Nor is this just a technology problem. In banking, a cyber weakness can become an operational resilience problem, a third-party risk problem, a liquidity problem, a conduct problem and ultimately a confidence problem. It does not stay inside the IT department. The point was illustrated on Mythos’s launch day, when a group on a private Discord channel reportedly obtained access to the model through an Anthropic third-party vendor. Even the most tightly gated defensive capability can still be exposed through the weakest point of its supply chain.
But while boards certainly shouldn’t be complacent, they also shouldn’t panic. The same capabilities that raise the threat level can raise the defensive ceiling. If AI can identify vulnerabilities, map dependencies and generate attack paths, banks should use equivalent tools to find weaknesses first.
Yet too many banks still treat cyber assurance as periodic: annual penetration tests, scheduled audits, slow reviews and remediation plans moving at institutional speed. That rhythm no longer fits the risk. AI changes the clock speed. Regulation is pushing the same way: DORA, the UK operational resilience regime and SR 11-7 all point towards living inventories, evidenced remediation and continuous control validation.
The boardroom question should be specific: can we name our top ten critical business services, the third parties whose failure would disable them, and the median time from vulnerability disclosure to verified remediation for each? If the answer is “not in a week”, the bank is already exposed.
From periodic to continuous assurance
That means moving from periodic to continuous assurance: control validation rather than annual testing, dependency monitoring rather than static inventories, and prioritisation tied to the services that matter. It also means governed AI in the loop, with source traceability, human approval, data boundaries and incident response built into the workflow. If an AI tool cannot be governed, evidenced and challenged, it is not production-ready.
Project Glasswing carries another lesson. Access to frontier defensive capability is staged, not universal: JPMorgan Chase sits among the launch partners, with European and UK banks reportedly waiting for broader access. The question is not “when will we get access?” but “will we be ready to use it safely when we do?” Banks need to own the capability, operating model and governance, because the next defensive breakthrough will arrive on someone else’s schedule.
The opportunity reaches beyond cyber: the same capability that can reason across codebases and dependencies can help banks interrogate credit portfolios, detect control failures and surface risks before they crystallise.
Speed without traceability is not intelligence; it is just a new form of operational risk. The winners will not ban powerful AI, nor scatter it recklessly. They will industrialise it: specialist models, governed workflows, traceable outputs, human oversight and measurable impact.
Mythos is not the end of banking security, but it is the end of comfortable timelines. The institutions that understand this will defend themselves better: they will see risk earlier, act faster and compete harder.
Raj Abrol is CEO of Galytix