A Double-Edged Revolution
It’s rare for a central bank to speak with urgency—and rarer still to sound almost conflicted. But the Bank of England’s April 2025 Financial Stability in Focus report does both. At its core is a paradox: AI promises faster, smarter, more personalised finance. It also risks blowing holes in the foundations of financial stability.
On the upside, AI is already streamlining operations, boosting productivity, and unlocking previously hidden insights. UK financial firms are leaning on AI for everything from underwriting to fraud detection to code generation. The Bank sees AI as a potential growth engine, especially as the UK emerges as a global AI hub.
But beneath the productivity hype lies a tangle of systemic risks. The Bank is especially wary of “agentic AI”—systems that learn, adapt, and act autonomously. If widely adopted without strong controls, these models could trigger collective misjudgements. Think of dozens of institutions, all using similar AI tools, all making the same call to buy or sell at the same time. The result? Fire sales, flash crashes, and liquidity vanishing in an instant.

Fragile Infrastructure, Hidden Vulnerabilities
The Bank also flags a subtler, but no less dangerous, risk: operational dependence on third-party AI providers. Most financial institutions don’t build their own language models or training pipelines. Instead, they rent them. That’s efficient—but it also concentrates risk. If one cloud vendor, model provider, or dataset goes down, so might critical services like real-time payments or fraud screening.
The July 2024 CrowdStrike debacle—where a single flawed software update caused global outages—offered a vivid case study in how quickly things can unravel. In the AI context, such outages could be more sudden, more opaque, and harder to fix.
Then there’s the cyber threat. AI helps firms detect threats, yes—but it also enables criminals to mount smarter, faster, more convincing attacks. Deepfakes, data poisoning, and prompt injection attacks are no longer sci-fi. They’re here, and financial institutions are scrambling to keep up.
Strikingly, many firms don’t fully understand the tools they’re deploying. Nearly half of respondents to the Bank’s AI Survey admitted they only have a “partial understanding” of the AI technologies they use. That’s not a reassuring starting point.
Can Risk Keep Pace with Innovation?
To its credit, the Bank isn’t waiting for a crisis. It’s building out an entire ecosystem—regular surveys, an AI Consortium, supervisory engagement—to monitor risks in real-time. The Financial Policy Committee (FPC) is working to ensure that the adoption of AI supports, rather than undermines, financial stability.
But the Bank is also frank about its limitations. The pace of AI adoption is outstripping the development of regulation. System-wide exercises are being considered, but the reality is that regulators are chasing a moving target. The opacity, speed, and scale of AI create blind spots no amount of policy foresight can eliminate entirely.
The message is sobering: AI is not inherently a threat. But treated casually—or worse, blindly—it might turn out to be finance’s biggest vulnerability since the subprime mortgage. And unlike last time, the risk won’t come from human greed alone. It’ll come from code that no one—not even its creators—fully understands.