
When the system gets faster than oversight: AI, regulation, and Europe’s next stability test


By Gabriel Schild, Said El Addaoui, and Michel Straga, DXC

In Brussels, it is easy to think of innovation as something we can sequence: consult, draft, implement, supervise. That rhythm has served Europe well. It produced legal certainty, rule-based supervision, and a financial system designed to move at human speed. What changes the conversation is not simply “more AI,” but the way AI compresses time and reallocates power inside financial markets.


The question is no longer whether Europe can adopt new tools, but whether its supervisory and governance models can keep pace with systems that learn, adapt, and act at machine speed. A recent executive dialogue brought this into focus by placing Europe and Asia in the same room. 


The contrast was instructive. European supervision has a strong tradition of rule-based, file-driven certainty. Singapore’s MAS, by comparison, is often described as more adaptive and co-designed in how it works with industry. Neither approach is “right” in isolation. The risk is what happens when AI-driven finance becomes a cross-border system where different governance philosophies collide, while the underlying technology accelerates the interactions between institutions, markets, and consumers. 


Three themes stood out, each pointing to a different category of stability risk.


  1. AI Autonomy, Systemic Risk & the Speed Problem

We built most financial systems and controls around human decision-making: people review, approve, and intervene. AI changes the tempo. It can execute, reroute, and optimise in minutes, while our oversight still assumes hours, days, or weeks. The consequence is not only faster operations; it is faster feedback loops. A small shock can become correlated behaviour across institutions—instantaneous, self-reinforcing, and difficult to stop once it starts.


This is where traditional tools feel slightly out of position. Liquidity assumptions need to reflect digital deposit behaviour, not last decade’s patterns. Stress testing must model second-order effects and AI-driven cascades, not only single-institution balance-sheet stress. And supervisors need better observability of key parts of the system, supported by the right data and analytical capabilities, to avoid “unknown unknowns” forming in the gaps between models and reality.


  2. AI Fairness, Big Data & Invisible Infrastructures


The second theme is quieter, but it is the one that tends to surface later and hit harder. AI models can improve credit, underwriting, and servicing by using alternative data and automated decisioning. But models trained on historical patterns can reproduce bias. Optimisation can concentrate outcomes. And as these capabilities move from “innovation projects” into core operations, they become invisible infrastructure, harder to interrogate, harder to challenge, and easier to accept as neutral because the mechanism is complex.


A similar pattern appears in decentralised finance. Even where “decentralisation” is the brand, intermediation does not disappear; it changes shape. Control moves to the actors with superior data, faster execution, and better infrastructure. Over time, market power can consolidate around those advantages. For Europe, this is both a social and strategic issue: fairness is not a side constraint. If it is not designed into the architecture, it becomes a silent channel through which exclusion and concentration are scaled.


  3. The Governance Gap: Supervisory AI, Co-Design & Regulatory Drift


The third theme is governance, and it is where the debate becomes genuinely geopolitical. Regulators and regulated entities are increasingly using similar classes of AI tools. Institutions are deploying AI for monitoring, triage, and decision support. Supervisors are exploring AI to detect anomalies and focus attention. This creates a new symmetry, but also new fragility: if both sides rely on overlapping ecosystems of models and vendors, independence becomes harder to prove, and errors can propagate in comparable ways.


Inside institutions, autonomy is also changing “who decides.” AI systems chain together actions across treasury, operations, and portfolio management within defined limits. That produces efficiency, but it also creates the risk of systemic behaviour emerging from interactions rather than from explicit decisions. Traditional governance—committees, periodic reviews, static policies—was not built for that. The practical requirement becomes transparency that is meaningful: supervisors must understand how models behave in the real world, and institutions must be able to explain decisions and controls in ways that stand up under scrutiny. 


Towards a global compact that keeps the human in the system


Put together, these themes point to one conclusion: Europe is not facing a single spectacular risk event. It is facing a pattern of AI-driven vulnerabilities that accumulate, interact, and accelerate. That means the right response is not a reactive posture built on after-the-fact remediation. It is an approach that is more predictive, more collaborative, and more cross-border by design.


The most pragmatic step is to keep the conversation “human” in a very specific sense: continuous dialogue between supervisors, institutions, and the builders of these systems, across jurisdictions. Not to produce more statements, but to align on what matters: how we measure model risk in a machine-speed system, how we detect harmful feedback loops early, how we design fairness into the infrastructure, and how we preserve accountability when decisions are distributed across humans and software agents.


If 2026 is shaped by anything, it will be the speed at which we recognise, share, and coordinate against these vulnerabilities. Europe’s strength has always been the ability to turn complex realities into durable rules. The task now is to ensure those rules are not simply durable, but also adaptive enough to govern a system that no longer waits for us to catch up.

 
 