
Walk Before You Run: Paul Donnelly’s Anti-Hype Strategy for AI in Insurance


By Paul Donnelly, Global Head of Insurance at Version 1.


An underwriter can spend a third of the day answering emails that never become policies. Not pricing risk or structuring terms or making the judgment calls carriers pay for, but responding to informal broker queries about what might happen if a case were submitted. For Paul Donnelly, that reality explains why so many AI strategies in insurance feel upside down.


Donnelly has spent more than 20 years in financial services and insurance technology, including 13 years at Munich Re, and he has watched several waves of “transformation” arrive with familiar promises: automation, intelligence, straight-through processing, only to leave behind pilots that never quite escape the sandbox. His view isn’t anti-AI. It’s about the sequence. Before you run with AI, he argues, you have to walk through the workflow that actually produces outcomes.


The problem, in his telling, is rarely the model. It’s the system. Insurers operate on core platforms that are mission-critical and decades old, which makes change inherently risky. As a result, innovation often happens around the edges: bolt-on fraud tools, analytics dashboards, and now AI copilots layered on top of processes that haven’t fundamentally changed. Meanwhile, the deepest cost drivers remain buried in underwriting and claims, where work is still shaped by handoffs, exceptions, rework, and administrative drag.


Accenture’s underwriting research, AI Underwriting: Beyond the Hype, echoes the point: even after years of digital investment, more than a third of an underwriter’s time can still be consumed by non-core work like data gathering and administration. If experts are still doing clerical tasks, Donnelly says, the first question isn’t which large language model to deploy; it’s why that clerical work exists at all.


One example is a practice the industry has largely normalized: “informals.” In intermediated life insurance, brokers frequently ask underwriters for guidance before formally submitting an application. They’re managing client relationships, and a decline after submission can be reputationally costly, so they seek an early read on likely outcomes. The result is a parallel channel of work: probabilistic advice delivered over phone and email that, in some environments, Donnelly estimates can consume up to a third of underwriting capacity. What makes it more striking is the contrast: a carrier may have a sophisticated automated underwriting engine for straightforward cases, yet still depend on a human backchannel to answer pre-submission questions.


Donnelly’s proposed fix isn’t to remove human judgment. It is to productize it. Instead of relying on informal conversations, he suggests giving brokers a structured “cone of probability”: a range of likely outcomes based on disclosed information, so uncertainty is explicit, quantified, and consistently communicated. The aim is to reserve underwriter time for genuinely ambiguous cases, rather than spending expert hours repeating the same preliminary guidance. In his framing, this is not a moonshot AI project. It is workflow design, and it is where many carriers would see near-term returns.
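To make the idea concrete, here is a minimal sketch of what a “cone of probability” response might look like as a structured object a broker portal could return instead of an informal email exchange. The band names, likelihoods, and the single branching rule are invented for illustration; a real engine would map disclosures through the carrier’s actual rulebook.

```python
from dataclasses import dataclass

@dataclass
class OutcomeBand:
    decision: str      # e.g. "standard", "rated +50%", "decline"
    likelihood: float  # share of similar disclosed cases landing here

def cone_of_probability(disclosures: dict) -> list[OutcomeBand]:
    """Return a ranked range of likely underwriting outcomes.

    Toy version: branches on one hypothetical flag to show the shape
    of the response, not real underwriting logic.
    """
    if disclosures.get("smoker"):
        bands = [OutcomeBand("rated +50%", 0.55),
                 OutcomeBand("standard", 0.30),
                 OutcomeBand("decline", 0.15)]
    else:
        bands = [OutcomeBand("standard", 0.80),
                 OutcomeBand("rated +25%", 0.15),
                 OutcomeBand("decline", 0.05)]
    return sorted(bands, key=lambda b: b.likelihood, reverse=True)

# The broker sees the full ranked range, not a single verbal guess.
print(cone_of_probability({"smoker": True}))
```

The point of the structure is that uncertainty is explicit and consistently communicated: every broker asking the same question gets the same ranked range, and underwriters only intervene when a case falls outside it.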


That is why his “walk” starts with deterministic automation. Rules-based decisioning, he argues, has not run its course; in many organizations, it was never fully embedded in the operating core. Robotic process automation has helped in places, but often as an external layer that drives legacy systems instead of reshaping the underlying process. The larger opportunity, Donnelly says, is to make underwriting and claims decisions explicit, auditable, and continuously improvable, so the organization learns, case by case, where automation holds and where it breaks.


From there, his test for AI readiness is less about fashionable prerequisites (having data, hiring an AI team) and more about whether the carrier has a working continuous-improvement loop. Do you measure straight-through processing rates? Do you track precisely where cases exit automation and why? Do you update the rulebook based on outcomes, and can you do it on a cadence that matches the business? If the rulebook is static, AI tends to amplify its flaws. If it’s living, AI can become an accelerator rather than a mask.
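The measurement side of that loop is simple to sketch. Assuming a hypothetical case log with a straight-through flag and the rule that forced manual review, the two questions above — what is the STP rate, and where do cases exit — reduce to a couple of lines. The case data and rule names here are invented for the example.

```python
from collections import Counter

# Hypothetical case log: did the case go straight through, and if not,
# which rule pushed it to a human?
cases = [
    {"id": 1, "stp": True,  "exit_rule": None},
    {"id": 2, "stp": False, "exit_rule": "bmi_out_of_range"},
    {"id": 3, "stp": False, "exit_rule": "occupation_unlisted"},
    {"id": 4, "stp": True,  "exit_rule": None},
    {"id": 5, "stp": False, "exit_rule": "bmi_out_of_range"},
]

def stp_rate(cases):
    """Share of cases that completed without manual intervention."""
    return sum(c["stp"] for c in cases) / len(cases)

def exit_reasons(cases):
    """Count where automation breaks, so rulebook owners know what to fix."""
    return Counter(c["exit_rule"] for c in cases if not c["stp"])

print(f"STP rate: {stp_rate(cases):.0%}")   # 40%
print(exit_reasons(cases).most_common(1))   # the rule to review first
```

Nothing here is sophisticated, which is the point: the loop fails not for lack of analytics but when nobody reviews the exit reasons and updates the rulebook on a business cadence.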


Donnelly is blunt that the people who improve the rules cannot be generic technology staff without domain authority; they need senior underwriters who own the decisions, supported by technologists and analytics teams. If the rulebook is hard to change, improvement dies by friction. If it’s easy to change but no one in the business truly owns it, it stagnates anyway. Transformation fails, in his view, less often because the model was weak and more often because nobody owned the system the model was meant to improve.


His caution sharpens in life and health insurance, where the data is deeply personal, retained for decades, and subject to intensifying scrutiny around fairness, explainability, and discrimination risk. In that environment, importing playbooks from other industries or jurisdictions without respecting the regulatory and ethical constraints can put carriers in serious trouble. “Walk before you run,” in that context, reads less like conservatism and more like duty of care: build systems that are explainable and controllable first, then layer machine learning on top of stability, not the other way around.


AI will keep advancing. Vendors will keep selling. Boards will keep asking what the strategy is. Donnelly’s bet is that the quiet winners may not be the carriers with the biggest models, but the ones whose underwriting and claims operations learn and improve every quarter, long after the press release has faded.
