P&C Insurance's AI Problem Isn't What You Think

Insurers direct 72% of AI spending to technology and just 28% to change management, creating a critical architecture mismatch.

Budgets have grown, pilots have multiplied, and AI is now a fixture in virtually every P&C strategic plan. Yet 42% of insurers track no AI metrics at all, which means they have no way to validate what works, no playbook to scale it, and no mechanism to stop what doesn't. Insurers' investment pattern confirms that this is an organizational constraint rather than a technology one: on average, 72% of AI spending goes to technology and only 28% to change management.

Technology creates capability. But change management determines whether that capability becomes performance. That imbalance is the first signal of what Capgemini identifies in the 19th edition of its 2026 World Property and Casualty Insurance Report as an "architecture mismatch." This is a structural gap that runs deeper than the technology stack, and that no amount of additional AI investment will close on its own.

Three dimensions, one ceiling

The first dimension is a strategy and talent gap. Among the top 20 global P&C insurers, only 35% have explicitly linked their AI strategy to business outcomes beyond efficiency. That narrow framing has consequences: Strategy tends to direct investment toward quick wins rather than the capabilities AI needs to grow over time. In most cases, the result is an incomplete strategy that optimizes the present while leaving the future underbuilt.

The second dimension is technical constraints. Legacy architectures fragment data across functions, making it harder for AI to reason across underwriting judgments, claims assessments, and distribution decisions that depend on context-rich, unstructured information. The barrier is less about the AI itself and more about the environment it must operate in – one that was not designed with AI in mind and does not easily accommodate it.

The third – and arguably most decisive – dimension is organizational. Over half (55%) of insurers cite unclear ownership of AI initiatives as a key constraint. Without clear accountability, programs stay dependent on individual champions rather than building institutional capability. And despite all the work underway, 47% of employees report no meaningful change in their day-to-day work after 18 months of using AI. That points less to a deployment failure than to a design flaw.

The problem with fixing one thing at a time

These three dimensions are entangled, which is precisely what makes the conventional response insufficient. That response is to assess, prioritize, and sequence: fix strategy first, then technology, then organization. In practice, addressing one dimension while leaving the others untouched tends to limit progress, rather than unlock it.

Our research identifies the emergence of intelligence trailblazers – the top 10% of P&C insurers – who treat AI as a core operating capability rather than a program to be managed, aligning strategy, technology, and organizational adoption in tandem. Over three years, trailblazers have achieved 21% higher revenue growth and 51% greater share price increases compared with the rest of the industry.

Despite that growth, even this group has not fully solved the problem. AI still largely operates at the task level, workflows remain built for human execution, and the organizational model that closes those gaps – one where human expertise and synthetic execution are deliberately organized around where each creates the most value – is still being built. The opportunity to redesign is real. But it remains an opportunity, not yet an achievement, even for those furthest ahead.

The harder conversation

An uncomfortable question to raise is why this is so difficult to change, even for organizations that understand the problem.

The answer is that the architecture mismatch was not built through bad decisions. Legacy systems were the right investment at the time. Prioritizing technology over change management made sense when AI was unproven, and the organizational implications were unclear. It is not evidence of poor judgment, but the accumulated consequence of individually rational choices made in a different context.

Moving forward requires asking a more challenging question: Do the investments already made, and the ones being considered now, still pay back on the original terms? Most organizations haven't asked that question systematically, because who defines success, who is accountable for outcomes, and how progress is measured beyond deployment were all designed for a time when decisions were quintessentially human. And until that question gets asked, the architecture underneath the pilots stays unchanged – regardless of how many new tools are deployed on top of it.

Trailblazers are not ahead because they have solved the problem or because they run better pilots. They are ahead because they made a different decision earlier: to address the architecture underneath the pilots, not just the pilots in isolation. The next decision is harder: to redesign the organization itself. That decision has not yet been fully made by anyone. But the insurers who make it first will define what competitive advantage looks like in the intelligence era.
