
The Customer Revolution in Insurance

Insurers sit on data goldmines yet fail to leverage customer insights like tech giants, missing trillion-dollar opportunities.


Today's digital giants didn't just change the game; they rewrote the rules. They turned customer insight into capital, behavioral data into billion-dollar products, and user experience into enduring brand loyalty. They've built trillion-dollar empires by knowing their customers better than the customers know themselves.

It's mad to think that there are only a handful of these ecosystem drivers: the likes of Amazon, Alibaba, Apple, and Google. But that's not the craziest part. What's incredible to me is that these ecosystems don't exist in insurance. After all, what these established ecosystems do well is simply to maximize the value of a customer by maximizing their own value to that customer. This is achieved through continuous, data-driven innovation and activated through a well-orchestrated ecosystem of partners.

Now consider insurance: an industry that holds more data than most tech platforms could dream of. Not just consumer data but also operational, behavioral, environmental, and risk data. To top it all off, even more data is within arm's reach, available from connected cars, smart home devices, wearables, and IoT systems.

The insurance industry collects fresh, high-value insights from millions of interactions every single day, yet most of it sits idle, trapped in outdated systems, fragmented across silos, and rarely used to its full potential. This is a massive missed opportunity, and it's not a stretch to say that the sector really does have the opportunity to emulate e-commerce's proven, multitrillion-dollar, customer-centric business model.

However, the issue is far more than a technology change. This shift and the huge commercial upsides that accompany it require a business model and mindset change. Rather than seeing customers as policyholders, insurers must recognize them as the central product. By harnessing the extensive data sets at their disposal, the sector can create hyper-personalized experiences, optimize pricing strategies, and drive entirely new revenue streams.

This customer-centric shift isn't just about meeting consumer demand for digital services; it's about fundamentally reshaping profitability by applying a successful, established approach.

There are many ways that these business models drive growth through value creation. Building around the customer means integrating experiences, partners, products and services around people. However, insurers are not typically built this way. Most are built on legacy policy administration systems, with data models that sit on top, trying to abstract policy-focused data into customer cohorts, derive insight and then reapply it to experiences.

This is far too slow. Like a hot sales lead, customer data is a perishable asset. Its value fades fast if not acted on in the moment. Customers want buying insurance to be fast and frictionless, without being dragged through a long list of opaque, hard-to-answer questions. When we are in a claims process, we need insurers to see and respond to our data in real time. Even when we are being sold new coverage, we ideally need it a few clicks away, or, better still, embedded and in context. And when we move to experiences centered on helping us understand, navigate or even mitigate risks, we need that in real time, too. Tomorrow is nearly always too late.

But this isn't just about speed or seamless claims. It's about making sure the cover we receive fits our individual needs, now and as they evolve. It's about removing stress from the claims experience, not adding to it. And it's about transforming renewal from a transactional moment into a meaningful interaction.

That means having the data to offer genuine advice, based on how a customer's life has changed, or, better still, reaching out when a change is detected through partner data. That's what it means to value a customer: using insight to anticipate their needs, build trust, and position the insurer as a true partner, not just a silent presence that reappears at renewal time with a price hike.

In essence, insurers must massively increase their knowledge of their customers, not just acquire their data. Insurers must then act on this knowledge through embedded, adaptive, and risk-mitigating propositions that meet the demands of dramatically changed demographics, economics and lifestyles.

This requires a business model change, enabled by a new technology foundation and driven by an evolving culture. Core technology built for insurers - especially when built on MACH foundations and designed to function like a true ecosystem driver - can only realize its full potential if it's matched by changes in mindset, structure, and culture.

  • No more silos. Everyone in your organization needs to be customer only, not just customer first. Teams must look and act more like agile software development squads than artificial clusters of mixed functions. Actuaries, developers, product owners, experience designers, data engineers, etc., must all work together constantly with clear goals and outcomes.
  • Change must become a constant, and roles must move from operational management to customer experience improvements. Claims handling becomes claims optimization.
  • Experimentation needs to rise dramatically. When you learn fast, there is no such thing as failure. All data should be mined as a perishable asset and acted on, including data on how to improve people's understanding, use, and experience.
  • Technology must become an enabler of new, unimagined futures, not just an operating entity and IT constraint. Any line of insurance, and even complementary non-insurance products, needs to be managed and operated from one core platform. There can be no IT bottlenecks or downtime for any reason. And interoperating with partners isn't just about application programming interface (API) models; it's about how quickly those partnerships can be turned into experience outcomes.

All of this needs to happen in a business model where new insights generate value in minutes, not days.

There's a compelling commercial imperative behind all of this. Customers who can easily self-serve through digital interactions and access human support when it counts are more satisfied and more loyal. Ultimately, they form more trusted relationships and will buy and do more with their insurer.

When your customers buy multiple things from you, their value relative to risk starts to look far more interesting. We aren't just talking about "multi-car" type propositions, as useful as they can be; we are talking about insurance portfolios and relationship products.

Take life insurance, a market set to transform over the next 18 months to five years. It suffers from increasingly low relevance and low penetration rates. Lifestyles, demographics and life stages have changed dramatically. The propositions this market offers should change as well, adapting as people's financial and health profiles change. Current products, sold once and then engaged when someone dies, need to give way to more holistic protection and life models.

Perhaps underwriters and actuarial roles will finally be fused with customer experience and analytics functions, creating holistic models that, when combined, stretch far beyond "policy" thinking.

However, given the scale of these technological and business model shifts, insurers with their current legacy and modern-legacy footprints will struggle. As it stands, adaptation is too slow and expensive. New insurers are emerging, and market dynamics are forcing legacy insurers to change, from regulation requiring them to treat customers better all the way through to new digital, intelligently orchestrated experiences.

Insurance has a new battleground: deeper relationships and better-loved products and services, generating more value through more propositions. This has to replace price-led competition.

The value chain model is broken. Ecosystems aren't optional, and customers aren't things you bolt on to your technological core. They should sit at the center, and everything should interoperate around them.

The reality is that even if you want to operate as a "value chain" business, your best way of minimizing costs and maximizing distribution still lies in being able to value customers and service them in any channel, 24/7, in an increasingly intelligent and personalized manner.

This is the new commercial battleground for insurers. It seems most don't realize it yet. But emergent competitive forces are beginning to bite, and new entrants are acting on the industry. Shareholders will start to see this gap, along with capital investors, who are already diving in.

We are in exciting times for an industry long held at a tipping point.


Rory Yates

Rory Yates is the SVP of corporate strategy at EIS, a global core technology platform provider for the insurance sector.

He works with clients, partners and advisers to help them jump across the digital divide and build the new business models the future needs.

September 2025 ITL FOCUS: Resilience and Sustainability

ITL FOCUS is a monthly initiative featuring topics related to innovation in risk management and insurance.


FROM THE EDITOR

Doing a major remodel on a home for the first time, I was struck by the builder’s comments when he saw the architectural drawings—comments along the lines of, “Oh, why did he specify this material, or take this design approach? If he had just done X or Y, he’d have saved you a lot of money.”

At that point, we could have gone back to the architect, but that would have meant more fees and caused a long delay as we restarted the approval process with the city, so we went with the original plans.

With our second major remodel, we knew better but were still trapped by the sequential nature of the process: An architect does the design, and then you put the project out to bid with builders. We finally succeeded in introducing cost into the design process the third time around, but only because I had formed a partnership with a builder to buy and remodel a home on spec. The builder would earn a share of the profits, so he happily dove into the design discussions.

In this month’s interview, Francis Bouchard, managing director of climate at Marsh McLennan, says efforts to make property more resilient in the face of escalating dangers must move toward the collaborative approach that worked in my third remodel. And, happily, he sees real progress.

Historically, someone built a building, a house or a community, then insurers came in and priced the risk. Instead, Francis says, the issue of “insurability” should be baked in from the beginning of the development of a property.

“Focusing on insurability allows us to enlist other critical players in the housing space to adopt this same, shared accountability approach,” he says.

“When you aggregate this approach across every player in the value chain, you create transformative results. You get architects incorporating resilience, developers considering wildfire protection, fully certified contractors who understand requirements, and properly prepared supplies that don't cause delays.”

He offers a long list of ways that the “insurability” conversation is taking hold. I think you’ll find it encouraging, even as we all see the headlines about soaring damages from natural disasters—perhaps especially as we all see those headlines.

Francis pointed me toward Nancy Watkins, a principal and consulting actuary at Milliman, who is building a “data commons” on what works and what doesn’t work when it comes to reducing risk in the wildland-urban interface (WUI), where so much of the risk from wildfire sits. Mitigating the risk for existing homes obviously has to be a huge part of any resilience effort.

She and her colleagues have completed the first two phases of the project (the report on Phase 2 is here) and are embarking on Phase 3, which will see them shepherd major mitigation efforts in 30 to 50 communities in as many as seven states. (She says she’s “trick or treating” for sponsors, so contact her if you’re interested in getting involved.)

I’m sure there will be lots of disappointments. As she noted to me, it’s not enough just to have the data on what works; you have to get it out to people and get them to act on it, both as individuals and as a community. And getting good data is hard enough.

But I’m more encouraged than I was before talking with Francis and Nancy and think you will be, too, once you read this month’s interview and check out the recent ITL articles I’ve shared on resilience and sustainability.

Cheers,

Paul

An Interview with Francis

The New, Much-Needed Conversation on Resilience

Paul Carroll

It was almost exactly a year ago that I attended a gathering you helped put together in Atlanta for a group that helps universities and insurers collaborate on research concerning climate risk, so this feels like a great time to catch up. What would you say are the major advances in the past year in making the world more resilient, and in the insurance industry’s efforts on that front?

Francis Bouchard

Things are starting to coalesce. As someone who's been active in this space almost exclusively for four years, I'm starting to see some real positive signs. Some of that is from insurers themselves, who are leading efforts on risk reduction opportunities, whether through IBHS [the Insurance Institute for Building & Home Safety] or other standards.

I see more industry activity—concrete, real activity—than I've seen at any other time in the last four years. Kudos to those companies that are really starting to look at these challenges in new and different ways. I see more and more non-insurers looking at insurance as a viable part of the solution and wanting to create an environment where homes and communities are insurable.

read the full interview >

MORE ON RESILIENCE AND SUSTAINABILITY

Transforming CAT Modeling: The LLM Imperative

Large language models are transforming insurance risk management from reactive assessment to proactive, real-time catastrophe mitigation.
Read More

Managing Hyper-Volatility in the Modern Age

Climate change intensifies geopolitical risk. How can organizations protect themselves against extreme, rapid and unpredictable changes?
Read More

3 Key Steps for Climate Risks

83% of insurers view predictive analytics as "very critical" for the future of underwriting, but just 27% say they have the necessary capabilities. 
Read More

Lessons for Insurers From the LA Fires

California wildfire survivors battle insurers over systematic underinsurance while navigating complex recovery efforts.
Read More

Secondary Perils Are Now a Primary Threat

Outdated catastrophe classifications hinder insurers' ability to effectively manage escalating threats from all perils.

Read More

Role of ILS In Traditional Risk Transfer

The insurance-linked securities market reaches the $50 billion milestone as investors seek uncorrelated returns amid increasing catastrophic risks.

Read More

FEATURED THOUGHT LEADERS

Jaimin Das
Ester Calavia Garsaball
Lance Senoyuit
Biswa Misra
Jack Shaw
Rory Yates
Garret Gray
Amir Kabir


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Cut Costs & Strengthen Security by Tackling Technical Debt 

Unify risk systems to reduce costs, boost resilience, and improve oversight. 


eBook | Is Technical Debt Holding Back Your Risk Strategy? 

 Is your organization weighed down by fragmented risk systems and rising IT costs? Origami Risk’s latest guide reveals how integrated risk management (IRM) can help you overcome technical debt, reduce your total cost of risk, and improve operational efficiency. 

Discover how leading organizations are:   

  • Consolidating risk, compliance, and audit tools
  • Reducing vendor complexity and licensing costs
  • Enhancing visibility and response times across the enterprise 

  Download the eBook to start building a scalable, secure, and cost-effective risk management strategy. 

Download the eBook Now

Sponsored by: Origami Risk


Origami Risk

Origami Risk delivers single-platform SaaS solutions that help organizations best navigate the complexities of risk, insurance, compliance, and safety management.

Founded by industry veterans who recognized the need for risk management technology that was more configurable, intuitive, and scalable, Origami continues to add to its innovative product offerings for managing both insurable and uninsurable risk; facilitating compliance; improving safety; and helping insurers, MGAs, TPAs, and brokers provide enhanced services that drive results.

A singular focus on client success underlies Origami’s approach to developing, implementing, and supporting our award-winning software solutions.

For more information, visit origamirisk.com 

Additional Resources

ABM Industries

With over 100,000 employees serving approximately 20,000 clients across more than 15 industries, ABM Industries embarked on an ambitious, long-term transformation initiative, Vision 2020, to unify operations and drive consistent excellence across the organization.  

Read More

Webinar Recap: Leveraging Integrated Risk Management for Strategic Advantage

The roles of risk and safety managers have become increasingly pivotal to their enterprises' success. To address the multifaceted challenges posed by interconnected risks that span traditional departmental boundaries, many organizations are turning to Integrated Risk Management (IRM) as a holistic approach to managing risk, safety, and compliance. 

Read More

The MPL Insurance Talent Crisis: A Race Against Time

Managing Medical Professional Liability (MPL) policies has never been more complex — or more critical. With increasing regulatory demands, growing operational costs, and the ongoing talent drain, your team is expected to do more with less.  

Read More

MGA Market Dominance: How to Get & Stay Ahead in 2025

Discover key insights and actionable strategies to outpace competitors and achieve lasting success in the ever-changing MGA market. The insurance industry is transforming rapidly, and MGAs are at the forefront of this change. Adapting to evolving technologies, shifting customer needs, and complex regulatory demands is essential for staying competitive.

Read More

Automating the Garbage Can

Despite $30 billion to $40 billion in AI investment, 95% of organizations achieve zero return, MIT study finds.


MIT's NANDA Project—established to help drive AI integration in enterprise settings—recently released its mid-year report. The key finding is stark: Despite $30-$40 billion in enterprise investment into generative AI, 95% of organizations are getting zero return.

From the report: "The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time."

This finding departs sharply from the GenAI industry's long-held narrative that scale—more infrastructure, more training data—is the key to success. On that belief, Big Tech has funneled over $500 billion into new AI datacenters over the past two years, betting that technical expansion alone would lead to better outcomes.

Blaming the technology and the technology alone for the 95% failure rate would be a mistake. Organizational realities must also be considered.

The Garbage Can theory—a seminal framework introduced by Michael D. Cohen, James G. March, and Johan P. Olsen in the early '70s—sees organizational decision-making as a random, chaotic process where problems, solutions, and decision-makers mix like garbage in a can. Decisions are often made not through linear analysis, but when a pre-existing solution (a technology, a pet project) goes looking for a problem to solve, and they connect at the right moment.

In "organized anarchies"—such as the insurance enterprise—decisions surface more from political realities, business urgencies, happenstance, and fragmented routines than from structured analysis.

MIT NANDA's findings reveal that AI pilots frequently reflect this "garbage can" environment. Rather than deploying disciplined, contextualized programs, organizations launch generic AI tools with unclear goals, disconnected stakeholders, and insufficient governance. High failure rates stem from this context vacuum: Solutions chase problems but lack clarity on objectives or pathways for integration.

Where measurable success emerges, automation is tightly linked to specific workflow tasks—especially in finance, HR, and operations. In these areas, context and routine enable AI to deliver quantifiable savings and efficiencies, making back-office automation a financial standout.

In contrast, customer-facing applications often attract investment due to hype but rarely deliver robust returns. These projects suffer most from the garbage can effect: fragmented pilot teams, fluctuating requirements, and poorly defined goals.

The lesson is not that AI lacks potential but that organizational learning and context are prerequisites for meaningful automation. The prevailing narrative in AI casts it as a source of algorithmic precision, promising to banish organizational mess. But the garbage can will abide. The deeper challenge of AI adoption is organizational, not technological.

Deployed naively, AI becomes just another item in the garbage can—an expensive tool in search of an application, championed by some departments and ignored by others. The outcome: fragmented initiatives and wasted investment.

The best results always come when humans and AI collaborate, with humans providing context and ethical nuance, and AI bringing data scale and pattern recognition. Ultimately, the strategic imperative is not simply to "implement AI" but to orchestrate the confluence of problems, solutions, and decision-makers. Consider these three recommendations:

  • Ask: "What does it improve, and by how much?" Focus on business outcomes before technology. Pick a metric and desired result, first.
  • Frame problems, not just solutions. Rather than asking "What can AI do?" define critical business problems, then determine how human-AI collaboration can address them.
  • Create deliberate choice opportunities. Design forums—cross-functional teams, innovation labs, strategy sessions—where problems and solutions connect intentionally, reducing randomness and supporting strategic adoption.

Human catalysts—those with fusion skill sets—are the drivers. Investments in training and culture change should always exceed spending on the technology itself.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.   

Google's AI Nailed Its Hurricane Erin Forecast

Google's machine learning approach will likely keep improving hurricane forecasts, too.


For the longest time, the basic approach to developing an AI was for the humans to teach the machine everything they could, then have the software take it from there. That approach worked. It's how IBM's Deep Blue defeated world chess champion Garry Kasparov in a six-game match in 1997 and how Google DeepMind's AlphaGo defeated arguably the world's top Go player in a five-game match in 2016.

Then the scientists had a different idea: What if they let the AI learn entirely on its own, without regard for any human preconceptions, after just being given the rules of a game? That worked even better. By playing millions of games against itself, what DeepMind called AlphaGo Zero learned Go so well in three days that it defeated AlphaGo in 100 straight games.

DeepMind then took the next step and developed an AI that hadn't even been taught the rules of Go. It trounced AlphaGo Zero.

DeepMind is taking that sort of approach with hurricane forecasting. Rather than use the traditional approach — feeding massive amounts of data to supercomputers loaded with physics equations that spend hours and hours calculating forecasts for storms — DeepMind left out the physics equations piece, as well as all other guidance. Basically, DeepMind says: Here is all the data we have on hurricanes. You figure out what it means for future storms. 

The approach has shown promise with earlier storms, and DeepMind's AI just nailed the forecast for Hurricane Erin, outperforming both the official, supercomputer-based forecast and other commonly used models.  

Let's have a look at how far the AIs have come, so very fast, as well as where they can go from here. 

The promises of the deep learning approach first showed up on my radar not quite two years ago. In September 2023, I wrote a commentary lauding what advancements in supercomputing and satellite imagery were doing for forecasting. Just a month later, I found myself writing about AI models that, according to the Washington Post, had shown during that hurricane season that they "portend a potential sea change in how weather forecasts are made."

Now, Ars Technica reports that Google's AI outperformed the official forecast and many of the best physics-based models on both intensity and storm track, even after the other models were corrected for known biases.

The article notes that the outperformance occurred with predictions reaching out to as much as three days ahead, while the most important forecasts are those three to five days ahead, because that's when many key decisions about evacuations and other preparations are being made. 

"Nevertheless," Ars Technica says, "the key takeaway here is that AI weather modeling is continuing to make important strides. As forecasters look to make predictions about high-impact events like hurricanes, AI weather models are quickly becoming a very important tool in our arsenal.

"This doesn't mean Google's model will be the best for every storm. In fact, that is very unlikely. But we certainly will be giving it more weight in the future.

"Moreover, these are very new tools. Google's Weather Lab, along with a handful of other AI weather models, has already shown equivalent skill to the best physics-based models in a short time. If these models improve further, they may very well become the gold standard for certain types of weather prediction."

Let's hope that the AIs continue their remarkable progress and, if so, that the public comes to trust them. A lot of damage and injury could be avoided.

In the meantime, fingers crossed that this year's hurricane season stays relatively quiet. 

Cheers,

Paul

How AI and Data Analytics Are Reshaping Risk

From predictive underwriting to real-time claims processing, AI is transforming insurers from reactive loss payers to proactive risk partners.


In the ever-evolving landscape of the insurance industry, 2025 marks a transformative year where artificial intelligence (AI) and data analytics have emerged as indispensable tools in redefining how risk is understood, assessed, and managed. This shift is not just incremental—it's foundational, changing the DNA of insurance products, operations, and customer experiences.

From predictive underwriting to hyper-personalized policies, the integration of smart technologies is enabling insurers to become more agile, customer-centric, and resilient in a rapidly changing risk environment. Let's explore how AI and data analytics are reshaping the concept of risk in the modern insurance landscape.

The Age of Predictive Risk Management

Traditional insurance models largely relied on historical data and actuarial tables to price risk. But in 2025, these models are being outpaced by predictive analytics powered by real-time data and machine learning algorithms.

Using vast amounts of structured and unstructured data—from IoT devices, social media, telematics, wearables, and third-party sources—insurers are now predicting not just what might happen, but when and why. This allows for real-time, dynamic risk modeling that is far more nuanced and accurate than ever before.

For example, AI models can now detect subtle behavioral cues from driver telematics to assess real-time accident risk. Health insurers, too, are using biometric data and lifestyle tracking to anticipate chronic illnesses, enabling earlier interventions and better risk pricing.
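To make that concrete, here is a toy sketch of how such a telematics risk score might be computed. The feature names, weights, and threshold are invented for illustration; production models are far richer and trained on real claims data.

```python
import math

# Hypothetical illustration only: a toy telematics accident-risk score.
# Features and weights are invented, not any insurer's actual model.
def trip_risk_score(harsh_brakes_per_100km: float,
                    pct_time_speeding: float,
                    night_driving_share: float) -> float:
    """Return a 0-1 accident-risk score from simple trip features."""
    # Linear combination of behavioral cues, squashed by a sigmoid.
    z = (-3.0
         + 0.35 * harsh_brakes_per_100km
         + 0.04 * pct_time_speeding
         + 1.2 * night_driving_share)
    return 1.0 / (1.0 + math.exp(-z))

print(round(trip_risk_score(2.0, 15.0, 0.3), 3))  # calm commuter -> ~0.21
print(round(trip_risk_score(9.0, 40.0, 0.7), 3))  # riskier profile -> ~0.93
```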

Hyper-Personalization of Insurance Products

The "one-size-fits-all" approach is quickly becoming obsolete. Thanks to AI-driven customer segmentation and behavioral analysis, insurance in 2025 is increasingly tailored to individual lifestyles, preferences, and risk profiles.

Usage-based insurance (UBI) for auto, pay-as-you-go travel insurance, or real-time-adjusted health policies are just the tip of the iceberg. Smart homes equipped with IoT sensors offer property insurers insights into how risk fluctuates over time, enabling micro-adjustments to premiums or coverage on the fly.

This not only improves customer satisfaction by offering transparency and fairness but also ensures better alignment between risk exposure and insurance coverage, reducing adverse selection and fraud.
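As a minimal sketch of the UBI mechanics described above, consider a hypothetical pay-per-use pricing function. Every rate, multiplier, and discount here is an invented illustration, not any carrier's actual pricing.

```python
# Hypothetical usage-based premium adjustment: fixed base + mileage
# charge, scaled by a behavioral risk score and a sensor discount.
def monthly_premium(base_rate: float,
                    miles_driven: float,
                    per_mile_rate: float,
                    risk_score: float,
                    sensor_discount: float = 0.0) -> float:
    usage_charge = miles_driven * per_mile_rate
    risk_multiplier = 0.8 + 0.4 * risk_score   # maps 0-1 score to 0.8x-1.2x
    return (base_rate + usage_charge) * risk_multiplier * (1 - sensor_discount)

# A low-mileage, low-risk driver with a smart-home sensor discount:
print(round(monthly_premium(30.0, 400, 0.06, 0.2, sensor_discount=0.05), 2))
```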

Claims Processing Gets an AI Makeover

Claims management, historically a manual and paper-heavy process, is now being revolutionized by AI and automation. In 2025, the average claims cycle is significantly shorter thanks to robotic process automation (RPA), AI image recognition, and natural language processing (NLP).

Take, for instance, an auto accident claim. AI tools can analyze photos of vehicle damage, match them to repair estimates, and process payouts within minutes—all without human intervention. Virtual assistants, powered by NLP, handle routine customer queries, schedule inspections, and provide status updates.

Beyond speed, AI also helps reduce fraudulent claims by identifying anomalies or unusual patterns in real time, flagging suspicious activity for human review. This drives down loss ratios and builds more trust with policyholders.
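A crude version of that anomaly flagging can be sketched in a few lines. The z-score rule and threshold are assumptions for illustration; real fraud models combine far richer signals, and flagged claims go to human review rather than automatic denial.

```python
import statistics

# Toy anomaly flag: mark claims whose amount is a statistical outlier
# for the portfolio. The 2.0 threshold is an illustrative assumption.
def flag_outliers(claims: list[dict], z_threshold: float = 2.0) -> list[dict]:
    amounts = [c["amount"] for c in claims]
    mean, stdev = statistics.mean(amounts), statistics.stdev(amounts)
    return [c for c in claims
            if stdev > 0 and abs(c["amount"] - mean) / stdev > z_threshold]

history = [{"id": i, "amount": a} for i, a in
           enumerate([1200, 950, 1100, 1300, 1050, 980, 20000])]
for claim in flag_outliers(history):
    print("flag for human review:", claim)
```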

Dynamic Underwriting and Real-Time Pricing

The role of the underwriter has evolved from a periodic evaluator of risk to a continuous manager of it. Thanks to AI, underwriting is no longer a static function. Instead, it is a living process, informed by real-time data and adaptive learning systems.

Underwriters in 2025 are equipped with intelligent dashboards that integrate multi-source data feeds—climate models, market trends, cyber threat intel, etc.—to adjust risk scores dynamically. AI suggests optimal pricing strategies and recommends policy changes, minimizing exposure while maximizing profitability.

In commercial lines, particularly for complex risks like cyber insurance, AI is helping insurers offer real-time risk assessments and conditional coverage models that change based on threat landscapes or company behavior.

The Rise of Explainable AI in Insurance

As AI models become increasingly complex, the demand for transparency and regulatory compliance grows. Explainable AI (XAI) is a key focus in 2025, helping insurers understand and justify decisions made by algorithms.

Whether it's denying a claim, adjusting a premium, or flagging a high-risk policyholder, insurers must now provide clear, human-readable explanations. This is crucial for customer trust, regulatory compliance (especially under data protection laws like GDPR or India's DPDP Act), and internal governance.

XAI frameworks are embedded in most insurance platforms, ensuring every decision is auditable and fair—an essential step toward ethical AI deployment in risk management.

Mitigating Emerging Risks With AI

The 2025 risk environment is marked by volatility—from climate change and geopolitical instability to cybercrime and supply chain disruptions. Insurers are turning to AI not only to assess but also to mitigate these emerging risks.

For example, AI-powered climate models help property insurers predict flood zones and wildfire risks with unprecedented precision, allowing for risk avoidance strategies. In cyber insurance, machine learning monitors clients' digital infrastructure for vulnerabilities and offers real-time recommendations to harden systems.

Thus, insurers are no longer passive responders to risk—they are becoming active partners in risk prevention and resilience.

Ethical and Workforce Implications

As smart technologies take over routine tasks, the role of human workers is evolving. The insurance workforce in 2025 is increasingly focused on strategic, ethical, and creative responsibilities—interpreting AI insights, ensuring fairness, and maintaining the human touch in digital experiences.

However, there are also challenges. Data privacy, algorithmic bias, and the digital divide raise ethical concerns. Insurers must invest in responsible AI governance and continuous upskilling of their workforce to balance innovation with integrity.

Final Thoughts

Smart insurance in 2025 is not just a digital facelift—it's a fundamental rethinking of how risk is perceived, priced, and managed. AI and data analytics are enabling insurers to shift from reactive loss payers to proactive risk partners.

The winners in this new era will be those who combine technological prowess with ethical foresight and human empathy. In doing so, they won't just reshape risk—they'll reshape trust in the insurance industry for generations to come.

The New, Much-Needed Conversation on Resilience

As natural catastrophes intensify, Marsh's Francis Bouchard says the focus should shift away from how to price risk and toward "insurability." 


Paul Carroll

It was almost exactly a year ago that I attended a gathering you helped put together in Atlanta for a group that helps universities and insurers collaborate on research concerning climate risk, so this feels like a great time to catch up. What would you say are the major advances in the past year in making the world more resilient, and in the insurance industry’s efforts on that front?

Francis Bouchard

Things are starting to coalesce. As someone who's been active in this space almost exclusively for four years, I'm starting to see some real positive signs. Some of that is from insurers themselves, who are leading efforts on risk reduction opportunities, whether through IBHS [the Insurance Institute for Building & Home Safety] or other standards.

I see more industry activity—concrete, real activity—than I've seen at any other time in the last four years. Kudos to those companies that are really starting to look at these challenges in new and different ways. I see more and more non-insurers looking at insurance as a viable part of the solution and wanting to create an environment where homes and communities are insurable.

There are discussions happening with builders that weren't happening a year or two ago. There are discussions happening with architects that weren't happening a year or two ago. This system-level awareness that's growing is really encouraging because this is not an insurance problem—it's a risk problem and an insurability problem.

Many sectors are accountable for reducing risk before a home presents itself to an insurance company to be insured and priced. The fact that meaningful discussions are happening about what other players in the value chain could do to reduce the risk of these homes is wildly encouraging. Some of that's happening in the context of the California rebuild, while some is happening with organizations trying to coalesce stakeholders to pursue a national or larger-scale solution.

I'm encouraged because people are talking, more people are acting, and people are starting to see the connection points more clearly than perhaps they had before.

Paul Carroll

What other programs, similar to IBHS’s FORTIFIED, are making strides in promoting resilient construction?

Francis Bouchard

I'd point to the LA Delta Fund, dedicated to the 12,000 homes burned in the Eaton fires. It focuses on closing the gap between what insurance proceeds will pay for and what it takes to achieve a truly resilient construction level. We often debate who should bear this cost—consumers or insurers. This organization has found a way to attract both return-bearing capital and philanthropic capital to create a blended capital fund that pays the difference—the delta—between insurance proceeds and the cost of resilient construction. They are close to launching the fund and beginning to facilitate a much higher level of resilient reconstruction in LA following the fires.

This initiative is, in many ways, epic. It's never been done before, certainly not at this scale. The fact that they can raise money from markets indicates that the interest in ensuring resilient rebuilding extends well beyond the insurance sector.

Paul Carroll

Any other examples leap to mind?

Francis Bouchard

There's the Triple-I project with PwC in Dallas that is aligning stakeholders to facilitate the rebuilding or retrofitting of homes to the IBHS standards. This is another concrete example of insurers coalescing to change the risk profile of a community.

Then you have individual firms pushing the envelope. Milliman is doing an immense amount of work, with Nancy Watkins focusing on the WUI [wildland-urban interface], where the interaction between communities and wildfire is the most extreme.

Mercury Insurance is engaging with communities about what it takes to convince them to take steps that would make them insurable. We're starting to see a shift from thought leadership to community engagement.

Paul Carroll

What industry-academia research projects have generated the most interest, and where do they stand?

Francis Bouchard

Nothing has been launched yet, as we are still waiting on a funding announcement from the NSF [National Science Foundation] and corresponding funding from industry partners. We’re cautiously optimistic about the NSF and think industry funding will follow. 

The project that generated the most interest last September was a platform to facilitate dialogue between the atmospheric science community and the insurance underwriting community and help both sides better understand the value and use of available data sources. Considering the recent changes and, in some cases, wholesale dismantling of government departments or capabilities, this issue has become even more pressing and will likely appeal to numerous companies.

Dialogue is already occurring in multiple forums. We're hoping to coalesce these discussions and create a trusted pipeline of information flowing between federal data sources and the insurance sector.

Another well-received proposal focused on improving decision-making by narrowing uncertainties and addressing them differently. This proposal will likely garner attention from the insurance industry as companies seek to systematically understand and address uncertainties from weather, policy, and FEMA perspectives. The uncertainties simply accumulate.

The community-based catastrophe insurance project is another initiative we'll likely pursue. This topic is particularly ripe given the need for more innovative risk-bearing solutions.

Paul Carroll

What about developments at major insurance industry players?

Francis Bouchard

We [Marsh McLennan] recently announced our participation in a carbon trading mechanism to derisk the issuance of carbon credits. You're seeing more insurers and brokers focusing on this as a way to facilitate the projects that generate the credits.

There's also a more macro-level shift emerging—a growing awareness around shared accountability for the insurability of homes. The debate today typically centers on the technical nature of pricing risk. What we're trying to do is use this notion of insurability to reframe the conversation.

The right question isn't about pricing; it's about understanding the thousand decisions made that led to a home having its particular risk profile. We in the insurance industry are not the end-all, be-all. We are simply reflecting the thousand decisions made prior to receiving the submission.

Focusing on insurability allows us to enlist other critical players in the housing space to adopt this same, shared accountability approach. Non-insurance professionals often expect mind-numbing analytics and modeling. When you simply ask, "What can you do to reduce the risk that a house faces when it's finally built?", people respond with, "Oh, that's it? That's doable." And it should be doable.

When you aggregate this approach across every player in the value chain, you create transformative results. You get architects incorporating resilience, developers considering wildfire protection, fully certified contractors who understand requirements, and properly prepared supplies that don't cause delays.

When all these stakeholders understand their role in reducing risk, it makes our role significantly easier.

Paul Carroll

Thanks, Francis.

About Francis Bouchard


Francis Bouchard is an accomplished global public affairs professional who has served as an advisor, catalyst and contributor to a series of climate resilience and insurance initiatives. He is currently the managing director for climate at Marsh McLennan, and earlier served as the group head of Public Affairs & Sustainability for Zurich Insurance Group, where he focused on aligning the group’s government affairs, sustainability and foundation activities. He originally joined the insurance sector in 1989 and since has held a series of industry-focused advocacy, communications, sales, citizenship and public affairs roles, both in the U.S. and in Switzerland.

Francis also chairs the board of directors of SBP, a national non-profit focused on disaster resilience and recovery, serves on the board of the climate-focused insurtech incubator InnSure, and is a member of the advisory council of Syracuse University’s Dynamic Sustainability Lab.


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Can Insurtech Fix Homeowners Insurance?

Even a 20% gain in operational efficiency doesn't move the needle enough. The real opportunity is in "connect and protect."


A few weeks ago, this LinkedIn post about homeowner insurance, along with its comments, triggered my nerdy urge to crunch numbers. Below, you'll find the results of my deep dive.

Now, let's talk about the economics of homeowner insurance in the U.S. market.

The source of the figure used in that post is “Analyses of U.S. Homeowners Insurance Markets, 2018-2022: Climate-Related Risks and Other Factors,” published in January 2025. The comments range from the incumbents being inefficient and ripe for disruption, to homeowner insurance having complexities in underwriting and claims that structurally limit the efficiency that can be achieved, to it being all about acquisition costs.

Let’s start with a look at the profitability of this line of business over the past decade, based on the NAIC data. At the industry level, the home insurance business has barely made any money. Underwriting profits have averaged -1.6%, meaning that claims and expenses have been higher than the premiums collected. Thanks to investment gains, the sector has, on average, made 0.7% in profits.

Source: NAIC data

The sector has suffered in the past few years, and it returned to technical equilibrium, with a 99.7% combined operating ratio (COR), only in 2024.

Source: Elaboration on IEE NAIC data

Obviously, these figures, aggregated at the market level, are the result of heterogeneous performances among the carriers in the market. As shown in the figure below, representing market share and loss ratio for the top 25 homeowners insurance writers, there are carriers achieving a significantly better loss ratio (and therefore profitability) than the market.

Source: NAIC data

Now, let’s look at the composition of the almost 40% of the premiums not paid as claims. The costs to operate the business (loss adjustment expenses and the general expenses) account for about 15 points on the combined ratio.

Source: NAIC data

Comparing the home insurance business line with the other top 10 business lines (representing more than 90% of the total P&C premiums), we can see that its cost profile is similar to personal auto and significantly less costly than the commercial lines.

Source: NAIC data

At least relative to other insurance business lines, homeowners insurance doesn't appear inefficient. However, let’s take a closer look at the details of the costs for some of the top carriers. Here are the 2023 expenses, as reported in the insurance expense exhibit, of two of the top 10 U.S. homeowners writers (both have been more profitable than the market average in 2023 and in most recent years).

Source: Elaborations on 2023 statutory statements

Excluding commissions and taxes, the operating costs of the homeowner business for these two carriers are about 17-18 points on the combined ratio.

Nowadays, at any insurtech conference, you hear announcements of AI agents able to increase the efficiency of insurance processes and run the business with fewer people. Assuming a cost allocation split on the homeowner business similar to the one these companies have at the group level, only about half of these 17-18 points are linked to personnel.

Even a 20% efficiency gain, net of AI costs, would not produce a two-point reduction in the combined ratio.

Source: Elaborations on NAIC data
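For the arithmetic-minded, here is that back-of-the-envelope calculation spelled out as a minimal sketch. The expense points and personnel share are the approximations cited above; the 20% efficiency gain is the hypothetical being tested.

```python
# Back-of-the-envelope: how much a 20% personnel-efficiency gain moves
# the combined ratio. Inputs are the article's approximations.
expense_points  = 17.5   # operating costs, in combined-ratio points
personnel_share = 0.5    # roughly half of those costs are people
efficiency_gain = 0.20   # assumed efficiency improvement, net of AI costs

saved_points = expense_points * personnel_share * efficiency_gain
print(f"Combined-ratio improvement: {saved_points:.2f} points")
# -> about 1.75 points: real money, but short of a two-point reduction,
#    let alone a disruption-scale gain.
```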

The claims side of the ledger is the biggest opportunity. Preventing claims from happening, and mitigating situations that have already occurred, is the big insurtech opportunity in homeowners insurance. Connect and protect is the name of this opportunity.

Connect & Protect is the biggest insurtech opportunity in homeowners insurance

At the recent Insurance Innovator USA in Nashville, a leading insurer showed the split of the claims in its portfolio by peril.


Applying this split to the average loss ratio of the market for 2018-2024 yields 15 points on the combined ratio that can be addressed with fire protection solutions, and 21 points that can be addressed with water escape prevention solutions.
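To make the arithmetic explicit, here is a minimal sketch of that calculation. The loss ratio and peril shares below are illustrative values chosen to reproduce the 15- and 21-point figures; the exact inputs come from the charts above, which aren't reproduced here.

```python
# Addressable combined-ratio points = average loss ratio x peril share.
# Illustrative inputs chosen to match the figures quoted in the text.
avg_loss_ratio = 60.0                 # claims, in combined-ratio points
peril_share = {"fire": 0.25, "water escape": 0.35}

for peril, share in peril_share.items():
    print(f"{peril}: {avg_loss_ratio * share:.0f} addressable points")
# fire: 15 addressable points
# water escape: 21 addressable points
```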

Whisker Labs and Ondo, both members of the IoT Insurance Observatory, are at the forefront of this connect-and-protect transformation of the homeowners insurance industry.

Whisker Labs has already partnered with more than 30 U.S. homeowners insurance carriers to offer its Ting device – a solution for preventing electrical fires – to their policyholders, and is already protecting a portfolio of more than one million connected homes.

Ondo is instead focused on preventing and mitigating water damage. My exchange (below) with their CEO, Craig Foster, over a recent weekend illustrates the results achieved so far and his vision for how insurtech can fix homeowners insurance.

Matteo: Ondo's mission is to reduce the expected losses on a homeowners insurance portfolio. Could you share the risk prevention impact you have been able to bring to your partners and the ROI of this Insurtech approach?

Craig: Absolutely. At Ondo, we quantify ROI in two core ways:

  1. Documented Claims Savings — When LeakBot plumbers visit a home based on a leak alert, 15% of the time they find and fix a leak that is actively damaging or potentially damaging to the home. In each of those instances, our plumbers create a Claims Mitigation Report, which includes pictures and moisture scans, giving insurer partners a complete picture of the leak scenario. This gives our insurer partners the ability to quickly estimate the loss avoided, providing an immediate and tangible read on savings potential and ROI. For every dollar an insurer partner spends on LeakBot, they see two to four dollars in loss savings.
  2. Control Group Analysis — As programs scale, partners compare performance against a statistically matched control group. This allows for a rigorous actuarial view of the impact on claim frequency and severity.

This dual approach is well-established across our partner base and has been refined over years of implementation.

While most partners keep exact ROI data confidential for competitive reasons, Swedish insurer Länsförsäkringar publicly reported a 45% reduction in escape-of-water claim costs. In aggregated analyses across multiple markets and cohorts, we've seen savings of up to 70%, driven by both fewer claims and materially lower severity. LeakBot tends to intercept the largest claims before they spiral - such as catching leaks spraying into crawl spaces before structural damage occurs. One US partner recently resolved such a case for under $3,000 - where the same loss could easily have exceeded $30,000 had it gone undetected.

Our ROI performance has proven consistent across geographies. Older partners like Hiscox (UK) and Topdanmark (DK) continue to scale based on long-term savings. In the US, we’ve signed nine insurer partners recently. Those who launched early - such as Nationwide, PURE, Mutual of Enumclaw and Selective - have already expanded their programs from a single pilot state into a total of 23 US states after seeing strong early impact. That ROI from the analysis of Claims Mitigation Reports is what continues to drive adoption and long-term renewal.

Matteo: I recall the event in London about nine years ago, when LeakBot was presented to the insurance community. Connect and Protect has undertaken a long journey in our sector since then, and we are finally starting to see significant momentum. What are the three most relevant changes you have seen collaborating with large insurance incumbents?

Craig: We’ve definitely seen a major shift in how insurers view connected home technology. Here are three of the biggest changes:

  1. From “Will This Ever Scale?” to “What’s the Right Tool for the Job?” Nine years ago, many insurers were still debating whether IoT could ever move beyond pilot stage. Today, the question is no longer if, but how best to deploy it. We’re seeing real strategic commitment - especially as solutions like Ting (fire risk) move toward 2 million homes in the US. In water, we’ve emerged as the go-to prevention partner, and our discussions now focus on the right methods to get as many homes protected as possible.
  2. From Claims Savings to Tangible Customer Experience Initially, partnerships focused purely on the actuarial ROI. That remains key - but insurers now also value the customer experience impact. LeakBot turns an intangible product into a proactive service, with an NPS consistently above 80. Policyholders love it - and insurers see improved retention and brand loyalty. For many partners, that CX story becomes as important as the claims data.
  3. From Plug-and-Play to Deep Integration In the early days, insurers opted for zero-integration turnkey rollouts. Today, the most forward-thinking carriers are building full-stack platforms that integrate with our APIs from day one. A standout is Nationwide in the US, which built a proprietary smart home backend that allows seamless integration with solutions like ours. This level of IT and data maturity unlocks greater scalability, efficiency, and personalization.

Matteo: What is your vision for the future of insuring homes? How will it look in 2035?

Craig: By 2035, home insurance will evolve into a cognitive home protection service - not just a policy, but an intelligent system actively working to prevent losses in real time. Powered by ambient computing, ubiquitous connectivity, and edge AI, the home will become both self-monitoring and insurer-integrated.

  1. Insurance Will Live Inside the Cognitive Home The connected home will become the cognitive home - a space where devices like LeakBot quietly monitor risks, interpret signals, and take action without the homeowner needing to intervene. This is AI-powered ambient computing in practice: invisible, automatic protection woven into the fabric of daily life.
  2. Insurers Will Become Real-Time Decision Engines With ubiquitous connectivity and richer data from IoT devices, insurers will pair these insights with AI to make smarter decisions - on renewals, claims, pricing, and service triggers. The most advanced insurers will effectively become cognitive risk mitigation machines - constantly adapting and optimizing in real time to help their customers avoid loss, not just recover from it.
  3. Claims Will Be Pre-empted, Not Just Paid Edge AI enables instant decisions directly on the device, cutting latency and enabling proactive service. A leak doesn’t become a claim - it becomes a service call. Risk is neutralized early, affordably, and invisibly. As these systems mature, we’ll see a steep drop in claim severity - and a new standard for what home protection means.

At Ondo, our vision is to be the leading global provider of claims prevention technology for home insurance. We expect LeakBot to become the default standard for mitigating water damage claims — first in the United States, then globally. As the insurance industry shifts from reactive to cognitive, we’re building the core infrastructure to power that future.

Cities Are Getting Smart

Chicago is installing sensors that can warn drivers of flash flooding, and GM has developed technology that uses its cars to track road conditions. 


We talk a lot about smartphones and even smart homes, but let's not sleep on smart cities. They are getting more wired all the time, in ways that don't just provide convenience — such as alerting drivers to open parking spots — but that make people safer and reduce insurance claims.

The two latest examples that caught my eye are the deployment of sensors on Chicago streets that can detect flooding and a General Motors patent application for a way to use its cars to sense road conditions. The Chicago sensors will relay warnings to property owners and to authorities. The GM data will flow to drivers to let them avoid trouble and to local governments that can fix the roads.

But there's a lot more besides, and progress will likely accelerate from here.

The Chicago and GM stories demonstrate both the promise of innovations in cities and what pieces still need to fall into place so those innovations can be deployed widely and deliver major benefits.

In Chicago, 50 sensors will be deployed over the next 18 months on bridges, above roads and in sewers. Wireless and powered by solar, the cylindrical sensors will use sonar to measure the depth of water beneath them. If levels are rising at a threatening pace, the sensors will create instant flood maps for city authorities, who will alert property owners in affected areas. 
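As a rough illustration of the alerting logic such a network implies, here is a minimal sketch. The threshold, reading interval, and sample readings are invented for the example, not Chicago's actual parameters.

```python
# Toy flood-sensor logic: sonar depth readings arrive periodically, and
# an alert fires when water rises faster than a threshold rate.
RISE_ALERT_CM_PER_MIN = 2.0   # illustrative threshold

def check_readings(depths_cm: list[float], interval_min: float) -> bool:
    """Return True if the rate of rise ever exceeds the alert threshold."""
    for prev, curr in zip(depths_cm, depths_cm[1:]):
        if (curr - prev) / interval_min > RISE_ALERT_CM_PER_MIN:
            return True   # in Chicago's design: feed the city's flood map
    return False

# Readings every 5 minutes under a viaduct during a downpour:
print(check_readings([3.0, 4.0, 9.0, 22.0], interval_min=5.0))  # True
```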

Chicago is a big place, so a lot more than 50 sensors will eventually need to be deployed. Bugs will also surely need to be worked out of the technology. Costs will need to come down — each sensor currently costs about $1,500.

But all those issues feel manageable, based on the cost and performance curves that are normal for this type of technology. Sensors for water leaks in homes, for instance, began as elaborate devices, shrank to about the size of hockey pucks, and now, according to an announcement from Hartford Steam Boiler, can be as thin as four credit cards stacked on top of each other.   

As it happens, just as I was about to publish this commentary, the New York Times ran a piece this morning about a system that is similar to Chicago's, is further along and underscores the need for the final, key piece: getting the word out, and rapidly.

Spurred on by damage from Hurricane Ida four years ago, New York City has installed 250 sonar-based sensors that cost just $300 apiece, and it plans to double the number of sensors by 2027. The Times reports a big improvement in understanding flash flooding in real time — previously, authorities only learned of problems from emergency calls, social media posts and news reports.

But the notification system is passive. You have to monitor a city website to see where flooding may be happening. If you're alert, likely because you've suffered damage in the past, you're fine. If not, you're as vulnerable as ever. 

The leaders of FloodNet, the sensor network New York uses, say they're piloting a system that can alert people via email, which will mark a huge improvement. An even bigger one will be when FloodNet and similar sensor operators can ping the cellphones both of those who've signed up for flooding alerts and of any others in an area, such as drivers, who might be affected.

The GM system has much further to go than the flooding sensors, but it could also deliver major safety benefits if GM can, in fact, infer road conditions from sensors that track how much traction a car's tires get and how its suspension moves. I grew up driving in Pittsburgh, where potholes seemed to show up everywhere during the spring thaw, and I would have loved advance warning so I could skirt a big one just ahead. A whole lot of drivers suffered damage to their cars and put in insurance claims that could have been avoided if a system like GM's had been in place.

GM will face the same notification issue that the flood sensors do. It's one thing to have one car detect an anomaly. It's quite another to collect data from enough cars to be sure there's a problem with a road surface, and even more difficult to communicate that finding to a driver in real time. 
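
One way to picture the corroboration problem: trust no single car, but flag a road segment once enough distinct vehicles report the same anomaly within a short window. Here is a minimal sketch; the vehicle count and time window are my assumptions, not GM's design.

```python
# Illustrative corroboration logic for crowd-sensed road hazards.
# The minimum vehicle count and window are assumptions, not GM's patent.

from collections import defaultdict

def confirmed_hazards(reports: list[tuple[str, str, float]],
                      min_vehicles: int = 5,
                      window_s: float = 600.0) -> set[str]:
    """reports: (segment_id, vehicle_id, timestamp) tuples.

    A segment is confirmed only when min_vehicles distinct cars report
    it within window_s seconds - one pothole hit could be a sensor
    glitch; five hits in ten minutes probably isn't.
    """
    by_segment = defaultdict(list)
    for segment, vehicle, ts in reports:
        by_segment[segment].append((ts, vehicle))

    confirmed = set()
    for segment, hits in by_segment.items():
        hits.sort()
        for t0, _ in hits:
            vehicles = {v for t, v in hits if t0 <= t <= t0 + window_s}
            if len(vehicles) >= min_vehicles:
                confirmed.add(segment)
                break
    return confirmed
```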

I'm optimistic that the connection challenges can be addressed relatively quickly because of Sidewalk, a system Amazon has introduced as an inexpensive communication backbone for sensors.

One of the challenges for "connected cars" has been that they typically communicate with their hosts via cellular networks, requiring lots of relatively expensive bandwidth. Sidewalk, by contrast, is low-bandwidth and low-cost. 

It operates via a mesh concept: A sensor doesn't need to send a signal so strong that it will reach a cell tower miles away. The signal just needs to be strong enough to reach any other device that is in the mesh network and within half a mile. That next device can then forward the data to any other device and so on until it reaches a major node that can send the data to its final destination. 

A mesh network can get overloaded if lots of data needs to be sent, but something such as a water-depth sensor is just sending a single number, perhaps every few seconds. The system Hartford Steam Boiler announced is, for instance, transmitting its data via Sidewalk. 
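
To illustrate the mesh-relay idea (and only the idea; this is not Sidewalk's actual protocol), here is a toy sketch that finds a hop-by-hop route: each device only needs to reach a neighbor within radio range, and the reading is relayed until it hits a gateway node. All device names and positions are invented.

```python
# Toy mesh-relay routing: breadth-first search over devices in radio range.
# Purely illustrative of the mesh concept, not Amazon Sidewalk's protocol.

from collections import deque
from math import dist

def mesh_route(devices: dict[str, tuple[float, float]],
               source: str,
               gateways: set[str],
               range_mi: float = 0.5) -> list[str] | None:
    """Return a chain of device IDs from source to any gateway, or None."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        here = path[-1]
        if here in gateways:
            return path
        for name, pos in devices.items():
            if name not in visited and dist(devices[here], pos) <= range_mi:
                visited.add(name)
                queue.append(path + [name])
    return None

# A sensor 1.2 miles from the gateway still gets through via two relays.
devices = {
    "depth_sensor": (0.0, 0.0),
    "doorbell":     (0.4, 0.0),
    "speaker":      (0.8, 0.0),
    "gateway":      (1.2, 0.0),
}
print(mesh_route(devices, "depth_sensor", {"gateway"}))
# ['depth_sensor', 'doorbell', 'speaker', 'gateway']
```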

As Sidewalk and potentially similar communication backbones are developed, sensor networks will no longer need to worry about how to transmit their data. They just have to get it to a Sidewalk (or similar) node. Similar standards will develop on the back end, handling notifications to those who want them.

So we can let our imaginations run wild. What else, beyond warnings about flooding and bad roads, should be sensed in a city and relayed to interested parties in real time?

If you step back a bit, you can see that smart cities have made real progress in recent years and decades. Some of that progress is mostly convenience. Traffic lights are synchronized so you generally don't have to stop on a main road if you're driving at the speed limit. Signs or phone alerts tell you just when that bus or subway will arrive. Your phone lets you know of traffic jams and can reroute you. Sensors in the pavement and in streetlights can monitor parking spaces and let you know when one is empty. 

Some of that convenience leads to safety. Knowing that there is an accident ahead of you makes you less likely to plow into something. Getting people into parking spaces faster reduces traffic in cities — a remarkable amount of which is people looking for spots — and decreases the number of accidents. 

And with the flood sensors and, perhaps, GM's sensing of road conditions, we're seeing even more opportunities for safety. 

What's next?

Let's have at it.

Cheers,

Paul

 

AI Everywhere, But Nowhere in Your Captive?

As AI liability lawsuits multiply and regulations evolve, captives offer businesses flexible coverage for emerging risks.

A woman looking afar with binary projected on her face

Artificial intelligence has moved past the proof-of-concept phase. Businesses are integrating AI into operations at a record pace, from customer service and logistics to medical diagnostics and HR decision-making. But as the benefits of AI grow, so do the risks, and most companies have not adequately addressed who will bear the legal and financial consequences when things go wrong.

The problem isn't the potential for harm alone. It's that the liability landscape for AI is undefined, shifting and increasingly litigious. When an algorithm produces biased results or a chatbot dispenses incorrect medical advice, it's not always clear who should be held responsible: the business deploying the tool or the developer behind the code. For companies that own or rely heavily on AI, especially those with captive insurance companies, now is the time to scrutinize these risks and evaluate how captives can help fill a widening gap in risk management.

AI failures already have consequences — and lawsuits

The assumption that AI risks are futuristic or theoretical no longer holds. In 2024, a federal judge allowed a class action to proceed against Workday, a major provider of AI-driven hiring software, after a job applicant claimed the platform rejected him based on age, race, and disability. The suit, backed by the EEOC, raises thorny legal questions: Workday argues it merely provides tools that employers configure and control, while plaintiffs claim the algorithm itself is biased and unlawful. 

The case highlights the growing legal gray zone around AI accountability, where it's increasingly difficult to determine whether the fault lies with the vendor, the user, or the machine. In another case, an Australian mayor threatened to sue OpenAI after ChatGPT incorrectly named him as a convicted criminal in a fabricated bribery case. The mayor wasn't a public figure in the U.S., and the false output had real reputational consequences.

These incidents are no longer rare. In 2023, the New York Times sued OpenAI and Microsoft for copyright infringement, claiming their models used protected journalism content without permission or compensation. The lawsuit reflects a growing concern in creative and publishing industries: generative AI systems are often trained on datasets that contain copyrighted material. When those systems are then commercialized by third parties or used to generate derivative content, the resulting liability may extend to businesses that integrate those tools.

More recently, the Equal Employment Opportunity Commission issued guidance targeting the use of AI in hiring decisions, citing a spike in complaints tied to algorithmic bias. The guidance emphasized that employers, not vendors, would typically bear responsibility under civil rights laws, even when the discriminatory impact stems from third-party software.

These examples reveal a pattern. AI is being used to make decisions that carry legal weight, and the consequences of failure (reputational, financial and regulatory) often fall on the business deploying the system, not just the one that created it.

A legal and regulatory framework is forming

The global regulatory environment is evolving quickly. In March 2024, the European Union formally adopted the EU AI Act, the first comprehensive legal framework for artificial intelligence. The law classifies AI systems into four risk categories (unacceptable, high, limited and minimal) and imposes stringent obligations on businesses using high-risk systems. These include transparency, human oversight and data governance requirements. Noncompliance involving high-risk AI systems can lead to fines of up to 7% of a company's global annual revenue.

While the U.S. lacks a national AI law, states are moving ahead with sector-specific rules. California's proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act would require companies to test for dangerous capabilities in large language models and report results to state authorities. New York's Algorithmic Accountability Act aims to address bias in automated decision tools. Several federal agencies, including the FTC and the Department of Justice, have also made it clear that existing laws, from consumer protection to antitrust, will apply to AI use cases.

In Deloitte's Q3 2024 global survey of more than 2,700 senior executives, 36% cited regulatory compliance as one of the top barriers to deploying generative AI. Yet less than half said they were actively monitoring regulatory requirements or conducting internal audits of their AI tools. The gap between risk awareness and preparedness is widening, and businesses with captives are in a unique position to act.

The role of captives in addressing AI liability

Captive insurance companies are not a replacement for commercial insurance, but they provide an essential complement, particularly for complex, fast-evolving risks that the traditional market is hesitant to underwrite. AI liability falls squarely into that category.

For example, a captive can help finance the defense costs and potential settlements tied to AI-generated errors that fall outside the scope of cyber or general liability policies. This might include content liability for marketing materials created using generative AI, or discrimination claims stemming from algorithmic hiring tools. In some jurisdictions, captives may even fund regulatory response costs or administrative fines where allowed.

Captives can also provide coverage when a third-party AI vendor fails to perform as promised and indemnification clauses prove insufficient. In such cases, a captive can reimburse the parent company for business interruption or revenue losses that stem from the vendor's failure: a growing risk as more companies integrate third-party AI into core workflows.

Because captives are owned by the businesses they insure, they offer flexibility to craft tailored policies that reflect the company's actual AI usage, internal controls and risk tolerance. This is particularly valuable given how little precedent exists in AI litigation. As case law develops, businesses with captives can adjust coverage terms in near real time, without waiting for the commercial market to adapt.

Building AI into captive strategy

To incorporate AI risk effectively, captive owners must begin with a clear-eyed assessment of their own exposure. This requires collaboration across legal, compliance, IT, risk management and business units to identify where AI is in use, what decisions it influences and what harm could result if those decisions are flawed.

This analysis should include:

  • Inventorying all internal and third-party AI systems
  • Mapping potential points of failure and legal exposure
  • Quantifying financial impact from regulatory enforcement, litigation or reputational damage (a toy sketch of this step follows the list)
  • Evaluating existing insurance coverage for exclusions or gaps
  • Modeling worst-case outcomes using internal data or external benchmarks
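
As a toy illustration of the quantification and worst-case bullets above, a captive owner might attach rough frequency and severity estimates to each inventoried AI system and compare the aggregate expected annual loss against planned reserves. Every system name and figure below is an invented placeholder.

```python
# Toy expected-loss model for an AI risk inventory.
# Every system, probability, and dollar figure is an invented placeholder.

inventory = [
    # (system, annual incident probability, estimated severity in USD)
    ("resume-screening model", 0.08, 2_000_000),  # discrimination claim
    ("marketing genAI tool",   0.15,   500_000),  # content/IP liability
    ("vendor chatbot",         0.05, 1_200_000),  # erroneous advice
]

expected_annual_loss = sum(p * sev for _, p, sev in inventory)
worst_case = sum(sev for _, _, sev in inventory)  # all scenarios hit at once

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Worst-case aggregate: ${worst_case:,.0f}")
# Compare both figures against planned captive reserves and policy limits.
```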

Once this assessment is complete, captive owners can work with actuaries and captive managers to design appropriate coverage. This may include standalone AI liability policies or endorsements to existing coverages within the captive. It may also involve setting aside reserves to address emerging risks not yet fully insurable under traditional models.

Risk financing alone is not enough. Captives should also be part of a broader governance strategy that includes AI-specific policies, employee training, vendor vetting and compliance protocols. This aligns with the direction regulators are taking, particularly in the EU, where documentation, explainability and human oversight are mandated for many high-risk systems.

Boards are paying attention

AI is no longer just a back-office issue. In 2024, public companies and shareholders sharply increased their focus on artificial intelligence, especially on board-level oversight and shareholder proposals. According to the Harvard Law article "AI in Focus in 2025: Boards and Shareholders Set Their Sights on AI," the percentage of companies providing some disclosure of board oversight grew by more than 84% year over year and more than 150% since 2022. This trend spans all industries. Meanwhile, shareholder proposals related to AI more than quadrupled compared with 2023, mostly calling for greater analysis and transparency around AI's impact.

This intensifying scrutiny signals a clear mandate for risk managers and captive owners to deliver solutions. Captives offer companies a flexible tool to fund, control and adapt their responses to the rapidly evolving AI risk landscape and regulatory environment.

Conclusion

AI is changing how businesses operate, but also how they are exposed. As regulatory frameworks tighten and litigation accelerates, businesses must prepare for the reality that AI-related liability is no longer speculative. Captive insurance companies offer a powerful tool to manage that exposure, not by replacing traditional coverage, but by addressing what lies outside its bounds.

For companies that rely on AI, the question is no longer whether liability will emerge; it's whether they are positioned to handle it. Captives provide a path forward, giving businesses the ability to design, fund and control risk management strategies that evolve as fast as the technology they are built to protect.


Randy Sadler

Randy Sadler is a principal with CIC Services, which manages more than 100 captives.

He started his career in risk management as an officer in the U.S. Army, where he was responsible for the training and safety of hundreds of soldiers and over 150 wheeled and tracked vehicles. He graduated from the U.S. Military Academy at West Point with a B.S. degree in international and strategic history, with a focus on U.S.–China relations in the 20th century.