
Insurance 2026: Progress Via Technology, Collaboration

P&C insurers face a more predictable 2026 landscape with profitable growth expected amid AI transformation.


The 2026 P&C insurance environment may be much easier to forecast than the last several years were. Many experts have already said 2026 will be a year of profitable growth. Fitch projects a combined ratio of 96% to 97%, and even the volatile homeowners market is expected to "stabilize." It is equally clear the new year will be filled with AI in insurance, building off widespread hype and numerous announcements of huge, multi-year investments by the likes of Travelers, Nationwide, GEICO, and Chubb. Just how deeply AI will affect insurance remains far less clear, though some areas of attention are coming into focus, e.g., pre-binding, underwriting data and claims/FNOL analysis.

Looking back to "see" forward

We also know that each year can bring unexpected and unwelcome surprises, such as the record-setting $40 billion-plus Palisades wildfires, which burned some 23,000 acres and everything in their path a year ago. On the flip side, not a single hurricane hit the U.S. Atlantic coastline in 2025. Global CAT losses still amounted to $107 billion in 2025, per Swiss Re, but to put that in perspective, Hurricane Katrina alone cost around $105 billion in 2005. Severe CAT exposure has become more visible and thus better understood. In turn, risk models have presumably adjusted in recent years, and, optimistically, the industry is better prepared.

Throughout 2025, M&A remained vibrant, with insurtech funding just over $1 billion per quarter and an increase in early-stage funding toward the end of the year, per Gallagher. A closer look at some specific trends:

  • Commercial rates softening, with much variation, e.g., property lines down, commercial auto up and workers' compensation flat
  • New car sales slumping by 7.5% at year-end
  • Car loan terms stretching beyond 60 months and car payments reaching a record $760 monthly average, per J.D. Power
  • Total-loss auto rates rising to 23%
  • Overall auto claim repair volumes declining roughly 8%
  • Deductibles for both auto and home climbing, shifting the financial burden from insurer to consumer; according to a study by MATIC, home deductibles are up 22%

As we look to 2026, the following trends help explain why we see P&C insurance shaping up for a year of progress and collaboration:

  • Accelerating digitization
  • Climate change
  • Pressure on globalization
  • Rising economic and social inequities
  • Major demographic shifts
  • Layoffs

AI in Insurance

Investments and practical insurance industry applications of AI will continue to expand even as a large number of AI startups fail and regulators try unsuccessfully to catch up to developments.

  • AI will eliminate a growing number of lower-skilled positions, including transactional, customer service and document management roles. At the end of 2025, Chubb announced "radical" cuts of 20% over the next three years due to AI deployment. Other carriers are highly likely to follow suit.
  • A new breed of AI entrepreneurs will emerge to invent highly specialized consumer services delivering instant hyper-personalized gratification for information, retail therapy, mental wellness and unique "experiences."
  • AI-enabled photo inspection will gain greater adoption across the insurance, automotive and transportation segments, using computer vision and machine learning to automatically analyze images for defects, anomalies, damage or authenticity, replacing or assisting manual visual inspection and enhancing efficiency, consistency and fraud prevention.

Other Key Factors

Affordability will reverberate beyond a broad consumer/political cost-of-living issue, circling back to P&C insurance where unaffordability chants arguably began. Cost of coverage, availability, pricing and rating methodology will draw even greater attention from consumer groups and regulators. Protection gaps and total cost of home or vehicle ownership will become primary concerns replacing 2024/25 inflation and supply chain worries.

Sustainability will gain adoption across the insurance value chain, especially in the North American ecosystem, influenced by global re/insurers. Areas of early focus will include risk and claims management, such as auto physical damage and property. Sustainable insurance can reduce risk, spur innovative solutions, improve business performance and contribute to environmental, social and economic sustainability. Lessons learned from consecutive catastrophic events may serve as a tipping point toward holistic approaches that predict, harden and prevent.

Climate risk modeling will gain energy. Demand is high and growing for accurate, usable climate information, particularly data that can help assess risk more accurately and contextually. This will drive carriers and others to probe risk at every level, down to neighborhoods exposed to more frequent flooding, and to test whether proposed atmospheric cooling approaches can work safely, if at all. Interest and investment in climatetech that can benefit insurance will grow significantly.

Cyber threats and fraud losses will continue to expand as digitalization spreads around the world. Recent cyber claims frequency trends remained low while severity increased, presenting the insurance industry with a huge—yet hugely challenging—opportunity.

Insurtech consolidation will accelerate as partnerships and acquisitions become de rigueur and single-point, stand-alone solutions continue to lose favor. The future of insurtech is shifting from rapid disruption to sustainable, AI-driven integration, with the market projected to reach up to $254 billion by 2030. Key trends include AI-powered automation for claims and underwriting, hyper-personalization, embedded insurance, and a focus on profitability over pure growth. Agentic AI platforms will automate routine tasks, potentially cutting outsourced insurance roles in half by 2028. AI will also enhance risk assessment and, in some cases, replace traditional underwriting.

Embedded insurance will continue to emerge as a significant distribution channel as discrete insurance offerings are packaged with the related product/service purchase at the point of sale in a single transaction. Auto insurance packaged with new and used cars will grow as OEMs and dealers seek new, profitable revenue streams.

Open platforms and marketplaces will continue to proliferate, and closed systems will face greater headwinds. Core insurance system platforms (e.g., Guidewire, Duck Creek, Majesco) now host hundreds of popular third-party product and service providers supporting claims, policy administration, billing and payments. Even the leading auto claims and repair information providers (e.g., CCC Intelligent Solutions, Enlyte/Mitchell and Solera/Audatex) have begun to pivot from proprietary closed systems to partnerships with emerging physical damage solution providers.

Consulting firms will restructure for agility, such as moving toward "one firm" models to blend technology and consulting, exemplified by PwC's 2024 internal leadership changes. Major firms like McKinsey, Accenture, and the Big Four have implemented layoffs amid shifting demand for services. AI is cited as a major driver, with a growing percentage of McKinsey's work involving AI-related projects and ultimately affecting the need for traditional roles.

Tech talent will be at a premium. All companies will scramble to retrain, upskill and upgrade staff to fully leverage new and emerging technologies. Change management principles will be dusted off and applied to the many changes AI will bring to enterprises seeking to excel. Recognition will continue to grow that success depends as much on people leveraging AI as on AI replacing work functions.

Regulatory risk management will require agility in a fragmented landscape. Navigating a changing regulatory patchwork will require investments in legal expertise and compliance infrastructure, which can drive up operating costs. Some insurers will likely cut their losses and focus on less risky markets. That could increase return on investment for organizations that try to take a broader approach with a longer strategic horizon.

Caveats/The Future

If we have learned anything from recent history, it is that the future is inherently uncertain. Not all of our predictions will materialize. Black swan events may occur, reshaping markets, nations and the insurance industry in dramatic, unpredictable ways. But we are confident that most of what we have forecast will become reality. Either way, our projection of optimism for a healthy and vibrant P&C insurance industry in 2026 tops the list.

We look forward to seeing the industry move forward, making progress through technology and collaboration. 2026 will be another exciting year for the industry in both expected and unexpected ways.


Stephen Applebaum


Stephen Applebaum, managing partner, Insurance Solutions Group, is a subject matter expert and thought leader providing consulting, advisory, research and strategic M&A services to participants across the entire North American property/casualty insurance ecosystem.


Alan Demers


Alan Demers is founder of InsurTech Consulting, with 30 years of P&C insurance claims experience, providing consultative services focused on innovating claims.

What the Steelers Just Taught Us on Innovation

The coverage of the epic, crazy game between the Steelers and Ravens offers a microcosm of the misconception that holds back so many innovation efforts.


Moments after the Baltimore Ravens kicker missed a near-gimme field goal attempt with no time left and let my Pittsburgh Steelers escape with a two-point victory that won them the AFC North and got them into the NFL playoffs, ESPN sent out a notification. The headline read, "The North Belongs to Ravens."

Oops. 

I'm sorry I was too busy whooping and hollering at my TV to quickly click on the notification, and ESPN had fixed the story by the time I got to it, but I can imagine the tone it took about my Steelers. After all, I've read all the vitriol aimed at the Ravens. 

Ravens head coach John Harbaugh would have been a hero in Baltimore if he'd won the game against his archrivals and snuck into the playoffs, but now there is even speculation he will lose his job. He had his team in a position where it was so obviously going to win that I had my finger on the remote, ready to turn the TV off the moment the ball went through the uprights, so I could go out into the yard and scream. But his kicker missed the field goal, so, boom, let's get rid of the bum.  

The coverage is not only wrong-headed — as much as I'm happy to see people beat up on the Ravens and Harbaugh — but demonstrates the sort of mistake that makes executives, including in insurance, evaluate innovation efforts poorly.

The problem is that we don't think in percentages, or at least not well. We may know that the Baltimore kicker had a nearly 90% chance to make his 44-yard field goal try, but we don't quite get that the percentage means he'll miss one time in 10. We know it, but we don't really believe it. So when the kicker misses, we just see failure and look for someone to blame. 

The Yahoo Sports writer acted as though the Ravens' failure to make the playoffs was foreordained, even though they began the season as one of the top-ranked teams in the league. His article began:

"From the beginning of the Baltimore Ravens’ season, when they had an epic collapse and lost to the Buffalo Bills, right to the end when their season ended with a loss to the Pittsburgh Steelers, nothing was good enough." He added that the Ravens "will be searching for answers after a season that went horribly wrong. There will be immediate questions about John Harbaugh’s future as the team’s coach."

The Steelers and head coach Mike Tomlin would have come in for exactly this sort of treatment if the field goal had been good, even though the Steelers had an even stronger gripe about the kicking gods being against them. The Steelers were only two points ahead, and thus in danger of losing to a field goal, because their kicker had missed an extra point with 55 seconds to go in the game, after not missing one all season. The pressure on Tomlin isn't entirely gone, given that he hasn't won a playoff game in eight years, but there will be a lot less of it in the off-season, especially if we beat the Texans in Pittsburgh on Monday night.

The Steelers even came in for misconceived criticism despite winning the game. I always liked Trey Wingo when he was an analyst on ESPN, but he posted a tweet that was downright silly. He said Tomlin made a terrible decision by going for a field goal at the end of the first half rather than trying for a touchdown from the Ravens' one-yard line. He tried to back up his claim, but he was clearly just Monday-morning quarterbacking — because the Steelers failed to score a touchdown, they shouldn't have tried.

The truly goofy part of his argument was his statement that, had the Steelers converted a chip-shot field goal, they would have been five points ahead of the Ravens at the end of the game, not two, and wouldn't have had to worry about a field goal. But come on. If the Steelers had scored a field goal, the dynamics of the game would have changed. The coaches would have made different decisions, and the players would have done different things in the different circumstances. You can't just take the three points and add them to the score for the rest of the night.

The other part was nearly as bad. He offered an adage: "You always take the points." But the NFL has moved beyond a lot of adages and started to apply real, live math to questions such as when to go for it on fourth down, as I wrote a year ago. And here's the math:

A field goal would have been almost a 100% chance, so the Steelers could have counted on almost three points. The attempt at a touchdown had a slightly better than 70% chance, based on league averages, and an extra point is almost a lock. 70%-plus of seven points is roughly five points. Yes, the Steelers' attempt was embarrassingly incompetent, but I'm still going to take a likely outcome of five points over one of three points in almost every circumstance. 

This is the Ravens we're talking about. I need every edge I can get.
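The expected-value arithmetic above is simple enough to sketch in a few lines of Python. The probabilities are the rough figures cited here, not actual league analytics:

```python
def expected_points(p_success: float, points: float) -> float:
    """Expected points from an attempt that succeeds with probability p_success."""
    return p_success * points

# Chip-shot field goal: treat it as near-certain (assumed ~99%) for 3 points.
field_goal = expected_points(0.99, 3)

# Touchdown try from the 1-yard line: ~70% per league averages, and the
# extra point is close enough to automatic to fold into the 7 points.
touchdown = expected_points(0.70, 7)

print(f"Field goal EV: {field_goal:.2f}")  # ~2.97
print(f"Touchdown EV:  {touchdown:.2f}")   # ~4.90
```

Roughly five expected points versus roughly three, which is the whole argument against "always take the points."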

What does this mean for insurers?

Insurers need to think more like venture capitalists and less like Yahoo Sports or Trey Wingo. Venture capitalists not only know that 90% of the startups they invest in will fail but act on that knowledge. If they think an area is promising, they make a number of bets rather than assuming they're going to pick the one winner. They don't label their entrepreneurs as losers just because they lose — if you had a 90% chance to win but lost, they count that as a win and blame the circumstance, not you. As a result, VCs often invest in serial entrepreneurs, who have either previously failed or only had modest successes. 

A line from the mother of a girl who ski raced with my younger daughter has stuck with me. Stephanie, who built a financial software business she sold for $2.5 billion, told me: "I like people with scars."

Insurers also need to think of early innovation efforts as opportunities to learn, rather than trying to immediately shape them into pilot projects designed to scale and go to market. As I've been saying for almost 30 years now, the key is to Think Big, Start Small and Learn Fast — the latter two points meaning that you need to do lots of inexpensive projects, even though they're in the service of a grand vision, and move on quickly to the next set of tests once you've learned what you can. The vast majority of these projects won't get anywhere near the market, but they aren't failures if you've learned something important.

I'm hardly arguing for no accountability. There are still bad ideas, and they may be executed poorly by people who should no longer be part of your organization. I'm just arguing that we all take a more sophisticated view of success and don't decide that failing to convert a 90% chance merits firing — even though I'd love to see Harbaugh out of Baltimore.

Go, Steelers! Beat those Texans!

Cheers,

Paul

 

January 2026 ITL FOCUS: Life & Health

ITL FOCUS is a monthly initiative featuring topics related to innovation in risk management and insurance.


FROM THE EDITOR

Life insurance is having a moment.

At the start of the insurtech movement, some dozen years ago now, property/casualty took the lead on innovation, to the point that some brave folks even set up full-stack carriers that they claimed would turn the market on its head. Life insurance was the poor cousin. Yes, carriers pushed toward fluidless underwriting and reducing the number of questions on application forms, but life insurance pretty much stuck with traditional products and the same old, same old ways of selling coverage.

No longer. Based on the articles thought leaders are publishing on ITL, life insurers have their foot on the gas pedal.

Much of the reason is, of course, generative AI. It is creating the sorts of opportunities for radical improvements in efficiency and for coaching agents on selling tactics that Brooke Vemuri, vice president of IT and innovation at Banner Life and William Penn, describes in this month’s interview. Gen AI is also allowing for a sharp increase in personalization, based both on how agents want to sell and on how and what prospects want to purchase, as Brooke explains.

But more than Gen AI is afoot.

The growth of the “sandwich generation”—people caring both for elderly parents and for their own children—is creating an opportunity for product innovation. So are all the young people entering the work force, many of whom are more interested in “living benefits” than in the death benefit. The wave of Baby Boomers retiring, together with a strong stock market, is creating opportunities for annuities and for disability and long-term-care insurance.

Meanwhile, private equity is increasingly demanding innovation from life insurers, as Mick Moloney of Oliver Wyman explained in a lengthy conversation I had with him. PE firms are buying insurers partly to gain access to their investment funds, which the firms then use to make acquisitions—a la what Warren Buffett has done with Berkshire Hathaway. The PE firms also believe that insurers they buy will gain an advantage, because the firms have historically outperformed the stock and bond markets, where life insurers have traditionally parked their funds. Whatever their reasoning, the PE firms squeeze efficiencies out of the companies they buy, and other life insurers have to keep up. (One caveat is embodied in a recent New York Times article, which says PE firms are going through a bad spell. The industry has become so large and bought so many companies that the low-hanging fruit has been harvested, so outstanding returns are harder to come by.)  

I think you’ll be intrigued and heartened by the interview with Brooke and by the six articles I’ve included in this month’s ITL Focus.

And stay tuned. There is a lot more coming.

 

Cheers,

Paul

An Interview

The New Look for Life Insurance

Paul Carroll

You’ve written for us about the need for hyper-personalization in life insurance. Would you start us off by describing what that looks like in practice, as well as how it differs from how life insurance has historically been handled?

Brooke Vemuri

I've been in the life insurance business for 23 years, working in both technology and operations, so I've seen firsthand the evolution from moving physical forms around the building—we had folders, and we put them in carts, and we drove them around to our underwriters—to where we are today. We've crossed a lot of hurdles over the last 20 years.

Now we're in a place where we have intelligent automation and a digital application process. Over the past few years, this has meant having an event-driven, rules-based system that allows us to capture the right application questions, then run rule sets underneath, call third-party data, and do all the things we need to amalgamate and come together on a decision. That change was hard to get across the line, but we've arrived and are doing very well as a result.

read the full interview >


MORE ON LIFE & HEALTH

Living Benefits Must Redefine Life Insurance

by Luca Russignan

Life insurers face declining relevance among under-40 consumers, who demand living benefits over traditional death coverage.
Read More

 

Mortality Impact of GLP-1 Drugs

by Richard Russell, Andrew Gaskell, Raman Lalia, Craig Armstrong, Chris Falkous

RGA study finds incretin drugs could reduce mortality up to 8.8%, so insurers should reassess assumptions.
Read More

 


Disability Planning Creates Growth Opportunity

by Chris Taylor

Traditional disability planning approaches are inadequate, as carriers confront a rapidly expanding market demanding specialized products.
Read More

 


An Untapped Life Insurance Market

by Denise McCauley

The sandwich generation's dual caregiving burden creates substantial insurance opportunities while exposing critical coverage gaps nationwide.
Read More

 

This Is Not How Insurance Should Be Sold

by Bruce Elkins

Final expense call centers prioritize speed over service, creating predatory practices that target vulnerable senior populations.
Read More

Strong Growth for Life-Annuity Forecast Through 2027

by Scott Hawkins

Strong earnings forecast through 2027 gives life-annuity insurers opportunity to adapt strategy, not just enjoy conditions.
Read More



Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

The New Look for Life Insurance

Hyper-personalization is revolutionizing life insurance as carriers tailor applications, products and pricing to individual customer and agent preferences.


Paul Carroll

You’ve written for us about the need for hyper-personalization in life insurance. Would you start us off by describing what that looks like in practice, as well as how it differs from how life insurance has historically been handled?

Brooke Vemuri

I've been in the life insurance business for 23 years, working in both technology and operations, so I've seen firsthand the evolution from moving physical forms around the building—we had folders, and we put them in carts, and we drove them around to our underwriters—to where we are today. We've crossed a lot of hurdles over the last 20 years.

Now we're in a place where we have intelligent automation and a digital application process. Over the past few years, this has meant having an event-driven, rules-based system that allows us to capture the right application questions, then run rule sets underneath, call third-party data, and do all the things we need to amalgamate and come together on a decision. That change was hard to get across the line, but we've arrived and are doing very well as a result.
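The event-driven, rules-based flow Brooke describes (capture the application answers, run rule sets, pull third-party data, then amalgamate a decision) can be sketched in miniature. Everything below is hypothetical, from the rule names and thresholds to the `mib_hit` flag; it illustrates the pattern, not Banner Life's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    answers: dict                                   # self-declared application data
    third_party: dict = field(default_factory=dict) # external evidence, filled in later

def fetch_third_party_data(app: Application) -> None:
    """Stand-in for external evidence calls (e.g., claims or labs data)."""
    app.third_party["mib_hit"] = False  # hypothetical stub value

# Each rule inspects the application and returns None (pass) or a reason string.
RULES = [
    lambda a: "age out of range" if not 18 <= a.answers.get("age", 0) <= 75 else None,
    lambda a: "tobacco surcharge review" if a.answers.get("tobacco") else None,
    lambda a: "third-party flag" if a.third_party.get("mib_hit") else None,
]

def decide(app: Application) -> tuple[str, list[str]]:
    """Amalgamate rule results and third-party data into a single decision."""
    fetch_third_party_data(app)
    reasons = [r for rule in RULES if (r := rule(app))]
    return ("refer" if reasons else "approve"), reasons

status, why = decide(Application(answers={"age": 42, "tobacco": False}))
print(status, why)  # approve []
```

The real value of such a system, as Brooke notes, is that the rule sets and question flow can be reconfigured per product, partner, or agent without rewriting the pipeline.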

The next evolution is moving away from considering the application as one static process or one way to get to an outcome. Instead, we're moving toward tailoring the experience by distribution, by agent, by customer. Personalization means accommodating the way our agents and agencies like to do business.

Just to give you a small example: We have some agents who say, "Don't waste my time collecting beneficiary information. I'm going to get you everything I need to get a decision on the case. Then once I have a decision and the customer accepts, I'll gather that beneficiary information—it's more of an administrative piece of work because it's not influencing the decision." I have another distribution partner that says, "We start with beneficiaries. We connect the sale to the reason why you're making the purchase."

That's a high-level example on the easier side of personalization—how do we tailor or reorder the journey? 

Next comes determining how many questions we ask based on the product, and how we tailor that experience based on the product, the partner, and the customer. You start to get all these different variations of how you would flow a digital application process to collect the right information at the right time, make the right decision, and end up with a case you can place in force—in a way that works with every agent and partner you have.

You can start to see that there are going to be lots of permutations and combinations of how all of that evolves and comes together. From a technology perspective, that's all about creating the right, flexible architecture to make that happen, to allow that configuration, and to support our agents in the way they want to sell the business.

Paul Carroll 

Is this personalization extending beyond the sales process to policy offerings and features, as well?

Brooke Vemuri

Yes, that's a good question. Over the last 20 years or so, you typically entered the journey when you already knew what product you wanted. For example, "I know I want a term 20 policy. It's for a 50-year-old male with two children." Now we're heading toward a different approach—someone starts the journey as that same person, but they really don't know what product or combination of features they want until they move through the journey.

Whether that means recommendations based on how many beneficiaries you have, what stage of life you're in, what your income levels are, or what job you have—as the system starts to understand who the customer is in the journey, we can start to make the right recommendations for a product.

I think you're going to see both tailoring of products and features. One of the things we're working on is how to come out with an offer for every applicant. Because in a lot of cases, there are still declines in our environment, even on term business. So how do you enter the process saying, "I have a desire to have life insurance" and be sure to end up with something—whether that's an accidental death product or a final expense product?

So yes, you're going to start to see tailoring of offers, cross-sells, counter-offers—that's what we're calling it. How do we come up with another product that might be viable for both the customer and the life insurance company so that, at the end of the journey, you still get some product that is available to you?

Paul Carroll

How is AI being incorporated into this process, particularly in terms of gathering customer information and providing real-time recommendations to agents during customer interactions?

Brooke Vemuri

Intelligence is coming into play primarily in our agent-facing capabilities. Our journey encompasses both customer-agent interactions and internal, employee-facing systems. As people interface with the system, it makes certain recommendations to the agent based on how the journey is progressing. 

You might not do that directly with a customer, though. If a customer is going through the process unassisted, it's a little bit more complex for them to navigate some of those things. So our thought process is a lot like TurboTax—in the upper right-hand corner as you progress, it shows what you're accruing, what we know now, and where we might go in this journey, and starts to forecast and give options for next best steps.

For now, we’ll just focus on agent-facing capabilities, because that feels more like advisory-type activities. That's at least our early thinking.

Paul Carroll

What innovation can we expect to see over the next few years?

Brooke Vemuri

I think we'll be very much focused on what we're talking about now, which is tailoring our apply processes—for both customer and agent. We're looking at getting some cues or indicators into the journey about how a case is tracking and how agents might handle what's going to happen in terms of that sale, whether it's going to be a decline or whether we can pivot to another product.

I think you're going to see all of that in the next two to three years—tailoring the counter offers or the alternate offers, tailoring the journey to align with how that agent wants to sell their business.

Beyond that, we're imagining a lot more variation on the product. We're going to see more things in terms of what we call table ratings today: How do we get more price variability in the product so we can accommodate more and more applicants? I think that's probably on the three- to five-year horizon.

Paul Carroll

What trends are you seeing around younger generations being more interested in living benefits rather than death benefits?

Brooke Vemuri

That's a good point. Things are being added on and tailored into the product that allow more utility out of the product. It's not a policy that you print and put on the shelf and wait till you die, and then someone gets the benefits of it. It's about how the policy can give you benefits while you're living—tapping into that face amount for critical illness or accidents and things like that.

I agree that living benefits are giving more value to the term product. I mean, there's still a need for your basic term product out there that has no bells and whistles. But to reach some of the younger generations, we're finding that living benefits have been valuable for them.

Paul Carroll

Life insurance policies represent long-term commitments—20 years for term policies, often longer for permanent ones—making it challenging to validate underwriting assumptions quickly. How is the recent industry shift toward fluidless evaluation and fewer application questions working in practice?

Brooke Vemuri

It does take years to know if your assumptions are right. I think the biggest value of the data right now is being able to look at historical data and model it in a way that makes us more comfortable with innovations—whether that's leveraging other third-party evidence sources or forgoing exams altogether, like with fluidless underwriting.

We're using a combination of historical data—looking at how it would perform based on our back book of hundreds of thousands of cases at this point—and other third-party evidence that we can leverage, like medical claims data and labs data. We're leaning heavily into claims data specifically.

It's really about putting those pieces of the puzzle together alongside the self-declarations from the application so that you have a complete picture to underwrite from and make a decision. We're going after alternative evidence sources to eliminate some exams while also looking retrospectively at the data to see what it's telling us. Through our modeling, we can apply new assumptions to that data, see what it might look like, and price accordingly.

Paul Carroll

What other trends should our readers be aware of in life insurance?

Brooke Vemuri

I think you've probably captured a lot of them. There's going to be a lot of tailoring—tailoring on the journey and tailoring on the products, tailoring on the prices, tailoring on the variation of features and capabilities that exist in the term product space.

How do we alternate out of term if we need to? How do we counter out of that if we need to? I think that's the real progress on the horizon.

Paul Carroll

Thanks, Brooke.
 

About Brooke Vemuri

Brooke Vemuri is vice president, IT and innovation at Banner Life and William Penn. She leads people and cross-functional teams to reimagine the future of life insurance from lead generation, through apply and underwriting, to offer, pay, and in-force. Her team drives transformation and change to the business and distribution through the development and execution of business propositions focused on growth and cost reduction through a digital business strategy.

Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Flood Risk Demands New Insurance Approach

A $255 billion flood protection gap exposes outdated risk models, pushing the industry toward parametric insurance and captive structures.

Man standing between benches in flooded area

Flood risk is no longer a peripheral climate concern. It is fast becoming one of the most underestimated balance-sheet threats facing businesses and insurers globally. Over the last five years alone, flooding has caused an estimated $325 billion in economic losses worldwide, yet only $70 billion was insured (source: Munich Re), exposing a widening protection gap that the industry can no longer ignore.

This is not merely a story of rising water levels. It is a story of outdated assumptions.

Traditional flood models, rooted in historical event catalogues, are increasingly unfit for purpose in a world of volatile weather patterns, rapid urbanization, and climate-driven extremes. As Hamid Khandahari of Descartes Underwriting says, historical data "cannot fully account for events beyond anything previously recorded." The implications for underwriting, pricing, and capital allocation are profound.

The new reality: unpredictable, underinsured, unprepared

The scale of the challenge is stark. In the U.K. alone, surface-water flood risk could affect 6.1 million properties by 2050—a 30% increase compared to previous projections (source: NaFRA). In the U.S., flood events jumped nearly 30% year-on-year between 2022 and 2023, with several states seeing events quadruple (source: Lending Tree).

Yet, despite mounting evidence, risk perception remains dangerously muted. Many organizations still operate under a flawed logic: "We haven't flooded before, so we probably won't." This mindset is actively reinforced by commercial insurance dynamics. When losses do occur, the response is typically capacity withdrawal, higher deductibles, exclusions, or outright non-renewal—exactly when resilience is most needed.

This has created a vicious cycle: low perceived risk leads to underinsurance; the first major loss triggers rate shock and restricted coverage; risk then becomes both more expensive and harder to transfer.

Technology is changing what's possible

The industry now has the tools to break this cycle—but only if it evolves how it uses them.

Advanced flood forecasting, hydrodynamic modeling, and IoT sensor networks are changing the economics of risk. Leading platforms such as Previsico's can now provide 36-48 hours of warning, allowing businesses to move assets, shut down operations safely, and materially reduce losses.

The Balfour Beatty Vinci HS2 case illustrates this shift in practice. After suffering multimillion-pound flood losses, the company used predictive flood intelligence and sensors to protect sites, relocate critical equipment, and avoid repeat losses when the next event occurred.

Crucially, parametric solutions are not constrained by the same capital bottlenecks that plague traditional catastrophe underwriting. They can also be structured to cover deductibles, gaps, or even function as primary protection where conventional policies fail.

Yet adoption remains strikingly low. Despite 43% of U.K. organizations reporting flood impact, only 7% currently use parametric insurance in their flood risk financing strategy. That disconnect represents both a risk and an opportunity for the market. 

The strategic role of captives

This is where captives emerge as the industry's most underused strategic asset.

Captives are no longer simply about premium arbitrage or tax efficiency. They are fast becoming risk laboratories—vehicles for innovation, structured retention, and long-term resilience.

More than 1,700 new captives have been formed since 2020, bringing the global total above 7,000. Many are now absorbing flood risk by necessity, not choice—particularly in the U.S., where obtaining flood risk coverage is often incredibly difficult. These captives are then highly motivated to encourage operating divisions to manage flood risk effectively.

When combined with parametric structures, captives unlock a powerful model:

  • The captive retains frequency risk.
  • Parametric reinsurance absorbs severity risk.
  • The business benefits from faster liquidity and reduced earnings volatility.

This architecture also helps address "basis risk"—the mismatch between actual loss and parametric payout—by allowing the captive to smooth inconsistencies and manage retained exposures.
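The captive-plus-parametric split can be sketched numerically. The following is an illustrative toy model, not any carrier's actual structure; the retention, trigger depth, and payout figures are hypothetical.

```python
# Illustrative sketch of a captive + parametric layering (all figures hypothetical).
# The captive retains frequency losses up to a planned retention; a parametric
# cover pays a fixed amount when a measured index (here, flood depth) breaches
# a trigger, regardless of the actual loss.

def settle_event(actual_loss, flood_depth_m, *, retention=250_000,
                 trigger_depth_m=0.5, parametric_payout=1_000_000):
    """Split one flood event's loss between captive and parametric cover."""
    payout = parametric_payout if flood_depth_m >= trigger_depth_m else 0
    captive_share = min(actual_loss, retention)
    # Basis risk: the payout is keyed to the index, not the actual loss,
    # so the captive absorbs any shortfall above its planned retention.
    shortfall = max(actual_loss - captive_share - payout, 0)
    return {"parametric": payout,
            "captive": captive_share + shortfall,
            "basis_shortfall": shortfall}

# A severe event: depth breaches the trigger, so the parametric layer pays.
severe = settle_event(1_400_000, 0.8)
# An attritional event: below the trigger, the captive retains the whole loss.
small = settle_event(120_000, 0.3)
```

In the severe case the captive ends up holding both its planned retention and the basis-risk shortfall, which is exactly the smoothing role the article describes.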

In practice, this makes flood risk more insurable, more predictable, and more strategically manageable.

From risk transfer to risk resilience

The industry stands at an inflection point.

Flood is no longer just a peril to be transferred; it is a systemic risk that must be actively managed, predicted, and financed in new ways. The combination of advanced forecasting, real-time data, parametric triggers, and captive-backed structures represents a shift from exposure to resilience.

The winners in this market will not be those who wait for traditional models to catch up. They will be the insurers, reinsurers, brokers, and risk managers who accept that the future of flood insurance is not about pricing the past—but engineering resilience for a climate-altered future.


Jonathan Jackson


Jonathan Jackson is CEO at Previsico.

He has built three businesses to valuations totaling £40 million in the technology and telecom sector, including launching the U.K.’s longest-running B2B internet business.

Our 10 Most-Read Articles From 2025

It was an AI kind of year. No surprise there. But there was also great interest in social inflation, drones, the Predict & Prevent model and even lessons from the NFL playoffs.

2025 calendar

Of the scores of articles we published this year on AI, five, in particular, struck a chord with you, our esteemed readers: on how AI is reshaping workers' comp, compliance and fraud, on how to unlock ROI (a tricky task) and on how AI's progress is accelerating, with no end in sight. 

Social inflation remained a hot topic, with two pieces in the top 10 on how verdict sizes in insurance cases have tripled since COVID and on how insurers are losing billions of dollars before cases even get to trial. 

Rounding out the top 10 are articles on how drones are profoundly changing how property claims are handled, how misconceptions about electrical fires lead to disasters that could be prevented and (appropriately for this time of year) what the NFL playoffs can teach us about innovation.

Herewith the highlights from 2025, as determined by your interest in them:

Artificial Intelligence

The most-read article, AI and Automation Reshape Workers' Comp, says "67% of organizations expect over 80% of claims to be automatically triaged and assigned in the future — without any manual intervention." The piece explores other efficiencies that AI offers, describes tools that are detecting fraud and offers advice on how to encourage adoption of AI.

At #2 is Why AI Is Game-Changer for Insurance Compliance. It says: "90% of small business owners are unsure about the adequacy of their coverage. AI serves as an intelligent assistant, quickly surfacing important information and providing context when needed.... The impact includes faster verification, fewer coverage and requirement gaps left unaddressed, and faster time to compliance. As Gartner predicts a doubling in risk and compliance technology spending by 2027, companies recognize that AI solutions that enhance collaboration deliver the greatest returns."

At #3 is The Key to Unlocking ROI From AI. It states its thesis starkly: "Your AI and automation initiatives will fail. Not because of bad code. Not because your data scientists aren't smart enough. But because you'll lack the one thing that determines whether any AI initiative succeeds: observability." It then explains at length how "you can't see what your automation is doing — how it's affecting business processes, where it's breaking down, and what value it's delivering."

How AI Can Detect Fraud and Speed Claims was the fourth-most read. It warns that "today's fraudsters have access to AI-generated medical records, synthetic identities, and eerily convincing deepfake videos, allowing them to construct entirely fabricated incidents with alarming precision." But, on a hopeful note, the article then explains how, "with the ability to process billions of data points in real time, AI-powered fraud detection systems can do what human analysts cannot: instantly cross-reference claims against vast datasets, identify inconsistencies, and flag suspicious activity before payouts occur. This technology enables insurers to detect deepfake-generated documents and videos, analyze behavioral patterns that suggest fraudulent intent, and shut down scams before they drain company resources."

At #6 was my summary of an exhaustive research paper published mid-year on the state of AI and where it would go from there: Mary Meeker Weighs in on AI. Among many (many) other things, the prominent analyst laid out startling detail (the cost of using AI has declined 99% just in the past two years), offered useful examples (more than 10,000 doctors at Kaiser Permanente use an AI assistant to automatically document patient visits, freeing three hours a week for 25,000 clinicians) and made some bold projections (by 2030, AI will run autonomous customer service and sales operations, and by 2035 will operate autonomous companies).

Social Inflation

At #5 was We’re Losing Billions—Before We Ever Get to Court, and at #7 was The Tripling of Verdict Size Post-COVID. Both were written by Taylor Smith and John Burge, who also wrote two of the three most-read articles of 2024, on what I broadly think of as social inflation (including third-party litigation funding and other aggressive tactics by plaintiff lawyers). 

In "We're Losing Billions," they write that property/casualty carriers have a blind spot in how they negotiate: "In an era where 99% of litigated claims settle, the cultural instinct on the defense side to 'hold back' our strongest arguments has become a billion-dollar blind spot. We ration key negotiating points, fearing we’ll run out of ammo. We save key arguments to “surprise them at trial.” We frame less, anchor less, and persuade less. Meanwhile, the plaintiff bar is doing the opposite—and it’s working."

In "The Tripling of Verdict Size," Taylor and John describe data they've collected on 11,000 P&C verdicts, across the industry, to address the fact that carriers typically just see their own slice of the verdicts. They argue that only by amassing better data can insurance lawyers keep up with the plaintiff bar in understanding how a case is likely to play out in a certain venue, in front of a certain judge, against a certain lawyer -- and fashion settlement offers accordingly.

(If those articles appeal to you, I'd encourage you to watch the webinar I recently conducted with Taylor and with Rose Hall: "Modernizing Claims: Competing Against AI-Powered Plaintiff Attorneys.")

Predict & Prevent

Hazardous Misconceptions on Electrical Fires was #8 on the top 10 list, highlighting how the insurance industry can help prevent many of the "approximately 51,000 fires annually in the U.S. [that result] in over $1.3 billion in property damage." The piece describes how we can educate policyholders about the fact that circuit breakers don't catch all electrical problems, that even new homes can have electrical issues and that there very often aren't warning signs of electrical problems before they start a fire. (The piece was written by Bob Marshall, CEO of Whisker Labs, which makes a device, the Ting, that detects electrical problems and that I think of as the poster child for the Predict & Prevent movement. I recently interviewed him here.) 

Drones 

Drones Revolutionize Property Insurance Claims, at #9, shows how drones have "emerged as a powerful tool for addressing some of the industry's most persistent challenges, including the need for increased accuracy, faster speed, and more cost-effectiveness" in property inspections during the claims process. 

Lessons From the NFL

It amused me to reread the final article to make the list: What NFL Playoffs Say About Innovation in Insurance. I wrote it following the conference championship games last January and opened by saying: "My main takeaway from the NFL conference championship games over the weekend was that I'm soooo ready to move on from the Kansas City Chiefs — anyone with me?" I've heard from plenty of folks in the past 11 months who are tired of looking up at the Chiefs in the standings — and, lo and behold, we don't have to worry about the Chiefs in the playoffs for the first time in 11 seasons.

After venting my spleen (I'm a frustrated Steelers fan), I got into how coaches were finally following the data and going for it on so many more fourth downs than they used to, on why it took them so long and on how insurers can learn from NFL coaches and throw off even deeply entrenched bad habits.

Wishing you all a healthy, happy and prosperous New Year!

(While desperately hoping that my Steelers beat the Ravens on Sunday.)

Cheers,

Paul

What If Manufacturers Provide Insurance for Free?

As embedded insurance takes hold, what if manufacturers heavily discount coverage or give it away so they can sell more product? How do insurers compete? 

working woman with headset on

During the internet boom of the late 1990s, I heard a term that stuck with me: "the Las Vegas business model." The term was used by a Harvard Business School professor on a panel I moderated -- and he said the results aren't pretty for any competitor caught in the cross-hairs.

The Las Vegas business model involves someone giving away a product -- YOUR product, if you're unlucky -- to sell more of something else. The professor called this the Las Vegas model because he said it's tough to sell run-of-the-mill hotel rooms or meals in Las Vegas when casinos will give away rooms and food to people deemed likely to leave enough money behind at the gambling tables.

The same problem could hit at least some parts of insurance, especially as embedded insurance gains steam. Apple doesn't need to make money off warranties, for instance; it just needs to keep your devices running so you can keep buying things through the Apple Store -- and Apple can keep collecting its tens of billions of dollars of commission each year. Many car makers have started offering insurance, but they're mainly in the business of selling cars. What if they start bundling insurance at a steep discount to help dealers persuade prospective customers to buy their car and not a competitor's? 

This could get ugly. 

The Las Vegas business model springs to mind because of a smart piece Tom Bobrowski published with us last week: "Tech Giants Aim to Eliminate Insurance Costs." The summary warns: "Technology companies view insurance as a cost to eliminate, not a business opportunity to pursue." 

He walks through some examples, including how Tesla is trying to minimize insurance costs as a way of bringing down the total cost of ownership so it can sell more vehicles. He also looks at cybersecurity, where huge software vendors such as Microsoft are doing their utmost to reduce vulnerability and reduce the need for insurance. 

I'd add the liability insurance Amazon offers. Amazon has every incentive to make it as cheap and convenient as possible for sellers to operate on its site -- and keep paying those hefty commissions to Amazon. Amazon doesn't even have to earn a profit on that insurance, so good luck to any insurer trying to compete. 

Tom says tech giants have four major advantages over insurance companies: 

  • Superior data, which comes from the ability to continuously monitor behavior, as Tesla can do with its cars and their drivers
  • Direct customer relationships, which eliminate distribution costs that constitute 15-25% of premiums
  • Technology infrastructure that can automate claims, detect fraud and model risks
  • Brand trust: Customers already trust them with payments, personal data, and critical services

For me, the first two are much more formidable than the last two. Tech giants can have a major advantage on data. So can other manufacturers, such as the big car companies, given all the sensors now being built into products. And any company that can embed insurance into the process of selling something else takes a huge chunk out of customer acquisition costs. 

As for the last two, I'd say insurers have extensive technology, too -- even if there are always complaints that it's dated. Insurers also have the sort of experience with processing claims, detecting fraud and modeling risks that requires all sorts of nuance and that tech companies would have to develop from scratch. Tech giants do seem to have brands deemed more trustworthy than those of insurers, in general, but that gap would surely narrow if the tech companies get into insurance in a big way, because that would put them into the business of denying lots of claims.

Despite the fears at the start of the insurtech wave a decade ago that insurance would be "Amazoned," as retail commerce had been, tech giants have mostly stayed away. Google tried car insurance but found it could sell leads for more than it would earn by selling insurance. Amazon is experimenting with telehealth and pharmacy services but has shied away from any major moves in healthcare.

In general, tech companies didn't want to commit the capital or have to deal with the extensive, state-by-state regulation that insurers face. Those reservations will continue, I believe. Besides, many giants from outside the industry are talking, at least for now, about insurance as a business opportunity, not as a cost to be eliminated. General Motors has said it hopes to generate $6 billion of insurance revenue by 2030, and Elon Musk has said insurance could account for 30-40% of Tesla's business. 

But I think Tom is right when he says the Las Vegas business model represents a major trend, even if different parts of the insurance industry will be affected at different rates and even if it will take, in his estimation, five to 20-plus years to play out.

An ugly trend for insurers, but one we should all keep in mind.

Cheers,

Paul

 

 

What Would You Do With $1 Trillion?

Record $14.6 billion fraud highlights an urgent need for entity resolution technology in P&C operations.

Silver case with stacked 100 dollar bills in rubber bands

For the first time ever, direct premiums in P&C exceeded $1 trillion in 2025. Also a first in 2025: a $14.6 billion alleged fraud ring was exposed. (The prior record was $6 billion.)

The watchword for industry executives should be: "entity."

Fraud risk, customer experience, and effective AI? They're all keyed to entity. The money you make, the money you keep, and the faster you grow? Entity, again.

That total of direct premiums means there are now more than one trillion reasons to understand who is paying you and who you are paying. That "who" is an "entity" -- people, businesses, and organizations.

Entities have identity – names, addresses, phone numbers, etc. Logically, there are only three kinds of entities – trusted, unknown, and untrusted. If you can't distinguish among these three kinds, you are reading the right article.

With interaction, entities also acquire history, behavior, and outcomes. Entities may be related to each other. Sometimes those relations are transparent, like parent-and-child or employer-employee. Sometimes they are hidden, as in an organized crime ring or a collusive conspiracy. Entities may be multifaceted – driver, renter, business owner, group leader, member of an organization, neighbor, volunteer, relative, known associate. These relationships all change over time, yet the entity remains the same.

Pause and reflect on this. Consider yourself, for example, as EntityONE. Now quickly list all the roles and relationships you have in the physical world – at your home, office, and neighborhood – and then online as an emailer, shopper, commentator, reader. Your identity in all those physical and digital places may take different forms, but it is always you, EntityONE.

The everyday entity

In the day-to-day of insurance and business life, there is always a concern about fraud and abuse. From application through claims payment, your need to know your business extends from your new business funnel through third parties, vendors, customers, agents, and even staff.

A new person applies for car insurance, a business makes a claim involving a third party, an invoice arrives from a new address, an agent makes a submission, finance issues a payment – to trust or not to trust?

Names, addresses, phone numbers, etc. are the data fragments that describe an entity. Whether physical or digital in origin, these data are typically scattered across various boxes in an organization chart and across different core, ancillary, and API-accessed third-party systems.

We store identifier elements like names and addresses with varying lengths, spellings, inaccuracies, and levels of incompleteness, in unstructured and semi-structured data entry fields and in free-form text like notes and templates.

Then we store them again and again over time, moving between systems, between carriers, between vendors, and of course, across multiple CRM applications, which are additionally stuffed with all manner of duplicate and partial records.

Think of yourself as EntityONE

If you tried to have your own self, hereafter called EntityONE, appear the same in every field in every system in every organization over time, you would fail. Even if you never moved and never changed your name, random data entry error alone would ruin your ambition.

One data exercise to try at home: if you have address data from Northern California, find a system where "city" is collected as part of an address, then see how many ways "San Francisco" appears. At one large carrier, tens of thousands of transactions across five years of data entry produced 97 unique entries.

The correct answer, "San Francisco," was the dominant response. Shorthand like "SF" and nicknames like "SanFran," "Frisco," and "San Fran" were next, followed by a lower-case "san francisco" and then all sorts of typos and transpositions. An easily overlooked case was the space key as a valid character – "S F" is different from "SF" – and those spaces could be leading, trailing, or in the middle. Another very frequent response, when the field's edit logic permitted it, was blank: no entry at all, or any number of space characters.

If you ran a literal matching algorithm on the "city" field, EntityONE could in theory have 97 different "cities" yet still be a single unique entity.
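A few lines of code make the point concrete. This is a minimal sketch, not a production standardizer; the variant list is illustrative, not the carrier's actual 97 entries, and the alias table is an assumption for the example.

```python
# Why literal matching fails on a "city" field: case, whitespace, and
# nicknames all produce distinct strings for the same underlying city.
variants = ["San Francisco", "SF", "S F", "SanFran", "Frisco",
            "san francisco", " San Francisco ", "San  Fran", ""]

def normalize_city(raw):
    """Collapse case, whitespace, and known aliases to one canonical form."""
    key = " ".join(raw.split()).lower().replace(" ", "")
    aliases = {"sf", "sanfran", "frisco", "sanfrancisco"}
    return "san francisco" if key in aliases else key

literal_cities = set(variants)                 # every variant is "different"
resolved = {normalize_city(v) for v in variants if v.strip()}
```

Literal comparison sees nine distinct "cities" here (including the empty entry); the normalized set collapses to one. Industrial entity resolution does this across many attributes at once, not just one field.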

Some other factors might also contribute to your failure to have perfect EntityONE data.

One system has separate fields for first and last name, with no field for a middle name and no fields for a title/prefix or suffix. Another system has one long field where all of that is supposed to be entered. Is it Dr. or Mrs. or Ms. or Miss, with suffix MD, PhD, or DO?

Generally, even the simplest contact information – name, address, phone number – can be entered and stored so inconsistently, in so many places over time, that EntityONE would not exist as a whole, unique name-and-address even in the best of cases.

When it comes to a legal entity – the EntityONE Family Trust, or your business version of EntityONE – it's still you, but you may now have shared rights and not be the only decision-maker. So enough of thinking of just yourself.

Think of how difficult it might be to search for your customer when their data is entered and maintained across different systems in different ways. Your decades-old processes still treat paper and data as if they were the entities, rather than treating entities as things that have related paper and data.

This work process of literal data computing is at the core of delivering customer experience but allows an opening for fraudsters and is the bane of AI.

Let this sink in: Data are not entities; entities have data.

Entities have data. You as EntityONE are unique. All the aliases, name changes, addresses, business titles, partnership and shareholder situations, and your honorifics aside, you are still you. Even after you pass away, the estate of EntityONE will persist.

Resolving the many ways to identify you is the problem you now need to turn inside out.

Every other person, business, group, and organization has the same issues. When you encounter any identity, you need to resolve it down to the core entity, or you will not know who you are dealing with.

Whether an entity is legal, illegal, foreign, or even sanctioned, much of the identity data we see every day presents as thin, with seemingly little to none behind it. Some entities appear squeaky clean. Some have long years of history. Some look like they popped out of thin air. Some, like a bad penny, keep popping up after we have decided not to interact with them. Synthetic, assumed, straw-man, account-takeover, hacked, phished, fraudulent, and other forms of malfeasance also exist.

Keeping tabs on entities (e.g., people and organizations) and the hidden relationships among them in real time is now practical, with advanced analytics powered by a technology known as entity resolution. Entity resolution brings all the snippets of various identifiers around an entity into focus.

Entity resolution may involve several efforts, all claiming to do the same thing across your data- and computer-laden landscape. In the earliest days of computing, crazy-sounding technical terms sprouted to address this existential data identity issue of keeping EntityONE clearly in focus. It started field by field in databases and has modernized into complex multi-attribute vector and graph analytics.

These geeky but incomplete early algorithms left a lot undone while still showing some value. They had names like Levenshtein (an edit-distance formula for suggesting that a typo explains a text difference), Hamming distance, and, more recently in AI terms, token-based Jaccard and cosine TF-IDF similarity approaches. There are dozens, if not hundreds, of challenger approaches. But an analytic or a technique is not a product or a solution.
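Two of those classic measures are small enough to sketch. These are textbook implementations for illustration only; production entity resolution orchestrates many such signals across many attributes at once.

```python
# Levenshtein edit distance: the minimum number of single-character
# insertions, deletions, and substitutions to turn one string into another.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Jaccard similarity on word tokens: size of the intersection of the two
# token sets divided by the size of their union (1.0 = identical sets).
def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)
```

Edit distance catches typos – `levenshtein("Patridge", "Partridge")` is 1, a single inserted letter – while token-set Jaccard catches reorderings: "Charles Patridge" and "Patridge Charles" score 1.0 even though they fail a literal comparison.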

An early inventor created a combination of steps and orchestrated a set of code he called "fuzzy matching." (In memory of Charles Patridge, here is a link to a seminal paper he wrote.) Many data analytic communities shared that code and subsequent innovations to make progress on name and address standardization and name and address matching. The postal service benefited greatly with more deliverable mail, and database marketing boomed, while customer analytics and lifetime value ascended, as did provider and agent and vendor scorecards with more ambitious service level monitoring.

As with many other business problems, necessity is the mother of invention. Almost every company now has inventions that come from do-it-yourself, homegrown efforts. It is the only way forward before a workable, scalable solution is created.

Also likely installed are several versions and half-attempts at making the problem better inside an application or between systems. First, companies used data quality checks, then field validation efforts, then more hardened data standards. For all that work, human data entry staff invented "99999" and other bypass hacks. You can still see those today.

This data is what you are training your AI models on.

The largest legacy problem today is this data pioneer spirit turned hubris. IT pros and data science teams do the best they can with what they have – full stop. That satisficing behavior limits their contribution. It also injects unneeded error into all the models they are building and operationalizing. Much of the AI risk is self-inflicted, stemming from poor entity resolution management. Actuarial staff feel largely immune at the aggregated triangle and spreadsheet point of view, but that is a false sense of security, because they cannot see into the granularity of the transactions beneath a spreadsheet cell. This is changing dramatically fast with the emergence of a machine learning- and AI-wielding corps of actuarial data scientists – employed professionals, academicians, and consultants.

New techniques like large language models (LLMs) are making short work of text data in all forms, creating new segmentations and features for existing models while also enabling new modeling techniques to iterate faster. The next phase of workflow improvement is almost limitless. All these breakthrough efforts need to be applied at the entity level to deliver their highest value.

The rise of industrial-grade entity resolution

The financial stress indices are high. The sympathy toward companies is low. The opportunity to use AI and seemingly anonymous internet connections makes people think they can't get caught – a presumption with a lot of truth to it these days.

A shout-out to our industry's career-criminal counterparts enjoying the status of "transnational criminal organizations": terms like straw owners, encrypted messaging, assumed and stolen credentials, synthetic identities, and fake documentation are now everyday occurrences.

And that's just what relates to money. For truly awful perpetrators – anarchists, drug dealers, arms dealers, human traffickers, hackers, terrorists, spies, traitors, nation-state actors, and worse – the problem space of entity resolution is mission-critical.

Keeping tabs on entities (e.g., people and organizations) and the hidden relationships among them in real time is possible today. It elevates internal "good enough" learned implementations to "never finished, continuously adapting, real-time" data-driven implementations.

What you should do about entity

The most capable solutions sit around efforts already in place, so there is no need to rip and replace anything. That makes entity resolution easier to prioritize, as it can be adopted alongside what you do now. This extends to your analytic ambitions in cyber resilience and digital modernization, as it can interact seamlessly with the additional identifiers of digital entity resolution – emails, domains, IP addresses – which have an addressing corollary to a street address in a neighborhood. (Here is an earlier article I wrote for ITL, "Your Invisible Neighbors and You.")

Do yourself, your board, your customers, and your future AI successes a favor: Get serious about entities and entity resolution as the nearest thing to a single source of truth you can get.

Some Background

The author has built matching and fuzzy matching applications multiple times, with multiple technologies, over a four-decade career, and advises that benchmarking is essential for understanding fitness for use in entity resolution. Four out of five, or 80%, accuracy might be fine for some use cases and corporately negligent in others. Getting to the high 90s takes far more data and resources than most internal teams can dedicate on a sustained basis.
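To make the benchmarking point concrete, here is a minimal sketch, not any vendor's method: a naive edit-similarity matcher from Python's standard library, an assumed 0.8 threshold, and a handful of hypothetical labeled record pairs, scored against ground truth the way a fit-for-use benchmark would be.

```python
# Hypothetical labeled pairs: (record A, record B, same entity? ground truth).
from difflib import SequenceMatcher

labeled_pairs = [
    ("Jon Smith, 12 Oak St", "John Smith, 12 Oak Street", True),
    ("Acme Ins. Co.", "ACME Insurance Company", True),   # legitimate alias
    ("Maria Garcia", "Mario Garcia", False),             # near-identical, different person
    ("Pat Lee, 9 Elm Rd", "Pat Lee, 90 Helm Rd", False),
]

def is_match(a: str, b: str, threshold: float = 0.8) -> bool:
    """Naive fuzzy match: normalized edit similarity above an assumed threshold."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Benchmark: fraction of pairs where the matcher agrees with ground truth.
correct = sum(is_match(a, b) == truth for a, b, truth in labeled_pairs)
accuracy = correct / len(labeled_pairs)
print(f"accuracy: {accuracy:.0%}")
```

Even this tiny set exposes the failure modes: near-identical strings for different people can score as matches, while legitimate aliases score below the threshold. Closing those gaps – with more data attributes, normalization, and relationship context – is what separates 80% accuracy from the high 90s.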

A practical example from the author's experience is Verisk Analytics, which has billions of records of names and addresses coming from hundreds of carrier systems, all needing attribution at the entity level for highest business value. Verisk has instituted an industrial solution to supplement or replace methods the author's team originally built for fraud analytics.

The vendor it endorses, Senzing, is now being adopted in insurance after widespread use globally in government and security, customer management, financial integrity, and supply chain use cases. Senzing's methodology recognizes relationships across data attributes and features shared across disparate records and systems (e.g., names, addresses, phone numbers) in real time.
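To illustrate the core idea of resolving disparate records into entities via shared attributes – this is a toy union-find sketch over hypothetical records, not Senzing's actual methodology – any two records that share an exact identifier value (here, phone or email) are linked into one entity:

```python
# Toy entity resolution: link records sharing an exact identifier, then
# group them into entities with union-find. Records are hypothetical.
from collections import defaultdict

records = [
    {"id": 1, "name": "J. Smith",   "phone": "555-0101", "email": "js@example.com"},
    {"id": 2, "name": "John Smith", "phone": "555-0101", "email": None},
    {"id": 3, "name": "John Smith", "phone": None,       "email": "js@example.com"},
    {"id": 4, "name": "Ann Jones",  "phone": "555-0199", "email": "aj@example.com"},
]

parent = {r["id"]: r["id"] for r in records}

def find(x):
    # Follow parent pointers to the root, compressing the path as we go.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Index records by each non-null identifier value, then link co-indexed records.
by_value = defaultdict(list)
for r in records:
    for field in ("phone", "email"):
        if r[field]:
            by_value[(field, r[field])].append(r["id"])
for ids in by_value.values():
    for other in ids[1:]:
        union(ids[0], other)

# Collect resolved entities: records 1-3 collapse into one entity, 4 stands alone.
entities = defaultdict(list)
for r in records:
    entities[find(r["id"])].append(r["id"])
print(list(entities.values()))
```

Note that records 1 and 2 share only a phone number and records 1 and 3 share only an email, yet all three resolve to one entity transitively. Production systems add fuzzy comparison, scoring, and the ability to un-merge as new evidence arrives, which is where the industrial-grade difficulty lies.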

Modern entity resolution systems can deploy inside your company as an SDK, so you never need to share any data to move forward. Multiple use cases around your enterprise can also benefit from entity resolution management that is reliable on the first shot.

Was the Fed Rate Cut a Mistake?

Michel Léonard, chief economist for the Triple-I, says the Fed's statement downplaying the possibility of future rate cuts will keep key interest rates high.


Paul Carroll

We've had a prolonged dance with the Federal Reserve over whether they would cut rates again this year, and they finally did, on Dec. 10, right as you and I began this conversation. They also signaled they’re probably done for a while. Where do we go from here?

Michel Léonard

First, I think the Fed made a policy mistake by cutting rates while changing its monetary outlook from easing to holding. Setting expectations has more impact on growth than actual rate changes do. By saying “don’t expect rate cuts,” they took the wind out of the current easing’s impact. We’re lucky the stock market didn’t drop by 4% to 5% in the days since.

Instead, the proper policy, in my and many economists’ opinion, would have been to skip the cut but keep easing expectations alive. That would have had a strong multiplier effect on GDP.

Had the Fed stuck to easing, we would have started to see decreases in mortgage and auto loan rates by Q3 2026. We needed those lower rates to fuel homeowners and personal auto insurance premium volume growth. Instead, we’re likely to face historically high mortgage and auto loan rates through Q1 2027.  Most likely, we’re stuck with weak housing starts, weak existing home sales, and lower auto sales, and without that homeowners and personal auto premium volume driver. 

Commercial property, especially, needed the Fed’s help. We have all these commercial Class A downtown conversions into housing sitting still. This is Q4 2023 all over again: The Fed said, “Don’t expect more rate cuts,” and took the wind out of economic activity throughout 2024. Class A conversions were just starting to recover – now expect no significant changes until Q4 2026.

It’s likely the Fed just caused another soft year of overall U.S. GDP growth and P&C insurance underlying growth, especially when it comes to economic premium volume growth drivers. 

I was just looking at premium volume growth for homeowners, personal auto, and commercial property in 2025. Typically, actuaries build in a baseline for premium volume growth by adding net GDP growth and CPI.  For 2025, that would bring us to about 7%. But premium volume growth for those lines is below 5%. The argument can be made that, at that level, premium volume growth was flat to negative in 2025. 

Paul Carroll

You make a compelling case, as always. So why do you think the Fed cut rates again?

Michel Léonard

I was surprised that the Fed would cut once this year. I was surprised when they cut twice, and I was speechless when they cut a third time. 

The Fed's estimate is for real GDP growth to decrease to about 1.7% by 2027. That's starting to be at the lower end of their goal. They do not see inflation picking up significantly, which is probably why they felt comfortable with the statement about further cuts.

But they’re totally flying blind here.

There’s the diminished growth multiplier impact of rate cuts that comes from changing expectations from easing to holding. Perhaps even more important, the Fed decided to do this with no GDP numbers since June and no CPI or employment numbers since September. For GDP, getting data for Q3 was critical because of inventory depletion in Q2. The same goes for getting CPI and unemployment numbers through November. You can’t make decisions about monetary policy without those three. How about without even one?

Paul Carroll

With Trump expected to name his next nominee to run the Fed in January, does that introduce another layer of uncertainty into the equation?

Michel Léonard

There’s a lot of noise in the market asking why the Fed made the statement about the direction of monetary policy. It did not need to. One view is that it did so to preempt rate cuts galore next year with Trump’s new appointment(s). I don’t think that’s the case.

First, there are many governors other than the chairman who get to vote on rates. 

Second, the Fed has already altered its inflation target. A rate cut with CPI at 3.0% means the current board of governors already tolerates annual inflation up to 3.5% (significantly more than the former 2.0% goal). 

Third, I was surprised by how mainstream the president’s leading candidate for Fed governor, Stephen Miran, is. He’s a consensus candidate, even though he might put more emphasis on growth than on price stability within the Fed’s dual mandate. Personally, I see that shift, within reason, as beneficial to the overall economy. That said, tolerating inflation up to 3.5% is not the same as up to 4.0%. That would ring alarm bells even for me.

Now keep in mind that an increase of one percentage point in tolerable annual inflation is significant. For context, one extra point compounded over a 35-year career means U.S. households have to increase their annual savings by 21% just to keep up.

Paul Carroll

What dates should we keep in mind for releases of economic data, so we know whether we’re getting a nice present or a lump of coal in our stocking?

Michel Léonard

The next key date is Dec. 16, for unemployment data. A couple of days later, we get CPI, then GDP on the 23rd. Let me walk through these in chronological order, starting with unemployment.

The recent ADP numbers were a bit worse than expected but certainly within an acceptable range. We're currently at 4.40% unemployment in the U.S., and the consensus is that the new number will be 4.45%. If we get anywhere above 4.45% or 4.5%, I think the market may start reacting. [Editor’s note: The unemployment rate came in at 4.6%.]

The market consensus for the CPI number right now is 3.05%. I think we can be fine up to 3.2% or 3.25%. If we get above that, if we get to 3.5%, that might not be catastrophic, but it would certainly be the last nail in the coffin of further rate cuts. [Editor's note: The CPI number came in at 2.7%. There were, however, anomalies in data collection because of the government shutdown, so the number is being treated with some caution.]

Now we get to GDP. The market consensus expectation for Q3, at 2.48% annualized growth, is much higher than what I and the Fed think is feasible, which is between 1.9% and 2.0%. The market consensus is likely overly optimistic because Q2 GDP reached 3.8% on a quarterly basis. Again, we’re flying blind. [Editor's note: The number for Q3 growth turned out to be 4.3%.]

Paul Carroll

We’ll have another of these conversations in January, and there’s so much uncertainty now, even about the economic numbers, that I can imagine you’ll want to hold your thoughts about next year until then, but can I tempt you into making any projections about 2026?

Michel Léonard

Market reaction to the Q3 and November economic releases will be critical in determining the course of the economy in the next six months, which makes that Dec. 23 release unusually significant in terms of potential impact on the equity market, consumer spending, and private commercial capital investments. 

My concern with the equity markets is the Fed's statement about expectations. And you can write this down: I think that decision is the most ill-advised the Fed has made in three years.

Paul Carroll

Thanks, Michel. Great talking to you, as always. 


Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Top Emerging Risks for Life and Health (Re)insurers

(Re)insurers must watch out for AI-related risks, geoeconomic confrontation, unsettled regulatory and legal environments, technological acceleration, and global inflation shocks. 


Sandra Said is the Vice President and Head of Global Enterprise Risk Management (ERM) Operations and Reporting at Reinsurance Group of America, Incorporated (RGA).


Paul Carroll

How does RGA define an emerging risk?

Sandra Said

RGA defines an emerging risk as a new or evolving risk that is difficult to assess and could impact the life and health insurance industry and RGA’s strategy.

Paul Carroll

What are some of the most significant emerging risks facing (re)insurers today?

Sandra Said

The risk landscape continues to evolve rapidly, shaped by a widening array of economic and social forces. Many of these risks are increasingly systemic and interconnected.

Among the most significant emerging risks are AI-related risks, geoeconomic confrontation, unsettled regulatory and legal environments, technological acceleration, and global inflation shocks. Each of these presents unique challenges that can affect the stability, operations, and strategic direction of (re)insurers globally.

Paul Carroll

How does AI pose a risk to the insurance industry?

Sandra Said

Threat actors are increasingly using AI to adapt their attack methods rapidly and to deploy sophisticated tools, such as deepfake voice and image generation. The result is a growing risk of more frequent and damaging cyber incidents, as AI-enhanced threats can adapt quickly, evade detection, and autonomously exploit vulnerabilities.

For (re)insurers, this means a higher frequency and intensity of attacks and increased risk of data breaches, operational disruptions, and reputational damage. Since the industry relies heavily on secure digital infrastructure for several core processes, a successful cyberattack could undermine financial stability and erode client trust.

(Re)insurers should continue to educate employees about deepfakes and social engineering, proactively monitor adversarial tactics, and protect data to prevent attacks. Companies should, among other things, adapt processes involving financial transactions or sensitive data transfer to include multiple verification steps for appropriate mitigation.

Paul Carroll

Can you explain the impact of geoeconomic confrontation on (re)insurers?

Sandra Said

Increased tension among major global economies, including political polarization, may have a negative impact on global trade and growth. Trade tensions, economic sanctions, and shifting alliances can disrupt international business relationships and cause shifts in economic conditions. For (re)insurers operating globally, this can lead to currency fluctuations, market volatility, and challenges in complying with differing regulatory requirements across regions. Increasing uncertainty means decisions may need to be made before all desired information is known. These factors may increase operational costs and require nimble risk management strategies.

Paul Carroll

Why is an unsettled regulatory and legal environment considered a major risk?

Sandra Said

Regulatory changes across jurisdictions can affect financial institutions’ operating models, business operations, and capital requirements, to name a few. The evolving landscape, especially regarding data privacy and emerging technologies, demands constant vigilance and adaptation. Increased compliance requirements may create a drain on financial resources, while non-compliance can damage reputation and attract regulatory scrutiny.

While regulatory shifts can drive innovation, they may also introduce complexity and uncertainty, potentially impacting strategic decisions and financial performance.

(Re)insurers may need to prioritize strategies for proactive regulatory engagement, integrating government relations into business units and anticipating regulatory obligations to stay ahead of changes. 

Paul Carroll

How does technological acceleration affect (re)insurers?

Sandra Said

Rapid technological change brings both opportunities and threats. (Re)insurers need to develop and retain the right talent – those who have the technical skills along with the knowledge of the company, its technical capabilities, and its ways of working. Ensuring a talent pipeline exists within the organization is key, including robust succession planning to build the next generation of digital skills necessary following retirements and attrition.

The experience and skills needed to build emerging technology solutions are in high demand to support changing business needs. The challenge lies in integrating new technologies effectively; failure to do so can impact competitiveness and operational efficiency. Increased digitalization also heightens the importance of data privacy and regulatory compliance.

(Re)insurers may take a proactive approach to technology adoption, while balancing innovation exploration (through monitoring and proof-of-concepts) with security and compliance considerations.

Paul Carroll

What steps can (re)insurers take to address these emerging risks?

Sandra Said

Agile and responsive risk management is essential. Integrating emerging risk insights into strategic planning and developing targeted action plans is no longer a “nice-to-have”; it is a strategic imperative. Equally, fostering a culture of risk awareness and knowledge sharing helps ensure emerging risks are openly communicated throughout the organization. 

At RGA, we have designed and implemented an online platform for enterprise-wide collaboration and information sharing. Education sessions delivered by subject matter experts throughout RGA help cascade knowledge about these emerging risks across the enterprise. Such proactive engagement with cross-functional experts and ongoing education can help organizations anticipate challenges, seize opportunities, and maintain a competitive edge in a complex global environment.

In addition, scenario analysis enables (re)insurers to evaluate how emerging risks might impact their business under different future conditions. By conducting this analysis, organizations can develop proactive strategies and contingency plans to ensure readiness when these scenarios materialize. They also can – and should – establish early warning indicators that serve as alerts when potential risk scenarios begin to unfold.