
Insurance AI Needs Context Over Speed

Heavy AI investment yields limited returns in insurance because speed-focused automation lacks decision-making context.


The insurance industry is investing heavily in AI, primarily to automate functions across underwriting, claims, fraud detection, and customer experience. However, despite the increase in investment, few carriers have extracted outsize value from AI, largely because enterprise rewiring has been focused solely on speed instead of context. 

Automation is undeniably on the rise, but automation that prioritizes speed without understanding runs the risk of misfires. And in insurance, misfires are costly. Humans remain essential for nuanced decisions, especially in complex or highly emotional scenarios. Scaling AI responsibly is possible, but it requires a shift from a speed-first to a context-first approach. Models must understand not only the patterns they're analyzing, but the underlying reasons behind them.

The Accuracy Gap

Insurers have made significant progress in developing predictive models that identify probabilities with increasing precision. However, these models are constrained by the need for explainability. Regulatory requirements mean actuarial and underwriting models rely on limited, well-understood variables, creating a gap between what advanced machine learning could predict and what insurers can reasonably deploy. As a result, models often struggle to capture the full nuance behind risk, reinforcing the need for human judgment in complex or exceptional cases.

Generative AI and machine learning systems can support everything from claims triage to fraud detection, yet their decision-making abilities quickly erode when context is missing. Because of this, underwriters and claims professionals continue to shoulder the burden of work, as most cases need manual intervention as "exception" cases from a standard predictive model: for example, adjusting perils for unusual environmental factors, or managing emotionally charged claims where customer history, tone, and circumstances matter.

Underwriters are often presented with the promise of "black box outputs" that could materially improve decision quality, even though such models are rarely fully incorporated into production due to transparency requirements. The potential performance gains are compelling, but the lack of explainability introduces governance, compliance, and audit challenges that prevent widespread adoption.

As a result, AI and automation are often used to accelerate modeled decisions, such as automating data gathering processes, rather than to enhance the quality of those decisions. Speed becomes the default measure of progress, while insurers still rely on humans to interpret context and manage high-stakes judgments. In a regulated industry where defensibility is paramount, this focus on efficiency over insight limits the potential value of AI.

Why Context Matters

Context is the cornerstone of trustworthy outputs. Contextual intelligence allows insurers to model the nuances behind decision-making that traditional analytical approaches often miss. Decisions aren't made solely on isolated transactions or entities; they depend on how these elements relate to one another.

Many critical factors remain hidden: loss performance reflects both human behavior and environmental conditions, and while the latter are difficult to influence, the former, like insureds' relationships with other individuals or businesses, are frequently under-modeled. Most insurers evaluate behavior in the context of a specific line of business, but miss signals such as commercial directors' histories at prior firms, social relationships with other stakeholders, or patterns spanning personal and commercial lines. Accessing this kind of data quickly and reliably is challenging, which is why context is so often missing from these decisions.

While horizontal uses of AI, such as summarization or transcription, provide operational support, they rarely deliver transformational change. Vertical, context-aware AI, by contrast, enables true next-best-action guidance. It can prioritize claims based on severity and customer value, evaluate broker behavior over time, or surface hidden relationships across complex books of business. According to Deloitte's 2025 Insurance Technology Trends Report, even as AI systems progress, they will still rely heavily on connected datasets to provide "actionable insights" to both regulators and customers.

Transformative Applications

Insurers are already beginning to explore how contextual AI can reshape core operations. In underwriting, context-aware models can evaluate risk holistically, incorporating nuanced exposures, historical performance, customer behavior patterns, and third-party datasets. These systems support fairer, more accurate, and more consistent decisions. According to a 2025 CEFPro study, insurers already applying contextual AI to underwriting have cut processing time by 31% and improved risk assessment by more than 40% while boosting overall efficiency.

In claims, AI models paired with human oversight are accelerating adjudication while preserving transparency and auditability. Context helps claims teams identify which cases can move quickly and which require deeper review, improving both cycle time and customer satisfaction while standing up to regulatory scrutiny. It also helps more accurately model claim severity and find aggregate recovery opportunities arising from recurring third-party fault, something that often goes undetected in claim-centric analyses.

In fraud detection, contextual AI can identify coordinated fraud rings, reduce false positives, and improve straight-through processing rates. Building contextual relationships before flagging anomalies leads to far more accurate decisions: the largest threats are caught confidently, and good customers flow through frictionlessly, improving customer experience while reducing leakage and supporting revenue growth.

Governance and Readiness

Strong data governance is no longer optional; it is a prerequisite for responsible AI adoption. No model, no matter how technically sophisticated, can compensate for weak data foundations. Insurers leading the market are actively investing in robust governance frameworks that include model monitoring, explainability standards, auditable decision trails, and continuing regulatory alignment.

The Evident AI Insurance Index shows that the top-performing insurers prioritize transparency and ethical AI development, linking performance to strong governance and leadership accountability. This demonstrates that AI success is not just about technology; it's about embedding trust and oversight into every stage of deployment.

Meanwhile, the IAIS Mid-Year Insurance Report emphasizes that resilience in AI deployment depends on trustworthy data foundations and consistent oversight. Together, these findings highlight a critical lesson: insurers that invest in robust governance and high-quality data are better positioned to scale AI effectively, reduce risk, and generate measurable business value.

Defining the Next Chapter of Insurance AI

The insurance industry has reached an inflection point. AI's success will not be measured by how many processes it automates, but by how effectively it helps humans make better, more defensible decisions. Insurers that prioritize integrated contextual data, invest in human-AI collaboration, and build systems that are explainable and auditable will be the ones who unlock AI's full potential.

As the industry moves into 2026 and beyond, context will be the competitive currency that determines which insurers lead the market, and which are left trying to explain decisions they can't fully understand.

Flood Risk Demands New Insurance Approach

A $255 billion flood protection gap exposes outdated risk models, pushing the industry toward parametric insurance and captive structures.


Flood risk is no longer a peripheral climate concern. It is fast becoming one of the most underestimated balance-sheet threats facing businesses and insurers globally. Over the last five years alone, flooding has caused an estimated $325 billion in economic losses worldwide, yet only $70 billion was insured (source: Munich Re), exposing a widening protection gap that the industry can no longer ignore.

This is not merely a story of rising water levels. It is a story of outdated assumptions.

Traditional flood models, rooted in historical event catalogues, are increasingly unfit for purpose in a world of volatile weather patterns, rapid urbanization, and climate-driven extremes. As Hamid Khandahari of Descartes Underwriting says, historical data "cannot fully account for events beyond anything previously recorded." The implications for underwriting, pricing, and capital allocation are profound.

The new reality: unpredictable, underinsured, unprepared

The scale of the challenge is stark. In the U.K. alone, surface-water flood risk could affect 6.1 million properties by 2050—a 30% increase compared to previous projections (source: NaFRA). In the U.S., flood events jumped nearly 30% year-on-year between 2022 and 2023, with several states seeing events quadruple (source: Lending Tree).

Yet, despite mounting evidence, risk perception remains dangerously muted. Many organizations still operate under a flawed logic: "We haven't flooded before, so we probably won't." This mindset is actively reinforced by commercial insurance dynamics. When losses do occur, the response is typically capacity withdrawal, higher deductibles, exclusions, or outright non-renewal—exactly when resilience is most needed.

This has created a vicious cycle: low perceived risk leads to underinsurance; the first major loss triggers rate shock and restricted coverage; risk then becomes both more expensive and harder to transfer.

Technology is changing what's possible

The industry now has the tools to break this cycle—but only if it evolves how it uses them.

Advanced flood forecasting, hydrodynamic modeling, and IoT sensor networks are changing the economics of risk. Leading platforms such as Previsico's can now provide 36-48 hours of warning, allowing businesses to move assets, shut down operations safely, and materially reduce losses.

The Balfour Beatty Vinci HS2 case illustrates this shift in practice. After suffering multimillion-pound flood losses, the company used predictive flood intelligence and sensors to protect sites, relocate critical equipment, and avoid repeat losses when the next event occurred.

Crucially, parametric solutions are not constrained by the same capital bottlenecks that plague traditional catastrophe underwriting. They can also be structured to cover deductibles, gaps, or even function as primary protection where conventional policies fail.

Yet adoption remains strikingly low. Despite 43% of U.K. organizations reporting flood impact, only 7% currently use parametric insurance in their flood risk financing strategy. That disconnect represents both a risk and an opportunity for the market. 

The strategic role of captives

This is where captives emerge as the industry's most underused strategic asset.

Captives are no longer simply about premium arbitrage or tax efficiency. They are fast becoming risk laboratories—vehicles for innovation, structured retention, and long-term resilience.

More than 1,700 new captives have been formed since 2020, bringing the global total above 7,000. Many are now absorbing flood risk by necessity, not choice—particularly in the U.S., where obtaining flood risk coverage is often incredibly difficult. These captives are then highly motivated to encourage operating divisions to manage flood risk effectively.

When combined with parametric structures, captives unlock a powerful model:

  • The captive retains frequency risk.
  • Parametric reinsurance absorbs severity risk.
  • The business benefits from faster liquidity and reduced earnings volatility.

This architecture also helps address "basis risk"—the mismatch between actual loss and parametric payout—by allowing the captive to smooth inconsistencies and manage retained exposures.
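The retention split above can be made concrete with a rough numerical sketch. All figures here are assumptions for illustration; note too that a real parametric cover pays on a trigger (e.g., measured flood depth at a location), not on indemnified loss, so this simplified indemnity-style model deliberately ignores basis risk.

```python
# Hypothetical captive + parametric split: the captive retains each flood
# event's loss up to an assumed $2M attachment point (frequency risk), and
# the parametric reinsurance absorbs everything above it (severity risk).
ATTACHMENT = 2_000_000

def split_loss(event_loss: int) -> tuple[int, int]:
    """Return (captive_retained, parametric_recovery) for one flood event."""
    retained = min(event_loss, ATTACHMENT)
    recovered = max(event_loss - ATTACHMENT, 0)
    return retained, recovered

# An illustrative year: several small events (frequency) and one severe flood.
events = [150_000, 400_000, 90_000, 5_500_000]
retained = sum(split_loss(x)[0] for x in events)
recovered = sum(split_loss(x)[1] for x in events)
print(retained, recovered)  # 2640000 3500000
```

The captive absorbs the small, frequent losses plus the first layer of the severe one, smoothing earnings, while the parametric layer supplies fast liquidity for the tail event.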

In practice, this makes flood risk more insurable, more predictable, and more strategically manageable.

From risk transfer to risk resilience

The industry stands at an inflection point.

Flood is no longer just a peril to be transferred; it is a systemic risk that must be actively managed, predicted, and financed in new ways. The combination of advanced forecasting, real-time data, parametric triggers, and captive-backed structures represents a shift from exposure to resilience.

The winners in this market will not be those who wait for traditional models to catch up. They will be the insurers, reinsurers, brokers, and risk managers who accept that the future of flood insurance is not about pricing the past—but engineering resilience for a climate-altered future.


Jonathan Jackson

Jonathan Jackson is CEO at Previsico.

He has built three businesses to valuations totaling £40 million in the technology and telecom sector, including launching the U.K.’s longest-running B2B internet business.

Our 10 Most-Read Articles From 2025

It was an AI kind of year. No surprise there. But there was also great interest in social inflation, drones, the Predict & Prevent model and even lessons from the NFL playoffs.


Of the scores of articles we published this year on AI, five, in particular, struck a chord with you, our esteemed readers: on how AI is reshaping workers' comp, compliance and fraud, on how to unlock ROI (a tricky task) and on how AI's progress is accelerating, with no end in sight. 

Social inflation remained a hot topic, with two pieces in the top 10 on how verdict sizes in insurance cases have tripled since COVID and on how insurers are losing billions of dollars before cases even get to trial. 

Rounding out the top 10 are articles on how drones are profoundly changing how property claims are handled, how misconceptions about electrical fires lead to disasters that could be prevented and (appropriately for this time of year) what the NFL playoffs can teach us about innovation.

Herewith the highlights from 2025, as determined by your interest in them:

Artificial Intelligence

The most-read article, AI and Automation Reshape Workers' Comp, says "67% of organizations expect over 80% of claims to be automatically triaged and assigned in the future — without any manual intervention." The piece explores other efficiencies that AI offers, describes tools that are detecting fraud and offers advice on how to encourage adoption of AI.

At #2 is Why AI Is Game-Changer for Insurance Compliance. It says: "90% of small business owners are unsure about the adequacy of their coverage. AI serves as an intelligent assistant, quickly surfacing important information and providing context when needed.... The impact includes faster verification, fewer coverage and requirement gaps left unaddressed, and faster time to compliance. As Gartner predicts a doubling in risk and compliance technology spending by 2027, companies recognize that AI solutions that enhance collaboration deliver the greatest returns."

At #3 is The Key to Unlocking ROI From AI. It states its thesis starkly: "Your AI and automation initiatives will fail. Not because of bad code. Not because your data scientists aren't smart enough. But because you'll lack the one thing that determines whether any AI initiative succeeds: observability." It then explains at length how "you can't see what your automation is doing — how it's affecting business processes, where it's breaking down, and what value it's delivering."

How AI Can Detect Fraud and Speed Claims was the fourth-most read. It warns that "today's fraudsters have access to AI-generated medical records, synthetic identities, and eerily convincing deepfake videos, allowing them to construct entirely fabricated incidents with alarming precision." But, on a hopeful note, the article then explains how, "with the ability to process billions of data points in real time, AI-powered fraud detection systems can do what human analysts cannot: instantly cross-reference claims against vast datasets, identify inconsistencies, and flag suspicious activity before payouts occur. This technology enables insurers to detect deepfake-generated documents and videos, analyze behavioral patterns that suggest fraudulent intent, and shut down scams before they drain company resources."

At #6 was my summary of an exhaustive research paper published mid-year on the state of AI and where it would go from there: Mary Meeker Weighs in on AI. Among many (many) other things, the prominent analyst laid out startling detail (the cost of using AI has declined 99% just in the past two years), offered useful examples (more than 10,000 doctors at Kaiser Permanente use an AI assistant to automatically document patient visits, freeing three hours a week for 25,000 clinicians) and made some bold projections (by 2030, AI will run autonomous customer service and sales operations, and by 2035 will operate autonomous companies).

Social Inflation

At #5 was We’re Losing Billions—Before We Ever Get to Court, and at #7 was The Tripling of Verdict Size Post-COVID. Both were written by Taylor Smith and John Burge, who also wrote two of the three most-read articles of 2024, on what I broadly think of as social inflation (including third-party litigation funding and other aggressive tactics by plaintiff lawyers). 

In "We're Losing Billions," they write that property/casualty carriers have a blind spot in how they negotiate: "In an era where 99% of litigated claims settle, the cultural instinct on the defense side to 'hold back' our strongest arguments has become a billion-dollar blind spot. We ration key negotiating points, fearing we’ll run out of ammo. We save key arguments to “surprise them at trial.” We frame less, anchor less, and persuade less. Meanwhile, the plaintiff bar is doing the opposite—and it’s working."

In "The Tripling of Verdict Size," Taylor and John describe data they've collected on 11,000 P&C verdicts, across the industry, to address the fact that carriers typically just see their own slice of the verdicts. They argue that only by amassing better data can insurance lawyers keep up with the plaintiff bar in understanding how a case is likely to play out in a certain venue, in front of a certain judge, against a certain lawyer -- and fashion settlement offers accordingly.

(If those articles appeal to you, I'd encourage you to watch the webinar I recently conducted with Taylor and with Rose Hall: "Modernizing Claims: Competing Against AI-Powered Plaintiff Attorneys.")

Predict & Prevent

Hazardous Misconceptions on Electrical Fires was #8 on the top 10 list, highlighting how the insurance industry can help prevent many of the "approximately 51,000 fires annually in the U.S. [that result] in over $1.3 billion in property damage." The piece describes how we can educate policyholders about the fact that circuit breakers don't catch all electrical problems, that even new homes can have electrical issues and that there very often aren't warning signs of electrical problems before they start a fire. (The piece was written by Bob Marshall, CEO of Whisker Labs, which makes a device, the Ting, that detects electrical problems and that I think of as the poster child for the Predict & Prevent movement. I recently interviewed him here.) 

Drones 

Drones Revolutionize Property Insurance Claims, at #9, shows how drones have "emerged as a powerful tool for addressing some of the industry's most persistent challenges, including the need for increased accuracy, faster speed, and more cost-effectiveness" in property inspections during the claims process. 

Lessons From the NFL

It amused me to reread the final article to make the list: What NFL Playoffs Say About Innovation in Insurance. I wrote it following the conference championship games last January and opened by saying: "My main takeaway from the NFL conference championship games over the weekend was that I'm soooo ready to move on from the Kansas City Chiefs — anyone with me?" I've heard in the past 11 months from plenty of folks who are tired of looking up at the Chiefs in the standings — and, lo and behold, we don't have to worry about the Chiefs in the playoffs for the first time in 11 seasons.

After venting my spleen (I'm a frustrated Steelers fan), I got into how coaches were finally following the data and going for it on so many more fourth downs than they used to, on why it took them so long and on how insurers can learn from NFL coaches and throw off even deeply entrenched bad habits.

Wishing you all a healthy, happy and prosperous New Year!

(While desperately hoping that my Steelers beat the Ravens on Sunday.)

Cheers,

Paul

What If Manufacturers Provide Insurance for Free?

As embedded insurance takes hold, what if manufacturers heavily discount coverage or give it away so they can sell more product? How do insurers compete? 


During the internet boom of the late 1990s, I heard a term that stuck with me: "the Las Vegas business model." The term was used by a Harvard Business School professor on a panel I moderated -- and he said the results aren't pretty for any competitor caught in the cross-hairs.

The Las Vegas business model involves someone giving away a product -- YOUR product, if you're unlucky -- to sell more of something else. The professor called this the Las Vegas model because he said it's tough to sell run-of-the-mill hotel rooms or meals in Las Vegas when casinos will give away rooms and food to people deemed likely to leave enough money behind at the gambling tables.

The same problem could hit at least some parts of insurance, especially as embedded insurance gains steam. Apple doesn't need to make money off warranties, for instance; it just needs to keep your devices running so you can keep buying things through the Apple Store -- and Apple can keep collecting its tens of billions of dollars of commission each year. Many car makers have started offering insurance, but they're mainly in the business of selling cars. What if they start bundling insurance at a steep discount to help dealers persuade prospective customers to buy their car and not a competitor's? 

This could get ugly. 

The Las Vegas business model springs to mind because of a smart piece Tom Bobrowski published with us last week: "Tech Giants Aim to Eliminate Insurance Costs." The summary warns: "Technology companies view insurance as a cost to eliminate, not a business opportunity to pursue." 

He walks through some examples, including how Tesla is trying to minimize insurance costs as a way of bringing down the total cost of ownership so it can sell more vehicles. He also looks at cybersecurity, where huge software vendors such as Microsoft are doing their utmost to reduce vulnerability and reduce the need for insurance. 

I'd add the liability insurance Amazon offers. Amazon has every incentive to make it as cheap and convenient as possible for sellers to operate on its site -- and keep paying those hefty commissions to Amazon. Amazon doesn't even have to earn a profit on that insurance, so good luck to any insurer trying to compete. 

Tom says tech giants have four major advantages over insurance companies: 

  • Superior data, which comes from the ability to continuously monitor behavior, as Tesla can do with its cars and their drivers
  • Direct customer relationships, which eliminate distribution costs that constitute 15-25% of premiums
  • Technology infrastructure that can automate claims, detect fraud and model risks
  • Brand trust: Customers already trust them with payments, personal data, and critical services

For me, the first two are much more formidable than the last two. Tech giants can have a major advantage on data. So can other manufacturers, such as the big car companies, given all the sensors now being built into products. And any company that can embed insurance into the process of selling something else takes a huge chunk out of customer acquisition costs. 

As for the last two, I'd say insurers have extensive technology, too -- even if there are always complaints that it's dated. Insurers also have the sort of experience with processing claims, detecting fraud and modeling risks that requires all sorts of nuance and that tech companies would have to develop from scratch. Tech giants do seem to have brands deemed more trustworthy than those of insurers, in general, but that gap would surely narrow if the tech companies get into insurance in a big way, because that would put them into the business of denying lots of claims.

Despite the fears at the start of the insurtech wave a decade ago that insurance would be "Amazoned," as retail commerce had been, tech giants have mostly stayed away. Google tried car insurance but found it could sell leads for more than it would earn by selling insurance. Amazon is experimenting with telehealth and pharmacy services but has shied away from any major moves in healthcare.

In general, tech companies didn't want to commit the capital or have to deal with the extensive, state-by-state regulation that insurers face. Those reservations will continue, I believe. Besides, many giants from outside the industry are talking, at least for now, about insurance as a business opportunity, not as a cost to be eliminated. General Motors has said it hopes to generate $6 billion of insurance revenue by 2030, and Elon Musk has said insurance could account for 30-40% of Tesla's business. 

But I think Tom is right when he says the Las Vegas business model represents a major trend, even if different parts of the insurance industry will be affected at different rates and even if it will take, in his estimation, five to 20-plus years to play out.

An ugly trend for insurers, but one we should all keep in mind.

Cheers,

Paul

What Would You Do With $1 Trillion?

Record $14.6 billion fraud highlights an urgent need for entity resolution technology in P&C operations.


For the first time ever, direct premiums in P&C exceeded $1 trillion in 2025. Also a first in 2025: a $14.6 billion alleged fraud ring was exposed. (The prior record was $6 billion.)

The watchword for industry executives should be: "entity."

Fraud risk, customer experience, and effective AI? They're all keyed to entity. The money you make, the money you keep, and the faster you grow? Entity, again.

That total of direct premiums means there are now more than one trillion reasons to understand who is paying you and who you are paying. That "who" is an "entity" -- people, businesses, and organizations.

Entities have identity – names, addresses, phone numbers, etc. Logically, there are only three kinds of entities – trusted, unknown, and untrusted. If you can't distinguish among these three kinds, then you are reading the right article.

With interaction, entities also have history, behavior, and outcomes. Entities may be related to each other. Sometimes those relations are very transparent, like parent and child or employer and employee. Sometimes they are hidden, as in an organized crime ring or a collusive affiliation. Entities may be multifaceted – driver, renter, business owner, group leader, member of an organization, neighbor, volunteer, relative, known associate. These relationships all change over time, yet the entity remains the same.

Pause and reflect on this. Consider yourself, for example, as EntityONE. Now quickly list all the roles and relationships you have in the physical world – at home, at the office, in your neighborhood – and then online as an emailer, shopper, commentator, and reader. Your identity in all those physical and digital places may take different forms, but it is always you, EntityONE.

The everyday entity

In the day-to-day of insurance and business life, there is always a concern about fraud and abuse. From application through claims payment, your need to know your business extends from your new business funnel through third parties, vendors, customers, agents, and even staff.

A new person applies for car insurance, a business makes a claim involving a third party, an invoice arrives from a new address, an agent makes a submission, finance issues a payment – to trust or not to trust?

Names, addresses, phone numbers, and the like are the data traces that describe an entity. Whether physical or digital in origin, these data are typically scattered across various boxes in an organization chart and across different core, ancillary, and API-accessed third-party systems.

We store identifier elements like names and addresses with varying lengths, spellings, inaccuracies, and levels of incompleteness, in unstructured and semi-structured data entry fields and in free-form text like notes and templates.

Then we store them again and again over time, moving between systems, between carriers, between vendors, and of course, across multiple CRM applications, which are additionally stuffed with all manner of duplicate and partial records.

Think of yourself as EntityONE

If you tried to have your own self, hereafter called EntityONE, appear the same in every field in every system in every organization over time, you would fail. Even if you never moved and never changed your name, random data entry error alone would ruin your ambition.

One data exercise to try at home: If you have address data from northern California, find a system where "city" is collected as part of an address, then see how many ways "San Francisco" appears. At one large carrier, tens of thousands of transactions across five years of data entry produced 97 unique entries.

The correct answer was the dominant response, "San Francisco." Shorthand like "SF" and nicknames like "SanFran," "Frisco," and "San Fran" were next. A lower-case version of the correct answer, "san francisco," followed, along with all sorts of typos and transpositions. An often overlooked case was the space key entered as a valid character – "S F" is now different from "SF" – and those spaces could be leading, trailing, or in the middle. Another very frequent response, when permitted by a system's field-edit logic, was "blank": no entry at all, or in some cases any number of space-key entries.

If you ran a literal matching algorithm on the "city" field, EntityONE could in theory have 97 different "cities" in the data yet still be only a single unique entity.

Some other factors might also contribute to your failure to have perfect EntityONE data.

One system has separate fields for first name and last name, with no field for middle name and no fields for title/prefix, or suffix. Another system has one long field where all of that is supposed to be entered. Is it Dr. or Mrs. or Ms or Miss with suffix MD, PhD, DO?

Generally, the simplest of contact information – name, address, phone number – can be entered and stored so inconsistently in so many multiple places over time that EntityONE would not exist as a whole and unique name-address in the best of cases.

When it comes to legal entities – the EntityONE Family Trust, or your business version of EntityONE – it's still you, but you may now also have shared rights and not be the only decision-maker. So enough of thinking only of yourself.

Think of how difficult it might be to search for your customer when their data is entered and maintained across different systems in different ways. Your decades-old processes still treat paper and data as if they were the entities, rather than recognizing the entities that the paper and data describe.

This literal approach to data computing sits at the core of delivering customer experience, yet it leaves an opening for fraudsters and is the bane of AI.

Let this sink in: Data are not entities; entities have data.

Entities have data. You as EntityONE are unique. All the aliases, name changes, addresses, business titles, partnership and shareholder situations, and your honorifics aside, you are still you. Even after you pass away, the estate of EntityONE will persist.

Resolving the many ways you can be identified is the problem you now need to turn inside out.

Every other person, business, group, and organization has the same issues. When you encounter any identity, you need to resolve it down to the core entity, or you will not know who you are dealing with.

Whether an entity is legal, not legal, illegal, foreign, or even sanctioned, much of the identity data we see every day presents as thin, with seemingly little to nothing behind it. Some entities appear squeaky clean. Some have long years of history. Some look like they popped out of thin air. Some, like a bad penny, keep popping up after we have decided not to interact with them. Synthetic, assumed, straw-man, takeover, hacked, phished, fraudulent, and other forms of malfeasance also exist.

Keeping tabs on entities (e.g. people and organizations), and the hidden relationships among them in real time is now practical with advanced analytics powered by a technology known as entity resolution. Entity resolution brings all the snippets of various identifiers around an entity into focus.

Entity resolution may involve several efforts, all claiming to do the same thing across your data- and computer-laden landscape. In the earliest days of computing, crazy-sounding technical terms sprouted to address this existential data identity issue of keeping EntityONE clearly in focus. It started field by field in databases and has modernized into complex multi-attribute vector and graph analytics.

These geeky but incomplete early algorithms left a lot undone while still showing some value. They had names like Levenshtein distance (an edit-distance formula for detecting likely typos in similar text), Hamming distance, and, more recently in AI terms, token-based Jaccard and cosine TF-IDF similarity approaches. There are dozens upon dozens of challenger approaches. But an analytic or a technique is not a product or a solution.
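For the curious, two of the classic measures named above can be sketched in a few lines. This is a teaching sketch, not a production implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert/delete/substitute) to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = curr
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity: |A intersect B| / |A union B|."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

print(levenshtein("Francisco", "Fransisco"))   # 1: a single substitution
print(jaccard("San Francisco", "San Frisco"))  # ~0.333: one shared token out of three
```

Each measure catches a different failure mode: edit distance handles typos within a token, while token-set similarity handles reordered or partially shared phrases; neither alone resolves an entity.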

An early inventor created a combination of steps and orchestrated a set of code he called "fuzzy matching." (In memory of Charles Patridge, here is a link to a seminal paper he wrote.) Many data analytics communities shared that code and subsequent innovations to make progress on name and address standardization and matching. The postal service benefited greatly from more deliverable mail; database marketing boomed; customer analytics and lifetime value ascended, as did provider, agent, and vendor scorecards with more ambitious service-level monitoring.
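In the spirit of that orchestrated "fuzzy matching," a toy pipeline might combine a standard-library similarity ratio with simple field rules. The field names, thresholds, and matching rule here are illustrative assumptions, not Patridge's method or any vendor's:

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Normalized string similarity via Python's stdlib SequenceMatcher."""
    a, b = a.strip().lower(), b.strip().lower()
    return SequenceMatcher(None, a, b).ratio() >= threshold

def same_entity(rec1: dict, rec2: dict) -> bool:
    """Toy rule: records match if names are fuzzy-similar AND address or phone agrees."""
    name_ok = similar(rec1["name"], rec2["name"])
    addr_ok = similar(rec1["address"], rec2["address"])
    phone_ok = rec1.get("phone") == rec2.get("phone")
    return name_ok and (addr_ok or phone_ok)

a = {"name": "Jonathan Q. Smith", "address": "12 Main St, San Francisco", "phone": "4155550100"}
b = {"name": "Jonathon Q Smith",  "address": "12 Main Street, SanFran",  "phone": "4155550100"}
print(same_entity(a, b))  # True: names fuzzy-match and phones agree exactly
```

Real entity resolution replaces these hand-tuned rules with scoring across many more attributes and learns from the data itself, which is exactly the gap between a homegrown script and an industrial solution.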

As with many other business problems, necessity is the mother of invention. Almost every company now has inventions that come from do-it-yourself, homegrown efforts. It is the only way forward before a workable, scalable solution is created.

Also likely installed are several versions and half-attempts at making the problem better inside an application or between systems. First, companies used data quality checks, then field validation efforts, then more hardened data standards. For all that work, human data entry staff invented "99999" and other bypass hacks. You can still see them today.

This data is what you are training your AI models on.

The largest legacy problem today is this data pioneer spirit turned hubris. IT pros and data science teams do the best they can with what they have – full stop. That satisficing behavior limits their contribution. It also injects unneeded error into all the models they are building and operationalizing. Much of the AI risk is self-inflicted poor entity resolution management. Actuarial staff feel largely immune at the aggregated triangle and spreadsheet point of view, but that is a false sense of security, since they cannot see into the granularity of transactions beneath a spreadsheet cell. This is changing dramatically fast with the emergence of a machine learning and AI-wielding actuarial data scientist corps of employed professionals, academicians, and consultants.

New techniques like large language models (LLMs) are making short work of text data in all forms, creating new segmentation and features for existing models while enabling new modeling techniques to iterate faster. The next phase of workflow improvement is almost limitless. But all these breakthrough efforts need to be applied at the entity level to have their highest value.

The rise of industrial-grade entity resolution

The financial stress indices are high. The sympathy toward companies is low. The opportunity to use AI and seemingly anonymous internet connections makes people think they can't get caught – a presumption with a lot of truth to it these days.

A shout-out to our industry's career-criminal counterparts enjoying the status of "transnational criminal organizations": Terms like straw owners, encrypted messaging, assumed and stolen credentials, synthetic identities, and fake documentation are now everyday occurrences.

And that's just what relates to money. For truly awful perpetrators – anarchists, drug dealers, arms dealers, human traffickers, hackers, terrorists, spies, traitors, nation-state actors, and worse – the problem space of entity resolution is mission-critical.

Keeping tabs on entities (e.g., people and organizations), and the hidden relationships among them, in real time is possible today. It elevates internal "good enough" learned implementations to "never finished, continuously adapting, real-time" data-driven implementations.

What you should do about entity resolution

The most capable solutions sit around existing efforts already in place, so there is no need to rip and replace anything. This makes entity resolution easier to prioritize, as it can be adopted alongside what you do now. It also extends to your analytic ambitions in cyber resilience and digital modernization, since it can interact seamlessly with additional identifiers for digital entity resolution – emails, domains, and IP addresses, which have an address corollary in a street address in a neighborhood. (Here is an earlier article I wrote for ITL on "Your Invisible Neighbors and You.")

Do yourself, your board, your customers, and your future AI successes a favor and get serious about entities and entity resolution as the nearest thing to a single source of truth you can get.

Some Background

The author has built matching and fuzzy matching applications multiple times with multiple technologies over a four-decade career and advises that benchmarking is essential for understanding fitness for use in entity resolution. Four out of five, or 80%, accuracy might be fine for some use cases and considered corporately negligent in others. Getting to the high 90s takes much more data and resources than most internal teams can dedicate on a sustained basis.

A practical example from the author's experience is Verisk Analytics, which has billions of records of names and addresses coming from hundreds of carrier systems, all needing attribution to an entity level for highest business value. Verisk has instituted an industrial solution to supplement or replace methods the author's team originally built for fraud analytics.

The vendor it endorses, Senzing, is now being adopted in insurance after widespread use globally in government and security, customer management, financial integrity, and supply chain use cases. Its methodology creates the capability to recognize relationships across data attributes and features shared among disparate records and systems (e.g., names, addresses, phone numbers) in real time.

Modern entity resolution systems can deploy inside your company as an SDK, so you never need to share any data to move forward. Multiple use cases around your enterprise can also derive benefit from improving entity resolution management so it is reliable on the first shot. 

Was the Fed Rate Cut a Mistake?

Michel Léonard, chief economist for the Triple-I, says the Fed's statement downplaying the possibility of future rate cuts will keep key interest rates high.


Paul Carroll

We've had a prolonged dance with the Federal Reserve over whether they would cut rates again this year, and they finally did, on Dec. 10, right as you and I began this conversation. They also signaled they’re probably done for a while. Where do we go from here?

Michel Léonard

First, I think the Fed made a policy mistake by cutting rates while changing the monetary outlook from easing to holding. Setting expectations has more impact on growth than actual rate changes. By saying “don’t expect rate cuts,” they took the wind out of the current easing’s impact. We’re lucky the stock market didn’t drop by 4-5% in the days since.

Instead, the proper policy, in my and many economists’ opinion, would have been to skip the cut but keep easing expectations alive. That would have had a strong multiplier impact on GDP.

Had the Fed stuck to easing, we would have started to see decreases in mortgage and auto loan rates by Q3 2026. We needed those lower rates to fuel homeowners and personal auto insurance premium volume growth. Instead, we’re likely to face historically high mortgage and auto loan rates through Q1 2027.  Most likely, we’re stuck with weak housing starts, weak existing home sales, and lower auto sales, and without that homeowners and personal auto premium volume driver. 

Commercial property, especially, needed the Fed’s help. We have all these commercial Class A downtown conversions into housing sitting still. This is Q4 2023 all over again: The Fed said, “Don’t expect more rate cuts,” and took the wind out of economic activity throughout 2024, including Class A conversions. They were just starting to recover – now expect no significant changes until Q4 2026.

It’s likely the Fed just caused another soft year of overall U.S. GDP growth and P&C insurance underlying growth, especially when it comes to economic premium volume growth drivers. 

I was just looking at premium volume growth for homeowners, personal auto, and commercial property in 2025. Typically, actuaries build in a baseline for premium volume growth by adding net GDP growth and CPI.  For 2025, that would bring us to about 7%. But premium volume growth for those lines is below 5%. The argument can be made that, at that level, premium volume growth was flat to negative in 2025. 

Paul Carroll

You make a compelling case, as always. So why do you think the Fed cut rates again?

Michel Léonard

I was surprised that the Fed would cut once this year. I was surprised when they cut twice, and I was speechless when they cut a third time. 

The Fed's estimate is for real GDP growth to decrease to about 1.7% by 2027. That's starting to be at the lower end of their goal. They do not see inflation picking up significantly, which is probably why they felt comfortable with the statement about further cuts.

But they’re totally flying blind here.

There’s the diminished growth-multiplier impact of rate cuts once expectations change from easing to holding. Perhaps even more important, the Fed decided to do this with no GDP numbers since June and no CPI or employment numbers since September. For GDP, getting data for Q3 was critical because of inventory depletion in Q2. The same goes for CPI and unemployment numbers through November. You can’t make decisions about monetary policy without those three. How about without even one?

Paul Carroll

With Trump expected to name his next nominee to run the Fed in January, does that introduce another layer of uncertainty into the equation?

Michel Léonard

There’s a lot of noise in the market asking why the Fed made the statement about the direction of monetary policy. It did not need to. One view is that it did so to preempt rate cuts galore next year with Trump’s new appointment(s). I don’t think that’s the case.

First, there are many governors other than the chairman who get to vote on rates. 

Second, the Fed has already altered its inflation target. A rate cut with CPI at 3.0% means the current board of governors already tolerates annual inflation up to 3.5% (significantly more than the former 2.0% goal). 

Third, I was surprised by how mainstream the president’s leading candidate for Fed governor, Stephen Miran, is. He’s a consensus candidate, even though he might put more emphasis on growth than price stability when it comes to the Fed’s dual mandate. Personally, I see that shift, within reason, as beneficial to the overall economy. That said, tolerating inflation up to 3.5% is not the same as up to 4.0%. That would ring alarm bells even from me. 

Now keep in mind that an increase of one percentage point in tolerable annual inflation is a significant number.  For context, 1% compounded over a 35-year career means U.S. households have to increase their annual savings by 21% just to keep up. 

Paul Carroll

What dates should we keep in mind for releases of economic data, so we know whether we’re getting a nice present or a lump of coal in our stocking?

Michel Léonard

The next key date is Dec. 16, for unemployment data. A couple of days later, we get CPI, then GDP on the 23rd. Let me walk through these in chronological order, starting with unemployment.

The recent ADP numbers were a bit worse than expected but certainly within an acceptable range. We're currently at 4.40% unemployment in the U.S., and the consensus is that the new number will be 4.45%. If we get anywhere above 4.45% or 4.5%, I think the market may start reacting. [Editor’s note: The unemployment rate came in at 4.6%.]

The market consensus for the CPI number right now is 3.05%. I think we can be fine up to 3.2% or 3.25%. If we get above that, if we get to 3.5%, that might not be catastrophic, but it would certainly be the last nail in the coffin of further rate cuts. [Editor's note: The CPI number came in at 2.7%. There were, however, anomalies in data collection because of the government shutdown, so the number is being treated with some caution.]

Now we get to GDP. The market consensus expectation for Q3, at 2.48% growth annualized, is much more than I and the Fed think is feasible, which is between 1.9% and 2.0%. The market consensus is likely overly optimistic because Q2 GDP reached 3.8% on a quarterly basis. Again, we’re flying blind. [Editor's note: The number for Q3 growth turned out to be 4.3%.]

Paul Carroll

We’ll have another of these conversations in January, and there’s so much uncertainty now, even about the economic numbers, that I can imagine you’ll want to hold your thoughts about next year until then, but can I tempt you into making any projections about 2026?

Michel Léonard

Market reaction to the Q3 and November economic releases will be critical in determining the course of the economy in the next six months, which makes that Dec. 23 release unusually significant in terms of potential impact on the equity market, consumer spending, and private commercial capital investments. 

My concern with the equity markets is the Fed's statement about expectations. And you can write this down: I think that decision is the most ill-advised the Fed has made in three years.

Paul Carroll

Thanks, Michel. Great talking to you, as always. 


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Top Emerging Risks for Life and Health (Re)insurers

(Re)insurers must watch out for AI-related risks, geoeconomic confrontation, unsettled regulatory and legal environments, technological acceleration, and global inflation shocks. 


 


Sandra Said is the Vice President and Head of Global Enterprise Risk Management (ERM) Operations and Reporting at Reinsurance Group of America, Incorporated (RGA).


Paul Carroll

How does RGA define an emerging risk?

Sandra Said

RGA defines an emerging risk as a new or evolving risk that is difficult to assess and could impact the life and health insurance industry and RGA’s strategy.

Paul Carroll

What are some of the most significant emerging risks facing (re)insurers today?

Sandra Said

The risk landscape continues to evolve rapidly, shaped by a widening array of economic and social forces. Many of these risks are increasingly systemic and interconnected.

Among the most significant emerging risks are AI-related risks, geoeconomic confrontation, unsettled regulatory and legal environments, technological acceleration, and global inflation shocks. Each of these presents unique challenges that can affect the stability, operations, and strategic direction of (re)insurers globally.

Paul Carroll

How does AI pose a risk to the insurance industry?

Sandra Said

Threat actors are increasingly using AI to rapidly adapt attack methods and deploy sophisticated tools, such as deepfake voice and image generation capabilities. This escalation in AI-enabled cyberattacks is creating a growing risk of more frequent and damaging cyber incidents. AI-enhanced threats are a growing concern due to their ability to adapt quickly, evade detection, and autonomously exploit vulnerabilities. 

For (re)insurers, this means a higher frequency and intensity of attacks and increased risk of data breaches, operational disruptions, and reputational damage. Since the industry relies heavily on secure digital infrastructure for several core processes, a successful cyberattack could undermine financial stability and erode client trust.

(Re)insurers should continue to educate employees about deepfakes and social engineering, proactively monitor adversarial tactics, and protect data to prevent attacks. Companies should, among other things, adapt processes involving financial transactions or sensitive data transfer to include multiple verification steps for appropriate mitigation.

Paul Carroll

Can you explain the impact of geoeconomic confrontation on (re)insurers?

Sandra Said

The risk of increased tension among major global economies, including political polarization, may have a negative impact on global trade and growth. Trade tensions, economic sanctions, and shifting alliances can disrupt international business relationships and cause shifts in economic conditions. For (re)insurers operating globally, this can lead to currency fluctuations, market volatility, and challenges in complying with differing regulatory requirements across regions. Increasing uncertainty means decisions may need to be made before all desired information is known. These factors may increase operational costs and require nimble risk management strategies.

Paul Carroll

Why is an unsettled regulatory and legal environment considered a major risk?

Sandra Said

Regulatory changes across jurisdictions can affect financial institutions’ operating models, business operations, and capital requirements, to name a few. The evolving landscape, especially regarding data privacy and emerging technologies, demands constant vigilance and adaptation. Increased compliance requirements may create a drain on financial resources, while non-compliance can damage reputation and attract regulatory scrutiny.

While regulatory shifts can drive innovation, they may also introduce complexity and uncertainty, potentially impacting strategic decisions and financial performance.

(Re)insurers may need to prioritize strategies for proactive regulatory engagement, integrating government relations into business units and anticipating regulatory obligations to stay ahead of changes. 

Paul Carroll

How does technological acceleration affect (re)insurers?

Sandra Said

Rapid technological change brings both opportunities and threats. (Re)insurers need to develop and retain the right talent – those who have the technical skills along with the knowledge of the company, its technical capabilities, and its ways of working. Ensuring a talent pipeline exists within the organization is key, including robust succession planning to build the next generation of digital skills necessary following retirements and attrition.

The experience and skills needed to build emerging technology solutions are in high demand to support changing business needs. The challenge lies in integrating new technologies effectively; failure to do so can impact competitiveness and operational efficiency. Increased digitalization also heightens the importance of data privacy and regulatory compliance.

(Re)insurers may take a proactive approach to technology adoption, while balancing innovation exploration (through monitoring and proof-of-concepts) with security and compliance considerations.

Paul Carroll

What steps can (re)insurers take to address these emerging risks?

Sandra Said

Agile and responsive risk management is essential. Integrating emerging risk insights into strategic planning and developing targeted action plans is no longer a “nice-to-have”; it is a strategic imperative. Equally, fostering a culture of risk awareness and knowledge sharing helps ensure emerging risks are openly communicated throughout the organization. 

At RGA, we have designed and implemented an online platform for enterprise-wide collaboration and information sharing. Education sessions delivered by subject matter experts throughout RGA help cascade knowledge about these emerging risks across the enterprise. Such proactive engagement with cross-functional experts and ongoing education can help organizations anticipate challenges, seize opportunities, and maintain a competitive edge in a complex global environment.

In addition, scenario analysis enables (re)insurers to evaluate how emerging risks might impact their business under different future conditions. By conducting this analysis, organizations can develop proactive strategies and contingency plans to ensure readiness when these scenarios materialize. They also can – and should – establish early warning indicators that serve as alerts when potential risk scenarios begin to unfold.



Responsible Underwriting Becomes Priority for Insurers

Tight margins and customer demands for transparency drive insurers to embrace responsible underwriting as a competitive necessity.


Insurance is not an easy market. We operate under tight deadlines, low margins, and a fair bit of uncertainty that takes a toll on our focus and capacity. This must change. Customer expectations have risen, and there is a growing demand for transparency and empathy in the decisions we make. This is why 'responsible underwriting' is the need of the hour.

I was a panelist at a recent Send webinar along with Haley Robinson, NED and advisor to the commercial and specialty market, and Russell Brown, principal at Safe Harbor Insurance. Together, we unpacked how insurers can protect margins while delivering value to customers. Here are some insights from our discussion that can be translated into actionable strategies in 2026.

Balance profitability and customer-centricity.

This is non-negotiable. There can be a strong urge to overaccommodate to meet targets, especially if the underwriter is new to the industry. This is where underwriters must hold their ground and be transparent about what they can write and what they cannot. To earn customers' trust, underwriters must be consistent, fair, and clear. However, Haley Robinson reinforced a fundamental truth: ultimately, sustainable profitability is essential. With discipline, underwriters can gauge when to be flexible and when to say no, delivering long-term value to customers while retaining profitability.

The AI era: The best time for underwriters

There has never been a better time to be an underwriter. Two decades ago, all an underwriter had was pen and paper. Today, underwriting workbenches can ingest broker submissions, cleanse data, benchmark risks, and much more. AI is creating an ecosystem where underwriters can reduce administrative tasks and focus on writing business, which helps them show a transparent picture to the client and deliver a tailored customer experience. But I still caution the industry that underwriters must remain curious and skeptical about using AI. Just because a pricing engine gives you a number doesn't mean it's right. Tools must guide decisions, not dictate them.

Empower the next generation of underwriters with soft skills.

Russell Brown pointed out how underwriters today have major training gaps that prevent them from being critical thinkers. I agree with him. I recall how I was part of programs earlier on in my career that focused on communication, negotiation, and relationship building. I do feel that today's underwriters are over-reliant on digital communication. In my opinion, effective underwriters are those who combine strong technical skills with confidence to engage directly with clients, especially with large commercial lines. So AI is indeed a blessing as it reduces administrative work and allows junior underwriters to spend more time observing, engaging, and developing situational judgment.

Responsible underwriting is the only way forward.

As we move ahead, new industries and businesses are emerging. Insurers must support innovation by taking on risks and moving capital to those who need it. I agree with Haley Robinson when she says, "If something is covered and it's clearly covered, we must pay quickly and in full." Insurers with a good reputation for paying valid claims are the ones that will win in the future.

At its core, insurance is a promise, and responsible underwriting means keeping that promise. As AI and technology reshape workflows, the true test of success will be whether insurers can combine data-driven insights with empathy, integrity, and accountability. Done that way, responsible underwriting will be both a strategic advantage and a promise delivered.

Cyber Insurance Holds Ground Despite Rising Threats

Despite escalating cyber incidents, insurance policies remain stable as industry emphasis moves from availability to understanding coverage.


While cyber threats are coming at business from every angle, cyber insurance policies have been able to hold their ground, and they look like they will continue to do so into 2026.

As a whole, cyber policies are generally available and still relatively affordable for most businesses. Most companies can obtain cyber insurance unless they have a severe open claim or extremely poor controls, said Katie Pope, Esq., senior vice president, executive lines, for The Liberty Company Insurance Brokers.

"Even insureds with significant loss history are often able to find competitive options because capacity is plentiful and many new entrants are competing for market share," Pope said.

And that is good news for businesses, because according to a 2025 study, there were over 3,000 reported cyber incidents involving small businesses in the last year, and 75% involved ransomware.

In that environment, even businesses on the smaller side are recognizing they need cyber coverage.

"On the claims side, the cyber insurance landscape continues to evolve quickly," said Mitch Miles, CISSP, CISA, of Shoreline Public Adjusters.

Ransomware and tracking claims continue to be the primary loss drivers. Many businesses are grappling with the shift of cyber risk from hypothetical concern to reality.

Typical risks include:

  • Ransomware
  • Data breach
  • Network attacks
  • Phishing
  • Social engineering
  • Cyber intrusion

The healthcare space seems to have a uniquely high risk profile, with more exposure than nearly any other sector. That is because the stakes are not only high, but a healthcare breach also carries the risk of fines and class-action suits from affected patients.

Regardless of sector, one of the biggest questions following a cyber incident seems to be clarity on who is on the hook for which type of claim. Depending on policy language, this could result in coverage disputes that could be prolonged and may even find their way into mediation or court.

"Once a claim is filed, coverage interpretation is often where the most significant disputes arise," Miles said. "From a claims perspective, the core challenge is rarely whether an incident occurred. Instead, the dispute usually centers on whether the loss fits within narrowly drafted coverage triggers."

Beyond sorting out who is at fault, many insurers are leaning in and working directly with policyholders before an incident to prevent the losses in the first place. Many are demanding best practices and security protocols, and others are providing their policyholders with tools they can use to fight back against the cyber criminals.

"Underwriting remains highly selective," Miles said. "Insurers are placing far greater emphasis on the actual enforcement of security controls, such as multifactor authentication, backup integrity, and incident response readiness, rather than relying solely on completed questionnaires."

As cyber risks evolve, policies need to adapt in response; take cloud outages, AI agents, and deepfakes.

With cloud outages, businesses and insurers are now scrambling to clarify the details of their coverage to see who, if anyone, might be on the hook to reimburse for any losses.

In the high-profile Hong Kong deepfake incident, a finance worker was duped into making a multimillion-dollar transfer based on a video call with two of his colleagues who turned out to be deepfakes. Insurers the world over wondered how their policies would hold up under a similar attack.

And with AI agents operating autonomously, businesses now face a new layer of risk. Previously the question was what happens if a human clicks the wrong link, but now with autonomous agents, entire suites of tools might now be at risk where they might not have been in the past.

That said, some of these risks are still largely theoretical.

"Everyone is fearful of AI-related cyber claims, but we haven't actually seen many cyber losses directly tied to AI yet," Pope said.

Moving forward, many in the industry are expecting more aggressive carrier subrogation efforts to try to spread the risks throughout the value chain.

And as cyber claims mature, the industry conversation has shifted from simply asking, "Do you have cyber insurance?" to the far more critical question: "Do you understand how your policy will actually respond when it's under stress?" Miles said.

How the Past Predicts the Future in P&C

The recovery from the hard market will be hugely uneven because so many carriers have dug such an enormous hole for themselves.

The hard market, which we're hopefully exiting, had its origins in 2018. This is likely a surprise to many readers because rates didn't begin to accelerate until approximately three to four years later.

All hard markets originate with a lack of surplus. Profits, or the lack thereof, typically have nothing to do with hard markets. No matter how long, forceful, and even believable the testimony from insurance companies that their lack of profits causes a hard market, don't believe them. Most people preaching this have drunk the Kool-Aid and don't know any better. They're not purposely misrepresenting the situation. They are just ignorant.

If profits were the issue, carriers would not have been profitable 31 of the last 32 years. If profits were the issue, carriers would not have made record profits the previous two years. By "record," I mean ginormous profits the last two years, and preliminary numbers suggest 2025 profits will be huge, as well. Carriers as a whole have not experienced profit problems for decades.

Hard markets are a result of inadequate surplus. The most obvious aspect of a hard market is how expensive insurance becomes. Why is this? Because there is less competition. Why is there less competition? Because some companies were mismanaged and lost their surplus. Without strong operational surplus, carriers cannot afford to grow, so they quit competing for business.

Ever wonder why carrier reps tell you about all the business they want to write, yet everything you submit gets rejected? Sometimes it's because the submissions lack quality, but other times, especially lately, it is because the carrier does not possess adequate surplus.

The reason carriers as a whole lacked surplus in 2018 is that they made what was likely the largest cut to reinsurance purchasing in history. They severely reduced the amount of reinsurance they bought, and some carriers almost quit buying reinsurance altogether. The effect was so pronounced that carriers overall doubled their net written premium growth rate. Their direct growth rate that year was approximately 5%, but their net growth rate was almost 11%. The only way this happens is when carriers quit buying as much reinsurance. My research into individual carriers supports this paradigm change. The industry's ratio of surplus to net written premiums decreased by approximately 10% as a result.
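The direct-versus-net gap is simple arithmetic: net written premium is direct premium minus what is ceded to reinsurers, so cutting cessions inflates net growth even when direct growth is modest. A minimal sketch with hypothetical numbers (the dollar figures and cession rates are illustrative, not industry data):

```python
# Illustrative sketch: how cutting reinsurance cessions can roughly
# double net written premium growth while direct premium grows ~5%.
# All figures below are hypothetical.

def net_written(direct: float, ceded_pct: float) -> float:
    """Net written premium = direct premium minus ceded reinsurance."""
    return direct * (1 - ceded_pct)

# Year 1: $100B direct premium, 25% ceded to reinsurers.
net_y1 = net_written(100.0, 0.25)   # 75.0

# Year 2: direct grows 5%, but cessions are cut to 21%.
net_y2 = net_written(105.0, 0.21)   # 82.95

direct_growth = 105.0 / 100.0 - 1   # 5.0%
net_growth = net_y2 / net_y1 - 1    # ~10.6%

print(f"direct growth: {direct_growth:.1%}")
print(f"net growth:    {net_growth:.1%}")
```

A modest four-point cut in the cession rate is enough to push net growth to roughly double the direct rate, consistent with the 5% direct versus nearly 11% net figures cited above.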

The carriers may have explained their actions by stating that the reinsurance was too expensive. It was only too costly for them. It's like someone who has squandered their paycheck on non-essentials saying eating is too expensive. It is too expensive for them, but not for financially responsible people.

Another fascinating correlation to me was how the reinsurers thought, "If primary carriers do not want to buy reinsurance, what should we do with all our capital? Let's invest in MGAs!" In 2018, surplus lines growth was double-digit and has remained double-digit ever since; moreover, it has accounted for nearly 100% of all commercial premium growth after adjusting for inflation. This means primary carriers are, as a whole, going backward, becoming less and less important.

This explains why the MGA market has exploded. Reinsurers provided start-up capital and often the rated paper behind their initial products.

The hard market did not start immediately in 2018 for many reasons, including that a hard market takes time to get started. Then COVID hit, resulting in large premium rebates, which caused significant distortions in premium growth. Carriers had time to rebuild the surplus they lost from buying so much less reinsurance, but in 2022, they made terrible investment decisions. For unknown reasons, they decided interest rates would not increase and might even decrease. When interest rates rose instead, the value of their bond holdings dived, resulting in one of the most significant losses of surplus in insurance history. The losses had nothing to do with nuclear verdicts or catastrophic storms. They were caused entirely by bad investment decisions.

Surplus to premiums has decreased each year since. While carriers have made record profits the last two years, they have not left the money in surplus. They paid shareholders or executives, or maybe wasted it, but they did not leave enough in surplus to rebuild it. The surplus ratio at the end of 2024 was the lowest it has been likely since 2001.

This is also due to the rapid increase in rates. If a carrier has $1 of surplus supporting $2 of premium and loses 25% of its surplus (and some companies lost more than this in 2022), it now has $0.75 of surplus against $2 of premium. If rates then increase by 10%, the ratio goes to $0.75 to $2.20. Surplus to premium falls from 0.50 to roughly 0.34, or in reciprocal terms, premium-to-surplus leverage rises from 2.0 to nearly 2.9, an increase of roughly 47%. That is a considerable increase and enough to cause a hard market or keep a market hard.
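This leverage arithmetic can be checked in a few lines. A minimal sketch using the $1-surplus/$2-premium example, a 25% surplus loss, and a 10% rate increase; the compounding of the two effects pushes premium-to-surplus leverage up by roughly 47%:

```python
# Sketch of the hard-market leverage arithmetic: a surplus loss and a
# rate increase compound to sharply raise premium-to-surplus leverage.

surplus = 1.00   # $1 of surplus...
premium = 2.00   # ...supporting $2 of net written premium

leverage_before = premium / surplus   # 2.0x premium-to-surplus

surplus *= 1 - 0.25   # lose 25% of surplus (2022 investment losses)
premium *= 1 + 0.10   # rates rise 10%

leverage_after = premium / surplus    # 2.20 / 0.75 ≈ 2.93x
increase = leverage_after / leverage_before - 1   # ≈ 47%

print(f"leverage: {leverage_before:.1f}x -> {leverage_after:.2f}x "
      f"({increase:.0%} increase)")
```

Note that the two shocks multiply rather than add: a 25% surplus loss alone raises leverage by a third (1/0.75), and the 10% rate increase compounds on top of it.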

Another aspect of this market is that a few carriers are better managed than most. Berkshire finished 2024 with nearly 28% of all surplus in the entire industry. There were around 1,100 carriers in 2024. If Berkshire's surplus is removed, the remaining carriers' leverage increases significantly. Berkshire's surplus position skews the entire industry's results.

Progressive also skews the industry's results. If Progressive had started from scratch 24 months ago, it would already be the 10th-largest carrier (out of roughly 1,100) by net premiums written. It is also, among the larger carriers, the most profitable on an underwriting basis. Its growth and profitability skew the industry's overall underwriting results.

Going forward, the market should soften if carriers leave profits in surplus. But the recovery will be hugely uneven because the hole many carriers have dug for themselves has only two exits: the carrier ceases to exist, or it sells all or part of itself. We've seen a lot of activity in the last 24 months where carriers have sold off assets, and simultaneously, many mutuals have effectively demutualized so they can sell equity. Selling equity always entails selling part of the company. Many of those companies will be sold entirely because they don't have the wherewithal to compete with the well-run carriers.

The barriers to entry are so low that in the immediate future, many tiny carriers will be created. These may be standard insurance companies, RRGs, or captives, but they will be small. Small can be beneficial, because small carriers are nimble, or scary, simply because they lack scale. The best-run larger carriers, though, will become dominant. The top 10 carriers already write more than 50% of all net premiums, and the top 90 out of 1,100 write almost 90% of all premiums. Approximately 500 carriers are unimportant and have no material future. With the barriers to entry so low on every level (financial, regulatory, and IT), I don't believe the number of carriers will decrease, but the number of important carriers will be limited. This is especially true for superfluous, small, admitted carriers because of the growth of surplus lines.

The rest of the industry will become more dependent on these larger, well-managed companies. This does not mean all the large carriers will become more important. Several are horribly run and working hard to become irrelevant. But some of the carriers ranked between 11 and 50 by net premiums are run by high-quality people and have surplus; they will take the place of the poorly run carriers, resulting in a stronger cadre of large carriers than we have today.

Whether you are an agent, a carrier, or an insurance industry vendor, it pays to know who the winners and losers will be.