Our 10 Most-Read Articles From 2025

It was an AI kind of year. No surprise there. But there was also great interest in social inflation, drones, the Predict & Prevent model and even lessons from the NFL playoffs.


Of the scores of articles we published this year on AI, five, in particular, struck a chord with you, our esteemed readers: on how AI is reshaping workers' comp, compliance and fraud, on how to unlock ROI (a tricky task) and on how AI's progress is accelerating, with no end in sight. 

Social inflation remained a hot topic, with two pieces in the top 10 on how verdict sizes in insurance cases have tripled since COVID and on how insurers are losing billions of dollars before cases even get to trial. 

Rounding out the top 10 are articles on how drones are profoundly changing how property claims are handled, how misconceptions about electrical fires lead to disasters that could be prevented and (appropriately for this time of year) what the NFL playoffs can teach us about innovation.

Herewith the highlights from 2025, as determined by your interest in them:

Artificial Intelligence

The most-read article, AI and Automation Reshape Workers' Comp, says "67% of organizations expect over 80% of claims to be automatically triaged and assigned in the future — without any manual intervention." The piece explores other efficiencies that AI offers, describes tools that are detecting fraud and offers advice on how to encourage adoption of AI.

At #2 is Why AI Is Game-Changer for Insurance Compliance. It says: "90% of small business owners are unsure about the adequacy of their coverage. AI serves as an intelligent assistant, quickly surfacing important information and providing context when needed.... The impact includes faster verification, fewer coverage and requirement gaps left unaddressed, and faster time to compliance. As Gartner predicts a doubling in risk and compliance technology spending by 2027, companies recognize that AI solutions that enhance collaboration deliver the greatest returns."

At #3 is The Key to Unlocking ROI From AI. It states its thesis starkly: "Your AI and automation initiatives will fail. Not because of bad code. Not because your data scientists aren't smart enough. But because you'll lack the one thing that determines whether any AI initiative succeeds: observability." It then explains at length how "you can't see what your automation is doing — how it's affecting business processes, where it's breaking down, and what value it's delivering."

How AI Can Detect Fraud and Speed Claims was the fourth-most read. It warns that "today's fraudsters have access to AI-generated medical records, synthetic identities, and eerily convincing deepfake videos, allowing them to construct entirely fabricated incidents with alarming precision." But, on a hopeful note, the article then explains how, "with the ability to process billions of data points in real time, AI-powered fraud detection systems can do what human analysts cannot: instantly cross-reference claims against vast datasets, identify inconsistencies, and flag suspicious activity before payouts occur. This technology enables insurers to detect deepfake-generated documents and videos, analyze behavioral patterns that suggest fraudulent intent, and shut down scams before they drain company resources."

At #6 was my summary of an exhaustive research paper published mid-year on the state of AI and where it would go from there: Mary Meeker Weighs in on AI. Among many (many) other things, the prominent analyst laid out startling detail (the cost of using AI has declined 99% just in the past two years), offered useful examples (more than 10,000 doctors at Kaiser Permanente use an AI assistant to automatically document patient visits, freeing three hours a week for 25,000 clinicians) and made some bold projections (by 2030, AI will run autonomous customer service and sales operations, and by 2035 will operate autonomous companies).

Social Inflation

At #5 was We’re Losing Billions—Before We Ever Get to Court, and at #7 was The Tripling of Verdict Size Post-COVID. Both were written by Taylor Smith and John Burge, who also wrote two of the three most-read articles of 2024, on what I broadly think of as social inflation (including third-party litigation funding and other aggressive tactics by plaintiff lawyers). 

In "We're Losing Billions," they write that property/casualty carriers have a blind spot in how they negotiate: "In an era where 99% of litigated claims settle, the cultural instinct on the defense side to 'hold back' our strongest arguments has become a billion-dollar blind spot. We ration key negotiating points, fearing we’ll run out of ammo. We save key arguments to “surprise them at trial.” We frame less, anchor less, and persuade less. Meanwhile, the plaintiff bar is doing the opposite—and it’s working."

In "The Tripling of Verdict Size," Taylor and John describe data they've collected on 11,000 P&C verdicts, across the industry, to address the fact that carriers typically just see their own slice of the verdicts. They argue that only by amassing better data can insurance lawyers keep up with the plaintiff bar in understanding how a case is likely to play out in a certain venue, in front of a certain judge, against a certain lawyer -- and fashion settlement offers accordingly.

(If those articles appeal to you, I'd encourage you to watch the webinar I recently conducted with Taylor and with Rose Hall: "Modernizing Claims: Competing Against AI-Powered Plaintiff Attorneys.")

Predict & Prevent

Hazardous Misconceptions on Electrical Fires was #8 on the top 10 list, highlighting how the insurance industry can help prevent many of the "approximately 51,000 fires annually in the U.S. [that result] in over $1.3 billion in property damage." The piece describes how we can educate policyholders about the fact that circuit breakers don't catch all electrical problems, that even new homes can have electrical issues and that there very often aren't warning signs of electrical problems before they start a fire. (The piece was written by Bob Marshall, CEO of Whisker Labs, which makes a device, the Ting, that detects electrical problems and that I think of as the poster child for the Predict & Prevent movement. I recently interviewed him here.) 

Drones 

Drones Revolutionize Property Insurance Claims, at #9, shows how drones have "emerged as a powerful tool for addressing some of the industry's most persistent challenges, including the need for increased accuracy, faster speed, and more cost-effectiveness" in property inspections during the claims process. 

Lessons From the NFL

It amused me to reread the final article to make the list: What NFL Playoffs Say About Innovation in Insurance. I wrote it following the conference championship games last January and opened by saying: "My main takeaway from the NFL conference championship games over the weekend was that I'm soooo ready to move on from the Kansas City Chiefs — anyone with me?" I've heard in the past 11 months from plenty of folks who are tired of looking up at the Chiefs in the standings — and, lo and behold, we don't have to worry about the Chiefs in the playoffs for the first time in 11 seasons.

After venting my spleen (I'm a frustrated Steelers fan), I got into how coaches were finally following the data and going for it on so many more fourth downs than they used to, on why it took them so long and on how insurers can learn from NFL coaches and throw off even deeply entrenched bad habits.

Wishing you all a healthy, happy and prosperous New Year!

(While desperately hoping that my Steelers beat the Ravens on Sunday.)

Cheers,

Paul

What If Manufacturers Provide Insurance for Free?

As embedded insurance takes hold, what if manufacturers heavily discount coverage or give it away so they can sell more product? How do insurers compete? 


During the internet boom of the late 1990s, I heard a term that stuck with me: "the Las Vegas business model." The term was used by a Harvard Business School professor on a panel I moderated -- and he said the results aren't pretty for any competitor caught in the cross-hairs.

The Las Vegas business model involves someone giving away a product -- YOUR product, if you're unlucky -- to sell more of something else. The professor called this the Las Vegas model because he said it's tough to sell run-of-the-mill hotel rooms or meals in Las Vegas when casinos will give away rooms and food to people deemed likely to leave enough money behind at the gambling tables.

The same problem could hit at least some parts of insurance, especially as embedded insurance gains steam. Apple doesn't need to make money off warranties, for instance; it just needs to keep your devices running so you can keep buying things through the Apple Store -- and Apple can keep collecting its tens of billions of dollars of commission each year. Many car makers have started offering insurance, but they're mainly in the business of selling cars. What if they start bundling insurance at a steep discount to help dealers persuade prospective customers to buy their car and not a competitor's? 

This could get ugly. 

The Las Vegas business model springs to mind because of a smart piece Tom Bobrowski published with us last week: "Tech Giants Aim to Eliminate Insurance Costs." The summary warns: "Technology companies view insurance as a cost to eliminate, not a business opportunity to pursue." 

He walks through some examples, including how Tesla is trying to minimize insurance costs as a way of bringing down the total cost of ownership so it can sell more vehicles. He also looks at cybersecurity, where huge software vendors such as Microsoft are doing their utmost to reduce vulnerability and reduce the need for insurance. 

I'd add the liability insurance Amazon offers. Amazon has every incentive to make it as cheap and convenient as possible for sellers to operate on its site -- and keep paying those hefty commissions to Amazon. Amazon doesn't even have to earn a profit on that insurance, so good luck to any insurer trying to compete. 

Tom says tech giants have four major advantages over insurance companies: 

  • Superior data, which comes from the ability to continuously monitor behavior, as Tesla can do with its cars and their drivers
  • Direct customer relationships, which eliminate distribution costs that constitute 15-25% of premiums
  • Technology infrastructure that can automate claims, detect fraud and model risks
  • Brand trust: Customers already trust them with payments, personal data, and critical services

For me, the first two are much more formidable than the last two. Tech giants can have a major advantage on data. So can other manufacturers, such as the big car companies, given all the sensors now being built into products. And any company that can embed insurance into the process of selling something else takes a huge chunk out of customer acquisition costs. 

As for the last two, I'd say insurers have extensive technology, too -- even if there are always complaints that it's dated. Insurers also have the sort of experience with processing claims, detecting fraud and modeling risks that requires all sorts of nuance and that tech companies would have to develop from scratch. Tech giants do seem to have brands deemed more trustworthy than those of insurers, in general, but that gap would surely narrow if the tech companies get into insurance in a big way, because that would put them into the business of denying lots of claims.

Despite the fears at the start of the insurtech wave a decade ago that insurance would be "Amazoned," as retail commerce had been, tech giants have mostly stayed away. Google tried car insurance but found it could sell leads for more than it would earn by selling insurance. Amazon is experimenting with telehealth and pharmacy services but has shied away from any major moves in healthcare.

In general, tech companies didn't want to commit the capital or have to deal with the extensive, state-by-state regulation that insurers face. Those reservations will continue, I believe. Besides, many giants from outside the industry are talking, at least for now, about insurance as a business opportunity, not as a cost to be eliminated. General Motors has said it hopes to generate $6 billion of insurance revenue by 2030, and Elon Musk has said insurance could account for 30-40% of Tesla's business. 

But I think Tom is right when he says the Las Vegas business model represents a major trend, even if different parts of the insurance industry will be affected at different rates and even if it will take, in his estimation, five to 20-plus years to play out.

An ugly trend for insurers, but one we should all keep in mind.

Cheers,

Paul

What Would You Do With $1 Trillion?

Record $14.6 billion fraud highlights an urgent need for entity resolution technology in P&C operations.


For the first time ever, direct premiums in P&C exceeded $1 trillion in 2025. Also a first in 2025: a $14.6 billion alleged fraud ring was exposed. (The prior record was $6 billion.)

The watchword for industry executives should be: "entity."

Fraud risk, customer experience, and effective AI? They're all keyed to entity. The money you make, the money you keep, and how fast you grow? Entity, again.

That total of direct premiums means there are now more than one trillion reasons to understand who is paying you and who you are paying. That "who" is an "entity" -- people, businesses, and organizations.

Entities have identity – names, addresses, phone numbers, etc. In logical fashion, there are only three kinds of entities – trusted, unknown, and untrusted. If you can't distinguish among these three kinds, then you are reading the right article.

With interaction, entities also have history, behavior, and outcomes. Entities may be related to each other. Sometimes those relations are very transparent, like parent-and-child or employer-employee. Sometimes they are hidden, like in an organized crime ring or in a conspiracy and collusion affiliation. Entities may be multifaceted – driver, renter, business owner, group leader, member of an organization, neighbor, volunteer, relative, known associate. These relationships all change over time, yet there is still the same entity.

Pause here and reflect on this. Consider yourself, for example, as EntityONE. Now quickly list all the roles and relationships you have in the physical world at your home, office and neighborhood, and then online as an emailer, shopper, commentator, reader. Your identity in all those real and digital places may take different forms, but it is always you, EntityONE.

The everyday entity

In the day-to-day of insurance and business life, there is always a concern about fraud and abuse. From application through claims payment, your need to know your business extends from your new business funnel through third parties, vendors, customers, agents, and even staff.

A new person applies for car insurance, a business makes a claim involving a third party, an invoice arrives from a new address, an agent makes a submission, finance issues a payment – to trust or not to trust?

Names, addresses, phone numbers, and the like are the data traces that describe an entity. Whether physical or digital in origin, these data are typically scattered across various boxes on an organization chart and across different core, ancillary, and API-accessed third-party systems.

We store identifier elements like names and addresses with varying lengths, spellings, inaccuracies, and levels of incompleteness, and in unstructured and semi-structured data entry fields and free-form text like notes and templates.

Then we store them again and again over time, moving between systems, between carriers, between vendors, and of course, across multiple CRM applications, which are additionally stuffed with all manner of duplicate and partial records.

Think of yourself as EntityONE

If you tried to have your own self, hereafter called EntityONE, appear the same in every field in every system in every organization over time, you would fail. Even if you never moved and never changed your name, random data entry error alone would ruin your ambition.

One data exercise to try at home: If you have address data from northern California, find a system where "city" is collected as part of an address. Then see how many ways "San Francisco" appears. At one large carrier with tens of thousands of transactions across five years of data entry, there were 97 unique entries.

The correct answer was the dominant response, "San Francisco." Shorthand like "SF" and nicknames like "SanFran," "Frisco," and "San Fran" were next. A lower-case version of the correct answer was next, "san francisco." All sorts of typos and transpositions followed. An unthought-of case was a space key entry as a valid character – "S F" is now different than "SF." And those space key values could be leading, trailing, or in the middle. Another very frequent response, when permitted by system data field edit logic, was "blank," no entry at all, or in some cases any number of space key entries.

If you ran a literal matching algorithm on the "city" field, in theory EntityONE could have 97 different data "cities" yet still be only a single unique entity.
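
To make the exercise concrete, here is a minimal sketch in Python (the city values are hypothetical stand-ins for what a real system might contain) showing how a literal match fragments one entity while a light normalization pass collapses the variants:

# A minimal sketch (hypothetical values) of the exercise above: a literal match
# treats every variant as a different "city," while light normalization
# collapses them back to a single value for EntityONE.
raw_cities = ["San Francisco", "SF", "SanFran", "Frisco", "San Fran",
              "san francisco", "S F", " SF ", ""]
ALIASES = {"sf": "san francisco", "s f": "san francisco", "sanfran": "san francisco",
           "frisco": "san francisco", "san fran": "san francisco"}
def normalize_city(value):
    cleaned = " ".join(value.split()).lower()   # trim, collapse spaces, lowercase
    return ALIASES.get(cleaned, cleaned)
print(len(set(raw_cities)))                                        # literal match: 9 "cities"
print(len({normalize_city(c) for c in raw_cities if c.strip()}))   # normalized: 1 entity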

Some other factors might also contribute to your failure to have perfect EntityONE data.

One system has separate fields for first name and last name, with no field for middle name and no fields for title/prefix, or suffix. Another system has one long field where all of that is supposed to be entered. Is it Dr. or Mrs. or Ms or Miss with suffix MD, PhD, DO?

Generally, the simplest of contact information – name, address, phone number – can be entered and stored so inconsistently in so many places over time that EntityONE would not exist as a whole and unique name-address in the best of cases.

When it comes to a legal entity – the EntityONE Family Trust, or your business version of EntityONE – it's still you, but you may now also have shared rights and not be the only decision-maker. So enough of thinking of just yourself.

Think of how difficult it might be to search for your customer as their data is entered and maintained across different systems in different ways. Your decades-old processes still treat paper and data as if they were the entities, rather than treating entities as having related paper and data.

This work process of literal data computing is at the core of delivering customer experience but allows an opening for fraudsters and is the bane of AI.

Let this sink in: Data are not entities; entities have data.

Entities have data. You as EntityONE are unique. All the aliases, name changes, addresses, business titles, partnership and shareholder situations, and your honorifics aside, you are still you. Even after you pass away, the estate of EntityONE will persist.

Resolving the many ways to identify you is the problem you now need to turn inside out.

Every other person, business, group, and organization has the same issues. When you encounter any identity, you need to resolve it down to the core entity, or you will not know who you are dealing with.

Whether an entity is legal, not legal, illegal, foreign, or even sanctioned, when we look at the identity data we see every day, many entities present as if their data is thin, with seemingly little to none. Some appear squeaky clean. Some have long years of history. Some look like they popped out of thin air. Some, like a bad penny, keep popping up after we have decided not to interact with them. Synthetic, assumed, straw man, takeover, hacked, phished, fraudulent, and other forms of malfeasance also exist.

Keeping tabs on entities (e.g., people and organizations) and the hidden relationships among them in real time is now practical with advanced analytics powered by a technology known as entity resolution. Entity resolution brings all the snippets of various identifiers around an entity into focus.

Entity resolution may involve several efforts, all claiming to do the same thing across your data- and computer-laden landscape. In the earliest days of computing, crazy-sounding technical terms sprouted to try to address this existential data identity issue of keeping EntityONE clearly in focus. It started field by field in databases and has modernized to complex multi-attribute vector and graph analytics.

These geeky but incomplete early algorithms left a lot undone while still showing some value. They had names like Levenshtein (an edit-distance formula for flagging likely typos in text similarity), Hamming distance, and, more recently in AI terms, tokens with Jaccard and cosine TF-IDF similarity approaches. There are dozens, even hundreds, of challenger approaches. But an analytic or a technique is not a product or a solution.
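
To put names to those terms, here is a minimal, illustrative sketch (not any vendor's product) of two of the classic measures: Levenshtein edit distance for catching typos and Jaccard token overlap for catching reordered or partial names. The example strings are hypothetical.

def levenshtein(a, b):
    # Number of single-character edits (insert, delete, substitute) to turn a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]
def jaccard(a, b):
    # Overlap of word tokens, ignoring order and case.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta or tb else 0.0
print(levenshtein("San Francisco", "San Fransisco"))              # 1 edit: likely a typo
print(jaccard("entityone family trust", "the entityone trust"))   # 0.5 despite different wording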

An early inventor created a combination of steps and orchestrated a set of code he called "fuzzy matching." (In memory of Charles Patridge, here is a link to a seminal paper he wrote.) Many data analytic communities shared that code and subsequent innovations to make progress on name and address standardization and name and address matching. The postal service benefited greatly with more deliverable mail, and database marketing boomed, while customer analytics and lifetime value ascended, as did provider and agent and vendor scorecards with more ambitious service level monitoring.

As with many other business problems, necessity is the mother of invention. Almost every company now has inventions that come from do-it-yourself, homegrown efforts. It is the only way forward before a workable, scalable solution is created.

Also likely installed are several versions and half-finished attempts at making the problem better inside an application or between systems. First, companies used data quality checks, then field validation efforts, then more hardened data standards. For all that work, human data entry staff invented "99999" and other bypass hacks. You can still see that today.

This data is what you are training your AI models on.

The largest legacy problem today is this data pioneer spirit turned hubris. IT pros and data science teams do the best they can with what they have – full stop. That satisficing behavior limits their contribution. It also injects unneeded error into all the models they are building and operationalizing. Much of the AI risk is self-inflicted, poor entity resolution management. Actuarial staff feel largely immune at the aggregated triangle and spreadsheet point of view, but that is a false sense of security, since they cannot see into the granularity of transactions beneath a spreadsheet cell. This is changing fast with the emergence of a machine learning- and AI-wielding corps of actuarial data scientists: employed professionals, academicians, and consultants.

New techniques like large language models (LLMs) are making short work of text data in all forms to create new segmentation and features for existing models, while also enabling new modeling techniques to iterate faster. The next phase of workflow improvement is almost limitless. All these breakthrough efforts need to be applied at the entity level to deliver their highest value.

The rise of industrial-grade entity resolution

The financial stress indices are high. The sympathy toward companies is low. The opportunity to use AI and seemingly anonymous internet connections makes people think they can't get caught – a presumption with a lot of truth to it these days.

A shout-out to our industry's career-criminal counterparts, who enjoy the status of "transnational criminal organizations": Terms like straw owners, encrypted messaging, assumed and stolen credentials, synthetic identities, and fake documentation are now everyday occurrences.

And that's just what relates to money. For truly awful perpetrators – anarchists, drug dealers, arms dealers, human traffickers, hackers, terrorists, spies, traitors, nation-state actors, and worse – the problem space of entity resolution is mission-critical.

Keeping tabs on entities (e.g., people and organizations) and the hidden relationships among them in real time is possible today. It elevates internal "good enough" implementations to ones that are never finished, continuously adapting, real-time, and data-driven.

What you should do about entity

The most capable solutions sit alongside existing efforts, so there is no need to rip and replace anything. This makes entity resolution easier to prioritize, as it can be adopted alongside what you do now. It also extends to your analytic ambitions in cyber resilience and digital modernization, as it can interact seamlessly with additional identifiers in digital entity resolution – emails, domains, and IP addresses, which are the digital corollary to a street address in a neighborhood. (Here is an earlier article I wrote for ITL on "Your Invisible Neighbors and You.")

Do yourself, your board, your customers, and your future AI successes a favor and get serious about entity and entity resolution as the nearest thing to a single source of truth you can get.

Some Background

The author has built matching and fuzzy matching applications multiple times with multiple technologies over a four-decade career and advises that benchmarking is essential for understanding fit for use in entity resolution. A four out of five, or 80%, accuracy might be fine for some use cases and considered corporately negligent in others.  Getting to the high 90s takes much more data and resources than most internal teams can dedicate on a sustained basis. 

A practical example from the author’s experience is Verisk Analytics, where they have billions of records of names and addresses coming from hundreds of carrier systems, all needing attribution to an entity level for highest business value. They have instituted an industrial solution to supplement or replace methods the author’s team built originally for fraud analytics. 

The vendor they give testimonials for, Senzing, is now being adopted in insurance after widespread use globally in government and security, customer management, financial integrity, and supply chain use cases. Its methodology creates the capability to recognize relationships across the data attributes and features shared across disparate records and systems – names, addresses, phone numbers, and so on – in real time.
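
As a rough, vendor-agnostic illustration of what resolving records into an entity means (this is not Senzing's API, and the records are hypothetical), the sketch below links records that share a normalized phone number or email into one cluster:

from collections import defaultdict
records = [
    {"id": 1, "name": "EntityONE LLC",   "phone": "415-555-0101",   "email": "ops@entityone.example"},
    {"id": 2, "name": "Entity One",      "phone": "(415) 555-0101", "email": ""},
    {"id": 3, "name": "E1 Family Trust", "phone": "",               "email": "ops@entityone.example"},
    {"id": 4, "name": "Unrelated Co",    "phone": "212-555-0199",   "email": ""},
]
parent = {r["id"]: r["id"] for r in records}   # union-find: each record starts as its own entity
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x
def union(a, b):
    parent[find(a)] = find(b)
index = defaultdict(list)                      # group record ids by each normalized identifier
for r in records:
    digits = "".join(ch for ch in r["phone"] if ch.isdigit())
    if digits:
        index[("phone", digits)].append(r["id"])
    if r["email"]:
        index[("email", r["email"].lower())].append(r["id"])
for ids in index.values():                     # records sharing any identifier get merged
    for other in ids[1:]:
        union(ids[0], other)
clusters = defaultdict(list)
for r in records:
    clusters[find(r["id"])].append(r["id"])
print(sorted(clusters.values()))               # [[1, 2, 3], [4]]: one resolved entity plus a singleton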

Modern entity resolution systems can deploy inside your company as an SDK, so you never need to share any data to move forward. Multiple use cases around your enterprise can also derive benefit from improving entity resolution management so it is reliable on the first shot. 

Was the Fed Rate Cut a Mistake?

Michel Léonard, chief economist for the Triple-I, says the Fed's statement downplaying the possibility of future rate cuts will keep key interest rates high.


Paul Carroll

We've had a prolonged dance with the Federal Reserve over whether they would cut rates again this year, and they finally did, on Dec. 10, right as you and I began this conversation. They also signaled they’re probably done for a while. Where do we go from here?

Michel Léonard

First, I think the Fed made a policy mistake by cutting rates and changing the monetary outlook from easing to holding. Setting expectations is more impactful on growth than actual rate changes. By saying “don’t expect rate cuts,” they took the wind out of the current easing’s impact. We’re lucky the stock market didn’t drop by 4-5% in the days since. 

Instead, the proper policy would have been, in my and many economists’ opinion, to skip the cut but keep easing expectations alive. That would have had a strong multiplier impact on GDP. 

Had the Fed stuck to easing, we would have started to see decreases in mortgage and auto loan rates by Q3 2026. We needed those lower rates to fuel homeowners and personal auto insurance premium volume growth. Instead, we’re likely to face historically high mortgage and auto loan rates through Q1 2027.  Most likely, we’re stuck with weak housing starts, weak existing home sales, and lower auto sales, and without that homeowners and personal auto premium volume driver. 

Commercial property, especially, needed the Fed’s help. We have all these commercial Class A downtown conversions into housing sitting still. This is Q4 2023 all over again: The Fed said, don’t expect more rate cuts – and took the wind out of economic activity throughout 2024. That activity was just starting to recover. The Fed took the wind out of Class A conversions then, and it’s going to do it again. Conversions were starting to recover – now expect no significant changes until Q4 2026.

It’s likely the Fed just caused another soft year of overall U.S. GDP growth and P&C insurance underlying growth, especially when it comes to economic premium volume growth drivers. 

I was just looking at premium volume growth for homeowners, personal auto, and commercial property in 2025. Typically, actuaries build in a baseline for premium volume growth by adding net GDP growth and CPI.  For 2025, that would bring us to about 7%. But premium volume growth for those lines is below 5%. The argument can be made that, at that level, premium volume growth was flat to negative in 2025. 

Paul Carroll

You make a compelling case, as always. So why do you think the Fed cut rates again?

Michel Léonard

I was surprised that the Fed would cut once this year. I was surprised when they cut twice, and I was speechless when they cut a third time. 

The Fed's estimate is for real GDP growth to decrease to about 1.7% by 2027. That's starting to be at the lower end of their goal. They do not see inflation picking up significantly, which is probably why they felt comfortable with the statement about further cuts.

But they’re totally flying blind here.

There’s the diminishing growth multiplier impact of rate cuts by changing expectations from easing to holding. Perhaps even more so, the Fed decided to do this with no GDP numbers since June, and no CPI and employment numbers since September. For GDP, getting data for Q3 was critical because of inventory depletion in Q2. The same for getting CPI and unemployment numbers through November. You can’t make decisions about monetary policy without those three.  How about without even one?

Paul Carroll

With Trump expected to name his next nominee to run the Fed in January, does that introduce another layer of uncertainty into the equation?

Michel Léonard

There’s a lot of noise in the market asking why the Fed made the statement about the direction of monetary policy. It did not need to. One view is that it did so to preempt rate cuts galore next year with Trump’s new appointment(s). I don’t think that’s the case.

First, there are many governors other than the chairman who get to vote on rates. 

Second, the Fed has already altered its inflation target. A rate cut with CPI at 3.0% means the current board of governors already tolerates annual inflation up to 3.5% (significantly more than the former 2.0% goal). 

Third, I was surprised by how mainstream the president’s leading candidate for Fed governor, Stephen Miran, is. He’s a consensus candidate, even though he might put more emphasis on growth than price stability when it comes to the Fed’s dual mandate. Personally, I see that shift, within reason, as beneficial to the overall economy. That said, tolerating inflation up to 3.5% is not the same as up to 4.0%. That would ring alarm bells even from me. 

Now keep in mind that an increase of one percentage point in tolerable annual inflation is a significant number.  For context, 1% compounded over a 35-year career means U.S. households have to increase their annual savings by 21% just to keep up. 

Paul Carroll

What dates should we keep in mind for releases of economic data, so we know whether we’re getting a nice present or a lump of coal in our stocking?

Michel Léonard

The next key date is Dec. 16, for unemployment data. A couple of days later, we get CPI, then GDP on the 23rd. Let me walk through these in chronological order, starting with unemployment.

The recent ADP numbers were a bit worse than expected but certainly within an acceptable range. We're currently at 4.40% unemployment in the U.S., and the consensus is that the new number will be 4.45%. If we get anywhere above 4.45% or 4.5%, I think the market may start reacting. [Editor’s note: The unemployment rate came in at 4.6%.]

The market consensus for the CPI number right now is 3.05%. I think we can be fine up to 3.2% or 3.25%. If we get above that, if we get to 3.5%, that might not be catastrophic, but it would certainly be the last nail in the coffin of further rate cuts. [Editor's note: The CPI number came in at 2.7%. There were, however, anomalies in data collection because of the government shutdown, so the number is being treated with some caution.]

Now we get to GDP. The market consensus expectation for Q3, at 2.48% growth annualized, is much higher than I and the Fed think is feasible, which is around 1.9% to 2.0%. The market consensus is likely overly optimistic because Q2 GDP reached 3.8% on a quarterly basis. Again, we’re flying blind. [Editor's note: The number for Q3 growth turned out to be 4.3%.]

Paul Carroll

We’ll have another of these conversations in January, and there’s so much uncertainty now, even about the economic numbers, that I can imagine you’ll want to hold your thoughts about next year until then, but can I tempt you into making any projections about 2026?

Michel Léonard

Market reaction to the Q3 and November economic releases will be critical in determining the course of the economy in the next six months, which makes that Dec. 23 release unusually significant in terms of potential impact on the equity market, consumer spending, and private commercial capital investments. 

My concern with the equity markets is the Fed's statement about expectations. And you can write this down: I think that decision is the most ill-advised the Fed has made in three years.

Paul Carroll

Thanks, Michel. Great talking to you, as always. 


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Insurtech Boards Need More Operational Expertise

As insurtechs pivot toward profitability, boards lacking operational expertise from actual insurtech builders face predictable scaling failures.


Insurtech boards often look the same: VCs providing the funding, tech execs offering product wisdom, and a retired insurance CEO from a blue-chip org offering access to their network. What's missing? Insurtech operators who've actually built and scaled an insurtech.

This gap matters as insurtechs mature toward profitability. The sector's shift is evident: 57% of UK insurtechs now prioritize cost metrics compared with 29% in 2023. As insurtechs move from disrupting insurance to being insurance companies, the absence of operational expertise creates predictable failures.

The Traditional NED Profile Doesn't Always Fit an Insurtech

Insurance boards traditionally draw non-executive directors (NEDs) from predictable pools: retired CEOs, CFOs, accountants, lawyers, ex-regulators. These profiles bring essential corporate governance—financial oversight, regulatory compliance and crisis management experience—and make sense for mature carriers.

But for insurtechs, there's a critical gap: operational experience in how insurtechs actually work. Insurtechs move fast, iterate weekly, deploy code hourly, operate cloud-native infrastructure, and leverage real-time data for pricing. Most traditional NEDs haven't experienced this reality.

Board Oversight Requires Operational Fluency

You cannot challenge what you don't understand, and you cannot understand what you haven't experienced. Take underwriting. A traditional NED might ask: "Are you rating on the full postcode?" But the question to the executive should be: "What's the key telematics data point that identifies poor driving, then how quickly are you removing drivers?" This requires an understanding of telematics data, real-time risk scoring, and behavioral analytics—having built these systems, not just read about them.

Regulatory relationships: I wore trainers to the GFSC and still secured approvals for Zego Insurance. Not because regulators don't care about professionalism—they do—but they care more intensely about competence, transparency, consumer protections and how an insurance carrier has quantified and manages their risks. The fact that I wore trainers is irrelevant. What mattered most was demonstrating an intimate understanding of underwriting models, capital adequacy, conduct and risk. It's about substance and authenticity, not formality.

Traditional NEDs can sometimes impose processes mismatched to insurtech speed: 200-page quarterly board packs taking weeks to prepare, approval gates slowing decisions, risk frameworks designed for 100-year-old companies applied to three-year-old startups. Not because they're obstructive, but because their experience hasn't equipped them to judge what's proportionate.

This isn't a license to ignore governance. Effective insurtech operator NEDs understand that moving fast still requires robust governance controls, stress testing and customer protections. The best insurtech operator NEDs combine speed of execution with a governance mindset.

When Insurtech Operator NEDs Become Critical

Phase 1 (Seed-Series A, £0-10M): Early-stage insurtechs prioritize product-market fit. Board composition is appropriately made up of VCs and product/tech executives; operational insurtech expertise remains advisory, and governance-focused NEDs ensure financial control.

Phase 2 (Series B-C, £10-100M): This is the inflection point where most insurtechs stumble without operational guidance. The company faces scaling challenges: establishing reinsurance relationships, navigating regulatory relationships, building in-house claims operations at scale, managing underwriting profitability, and risk management.

This is precisely when operational insurtech NEDs become essential. They've navigated these transitions, built these capabilities, and made these mistakes. They recognize the warning signs: adverse selection in growth cohorts, claims cost inflation, disadvantageous reinsurance treaties, and regulatory relationships straining. The majority of insurtech value creation or destruction happens here.

The optimal board at this stage combines a blend of skills and experience:

  • Insurtech operator NEDs, who understand how an insurtech actually works and have the battle scars from past experience
  • Governance NEDs, who bring their legal, regulatory, financial and compliance background and insight
  • Investor NEDs, who provide direction on their investment and capital markets.

It's not an either-or; it's achieving the right balance. It's about achieving good corporate governance.

Phase 3 (Series D+, £100M+): Late-stage insurtechs require boards driving profitability discipline, optimizing capital efficiency, and preparing for exit. This demands senior executives who've scaled carriers to £500M+ revenues and delivered sustained profitability.

What Insurance Operator NEDs Bring

Underwriting Discipline: VC-backed insurtechs face pressure to grow revenues and hit milestones, creating an incentive to loosen underwriting standards. Operators who've lived through this provide an essential counterbalance.

Regulatory Navigation: Insurance regulation is relationship-intensive. Regulators want to know you understand the risks and that you will be transparent when things go wrong. Operator NEDs help management identify genuine regulatory concerns. The best foster proactive regulatory relationships, flagging emerging risks early, explaining new approaches transparently and treating supervision as a partnership. This builds trust that protects the business during challenges.

Reinsurance Expertise: Technology founders approach reinsurance as pure optimization: minimize cost, maximize coverage. The reality is more nuanced. Reinsurance partners provide capacity, capital relief, risk diversification, and revenue. Insurtech operators understand these tradeoffs from experience, preventing expensive mistakes.

Claims Reality Checks: Claims are where profitability lives or dies. The temptation is to treat claims as pure tech: automate everything, eliminate humans. The reality resists this. Claims require empathy, judgment, and negotiation. Insurtech operator NEDs help boards recognize when automation enhances versus degrades operations.

Governance Integration: The best insurtech operator NEDs bring execution expertise and governance understanding. They've worked with regulators, managed capital adequacy, balanced speed with policyholder protections and regulatory compliance. This combination of operational fluency plus governance mindset makes them most valuable.

Consumer Outcomes: NEDs understand that algorithmic pricing models must present fair customer outcomes. They ask: How do our pricing models treat vulnerable customers? How do we identify vulnerable customers during automated claims processes? This consumer-first mindset is critical.

The Insurtech Operator NED Profile

Not every insurance executive makes an effective insurtech operator NED. Requirements include:

  • Recent Experience (Within Five Years): Insurance and tech evolve rapidly. Experience from 1998 lacks 2025 context.
  • Hands-On Building: NEDs who've implemented underwriting systems, launched products, negotiated reinsurance, secured regulatory approvals—not just strategized.
  • Technology Fluency: An understanding of how modern technology enables insurance operations at a conceptual level sufficient for informed questions.
  • Startup Empathy: Distinguish necessary governance from bureaucratic waste. Insurtechs with 50 employees cannot operate like 5,000-employee global carriers.

Red Flags:

  • Retired executives disconnected from current dynamics
  • Pure strategy/M&A backgrounds without operational depth
  • "We did it this way at [Big Incumbent]" mentality
  • No understanding of data-driven underwriting
  • Too hands-on—they can't distinguish strategic oversight from operational execution
  • Dismissive of governance requirements or regulatory concerns

Data-Driven Underwriting Requires Data-Literate Boards

Board oversight of data-driven underwriting requires an understanding of how modern data infrastructure and machine learning function in production. How are you validating model performance? Detecting drift? Preventing algorithmic bias? Former compliance officers typically can't ask these questions.
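
For readers who want a concrete picture of what "detecting drift" means here, the following is a minimal, illustrative sketch (the feature, numbers, and threshold convention are assumptions, not any particular insurtech's monitoring stack) of a Population Stability Index check comparing a model's training data with live data:

import numpy as np
def psi(expected, actual, bins=10):
    # Population Stability Index between a training-time and a live distribution.
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    live = np.clip(actual, edges[0], edges[-1])           # keep live values inside the training range
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(live, edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
rng = np.random.default_rng(0)
train = rng.gamma(2.0, 1.0, 50_000)   # hypothetical telematics feature at training time
live = rng.gamma(2.6, 1.0, 5_000)     # this quarter's drivers skew higher
print(round(psi(train, live), 3))     # PSI above roughly 0.25 is a common "investigate" signal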

Making It Work

For insurtechs adding operator NEDs:

  • Give them a voice when it comes to product, underwriting, pricing and claims decisions
  • Pay competitively (+ equity for serious involvement)
  • Include them in strategic discussions early—their pattern recognition prevents expensive mistakes and supports faster execution.
  • Balance insurtech operator NEDs with governance NEDs—you need the best of both worlds

For operators becoming NEDs:

  • Move from doing to advising—you're not the CEO anymore
  • Bring your governance experience too, alongside operational expertise.

The Opportunity

Insurtech operators contemplating board careers face an unprecedented opportunity. The UK alone hosts 300+ insurtechs seeking insurtech operational expertise. As funding shifted toward profitability in 2024 (Series B/C funding up 30% while seed declined), boards are increasingly recognizing operational expertise gaps.

For insurtechs, the imperative is clear. As you scale beyond Series A, add operational insurtech expertise. Not just former CFOs or compliance officers, but operators who've built insurtech carriers, navigated regulators, negotiated reinsurance and achieved profitability, while understanding the governance requirements and their fiduciary duties.

The insurtech sector's next phase belongs to companies combining technological innovation with operational excellence. That combination requires the board's understanding of both. The missing ingredient on most insurtech boards, operational insurtech expertise, is also the most critical ingredient for sustainable success. The question is not whether your board needs insurtech operators, but how quickly you'll recognize the gap before expensive mistakes make the answer obvious.


Andy Wright

Andy Wright is the co-founder of Resnova, an insurance consulting firm specializing in insurtech strategy, product development, market expansion and regulatory relationships. 

He previously served as a senior manager within Tesla’s European entity with approval from the MFSA and FCA and most recently as managing director of Zego’s insurance carrier. 

Geopolitical Risks Threaten Cobalt Supplies

Cobalt mining's concentration in politically unstable regions threatens critical supply chains powering the global energy transition.


Cobalt can be considered the silent metal of the energy transition. Unlike lithium, which has become shorthand for the electric vehicle (EV) revolution, or nickel, which has long been a staple of industrial production, cobalt rarely makes headlines. Yet, it is cobalt that stabilizes the cathodes of high-performance Li-Ion batteries, enabling the size, safety, and durability that consumers demand. Without it, the green transition would likely be slower, more expensive and certainly more uncertain.

Cobalt mining is also concentrated in some of the world's most politically fragile regions, dominated by a handful of actors whose interests do not always align with those of consuming nations. In 2024, 84% of the world's production, or 220,000 metric tons, came from the Democratic Republic of Congo (DRC), mostly as a by-product of copper and nickel mining. The second-largest supplier is Indonesia, with 7% of world supply. China, which dominates the world's refining capacity (roughly 70%), supplies less than 1% of the world's cobalt output. This leaves both cobalt mining and refining significantly exposed to geopolitical events and tensions. Recently, the DRC set export quotas of 96,600 tons annually through 2027, less than half the volume the country exported in 2024. Trump's recent tariffs might also significantly affect global cobalt markets. These risks are not just hypothetical: They are already reshaping flows of investment, trade policies and corporate strategies.

Geographical dependence

Within the DRC, production is clustered in the copper belt of Katanga, where industrial-scale mines sit alongside thousands of miners working in precarious conditions. This concentration is not an accident of domestic policy but of geology. Cobalt is rarely mined on its own; it is a by-product of copper and nickel extraction (the exception being the Bou-Azzer mine in Morocco). The DRC's copper belt happens to be unusually rich in cobalt-bearing ores, giving it a natural monopoly. The cobalt belt lies in the southern part of the country, near the borders with Zambia and Angola, leaving it vulnerable not only to domestic disruptions such as political instability, labor unrest, or regulatory change, but also to geopolitical tensions in neighboring states. Sudden changes or events can quickly ripple through global supply chains (unlike in oil, where multiple producers can offset disruptions).

Decades of conflict, corruption, and weak governance have left the DRC with fragile institutions. Elections are often contested, and the state struggles to exert control over its vast territory. Political and ethnic violence is largely concentrated in the east of the country, near the Rwandan, Burundian and Ugandan borders. The recent insurgency by the rebel group M23 can be seen as a resurgence, possibly supported by Rwanda; the group emerged in 2012 and surrendered in 2013 as part of a peace deal after a year of intense fighting with DRC government forces. The Allied Democratic Forces (ADF), a Uganda-based Islamist group, has recently carried out attacks in North Kivu, killing more than 80 civilians. Geographically, these incidents are taking place more than 1,500 km north of the country's cobalt concentration. Careful monitoring is required, as the situation can evolve quickly, even over relatively large distances.

For mining companies, this creates a volatile operating environment. Besides the armed groups present in the country, local and national contracts can be renegotiated or revoked. Taxes and royalties can be raised unpredictably. Local communities, often excluded from the benefits of mining, can erupt in protest.

The hidden cobalt dependency: military and aerospace

While battery production and EVs dominate the cobalt conversation, the military and aerospace sectors have their own, less publicized but strong, dependence on the metal. Cobalt is essential for superalloys used in jet engines and gas turbines (for power production). These alloys must withstand extreme temperatures and stresses, making cobalt's heat and corrosion resistance hard to replace. For defense applications, cobalt is also used in magnetic alloys for guidance systems (e.g., precision-guided missiles) and smart bombs, and in armor plating.

The aerospace and defense (A&D) industry faces unique challenges in replacing suppliers. Qualifying a new source of aerospace-grade cobalt can take up to 10 years due to stringent performance and safety requirements. Even if alternative mines come online, they cannot be integrated into critical defense supply chains overnight.

Moreover, A&D grade cobalt is purchased in smaller quantities than grades for other industries, giving defense buyers less market leverage. In a tight market, EV manufacturers — with their larger, more predictable orders — may crowd out smaller but strategically vital defense procurement.

The Rise of Resource Nationalism

The DRC is not the only country reassessing its role in the global mineral supply chain. Indonesia, which has vast nickel reserves and growing cobalt by-products, has banned the export of raw nickel ores to force investment in domestic processing. The government's goal is to capture more value from its resources, moving up the supply chain into refining and battery production. This strategy might have paid off, as Chinese investments have increased significantly since 2020 and Indonesia's cobalt exports have grown from less than 2,000 tons a year in 2020 to close to 20,000 tons in 2024. The DRC's recent move possibly reflects the same goals. At the same time as it boosts local production, it also strengthens China's position and leverage globally. Increased Chinese investment would also add some level of security for the DRC government. What would China do if the threat from M23 or the ADF became real for the cobalt sector?

Other countries are following recent developments carefully. Governments are tightening control over critical minerals, raising royalties and demanding local beneficiation. This wave of resource nationalism reflects both economic ambition and political intention. Leaders see critical minerals as leverage in a world hungry for energy transition inputs.

For investors, this obviously means greater uncertainty. Projects that looked profitable under one national policy can become unviable under another. Long-term contracts are no guarantee against political shifts, and geopolitical strategies are now more than ever affecting the strategies of businesses. That is why geopolitical risk recently has climbed into the top 10 risks for corporations to deal with.

Strategic Intelligence in Decision-Making

In such a volatile business environment, strategic intelligence has become indispensable. For mining companies, investors, and governments alike, the ability to anticipate risks and act preemptively is as valuable as the metal itself.

Strategic intelligence is not simply about collecting data. Intelligence is the product of data and information analysis, requiring both analytical skill and geopolitical awareness to produce informed assessments. In essence, it's about synthesizing political, economic, and social signals into actionable foresight. This can be divided into smaller segments of intelligence, for example:

  • Political risk forecasting: Intelligence professionals monitor electoral cycles in the DRC, shifts in Chinese industrial policy, and the rise of resource nationalism in emerging markets.
  • Supply chain mapping: By tracing cobalt flows from artisanal mines to refineries, companies can identify chokepoints and potential vulnerabilities.
  • Scenario planning: Modeling of the impact of sanctions, export bans, or regional conflicts on cobalt availability, allowing firms to stress-test their strategies.
  • ESG monitoring: Open-source intelligence (OSINT) and satellite imagery are increasingly used to verify supply chain claims and detect abuses in artisanal mining.

The intelligence function is no longer confined to governments and private intelligence firms. Corporations now employ former diplomats, analysts, and data scientists to build in-house capabilities. Multinationals share intelligence with allies and industry consortia, recognizing that resilience is collective.

To manage the full cobalt supply chain, businesses need to stay on top of what is happening at all levels.

Supply Chain Alternatives

Building alternative supply chains is easier said than done. New mines take years to develop, and environmental opposition can delay projects in developed countries. Processing capacity is still overwhelmingly concentrated in China. Even so, consuming nations are not passive and are constantly looking for alternative supply chain solutions to reduce the overall risk of supply concentrated in single sources. The United States, through the Inflation Reduction Act (IRA), was offering subsidies for domestic battery production and incentives for sourcing from allies. With the new Trump administration in place, the IRA has plunged into uncertainty, with funding being paused in January 2025. The US also has the Defense Production Act in place to fund domestic mining and refining of cobalt, nickel and lithium.

The European Union launched its Critical Raw Materials Act, aiming to secure diversified supply. The recent dramatic US change in IRA policy, aimed at reducing funding of renewable projects, could affect EU policies as well, as European far-right political parties gain momentum. Trump made it very clear in his latest speech at the UN General Assembly in New York that he's not a big fan of the green transition and called it 'the greatest con job.'

Japan and South Korea have their own resilience strategies and are investing in overseas mining projects, sometimes in partnership with Western firms. Canada and Australia are both part of the Minerals Security Partnership (MSP) together with the US, EU and Japan.

Recycling could also eventually provide a significant share of supply, but not at scale in the near term. For now, dependence on the DRC—and by extension on China—remains a structural reality.

Managing Resilience

Risk transfer mechanisms help companies manage volatility and maintain operations. What risk transfer mechanisms, then, are available to investors and end users?

  • Political risk insurance (PRI) protects against expropriation, contract frustration, currency inconvertibility, and civil disturbance. For mining companies, this coverage can be critical in securing financing and maintaining access to capital markets.
  • Trade credit insurance ensures the payment of receivables throughout a chain of intermediaries, reducing the risk of liquidity crises resulting from counterparty defaults.
  • Supply chain interruption insurance mitigates risks arising from disruptions such as port closures, sanctions, or cyberattacks. For automotive manufacturers reliant on just-in-time cobalt deliveries, this type of coverage is essential to business continuity.
  • Hedging instruments—though currently limited in the cobalt market—offer buyers the ability to fix prices or share financial risks with suppliers, thereby providing greater price stability.
  • Parametric insurance: Trigger-based payouts linked to measurable events (e.g., port closures, conflict escalation, strikes) that disrupt supply. A minimal payout sketch follows this list.
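
To make the parametric mechanism concrete, here is a minimal Python payout sketch; the trigger, per-day amount, and cap are hypothetical terms, not an actual policy.

    # Illustrative parametric payout: pay per day of port closure beyond a trigger, up to a cap.
    # All terms are hypothetical.
    def parametric_payout(closure_days, trigger_days=10, payout_per_day=250_000.0, max_payout=5_000_000.0):
        excess_days = max(0, closure_days - trigger_days)
        return min(excess_days * payout_per_day, max_payout)

    # Example: a 24-day closure against a 10-day trigger pays 14 days x 250,000 = 3,500,000.
    print(parametric_payout(24))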

Traditional insurance, once seen as a peripheral cost of doing business, is increasingly central to the strategic calculus of companies exposed to geopolitical risk. Insurance alone, though, does not necessarily provide a complete solution. The war in Ukraine is a good example. It has had a significant direct effect on the global transformer supply, because insulation used in transformers is fabricated in Ukraine and Ukraine's domestic need for transformers increased sharply as Russia attacked critical infrastructure. Today, lead times can be up to 36 months for high-capacity transformers, and with a typical cap of 18-24 months for business interruption insurance coverage, power generation and transmission companies risk a significant loss of income that would be only partly covered by insurance.
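
To put rough numbers on that gap, the short Python sketch below uses the lead-time and indemnity-period figures above together with a hypothetical monthly income loss.

    # Back-of-the-envelope coverage gap; the monthly loss figure is hypothetical.
    lead_time_months = 36            # replacement lead time for a high-capacity transformer
    bi_indemnity_months = 24         # upper end of a typical business interruption indemnity period
    monthly_income_loss = 2_000_000  # hypothetical lost income per month

    uncovered_months = max(0, lead_time_months - bi_indemnity_months)
    uninsured_loss = uncovered_months * monthly_income_loss
    print(f"{uncovered_months} uncovered months, roughly ${uninsured_loss:,.0f} uninsured")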

In essence, what these tools can do is buy time — time for companies to diversify supply, invest in recycling, or adapt technologies to be less dependent on volatile materials. Risk transfer solutions focus more on managing resilience than eliminating exposure.

Tech Giants Aim to Eliminate Insurance Costs

Technology companies view insurance as a cost to eliminate, not a business opportunity to pursue.

Low angle view of a tall office building with glass windows against a blue sky

My recent article on continuous underwriting drew some pushback from peers regarding the Tesla Insurance case study. Their argument: Progressive, State Farm, and Allstate have offered usage-based insurance for more than a decade—and they write far more of it. So, what makes Tesla Insurance special?

The difference is that Tesla sells cars, not insurance. Auto insurance is a variable in the car's total cost of ownership (TCO)—and Tesla Insurance exists to drive that number toward zero, or as close to zero as possible. Insurance isn't their business; it's their anti-business.
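
As a rough illustration of that logic, the Python sketch below treats insurance as one line item in a multi-year TCO calculation; every figure is hypothetical, not Tesla's actual economics.

    # Total cost of ownership with insurance as one line item; all figures are hypothetical.
    def total_cost_of_ownership(purchase_price, annual_insurance, annual_energy,
                                annual_maintenance, years=5, resale_value=0):
        running_costs = years * (annual_insurance + annual_energy + annual_maintenance)
        return purchase_price + running_costs - resale_value

    # Same car, same resale value; only the insurance line changes.
    print(total_cost_of_ownership(45_000, 2_400, 800, 500, years=5, resale_value=22_000))
    print(total_cost_of_ownership(45_000, 1_200, 800, 500, years=5, resale_value=22_000))

Cutting the annual insurance cost in half lowers the five-year TCO by thousands of dollars, which is exactly the number the carmaker wants to drive down.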

Tesla brings world-class engineering, its own AI infrastructure, and relentless drive to overcome barriers—including regulatory ones. With a $1.5 trillion market cap (as of this writing)—the equivalent of 11 Progressives, 24 Travelers, 28 Allstates, or 40 Hartfords—it packs serious financial firepower.

Embedded Insurance

It's hard to discuss continuous underwriting without considering embedded insurance. When Tesla bundles coverage with vehicle purchases, when Apple offers device protection, or when DoorDash automatically includes delivery insurance—these feel like features, not insurance products. The insurance almost disappears into the broader service relationship.

Tesla Insurance captures customers at the point of sale, when they're most inclined to bundle. Its customer acquisition cost is nearly zero (versus the industry average of $200-$800), it writes policies 20–30% below standard rates for good drivers while maintaining acceptable loss ratios, and its Net Promoter Scores exceed the industry average by more than 40 points.

This kind of experience turns insurance from a grudge purchase into an invisible part of something people actually enjoy. That perceptual shift creates space for continuous underwriting innovations to take root gradually, without friction.

The Convergence Thesis

Here's where this gets strategically fascinating: Multiple trillion-dollar technology companies—Tesla, Amazon, Apple, Google, Microsoft—all have business models where minimizing insurance costs creates competitive advantage in their primary markets. They possess:

  • Superior data: continuous behavioral and environmental monitoring through devices and services
  • Direct customer relationships: eliminating distribution costs that constitute 15-25% of premiums
  • Technology infrastructure: claims automation, fraud detection, and risk modeling capabilities
  • Brand trust: customers already trust them with payments, personal data, and critical services

Traditional insurers face a daunting scenario: the companies best positioned to innovate in insurance have no interest in sustaining the industry as currently structured. They want insurance to become a nearly free utility that enables their actual businesses.

The Timeline Question

Will this disruption happen quickly or slowly? The answer varies by line:

  • Auto insurance: 10-15 years as autonomous vehicles scale
  • Cyber insurance: 5-10 years as security tools improve and become commoditized
  • Property insurance: 15-20 years as smart home technology reaches critical mass
  • Health/life insurance: 20+ years due to regulatory complexity and medical cost inflation

But the direction is clear. We are moving toward a world where insurance exists primarily as:

  • Embedded features bundled free or nearly free with other products
  • Regulatory compliance where coverage is legally mandated
  • Catastrophic protection for truly unpredictable tail risks

The companies building this future aren't insurers trying to sell more policies. They are automakers, technology platforms, security vendors, and device makers for whom insurance is an obstacle to be minimized or eliminated.

Traditional insurers are defending a $1.3 trillion U.S. market. But they're facing adversaries who would gladly destroy 70% of that market if it means selling more of their own products.

The Transition—The Cybersecurity Vendors

Cyber is a textbook case of the elimination incentive. Cybersecurity firms like CrowdStrike, Palo Alto Networks, SentinelOne, and Microsoft profit from prevention, not protection—making cyber insurance their natural competitor.

Some see opportunity in cooperation. CrowdStrike partners with insurers, sharing data to improve underwriting and prove its users suffer fewer breaches. That helps customers pay less for coverage while quietly shrinking the cyber insurance market itself.

Microsoft's position is even more intricate. As both a top security vendor and the source of many exploited vulnerabilities, it has every reason to make breaches rarer. Its visibility into corporate systems through Azure, Office 365, and Windows gives it the data to underwrite risk directly—or eliminate it by making insurance nearly irrelevant.

The endgame isn't selling policies; it's securing systems so completely that the need for insurance disappears.

The Transition—The Hybrids

Out on the edge where technological possibility meets regulatory reality, a new wave of tech‑enabled MGAs and MGUs is emerging. They blend niche specialization—restaurants, beauty, wellness—with continuous underwriting, real‑time risk visibility, behavior‑based credits, and agent‑first digital distribution and economics.

In simple terms, they aim to be the best "program carrier + software layer" for independent agents in specific verticals, not broad, direct‑to‑consumer threats to the long-standing agency model.

With digital sophistication rivaling top cyber vendors, these companies work comfortably within existing regulations and sell through independent agencies using a human‑first model—spending most of their day actually talking to people.

If Tesla Insurance is a glimpse of what auto coverage will look like in 10 years, startups like Rainbow, Next, Coterie, Relm, and Thimble show the incremental progress toward that future happening right now.

In the end, healthy tension between innovators pushing for better experiences and regulators safeguarding financial markets keeps our industry moving—slowly, but in the right direction.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.   

Why Zillow Chickened Out

Zillow pulled its climate risk ratings from its home listings even though its model is widely validated. That's a bad sign for the movement to improve resilience.

Image
Orange Sky and Powerlines

Based on the notion that sunlight is the best disinfectant, I've long advocated that homeowners insurance companies give clients as much information as possible about the risks they face. Don't just quote me a premium. Tell me that, perhaps, I'm at more risk of flood or wildfire than old government maps show--and help me understand what I can do to reduce those risks.

Zillow just took a step in the opposite direction. 

It had announced 15 months ago that it would feature detailed climate risk information for flood, wildfire, wind, heat and air quality, but the company quietly dropped that information last month. 

The reason is obvious: pressure from sellers who didn't want the risks to their properties spelled out.

The implications are disheartening. 

What Zillow was attempting was always going to be tough, because we humans aren't wired to think rationally about probabilities. If some political poll says a candidate has only a one-in-10 chance of winning, and they win, we leap to the conclusion that the poll was wrong and the pollster incompetent. Maybe. But maybe not. 

The only way to test is to look over a body of work and over time. Did those predicted to have a one-in-two chance win about half the time? Did the one-in-fours win a quarter of the time? Did those one-in-10s win a tenth of the time?
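
To make that test concrete, here is a minimal calibration check in Python; the forecast probabilities and outcomes are made-up examples, not real polling or First Street data.

    # Group forecasts by predicted probability and compare with observed frequency.
    # Forecasts and outcomes below are invented for illustration.
    from collections import defaultdict

    forecasts = [0.1, 0.1, 0.5, 0.5, 0.5, 0.25, 0.25, 0.1, 0.5, 0.25]
    outcomes  = [0,   0,   1,   0,   1,   0,    1,    0,   0,   0]   # 1 = the event happened

    buckets = defaultdict(lambda: [0, 0])   # predicted probability -> [events, trials]
    for p, happened in zip(forecasts, outcomes):
        buckets[p][0] += happened
        buckets[p][1] += 1

    for p in sorted(buckets):
        events, trials = buckets[p]
        print(f"forecast {p:.2f}: happened {events / trials:.2f} of the time over {trials} cases")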

But models like the one from First Street that Zillow used haven't been around long enough for us to have much evidence about whether they're right when they say there's a one-in-50 chance of a wildfire affecting a home each year. 

A spokesman for First Street said, "During the Los Angeles wildfires, our maps identified over 90 percent of the homes that ultimately burned as being at severe or extreme risk — our highest risk rating — and 100 percent as having some level of risk, significantly outperforming CalFire's official state hazard maps."

But we humans are still wired to think, "Zillow said I was at severe risk of flooding, and I didn't have a flood this year, so those bozos were wrong." In the context of the risk ratings provided by Zillow, someone with a house to sell would surely also think, "And their error is costing me money."

That sort of thinking created enough pressure on realtors, a key constituency of Zillow's, that Zillow pulled the ratings. Still, there's some hope for the long run. Zillow continues to provide a link to First Street, so those curious enough can find information about risks to properties they might buy. And good models like First Street's will not only get better but will gain acceptance over time, as they build up a track record.

It'll just take longer than I had hoped, perhaps much longer.

Sorry, I don't make the rules. I sure wish I did....

Cheers,

Paul

P.S. So I don't end on a total downer, I'll share two links that contain a healthy dose of encouragement. First is a webinar I did recently with Francis Bouchard, a managing director at Marsh McLennan who has focused on resilience for years, and Nancy Watkins, a principal at Milliman who has developed a Data Commons to help mitigate wildfire risks in the wildland-urban interface. Second is the ITL Focus from September on resilience and sustainability, featuring an interview with Francis and parts of an interview with Nancy. 

Both describe the sort of conversation that insurers need to have--and are starting to have--with architects, builders, city planners and others so that, as a group, we can build resilience into properties from the outset and can at least offer advice to homeowners and communities on how to reduce risks related to severe weather.

Insurance's Silver Tsunami Knowledge Crisis

P&C carriers face knowledge drain from retiring boomers. AI, used well, can provide systematic processes to capture expertise.

Aerial View of Ocean Waves

The P&C insurance industry is about to lose nearly half of its workforce to retirement in the next five years, driven by the baby boomer exodus. Much of what will be lost is deep underwriting expertise, decades-honed claims handling skill, and the undocumented tribal knowledge that carries the day for carriers.

This "Great Retirement," also called "The Silver Tsunami," is fast approaching. According to a recent survey by APQC (American Productivity and Quality Center), 93% of insurance CxOs are genuinely concerned ("mission-critical", "strong", or "moderate" concern) about this knowledge hemorrhage. Coincidentally and paradoxically, the same percentage of carriers are not capturing knowledge consistently from departing employees before they walk out the door.

The result of this concern-complacency disconnect? A perfect storm of knowledge drain, compliance exposure, operational disruption, and customer experience degradation—unless insurers leap out of the "boiling frog" syndrome.

Methods Create Barriers

According to the survey, 83% of respondents capture knowledge using manual methods such as people-to-people transfer and time-consuming documentation, a Sisyphean approach that is neither scalable nor sustainable. No wonder time (mentioned by 62% of respondents) and resources (mentioned by 41% of respondents) topped the list of barriers to knowledge capture and management in the survey.

While interest in AI remains high, a stunning 87% of carriers surveyed have yet to operationalize it to automate knowledge capture and management. AI adoption has been slowed down by concerns about compliance (cited by 59%) and correctness of answers (cited by 38%). AI initiatives have been stymied by "garbage in, garbage out" where some carriers tried to slap AI onto enterprise knowledge silos of dubious consistency, accuracy, and compliance. No wonder a recent MIT survey found that only 5% of AI deployments have created any business value!

Trusted Knowledge Foundational to AI Success

The precious few AI-savvy carriers succeed in AI by building a trusted knowledge infrastructure, which addresses adoption barriers such as answer correctness and compliance head-on. At the same time, these organizations use AI to automate the knowledge capture, management, and optimization process (a simplified prioritization sketch follows the list below):

  • Capture questions that are the highest in volume, value, and complexity, and mine gold-standard answers for them from customer interactions with high-performance agents and intra-enterprise conversation stores among SMEs
  • Capture procedures from flowcharts into in-band guidance for customer conversations
  • Create drafts of knowledge articles that are aligned with the brand voice for human experts in the loop to review and approve
  • Curate content to make it findable and AI-ready
  • Analyze and optimize to identify gaps and improve knowledge performance
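
As a simplified illustration of the first step, the Python sketch below ranks questions for capture by volume, value, and complexity; the weights and example data are hypothetical, not any vendor's actual scoring method.

    # Rank candidate questions for knowledge capture; weights and data are hypothetical.
    questions = [
        {"text": "How do I file a hail damage claim?", "volume": 950, "value": 0.7, "complexity": 0.4},
        {"text": "What does my BOP policy exclude?", "volume": 420, "value": 0.9, "complexity": 0.8},
        {"text": "How do I update my mailing address?", "volume": 1300, "value": 0.2, "complexity": 0.1},
    ]

    def priority(q, w_volume=0.4, w_value=0.35, w_complexity=0.25, max_volume=1500):
        return (w_volume * q["volume"] / max_volume
                + w_value * q["value"]
                + w_complexity * q["complexity"])

    for q in sorted(questions, key=priority, reverse=True):
        print(f"{priority(q):.2f}  {q['text']}")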

Winning Best Practices

Leading carriers treat knowledge as a strategic asset rather than a collection of documents and unstructured content. Their best practices include:

  • Using AI to continuously capture expertise from daily work, not just end-of-career interviews
  • Embedding trusted knowledge in claims, underwriting, and customer service systems
  • Including compliance checks in knowledge workflows to ensure that answers are correct and aligned with regulatory requirements
  • Training employees to use AI tools as assistants and not adversaries

A 10X acceleration in the creation and curation of knowledge and a 3X acceleration in time-to-value are possible when companies use AI well.

The Bottom Line

With AI-powered knowledge capture and management, forward-thinking carriers capture, preserve, and activate institutional knowledge at scale, so every employee, from new adjusters to seasoned underwriters, can access trusted answers and the best thinking the organization has to offer exactly when they need it. Others are well advised to follow suit lest the Silver Tsunami sweep them away!


Anand Subramaniam

Anand Subramaniam is SVP global marketing for eGain. Prior to eGain, Subramaniam served in executive and marketing management roles in a range of organizations from SaaS startups to companies such as Oracle, Autodesk, and Intel. He holds an MBA from the University of California at Berkeley and an MSME from the University of Rhode Island.

Insurers Must Rebuild Trust in 2026

As premiums surge and AI transforms operations, insurers must prioritize transparent communication to rebuild eroding customer trust.

Close Up Shot of Handshake from Above

As we approach 2026, it's clear that the insurance industry needs to focus on something more foundational than product innovation or operational efficiency: rebuilding and strengthening trust. Whether in property & casualty (P&C) or life & annuity (L&A), trust is emerging as an important differentiator, yet it is the hardest to maintain in an environment defined by climate volatility, rising costs, demographic shifts, and accelerating automation.

Right now, insurers face a dual challenge. They must navigate economic and regulatory pressures while also meeting customer expectations shaped by real-time digital experiences in every other part of consumers' lives. That means the industry can no longer view communication as a compliance requirement or an operational task. Communication is the customer experience, and ultimately the foundation of trust.

The 2026 Challenge for P&C

For P&C insurers especially, 2026 is shaping up to be another turbulent year. Rising premiums – driven by higher reinsurance costs, climate-related catastrophe losses, supply-chain volatility, and inflation – are straining affordability. And when coverage feels more expensive while service feels inconsistent or complex, customer trust erodes.

So the question becomes: How do insurers maintain trust at a moment when customers are being asked to pay more?

Across industry studies and customer research, three actions matter most:

1. Explain the "why."

Transparency around rate drivers is strongly correlated with retention. Weather trends, rebuilding costs, inflation, and fraud patterns are largely out of insurers' control, and premium increases will never make a customer happy. But these factors make sense to policyholders when they are communicated proactively and clearly.

This is an area where insurers can create real differentiation. A renewal communication that simply states "your premium is increasing" feels one-sided. A renewal that explains why the price changed, how the customer's risk profile evolved, and what the insurer is doing to help reduce costs over time feels partnership-oriented.

2. Increase transparency throughout the journey.

Claims remain the emotional center of insurance. Forrester's 2026 outlook underscores that even when claim outcomes are favorable, perceptions of fairness and trustworthiness are shaped most by transparency through real-time status updates, clear next steps, consistent messaging across channels, and expectations set early and often.

This is also where operational gaps tend to show. If one message comes from the portal, another from the adjuster, and a third from email or SMS – each with a different tone – customers sense misalignment. Consistency and transparency across every channel signal competence, care, and honesty.

3. Make every interaction worth the premium.

People trust organizations that demonstrate reliability, humanity, and capability in real time – not just in marketing.

In other words: customers judge the value of insurance not just at the point of loss, but through every touchpoint, whether for billing, servicing, onboarding, updates, coverage changes, or support. Transparent, consistent, multi-channel communications will be essential to demonstrating that value in a world where premiums continue to rise.

The L&A Perspective: The Trust Gap with Younger Buyers

L&A insurers face a different challenge, one shaped by demographic shifts and a stark contrast in purchasing motivation.

On one hand, the aging population is creating unprecedented momentum: 4.2 million Americans will turn 65 this year, and the wave of retirees has contributed to a dramatic rise in annuity sales, reaching $432.4 billion in 2024, up 12% year over year. For older buyers, the value of retirement income solutions is clear and immediate.

But beyond annuities, L&A products are struggling to capture interest, especially among younger or less financially secure consumers. According to a LIMRA study, younger buyers look, compare, and browse – but do not purchase. Why?

  • They are unsure whether the products are worth the cost.
  • They often feel the industry does not communicate in ways that are relevant to their financial realities.
  • They question whether insurers have their best interests at heart.

Building relevance for younger generations means meeting them where they are: with clarity around value, real-world scenarios that make protection feel tangible, and messaging that explains benefits in everyday language. It also means demonstrating affordability and flexibility – not just in price, but in how insurers communicate, educate, and guide.

The organizations gaining traction with younger customers are leaning heavily into personalized digital journeys, plain-language education, and transparent explanations of how products fit into broader financial wellness, not just risk mitigation.

The Trust Imperative for AI in Insurance

Overlaying all of this is the accelerating adoption of AI across underwriting, claims, service, and customer engagement. According to a McKinsey report on the future of AI, insurers see AI as a path to efficiency and accuracy, while customers see risk – particularly around data privacy, bias, and fairness. Public awareness has grown significantly, and regulators are moving quickly.

So the question shifts again: How can insurers build and maintain trust while deploying more AI across the value chain?

A trustworthy AI strategy looks like this:

Be transparent.

Publish clear, accessible notices that describe where and how AI is used, whether for underwriting insights, claims triage, personalization, or fraud detection.

Be accountable.

Explain the safeguards in place to prevent bias or incorrect outcomes and commit to human oversight for material decisions. Make it clear that customers can opt out where appropriate.

Give customers a voice.

Provide simple methods for policyholders to query, appeal, or request review of any AI-driven decision. That two-way transparency is foundational to maintaining trust. AI doesn't erode trust – opaque AI does. When insurers use AI to improve clarity, reduce friction, and increase fairness, customer trust grows rather than diminishes.

The Bottom Line: Trust Is Built Through Communication

As 2026 approaches, one theme cuts across all insurance segments: trust is earned in moments, and communication shapes those moments more than any other factor.

Whether explaining a premium change, guiding a customer through a claim, educating a first-time life insurance shopper, or deploying AI responsibly, insurers must treat communication as a strategic capability, not a back-office task.

The organizations that will lead the market are those that communicate clearly, consistently, and proactively; meet customers across channels with a single, unified voice; make interactions personalized and empathetic; explain the "why" behind every major decision; and demonstrate responsibility and transparency in their use of emerging technologies.

Insurance is, at its core, a promise. And a promise is only as strong as the trust behind it. The industry has a rare opportunity to strengthen the partnership between insurers and policyholders – through transparency, communication, and a commitment to treating every interaction as a moment that matters.


Eileen Potter

Eileen Potter is vice president of marketing for insurance at Smart Communications.

She has more than 25 years of insurance experience with both P&C and life. She has worked in independent agencies and MGA operations in various roles, including commercial marketing and underwriting. Her software background includes work with organizations such as ABBYY, Appian, One and Duck Creek Technologies.