
Agent, Heal Thyself (on Cyber Security)

Independent agents help clients understand cyber liability, but many run their own operations on shared passwords and informal access controls.


There's an uncomfortable conversation happening across the industry. Independent agents are spending real time helping clients understand cyber liability exposure – walking them through what underwriters want to see, what gaps create problems at renewal, and what a breach actually costs. And many of those same agents are running their own operations on shared passwords, informal access arrangements, and a working assumption that nothing bad will happen to them.

That assumption is getting harder to justify. Cyber underwriters are applying the same scrutiny to agencies that agencies apply to their clients. The questions at renewal are getting more specific. And agents who can't demonstrate basic credential discipline may find themselves in an awkward position, struggling to answer questions they've been asking their clients for years.

The access problem nobody sits down to create

Credential sprawl doesn't happen because anyone made a bad decision. It happens because agencies grow.

A new carrier portal gets added. A staff member needs access to a client management system, so someone shares their login to get things moving. Another person leaves, but their credentials aren't fully revoked. They just stop being used, as far as anyone knows. Over time, no single person has a complete picture of who can reach what.

This is the normal pattern in small agencies, and the problem isn't negligence; it's the absence of governance. When the priority is always the client in front of you, internal operations fill in around the edges however they can. Spreadsheets become the credential store. Memory becomes the access policy.

That works until it doesn't.

The bar has moved – and MFA alone won't clear it

A few years ago, having multi-factor authentication (MFA) in place was enough to satisfy most cyber underwriters. That's no longer true.

MFA is now a baseline requirement. Beyond that, underwriters are looking for privileged access controls, documented audit trails, zero-trust principles, and evidence that offboarding is immediate and verifiable when someone leaves. The reason for the tighter scrutiny is that social engineering and credential compromise now account for the majority of breach incidents. Underwriters have adjusted their models accordingly.

The harder issue is proof. Saying the right things on an application isn't the same as being able to demonstrate that controls are in place and actively used. Cyber underwriters increasingly want to see evidence of continuing compliance. A clean snapshot taken at the moment of the audit won't meet the bar.

The risk isn't only higher premiums. An agency that suffers a breach and can't demonstrate it was operating as it claimed may find its coverage denied. That's a different kind of problem entirely.

What a practical audit actually looks like

Agencies don't need a dedicated IT team to close the most important gaps. They need a clear-eyed look at what they actually have.

Start by mapping access: every carrier portal, every client management system, every shared tool, and who currently holds credentials for each. Most agencies find this exercise surfaces access that should have been revoked months ago.

From there, apply a simple principle: access should match role and need. Not everyone requires access to everything, and treating it as though they do creates exposure for no good reason. This is what's meant by least-privilege access, and it's one of the controls underwriters are now specifically looking for.

Build an offboarding checklist and use it without exception. When someone leaves, credential revocation should be immediate and documented. The audit trail matters.

Finally, move credential storage out of spreadsheets and shared documents and into a structured system that logs activity. Who accessed what, when, and what changed. That record is what turns good intentions into demonstrable practice.
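To make that concrete, here is a minimal sketch of what a first-pass credential audit can look like. It assumes a hand-built inventory file (the file name inventory.csv and the columns user, system, last_used, and employed are illustrative, not a prescribed format) and flags anything orphaned or unused. An agency could run something like this monthly and keep the output as part of its audit trail.

```python
# A minimal credential-audit sketch. Assumptions: a CSV inventory with
# hypothetical columns user, system, last_used (ISO date), employed (yes/no).
import csv
from datetime import date, datetime

STALE_DAYS = 90  # threshold is an assumption; set per agency policy

def audit(path="inventory.csv"):
    today = date.today()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_used = datetime.fromisoformat(row["last_used"]).date()
            if row["employed"].strip().lower() == "no":
                print(f"REVOKE  {row['user']} on {row['system']} (no longer employed)")
            elif (today - last_used).days > STALE_DAYS:
                print(f"REVIEW  {row['user']} on {row['system']} (unused {STALE_DAYS}+ days)")

if __name__ == "__main__":
    audit()
```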

The credibility case

The agencies that get this right aren't just better protected. They're better positioned.

When a client asks hard questions about cyber risk, the advisor who manages their own exposure rigorously is speaking from experience, not theory. That's a different kind of credibility, and clients can tell the difference.

The underwriting environment will keep tightening. The agencies that build these habits now – before the renewal conversation forces the issue – will find they've solved two problems at once: their own security posture, and their standing as a trusted voice on everyone else's.

The requirement to demonstrate what you preach isn't new to this industry. It's just arrived in cyber.

A Hopeful Conversation on Climate Risk

Last week's ClimateTech Connect assembled an impressive variety of voices and laid out paths to important — if gradual — progress on climate risk.


My favorite anecdote from last week's ClimateTech Connect was a little gem of high tech meeting low tech: a sophisticated network of sensors and a woman with a rake that, together, are protecting hundreds of homes from flooding.

A panelist at the conference on mitigating the risks from climate change said flash flooding had washed away some 300 homes in a small town in the U.K. As the insurance industry helped it rebuild, the town took advantage of improvements in technology and installed sensors that monitor upstream water levels. When they reach potentially dangerous levels, an alarm sounds in the mayor's office. A clerk then grabs her rake and walks down the street, where she clears the debris that collects in a culvert, ensuring that any flood waters will quickly run off.

Few problems have such simple, happy solutions, of course, but the conference still offered some hopeful signs in a world seemingly buried under warnings of impending doom. The mere fact that hundreds of senior people from a whole variety of vantage points — big banks, home builders, municipalities, etc., as well as insurance companies — spent two days in Washington, D.C., strikes me as a good sign.

I'll share a few highlights, in the hope they provide food for thought.

A former fire chief said my second-favorite thing at the conference. I almost hesitate to share, because, in retrospect, what he said is obvious. But it had never occurred to me, and, in my defense, I hadn't heard anyone else say it despite having spent years wrestling with how to get people to understand that everyone in a community is in the fight together when it comes to wildfire risk.

I knew that reducing the risk to my house reduced the risk to yours, and vice versa, but I didn't think strategically enough — and the former fire chief helped me out. He said it doesn't help a community much to have a scattershot approach to hardening homes against fire. He said communities have to be systematic. That means focusing on the homes at the edge of the community closest to the wildlands that might catch fire, while worrying far less about the homes that are well inside the boundary.

That sort of approach not only makes sense but seems more manageable. It reduces the amount of money that is needed to protect a community and takes some of the onus off individual homeowners to alter their landscaping, put mesh over vents to keep embers from getting into a home, etc. A homeowners association could undertake the hardening work on the key homes on behalf of the whole community. 

Or a community could follow the lead of Amy Berry, CEO of the nonprofit Tahoe Fund. She has raised $30 million of private capital to leverage $200 million of public funds for more than 220 projects, including five in the Tahoe area that take the sort of approach advocated by the ex-fire chief. The fund uses public resources to identify homes that could be "superspreaders," then knocks on their doors and offers to help harden their homes. (These projects are near and dear to my heart, given that I used to live just down the road from three of them. When I mentioned the name of the town that had our favorite pizza place, she said its name instantly.)

More broadly, the conference embodied the sort of broad conversation, reaching well beyond the insurance industry, that needs to happen. Francis Bouchard, a managing director at Marsh, has hit that theme hard at ITL, including in an interview I did with him last fall and in a webinar I conducted with him and Nancy Watkins, a principal at Milliman, in December. At ClimateTech Connect, Francis continued the theme with a fireside chat with Illya Azaroff, president of the American Institute of Architects. He represents 110,000 architects and described all he's doing to try to get them to design for resilience from the get-go. JP Morgan, which has announced a massive financing initiative related to climate change, was represented by Sarah Kapnick, its global head of climate advisory. The climate chief for Massachusetts was there, too. 

So, there was a broad array of important, interested parties even before you got to the insurance ecosystem, which was well represented by Nationwide, Travelers, Munich Re, etc., along with a host of intriguing technology startups. There were lots of foreign accents, too, which suggests that we're getting the sort of cross-fertilization of ideas that really hard problems require.

The only real disappointment was that the federal government didn't show, other than to describe what data sets might be available and useful, but that lack of presence was hardly a surprise, given the current administration's stance on climate change and promotion of fossil fuels.

Denise Garth, chief strategy officer at Majesco, told a story that epitomized for me just how hard we're going to have to keep pushing. A storm with huge hail hit her home in Omaha, doing $140,000 of damage, including requiring a new roof. Her insurer, a top-five carrier, promptly cut her a check, but its agent missed an opportunity of the sort we just can't miss if we're going to make the world more resilient. 

It was only when Denise started dealing with a roofing contractor that she learned that rubber roofs were available that looked like tile, shake or whatever she wanted. In the future, hail would just bounce off. She had a rubber roof installed and actually got a 20% discount from her carrier as a result. But somebody — actually, lots of somebodies — needs to do a much better job of educating agents and encouraging them to counsel customers. 

After attending last week's conference, I'm encouraged about the progress we're making on resilience, but we have a long way to go.

Cheers,

Paul

Why Insurance Is Lagging on AI

Data fragmentation prevents most insurers from turning AI strategy into operational reality despite industry-wide ambition.


The insurance sector has a well-documented mismatch between its AI ambition and operational readiness. While 82% of insurance companies believe AI will define the industry's future, only 14% have fully integrated it into their financial operations, and 52% describe their data governance frameworks as early-stage or still developing. The distance between those numbers reflects how most firms are approaching AI as a strategy to announce rather than an operational capability to build.

All data cited in this article is from AutoRek's 2026 Insurance Operations and Financial Transformation Report, based on 250 interviews with insurance and health insurance managers across the U.S. and U.K. The three most commonly cited barriers were legacy system integration challenges (42%), a shortage of in-house AI expertise (40%) and fragmented data environments (39%). None of these are new problems, but the cost of carrying them forward has grown significantly.

Data fragmentation is the core problem

The average insurer managed 17 data sources feeding premium processes alone. Each source represents a different format, a separate update frequency and another potential point of failure in the reconciliation chain. AI deployed across such an environment does not streamline operations; instead, it amplifies the inconsistencies already embedded within those systems.

This is why firms that have made measurable progress on AI integration share a starting point. They first standardized their data architecture before layering on automation capabilities. They also built workflow and governance frameworks that are auditable and measurable rather than theoretical. Reconciliation was typically automated first, creating a reliable and consistent data environment that makes AI-driven workflows viable later in the process.
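As a rough illustration of that sequencing, the sketch below shows the standardize-then-reconcile pattern on two hypothetical feeds. The field names, formats, and matching logic are assumptions for demonstration; they are not drawn from the report or any specific platform.

```python
# Standardize-then-reconcile, sketched on two invented feeds: a policy
# admin extract and a bank statement feed. Field names are hypothetical.
from decimal import Decimal

def normalize_admin(rec):
    # e.g. {"PolicyRef": "p-001 ", "GrossPremium": "1,250.00"}
    return rec["PolicyRef"].strip().upper(), Decimal(rec["GrossPremium"].replace(",", ""))

def normalize_bank(rec):
    # e.g. {"ref": "P-001", "amount_cents": 125000}
    return rec["ref"].strip().upper(), Decimal(rec["amount_cents"]) / 100

def reconcile(admin_rows, bank_rows):
    booked = dict(normalize_admin(r) for r in admin_rows)
    paid = dict(normalize_bank(r) for r in bank_rows)
    breaks = []
    for ref, amount in booked.items():
        if ref not in paid:
            breaks.append((ref, "missing payment"))
        elif paid[ref] != amount:
            breaks.append((ref, f"amount mismatch: booked {amount}, paid {paid[ref]}"))
    return breaks

print(reconcile([{"PolicyRef": "p-001 ", "GrossPremium": "1,250.00"}],
                [{"ref": "P-001", "amount_cents": 125000}]))  # -> []
```

Once a break report like this runs automatically, the clean, consistent records it enforces are exactly the data environment that later AI-driven workflows depend on.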

M&As create back-office operational complexities

Industry consolidation is accelerating, and the operational burden is falling on already strained infrastructure. 54% of insurers said incompatible systems and data architectures were their biggest post-merger integration challenge. For firms managing over a dozen data sources before a deal closes, an acquisition means introducing additional complexity before the existing complexity is resolved.

The carriers that realize sustainable value from a merger treat data harmonization as pre-merger work. Integration planning begins at the architecture level rather than after the deal closes, ensuring that new systems are absorbed into a standardized environment instead of being added to an already fragmented one.

Settlement cycles measure operational health

44% of insurers faced settlement periods exceeding 60 days. Transaction volumes are projected to grow 28.7% over the next two years.

Settlement cycle length is the clearest indicator of how well data moves between systems and how much manual intervention is required to close transactions. Firms with shorter settlement cycles have typically completed foundational infrastructure work, including implementing automated reconciliation, reducing the number of data sources and establishing governance frameworks. The correlation between operational discipline and AI readiness was consistent across the research.

The data show a clear path forward

Despite the persistent barriers, the research shows clear intent to act: 50% of firms are prioritizing AI and machine learning, 42% are focusing on automation of back- and middle-office functions and 51% cite regulatory requirements as the primary driver of modernization decisions.

Insurance firms seeing results from those investments have sequenced them deliberately. They have taken a structured approach, starting with governance frameworks, followed by data standardization, then building automation on top before introducing AI. That sequencing matters because AI running on fragmented, manually managed data will produce similarly fragmented and manually intensive results, only at greater speed and cost.

The operational reality from inside the carrier

I spent 12 years inside carriers including MetLife, HSBC Life, Aviva, AIG and Generali before moving into insurtech. The constraints highlighted in this research were recognizable from the inside. The organizations that made the most progress treated back-office infrastructure as a strategic investment rather than an operational cost and made data quality an asset and a prerequisite for adopting new technology.

With 6% of insurers reporting no AI usage in financial operations at all, the performance gap between firms that have modernized and those that have not is widening. As transaction volumes grow and consolidation continues, that gap will complicate the path forward for firms that have deferred the infrastructure work. The decisions insurers make about data infrastructure in 2026 will determine how much value they ultimately capture from their AI investments.


Tony Shek

Tony Shek is the insurance lead at AutoRek.

He has over 12 years’ experience in technology and consulting. He has worked at global insurers including Aviva, HSBC Life, Generali, AIG, and MetLife.

He has an engineering degree and an MBA from Imperial College London.

Telematics Drives Shift in Commercial Insurance

Commercial insurance is evolving from reactive risk transfer to continuous prevention through real-time telematics and behavioral data.


For decades, commercial insurance has operated on a largely reactive model. Insurers assess risk using historical data, price policies at the start of the cycle, and respond financially after losses occur. While this approach has ensured stability, it is increasingly misaligned with today's dynamic risk environment.

Industries such as logistics, transportation, and construction now operate under continuously evolving conditions, where risk exposure changes in real time. In this context, static underwriting and retrospective claims management create critical blind spots, limiting both visibility and control. The widening gap between how risk is priced and how it behaves is placing growing pressure on traditional insurance models.

At the same time, advances in telematics and connected technologies are redefining what insurers can observe and influence. Real-time behavioral and operational data is enabling a shift toward continuous, intervention-driven risk management.

Understanding the Emergence of Continuous Insurance

Continuous insurance represents a structural shift in how risk is assessed and managed. Instead of periodic evaluations, insurers can now maintain a real-time view of exposure through continuous data streams.

Telematics plays a central role in this transformation. By capturing detailed data on asset usage, environmental conditions, and human behavior, telematics systems provide a level of insight that was previously unattainable. This allows insurers to move beyond static assumptions toward dynamic, evidence-based risk assessment.

As a result, insurance is evolving from a transactional model into a continuing process—where risk is continuously monitored, interpreted, and influenced. Intervention is no longer reactive; it is increasingly preventive.

Telematics as the Backbone of Real-Time Risk Visibility

The growing adoption of telematics insurance is not simply enhancing existing models but redefining their foundation. What makes telematics transformative is its ability to convert operational activity into measurable and actionable risk signals.

In commercial auto insurance, for instance, telematics systems capture driving patterns such as acceleration, braking behavior, route selection, and exposure to high-risk environments. This creates a continuous feedback loop where risk is not inferred from past incidents but observed directly as it unfolds.

More importantly, this data does not remain static. Through advanced analytics, it is translated into risk intelligence that can inform immediate decision-making. Insurers can identify emerging patterns, anticipate potential incidents, and enable timely interventions that reduce the likelihood of loss.
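As a simple illustration of that translation step, the sketch below turns a stream of telematics events into a per-driver score normalized by mileage. The event types and weights are invented for the example; a production model would be calibrated against actual loss experience.

```python
# Illustrative only: converting raw telematics events into a per-driver
# risk signal. Event names and weights are assumptions, not a standard model.
from collections import defaultdict

EVENT_WEIGHTS = {"harsh_brake": 3.0, "rapid_accel": 2.0, "speeding": 4.0}

def risk_scores(events, miles_by_driver):
    """events: iterable of (driver_id, event_type); score is per 100 miles."""
    weighted = defaultdict(float)
    for driver, event in events:
        weighted[driver] += EVENT_WEIGHTS.get(event, 0.0)
    return {d: round(100 * weighted[d] / miles, 2)
            for d, miles in miles_by_driver.items() if miles > 0}

print(risk_scores([("d1", "harsh_brake"), ("d1", "speeding"), ("d2", "rapid_accel")],
                  {"d1": 820, "d2": 1140}))
# -> {'d1': 0.85, 'd2': 0.18}: d1 could be flagged for coaching before a loss occurs.
```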

This shift from data collection to real-time intelligence marks a critical step in the evolution toward continuous insurance.

The Transition from Periodic Underwriting to Continuing Risk Evaluation

Traditional underwriting operates within defined timeframes, often relying on annual policy cycles. While effective in stable environments, this approach struggles to capture the variability of modern risk landscapes.

Continuous insurance introduces a more adaptive model where underwriting becomes a continuing process. Real-time inputs from telematics systems allow insurers to reassess exposure continuously rather than at fixed intervals.

This has several implications. Risk pricing becomes more closely aligned with actual behavior and conditions, reducing the gap between expected and realized outcomes. Emerging risks can be identified earlier, enabling corrective actions before they escalate into claims. Over time, this leads to more accurate underwriting and improved portfolio performance.
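A toy example of the pricing mechanics appears below: the renewal premium scales, within bounds, with observed risk relative to expectation. The formula, bounds, and numbers are assumptions for illustration; real rating plans are actuarially derived and regulator-filed.

```python
# Behavior-linked renewal pricing, sketched. All parameters are invented.
def next_period_premium(base_premium, observed_score, expected_score,
                        min_factor=0.85, max_factor=1.25):
    factor = observed_score / expected_score if expected_score else 1.0
    factor = max(min_factor, min(max_factor, factor))  # clamp the adjustment
    return round(base_premium * factor, 2)

print(next_period_premium(1200.00, observed_score=3.1, expected_score=4.0))
# -> 1020.0: better-than-expected behavior earns a lower renewal premium.
```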

The shift is not merely operational but conceptual. Risk is no longer treated as a fixed attribute but as a dynamic variable that requires constant evaluation.

Redefining the Role of the Insurer in a Continuous Model

As insurance becomes more data-driven and continuous, the role of the insurer is undergoing a fundamental transformation. The traditional function of compensating losses after they occur is being complemented by a more proactive role in preventing those losses altogether.

Telematics insurance enables insurers to engage directly with policyholders in managing risk. By providing real-time insights and behavioral feedback, insurers can influence decision-making at the point where risk is created. This represents a shift from financial protection to operational partnership.

In this emerging model, insurers are not external entities responding to events but integrated participants in their clients' risk environments. Their value lies increasingly in their ability to reduce uncertainty rather than simply absorb it.

Operational Impact of Telematics in Commercial Fleet Environments

The operational impact of telematics insurance is most clearly visible in commercial fleet environments, where real-time data has become integral to both risk management and performance optimization. By continuously capturing and analyzing driver behavior and vehicle usage, telematics enables insurers and fleet operators to move beyond retrospective assessments and actively manage risk as it develops.

This shift introduces a dynamic feedback loop in which data-driven insights inform immediate actions, improving both safety outcomes and operational efficiency. Over time, this not only reduces claims but also enhances overall fleet performance, creating a more aligned and resilient risk ecosystem.

Key operational outcomes include:

  • Continuous visibility into driver behavior, including speeding, harsh braking, and route risk exposure
  • Early identification of high-risk patterns, enabling timely corrective interventions
  • Improved driver accountability through continuing monitoring and performance feedback
  • Reduction in accident frequency, supporting better loss ratios and underwriting performance
  • Enhanced fleet efficiency through optimized routing, fuel management, and predictive maintenance

Strategic Realignment in a Telematics-Driven Insurance Landscape

The rise of telematics insurance is not only transforming operations but also driving a broader strategic realignment within the insurance industry. As real-time data becomes central to risk assessment, insurers are being compelled to rethink how they compete, collaborate, and create value.

In this evolving landscape, the ability to access, interpret, and act on data is emerging as a critical differentiator. At the same time, insurers must navigate increasingly complex ecosystems where data flows across multiple stakeholders, raising important questions about ownership, control, and long-term positioning.

This transformation is both technological and organizational, requiring insurers to build new capabilities while shifting toward a more proactive and partnership-oriented model.

Key strategic implications include:

  • Real-time data emerging as a core driver of underwriting accuracy and competitive differentiation
  • Increased importance of data ownership and control in shaping long-term market positioning
  • Greater reliance on partnerships with telematics providers, platform operators, and OEMs
  • Expansion of insurer capabilities in advanced analytics, real-time processing, and digital infrastructure
  • Evolution of business models toward continuous engagement rather than periodic interaction
  • Cultural shift from reactive claims management to proactive risk prevention and client collaboration

A Structural Shift Toward Embedded and Preventive Insurance

The movement toward continuous insurance reflects a broader transformation in how risk is conceptualized. Insurance is gradually becoming embedded within the operational fabric of businesses, supported by real-time data and continuous feedback loops.

Telematics insurance will remain central to this evolution, enabling insurers to maintain visibility and influence at every stage of the risk lifecycle. As adoption increases, the distinction between risk assessment and risk management will continue to blur.

Over time, this will lead to a model where prevention becomes the primary objective and claims become less frequent by design.

Conclusion

The transition from risk transfer to risk intervention represents a defining shift in commercial insurance. Telematics insurance is at the core of this transformation, enabling continuous visibility, predictive insight, and proactive engagement.

Insurers that successfully adapt to this model will move beyond their traditional role and become integral partners in managing and reducing risk. In an increasingly complex and fast-moving environment, the ability to intervene before loss occurs will determine long-term relevance and competitive advantage.


Shammi Thakur

Shammi Thakur is research director at MarkNtel Advisors.

He has over 15 years of experience in strategic market intelligence, industry forecasting, and competitive analytics, with a strong focus on the global insurance sector. 

Unconnected Dots: Why We Don’t Prevent Fraud 

Fraud networks exploit multiple digital identities across accounts and websites, making digital entity resolution critical for preventing fraudulent payments.


Like any business, the business of fraud has a founding moment, a growth story, and sometimes an ending. We hope none of those are "happily ever after."

Every large fraud scheme worth tens of millions or more started with its first dollar. Whether it's a single dollar or hundreds of millions of dollars, insurance companies don't hand out cash; they issue checks.

While the form of money disbursed nowadays may include all manner of e-payments, credits, drawdowns, gift cards even, or other mechanisms, typically our follow-the-money gumshoeing types go from account to account to follow the flow. But in a world where any business can have multiple accounts, and any business can have multiple business names with multiple accounts, isn't it time to resolve "who is the who" we are making payments to?

Some of these shards of identity are our own fault and not fraud at all – let's presume that is the case for most of our unconnected data.

Let's just put that in the storybook bucket of honest mistakes and rotten data control. That is the state of data entry into free text fields across multiple systems and applications. We know we don't know "who is who" just by watching our daily data work and all the errors and uncertainty of our work processes. (See, "What Would You Do With $1 Trillion?")

We are starting to get a handle on run-of-the-mill missing and misspent money with entity resolution breakthroughs. These breakthroughs come on a use-case basis. Government programs, healthcare, life insurance, and property and casualty insurance transact trillions upon trillions of dollars annually as you read this. And we are making progress.

This is happening with ending annuities being paid to the deceased, with accurately managing eligibility for assistance, with escheat funds being repatriated, with customers having all their products, services, and subscriptions beginning to connect to their own lifetime value profiles, and with marketing able to see clearly if that customer has multiple carriers, an all-in-one solution provider, or is missing a product that prudent advice would warrant. File this under "doing the right things better."

Where we struggle more, and where we are getting increasingly exploited, is what we'll open as a new file called "stop paying the wrong things."

Sometimes, it's an overpayment no one asks to have returned, which is not really that wrong or crooked, right? Then there's paying out to the revenue maximizer padding a bill or receipt. That grows into over-treatment, unnecessary tests, excess calibrations, and perhaps prescriptions, equipment, or other services that maybe were never delivered.

Then comes the extra volume where we move from the misrepresented, wasted, and abused, into full on fabricated, fictitious, and fraudulent. (See, "Entity Resolution Transforms Risk Management.") Here the story of "look what happened when nobody could catch me once" starts to scale into webs, schemes, and networks and the theme swings to "look what is happening when nobody catches me ever…." But for every great business, legal or illegal, there's always someone wanting more.

The easiest way to make more, and make it faster, is franchising. The next is to expand territory. And after that, cook the books – but that raises the risk of getting caught.

This last hitch can be mitigated by making it seem like one business is many – just misspell the name and address often. If that seems risky in a brick-and-mortar way, then open multiple accounts and even have multiple emails, heck, multiple websites. Thus, simply taking advantage of existing weaknesses in companies not deploying effective entity resolution can mitigate the risk of getting caught while stealing.
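A bare-bones sketch of why that trick fails against even simple entity resolution: normalizing payee names and stripping legal suffixes makes deliberately varied spellings collide on the same fingerprint. The payee names below are invented, and production systems go much further (addresses, tax IDs, fuzzy similarity, graph links).

```python
# Toy entity resolution: normalize names, drop legal suffixes, and group
# payees whose fingerprints collide. Names are invented for illustration.
import re
from collections import defaultdict

SUFFIXES = {"llc", "inc", "co", "corp", "ltd"}

def fingerprint(name):
    tokens = re.sub(r"[^a-z0-9 ]", "", name.lower()).split()
    return " ".join(sorted(t for t in tokens if t not in SUFFIXES))

clusters = defaultdict(list)
for payee in ["Acme Medical LLC", "ACME Medical, Inc.", "Medical Acme", "Riverside Rehab Co"]:
    clusters[fingerprint(payee)].append(payee)

for names in clusters.values():
    if len(names) > 1:
        print("Possible single entity behind multiple payees:", names)
# -> Possible single entity behind multiple payees:
#    ['Acme Medical LLC', 'ACME Medical, Inc.', 'Medical Acme']
```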

Those willing to really reach for the stars, however, can expand even more and faster – multiple cities, multiple countries, with or without a physical presence, sometimes going all online. The likelihood of having a ghost claim from an invisible entity that receives real money is a growing reality, especially when it is so easy to not see the invisible parts of our work processes that include email, web domains, and the inscrutable IP addresses behind those. (Read up on that here, "Your Invisible Neighbors and You," and here, "Are You Fraud-Friendly?")

Making payments to invisible entities makes it impossible to connect the dots between what we really owe to whom and what we end up paying to "who knows who?"

The only way to make progress is to add digital entity resolution to our gumshoe efforts. And to really make a dent, reveal all the invisible dots on a map (See, "Uncovering Hidden Fraud Networks.")

We can predict and prevent a payment when we can tell before issuing a quote, binding a policy, entering a vendor, setting up a claim, putting up a reserve, or paying anything that the email or website we are seeing online is either suspicious or concretely a bad actor. Just look it up – that can be as easy as connecting to a "hot list," an operational graph of continuing fraud cases, or even a simple map showing bad actor rooftops, and locations around the world.
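A minimal version of that lookup can be as plain as a set-membership check before anything is paid, as in the sketch below. The domains are placeholders, and a real implementation would draw on a continuously updated intelligence feed rather than a hard-coded list.

```python
# A pre-payment gate, sketched. The hot list entries are placeholders.
HOT_DOMAINS = {"quick-claims-payout.example", "totally-real-clinic.example"}

def screen_payee(email):
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in HOT_DOMAINS:
        return "HOLD: domain on hot list; route to special investigations before payment"
    return "CLEAR: no match; proceed with standard controls"

print(screen_payee("billing@Totally-Real-Clinic.example"))  # -> HOLD: ...
```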

The visible web is where business happens. The dark web is for dirty business.

While we obviously should ignore anything from the dark web while we work, just like we should not work with known bad actors, there are many invisible players hiding in the visible web.

The first are the ones we just don't track because our systems don't have a place for emails and websites. The second are the ones more deeply invisible because we have no digital entity resolution process running. And the third are hiding in the subnets of the Internet, where we would need a deep, infrastructure-level technographic trace to see "who is really who."

When your digital business intelligence uses an approach that includes the ultimate domain family tree of "who is that at my digital door" and "who am I transacting with digitally," then you can erase invisibility and connect the unconnected dots right in front of you.

An email that really comes from North Korea, simply does not belong in your run of the mill financial transactions. The same for an active network actor in a transnational criminal organization. Anyone on the "hard" sanctions list. And include the regular old fraud schemes and schemers, too, who are adding e-stealing to their repertoire.

The digital footprint capabilities that create attribution of websites, web servers, and web services linked to who we are transacting with fall under digital entity resolution.

Adding digital entity resolution to your existing framework – or, better, improving entity and digital entity resolution simultaneously while putting accurate mapping analytics under your investigative yarn balls and link analysis engines – is what it takes to connect the unconnected dots that today cause us to make payment errors to fraudsters and criminals.

By painting in the invisibilities of our real-world transactions and transactors, we can enable our online and offline activities to predict, prevent, and more quickly mitigate fraud.

Get Ready for a Long, Hot Summer and Fall

What could be the strongest El Niño in 140 years may cause record temperatures and contribute to cyclones, convective storms, droughts and wildfires.


Writing a newsletter on a morning when the president of the United States threatens that "a whole civilization will die tonight" strikes me as a fool's errand. Whatever I write will quickly pale in comparison with what happens -- or, I hope, doesn't happen -- in Iran in the next 24 hours. 

So I'll keep it short this week, just pointing out something I've been tracking for a while: that a "super" El Niño has become increasingly likely. An El Niño increases surface temperatures in the ocean, leading to higher temperatures worldwide and exacerbating just about all the sorts of natural disasters that have been producing record claims for property/casualty insurers (with the notable exception of Atlantic hurricanes). And the El Niño that's now forming looks like it will produce a far greater increase in ocean temperatures than normally occurs -- perhaps the greatest in 140 years. 

I'll take a quick look at the likely effects -- and at why U.S. insurers need to be even more careful than usual about their public statements and handling of claims, given that the insurance industry is an easy target for politicians looking for scapegoats in an election year.

Then we can all get back to our doom scrolling. 

A Washington Post article does a thorough job of laying out the risks of the El Niño that is forming. It says a "super" El Niño is one in which a key part of the Pacific Ocean sees surface temperatures increase by more than 2 degrees Celsius (3.6 degrees Fahrenheit) above average. The El Niño now taking shape could see the temperature rise 2.8 degrees Celsius (5.04 degrees Fahrenheit) above average, breaking the record set in 2015.

The result, the article says, could be: 

  • "Reduced hurricane activity in the Atlantic Ocean and possible drought in the Caribbean islands. Increased hurricane and typhoon risk in the Pacific Ocean....
  • "Potential drought in central and northern India....
  • "Above-average summer temperatures and humidity in the Western United States, possibly coming with unusual downpours, which may reach into the Plains and extend severe thunderstorm season.
  • "Developing droughts in portions of Central Africa, Australia, Indonesia, the Philippines, some South Pacific islands, Central America and northern Brazil, particularly later in the year. Flooding downpours in Peru and Ecuador, parts of northern and eastern Africa, the Middle East and near the equator in the Pacific.
  • "Higher frequency of heat waves across large parts of South America, the southern United States, Africa, Europe, parts of the Middle East, India and eventually Australia.
  • "New global temperature records — especially in 2027 — probably breaking records set in 2024."

These threats come as high temperatures and low precipitation have created drought conditions across more than half the continental U.S. The problem is especially severe in the West, where devastating wildfires have become all too common. 

So, while the lack of a landfalling hurricane in the U.S. last year meant insured losses from natural catastrophes were below the average for the past decade, claims could well soar this year.

Natural disasters always draw complaints about insurers. Everybody wants the recovery to be fast and smooth -- in situations where it's almost impossible for anything to happen fast or smoothly. Politicians, always looking for a way to position themselves on the side of voters, will home in on problems and publicly scold insurers -- a week ago, President Trump made a social post calling State Farm "absolutely horrible" for its handling of the Los Angeles fires early last year, and we can expect a lot more sniping at insurers from all sides in the run-up to the fraught mid-term elections this November. 

So everyone in the industry needs to be on high alert as the El Niño develops, helping people make their homes and properties more resilient and then, when the inevitable losses occur, acting as swiftly and empathetically as possible to help people recover.

In the meantime, let's all hope and pray for a sane resolution to the man-made disaster taking place in the Middle East.

Cheers,

Paul

The Leadership Gap We’re Misreading

When performance outpaces advancement, high-achieving women recalibrate. That's a problem for all of us.


The race to fill senior leadership roles—and to do it faster, more competitively, and with greater diversity—has become a defining challenge for CEOs and CHROs. Across industries, organizations report a limited pipeline of "ready-now" talent, particularly at the highest levels. This raises an important question about whether the gap is one of supply or of visibility.

At the same time, women's advancement into senior leadership continues to lag despite years of focused initiatives and investments. Recent Women in the Workplace research from McKinsey and LeanIn.org attributes part of this to a shift in ambition.

But what if the issue isn't ambition at all? What if the signal is being misread?

What the Data Shows and What It Doesn't

For more than a decade, McKinsey's annual research has shaped the national conversation on women's advancement, offering critical visibility into representation and career progression. The data reveal a consistent pattern: Women enter the workforce in strong numbers, advance early, and then experience slower progression at senior levels. In fact, recent reports have sparked a broader discussion on whether gender parity will be reached in 50 years and whether there is an ambition gap.

Yet this survey data tells only part of the story. It maps where women are and how they move, but it does not fully illuminate the experience of leadership once they arrive. That deeper understanding requires a different lens—one that brings into focus how advancement actually unfolds.

What the Experience Reveals

As part of the research for SelfPowerment: The Inner Shift for High-Achieving Women Who Want More Than Just Success, a qualitative study was conducted with 52 senior women leaders and 10 male executives across industries in the United States, including meaningful representation from the insurance sector across property & casualty and life & annuities.

Rather than focusing on representation alone, this research examined lived leadership experience—how careers begin and evolve, and ultimately how advancement unfolds once performance has already been proven.

What emerged is a pattern that is widely experienced yet rarely articulated. High-achieving women are consistently delivering at the highest levels. They are leading enterprise transformation, running complex organizations, and driving the outcomes companies rely on for both growth and performance.

And yet, many describe a subtle but consistent shift. Performance continues while advancement slows. What surfaces is not a question of capability; it is one of visibility.

As outlined in the SelfPowerment white paper, "What No One Talks About—But Women Know," this dynamic forms what is defined as the Invisible Advancement Cycle—a repeatable pattern in which leaders become indispensable to execution while authority, sponsorship, and progression fail to keep pace.

The Misinterpretation of Ambition

From the outside, this dynamic often presents as disengagement. Over time, some women step back from pursuing the next role and become more selective in the opportunities they consider. In some cases, they choose to leave the organization altogether.

This shift is frequently interpreted as a decline in ambition. A closer examination of the lived experience reveals something far more grounded: When sustained high performance no longer consistently leads to advancement with greater influence or authority, leaders recalibrate how they engage. They become more intentional about the roles they accept, seeking opportunities where responsibility is matched with decision-making authority, where visibility translates into influence, and where increased scope aligns with how they define success.

Ambition remains. But it evolves, becoming more focused, more deliberate.

Where Alignment Changes Everything

One of the most important insights from the research is this: Not all women stayed in this invisible advancement cycle. Those who were able to sustain both influence and fulfillment did so because they had made a shift. They moved from endurance to alignment.

These women took ownership of their careers rather than waiting for validation. They set clear, strategic boundaries around roles that expanded responsibility without corresponding authority. They repositioned their leadership from execution alone to enterprise-level impact, and they defined success on their own terms—beyond title or traditional progression.

This is the foundation of SelfPowerment—a return to purpose and a renewed ownership of one's career on one's own terms. It reflects a fundamental shift from, "Will they choose me?" to "Do I choose this?"

Alignment changes how women engage—with greater clarity, confidence, and conscious choice.

Why This Matters to CEOs and CHROs

This realignment is more than a women's issue; it is an enterprise leadership imperative. When this alignment pattern persists, organizations begin to absorb hidden costs that often go unrecognized until they become systemic.

Leadership capacity becomes underleveraged as the leaders closest to execution—those who understand how strategy truly operates—remain outside core decision-making circles. Succession pipelines narrow over time, with organizations looking externally for leadership while proven internal talent remains underused. At the same time, dependence concentrates within a small group of high-performing leaders who carry disproportionate responsibility for outcomes.

As alignment erodes, experienced leaders begin to step back or disengage, and gaps emerge between an organization's stated commitments to leadership development and the reality of lived experience.

As the research makes clear, these are not abstract dynamics—they are operational, financial, and strategic in their impact.

The "Operational Leader" Blind Spot

One of the most consistent insights across industries is how leadership continues to be evaluated. Strategy is often defined through vision, narrative, and positioning, while execution—where strategy becomes real—is frequently categorized as operational, tactical, or supportive.

In today's environment, however, execution is where complexity resides. Leaders who integrate technology and operations, guide transformation, and deliver enterprise outcomes often hold the deepest understanding of how the business truly functions. But when execution is undervalued in advancement decisions, organizations inadvertently overlook the very leaders they depend on most.

A More Accurate Question

Rather than asking whether ambition is shifting, organizations are better served by examining how leadership is defined and rewarded. This invites a more precise set of questions: whether visibility is being equated with leadership, whether performance is being recognized or simply relied upon, and whether the leaders most critical to outcomes are also those advancing into positions of influence.

When the answers begin to diverge, the issue becomes clear. It is not ambition that is changing. What is changing is the alignment between performance, authority, and advancement.

A Call to Action

This moment presents a powerful opportunity for both organizations and the leaders within them.

For CEOs and CHROs, the imperative is to make the invisible visible. This begins with a deeper examination of how advancement decisions are truly made, by elevating execution leadership into core strategic conversations, and by ensuring that authority aligns with demonstrated impact rather than perception alone.

For women leaders, the opportunity is equally significant. Recognizing this pattern creates the ability to respond to it thoughtfully. When something has felt misaligned despite continued success, it often reflects a structural dynamic rather than a personal one. Alignment—SelfPowerment—becomes the pathway forward, enabling leadership with greater clarity, confidence, and conscious choice.

The Leadership Shift Ahead

The next decade will require a more integrated model of leadership—one that values not only vision but also execution; not only strategy but also translation into outcomes; and not only performance but also alignment.

In many organizations, this leadership already exists. The capability is present, the experience is proven, and the impact is measurable.

The question is not whether the talent is there.

The question is whether it is fully seen—and whether women are fully choosing it.

You can preorder the book here. You can download the white paper here.


Deb Smallwood

Deb Smallwood is the founder and CEO of SelfPowerment.

She spent four decades in corporate leadership across the insurance industry, operating at the intersection of business, technology, and organizational transformation. Her leadership inflection point led her to research the experiences of more than 50 high-achieving women and 10 men leaders. This formed the foundation of her book, SelfPowerment: The Inner Shift for High-Achieving Women Who Want More Than Just Success. The work introduces a research-informed framework that redefines success from within and invites women to shift the question from, “Will they choose me?” to “Do I choose them?”

The Key to Scaling Embedded Insurance

The industry's real scaling problem is a lack of organizational mandate and cross-functional accountability, not technical limitations.


I've been convening senior practitioners in embedded insurance for long enough to notice when a room shifts. Not in the direction of the conversations--we've been covering the same structural questions for years--but in the quality of the answers. My London conference this year felt different. More candid. Less performative. The gap between what people say on stage and what they actually believe seems to be narrowing, and that, in itself, is a signal worth unpacking.

Here is what I took away: 

The industry's real scaling problem is organizational, not technical

We have been telling ourselves for years that legacy technology is the primary obstacle to scaling embedded insurance. I no longer believe that. The technology solutions are broadly available. The market is well-supplied with capable enablers. What consistently breaks down is the internal will and mandate to treat embedded insurance as a genuine strategic priority rather than a distribution experiment.

The organizations that have scaled are not, on average, better resourced or more technologically advanced than those that haven't. They tend to have one thing in common: someone at the top who decided this mattered and organized accordingly. That means cross-functional accountability, not departmental delegation. It means a mandate to build repeatable infrastructure - the second and third partnerships must be structurally easier than the first, or you haven't scaled, you've just shipped.

The debate around core systems transformation is a useful proxy for this. The question isn't really whether to fix the core first or build around it. The question is whether your organization has the internal clarity to make that call and act on it consistently across geographies and business units. Most don't, not because of technical constraints, but because embedded insurance still lacks the internal political capital to force alignment.

Three debates, one underlying argument

We structured several sessions as explicit point-counterpoint debates this year, on branding, on value architecture, and on the vertical versus horizontal scaling question. I find structured disagreement useful because it forces articulation of positions that practitioners might otherwise hedge in polite company. What I didn't fully anticipate was how consistently all three debates would reveal the same underlying argument, approached from different angles.

Visible brand or white label?

The branding debate is one the industry has been having for a decade, usually with the same result: it depends. But the London conversation surfaced something I found more useful than the standard answer: a sharper articulation of what the choice actually turns on.

The case for invisibility is not primarily about aesthetics or customer experience. It is a claim about where trust already lives. Large platforms with deep customer relationships, in logistics, in financial services, in retail, have earned a form of trust that an insurer, however well-branded, has not earned in that context. Embedding insurance invisibly within that relationship is not concealment; it is recognition of where the trust actually resides. The moment-of-truth argument is compelling: at the point of claim, the customer wants the problem solved. The entity that solves it fastest earns the relationship. Brand attribution at that moment is secondary.

The counter-argument is a different kind of claim about trust, not contextual trust but category trust. Insurance is a promise, and promises benefit from a recognizable guarantor. Insurers invest substantially in brand precisely because recognition carries its own form of reassurance in a category where the customer is being asked to pay now for a commitment that may not be called upon for years. Invisibility, in this argument, is not neutral; it progressively commoditizes the underwriting capacity and exerts structural downward pressure on pricing.

What I found most honest about this debate was the convergence point: neither position is universally correct, and the most sophisticated operators are not choosing between them. They are managing both simultaneously, calibrating by geography, product complexity and the relative trust equity of the parties. In certain markets, the platform brand is the primary trust anchor and the insurer's presence is genuinely better invisible. In others, where the insurer brand carries regulatory or reputational weight that the platform does not, visibility is a feature, not an intrusion. The practical implication is that embedded insurance partnerships need to settle the branding question explicitly by market, not as a default, and that any insurer agreeing to white-label terms without a market-by-market rationale is leaving value on the table.

Ecosystem-led or asset-led?

This was the debate I found most intellectually generative, partly because the two positions reflect genuinely different theories of where value in embedded insurance ultimately concentrates.

The ecosystem argument is essentially a claim about adaptability. In a market characterized by real-time data, API-enabled partnerships and rapidly shifting customer preferences, the ability to orchestrate components quickly, to assemble and reassemble the proposition as conditions evolve, is worth more than ownership of any single asset. Speed, in this framing, is not a source of risk but of resilience. A system that can continuously adapt is more durable than one that must wait for static underwriting cycles to catch up with events.

The asset-led argument is a claim about accountability. Banks and asset owners possess something that ecosystem orchestrators do not: a long-term relationship with the customer, built on financial trust, that is transferred to the insurance experience at the moment it matters most. The moment of claim is not the moment customers turn to the orchestrator. It is the moment they turn to the institution that has stood behind their financial life. That equity is real, and it cannot be replicated by a platform that routes transactions without bearing the underlying relationship.

The debate converged on an important practical point about IT security and integration complexity that I think is underweighted in most ecosystem discussions. Multi-party governance in highly regulated financial institutions is not a solvable engineering problem; it is a political and institutional problem that consistently moves slower than the technology. The ecosystem model's promise of rapid reconfiguration depends on all parties agreeing on a common API and governance framework in real time. That is feasible in a well-designed bilateral partnership; it becomes significantly harder at genuine ecosystem scale, particularly when participants include regulated entities with differing risk appetites and compliance timelines.

My read: the most enduring embedded insurance businesses will be those that have asset-level accountability - genuine ownership of the claims moment and the data loop - delivered through ecosystem-grade infrastructure. The two are not in opposition; they describe different layers of the same value proposition.

Vertical specialization or horizontal scale?

The final debate of the day was, I think, the one with the longest strategic half-life for the industry, and the one where I have the most personal conviction.

The verticality argument is intuitive and largely correct on its own terms. Deep specialization in a specific customer segment or risk context produces richer data, higher-intent customer journeys, better conversion and superior retention. The economics are real. An insurer embedded at the precise moment a logistics operator is managing a freight shipment, or a small business is hiring a new employee or issuing an invoice, is operating with intent and context that a general distribution channel cannot replicate.

But the counter-argument challenged something more fundamental: whether the insurance industry's definition of specialization is actually fit for purpose in a modern market context. The most successful companies globally are not vertically organized in the traditional sense. They specialize in a core capability that is simultaneously deep and wide - a logistics system, an operating system - and they extend that capability into strategic adjacencies rather than optimizing a narrowly defined vertical silo. The insurance industry, by organizing itself around vertical customer segments, may be limiting its own conception of the serviceable market.

The synthesis I found most useful, and the one I've been turning over since the event, is that the question may be structurally misconceived. The genuinely admired companies didn't choose between vertical depth and horizontal reach; they went impossibly deep on a core capability and impossibly wide with it simultaneously. The real question for the insurance industry is whether it can build the internal capability to do both, or whether the attempt to do both results in doing neither well.

There is a second, more uncomfortable implication buried in this debate. Vertical specialists enjoy genuine advantages in acquisition cost, customer intent and alignment with the carrier's risk appetite, but those advantages may erode rapidly when a horizontal platform with an existing customer relationship enters the same niche. Owning the customer relationship and serving the customer well are different capabilities. Vertical players build the latter but must continuously work to defend or acquire the former. In a world where platform scale is increasingly available to non-insurance brands, that defense is getting harder.

Data governance is the new product

Across every conversation in London – mobility, bancassurance, ecosystem architecture, AI readiness – the same underlying tension surfaced: data is available in abundance, but governance over who owns it, who benefits from it, and how it flows back into operations remains immature.

This isn't a compliance problem. It's a value architecture problem. The organizations that will disproportionately capture value in embedded insurance over the next five years are those that treat data governance not as a legal prerequisite but as a strategic asset — something to be actively designed, not reluctantly managed. That means closing the loop from claims back into operations. It means structuring partnerships so that behavioral signals flow in both directions. And it means being willing to share data-derived value with distribution partners in ways that create genuine alignment, not just transactional capacity contracts.

The bancassurance discussions were particularly instructive here. The genuinely new element in modern bancassurance is not the channel or the product; it is the serious, structured effort to join banking and insurance data to build propensity models at a level of granularity that wasn't previously feasible. The organizations making that work are not those with the richest data sets; they are those that have been willing to rethink their operating model around the customer journey rather than the product portfolio.

The industry has made progress on building data capability. It has made far less progress on building data culture.

The claims moment is still the only one that counts

Every debate we had in London – about branding, about partnership architecture, about whether the insurer should be visible or invisible, about who owns the customer – ultimately collapsed into the same point: none of it matters as much as what happens when something goes wrong.

This isn't a new observation. But what struck me in London was how consistently senior practitioners, across very different organizational contexts and strategic philosophies, returned to it independently. The embedded insurance models that have genuinely earned customer trust are those where the claim experience was treated as a design problem from day one, not a delivery problem to be solved at first incident. The ones that haven't are the ones where the partnership was structured around acquisition and the operational commitments were underspecified.

For practitioners building or renegotiating embedded partnerships right now: if the claims journey hasn't been explicitly co-designed and agreed upon before signing, the partnership is incomplete. Distribution agreements without claims governance are a liability.

Agentic AI is not a future scenario, it is a current design constraint

The London conversation about AI agents and autonomous commerce was the one that generated the most visible discomfort in the room — not because the topic was unfamiliar, but because the timeline was more compressed than many had assumed. We are not talking about a three- to five-year horizon for the first wave of agentic distribution. We are already in it.

My read of where this leaves embedded insurance practitioners is straightforward. The architectural decisions being made today — about API design, product modularity, data standards, and underwriting velocity — are simultaneously decisions about AI readiness. Carriers whose products cannot be parsed and quoted by an AI agent are already invisible to the fastest-growing distribution channel. That is not a future risk; it is a current condition.
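To make "parseable and quotable" concrete: a product an agent can transact against is, at minimum, a structured definition plus a deterministic pricing call. The sketch below is purely illustrative – the schema, field names, and figures are hypothetical, not any carrier's actual API:

    # Hypothetical sketch: a product an AI agent can discover, parse, and
    # quote without human interpretation. All names and figures are invented.

    from dataclasses import dataclass
    import json

    @dataclass
    class EmbeddedProduct:
        product_id: str
        coverage: str       # machine-readable coverage type
        limit: float        # maximum payout
        base_rate: float    # rate per unit of exposure

        def quote(self, exposure_units: float) -> dict:
            """Return a structured quote an agent can act on immediately."""
            return {
                "product_id": self.product_id,
                "premium": round(self.base_rate * exposure_units, 2),
                "limit": self.limit,
            }

    # No PDF parsing, marketing copy, or call center in the way: the agent
    # reads the definition and prices the risk in one call.
    shipment_cover = EmbeddedProduct("FREIGHT-01", "cargo", 250_000.0, 0.004)
    print(json.dumps(shipment_cover.quote(exposure_units=180_000)))

A carrier whose products exist only as filed forms and brochure PDFs has no equivalent of that quote call, which is what "invisible to the channel" means in practice.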

The more interesting question is not whether to prepare for agentic distribution but how to build governance structures capable of operating within it — licensing accountability when agents transact autonomously, identity verification when the buyer is not human, and underwriting velocity measured in milliseconds rather than minutes. These are not technology problems. They are regulatory, organizational and commercial design problems that the industry has not yet seriously engaged with.

Where this leaves me

After seven years of convening this community, I remain struck by how much intellectual honesty there is in the room when conditions are right for it — and how much of the industry's public discourse still lags behind what its most thoughtful practitioners actually believe.

London confirmed several things for me. The embedded insurance market is maturing, but unevenly. The organizations that will define it over the next decade are already distinguishable — not by their technology stack or their partnership roster, but by their organizational clarity and their willingness to treat data, governance and claims as strategic imperatives rather than operational overhead.

The conversation continues in New York City on Sept. 22.

Insurance Distributors Should Buy Carriers

Large insurance distributors are poised to acquire carriers as AI and private equity remove traditional barriers.

Photo of Men Working in a Warehouse

Independent agents continue to drive premium growth for carriers, with 60% of life insurance sales and increasing annuity sales (46% through IA/IMO/Independent BD) coming from third-party distribution (TPD), according to LIMRA.

The implication is clear: The future of insurance will be driven by independent distribution. So far, the focus has been on how carriers can control expenses related to third-party sales and get better at selecting strategic partnerships. While that is important, there is a secondary implication that may matter more than the carrier focus:

How can distributors develop better partnerships with carriers?

The solution? Distributors should seek to acquire carriers and develop fit-for-purpose products and capabilities. We have seen this already occur in the P&C/specialty space, where MGAs/MGUs have acquired carrier shells. For example, At-Bay, a cyber insurance MGA, acquired an excess & surplus insurer to issue its own cyber and tech E&O insurance. Controlling the full value chain allowed the distributor to accelerate product development and strengthen its commitment to its distribution channels. Life and annuity distributors could follow a similar model.

Why hasn't this worked before?

In the past, distributor-driven acquisitions of carriers have rarely made sense, for several reasons:

  • Capital and regulatory intensity – Independent distributors like IMOs are known for being light on capital and are built explicitly for sales and overrides. Carriers are built completely differently – risk-based capital (RBC) requirements and reserves are not only mandatory; holding capital above the minimum is rewarded with stronger ratings from the credit agencies.
  • Scale threshold – Because of the capital and investment required, small to mid-size distributors have little realistic chance of acquiring a carrier and leveraging it for a fit-for-purpose distribution model. High fixed costs mean that, in a highly fragmented distribution landscape, only the largest platforms could attempt this strategy.
  • Execution risk & operational complexity – Even if an independent distributor has the capital and scale, carriers bring capabilities that are non-native to independent distributors – claims, actuarial, and investment functions all require net-new talent and systems.
  • Optionality and flexibility – Independent distributors pride themselves on having access to a wide range of products in their toolkit. This "best-in-breed" approach is part of why independent channels have taken market share from captive channels. Owning an insurer potentially jeopardizes that optionality.
  • Risk mismatch – Distribution is a very different business than being an insurance carrier. Distributors are built for sales volume – overrides, commissions, and fees are all geared toward "more is better". Carriers are built on loss ratios, investments, and product profitability.
Why this is the time to act

All of these reasons have historically provided a plausible rationale for why distributors have focused on consolidation rather than a "full stack" solution. But significant shifts in the market make this the time for the right distributor to break the mold.

  1. AI-enabled neo-insurers – AI has made certain processes more scalable than ever before. The next frontier, agentic AI, will produce AI-native operating models that move capabilities from human-centric to AI-centric. For distributors seeking to become full-stack, the typical FTE-heavy carrier model no longer applies. Distributors can lead the wave of AI-native carriers by leveraging AI to provide carrier capabilities within a distributor-centric operating model. Underwriting, claims, servicing, wholesaling, and many other functions can now be designed around the distributor's goals and tech platform.
  2. Customer preference for holistic financial solutions – The share of US investors seeking holistic financial solutions (insurance & investment products) has grown from 29% in 2018 to 52% in 2023 (McKinsey). An Orion 2025 Investor Survey highlights that 78% of investors whose advisor does not currently offer holistic financial planning say they want it. Only 23% of advisors even offer insurance, suggesting a significant awareness gap and an opportunity for distributors.
  3. Heavy PE investment in the insurance market – Capital and investment requirements are significantly less of a concern for private equity investors, who have significant dry powder. Several have already gone after distributors, carriers, or both. These same investors can clear a significant hurdle for distributors while diversifying their returns across both distribution and the carrier.
  4. Moat development – In an increasingly competitive market, where large distributors are consistently seeking to grow through inorganic acquisition/consolidation, there are fewer ways to differentiate. One way that some firms have sought to do this is through carrier/distributor partnerships to create products for specific distributors. Distribution-owned carriers take this to its logical conclusion – owning products becomes a key differentiator versus other large distributors.
  5. Innovation and speed-to-market – One of the complaints from distributors is how slowly carriers bring products and capabilities to market. Carriers owned by distributors avoid this – instead of long feedback loops, distributors can leverage their insights to quickly meet the market where it is. Riders that might take 12-18 months from idea to rollout can be delivered much more quickly.
The way forward for distributors

Distributors that want to move forward have concrete steps to take on their journey to acquiring a carrier.

First, distributors have to make sure that they can successfully acquire and run a carrier. That includes:

  • Having sufficient scale as a distributor
  • Having capital, or access to capital, necessary to support a carrier
  • The ability to execute and successfully integrate a carrier into the distributor's platform

The importance of these capabilities cannot be overstated – the difference between creating an oddly formed captive model and a forward-thinking, distribution-led carrier will come down to execution and integration. This limits the realistic field to large distributors or those backed by significant capital.

If a distributor has both the means and the willingness to accept the risk, there are three concrete next steps.

1. Understand how the distributor wants to do business

The key emphasis for a product that is fit for distribution is that distribution drives capabilities and decision-making. That means starting with how the distributor wants to leverage the carrier to better enable its sales and growth. For example, does the distributor want to price the risk as accurately as possible (e.g., leveraging attending physician statements (APS) and medical exams), or does it favor speed of decisioning (e.g., leveraging simplified or accelerated underwriting) to lower cycle time?

There will be trade-offs for sure, but making these types of strategic decisions will directly influence how the carrier delivers key capabilities in support of the distributor (i.e., the operating model).

One additional consideration: What will the product development strategy be? Distributors could choose anything from niche products missing in the market to a full product suite, and could make them distributor-exclusive or allow others to sell them. Distributors should initially focus on addressing gaps in the market with exclusive products, but keep an eye out for when a product makes sense to sell to others.

2. Develop an operating model that is fit for purpose

Once distributors have made key strategic decisions, it is time to convert that strategy from a what to a how. There are three key levers distributors should consider: AI, outsourcing, and internal personnel.

Any distributor developing a carrier should do so with an AI 3.0 mindset – assume that processes and capabilities will be built with AI at the forefront, with human interaction inserted only where AI cannot yet perform the task. This produces a lean organization that is well positioned for the future: as AI continues to develop, the carrier (and the distributor) can move into the next stage of AI-driven operating models.

With AI and automation at the forefront of how the carrier operates, distributors can round out capabilities through outsourcing and internal hiring. There are several considerations, but the central one is which talent should be in-house. For example, claims may not be a critical experience for the distributor and could be outsourced to a TPA, while product development may be a capability worth retaining.

3. Integrate and iterate to continue to improve

In a distribution-centric model, there should be a focus on establishing a proprietary data loop to drive sales.

Full ownership provides clean, first-party data on distribution behavior and claims, which can feed back into product, actuarial, and pricing decisions faster than any carrier partnership allows.

To enable that, distributors need to integrate all carrier technology with the distributor's existing platform – this ensures that insights are visible end-to-end and improve the distributor's ability to sell.
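As a rough illustration of what that end-to-end loop means at the data layer, here is a minimal, hypothetical sketch – invented event names and figures – of sales and claims signals landing in one store that product and pricing teams read from directly:

    # Hypothetical sketch of the proprietary data loop: distribution and
    # claims events aggregate in one place, with no partner reporting cycle.

    from collections import defaultdict

    events = [
        {"product": "TERM-20", "type": "sale", "amount": 500_000},
        {"product": "TERM-20", "type": "claim", "amount": 500_000},
        {"product": "FIA-7", "type": "sale", "amount": 250_000},
    ]

    signals = defaultdict(lambda: {"sale": 0.0, "claim": 0.0})
    for e in events:
        signals[e["product"]][e["type"]] += e["amount"]

    # Product, actuarial, and pricing teams see sales and claims side by
    # side and can react in days rather than waiting on quarterly reports.
    for product, s in sorted(signals.items()):
        print(product, s)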

What comes next?

As distributors develop strategy over the next 5-10 years, they should expect to see independent distribution as a primary value driver, increased competition for advisor talent within independent channels, and an increased focus on customer centricity. The combination of those forces suggests that distributors who innovate and can best serve the customer will be the winners in the market.

The way to do that is simple, but not easy – leverage carrier capabilities to provide a true, distribution-led product development process. Large distributors and their investors must take immediate steps to leverage their scale and be first to market. Fast action will provide a first-mover advantage and position them well as the market transitions. The first scaled distributor or PE-backed platform to execute this strategy will own the next decade of independent distribution.


Chris Taylor

Chris Taylor is a director within Alvarez & Marsal’s insurance practice.

He focuses on M&A, performance improvement, and restructuring/turnaround. He brings over a decade of experience in the insurance industry, both as a consultant and in-house with carriers.

AI as a Tool or AI as a Product?

The gap between $20 ChatGPT and six-figure AI vendors lies in integration, repeatability and operational complexity that personal tools can't address.

An artist's illustration of AI

Someone on your team just demoed how ChatGPT or Copilot can extract data from a medical report in seconds. Now leadership wants to know why you're paying a vendor six figures for document processing when the same AI is available for $20 a month.

It's a reasonable question. Personal AI tools and operational AI systems solve fundamentally different problems, even when the underlying technology looks identical.

When an adjuster pastes a claim's medical records into ChatGPT to prep for a call, that's a one-time task. They provide context, review outputs, fix what needs fixing. The AI just makes them faster at work they were already doing.

When AI processes hundreds of documents a day as part of an operational workflow, a person isn't shepherding each one through. The AI becomes one component inside a larger system that needs to work reliably at scale.

The problem is these two use cases get treated as interchangeable. Organizations sometimes spend months building infrastructure for tasks an employee could handle with ChatGPT. Other times, someone proposes using ChatGPT for a process that actually requires serious engineering. Both mistakes come from not recognizing when a task requires a product. The telltale signs are integration, operational complexity, and repeatability.

Integration

ChatGPT works with whatever you paste into it. It can't reach into your claims system, query your policy database, or update a file on its own. In production, the AI sits in the middle of a pipeline. Data has to get in, and data has to get out.

On the input side, documents arrive through fax servers, email integrations, and carrier portals. On the output side, extracted data has to be written to the system of record and validated against what's already in the claim file, and exceptions need to be flagged and routed. Without the integration work on either side of the AI, the system can't function in production.
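A minimal sketch of that pipeline shape may help. Every function here is a hypothetical stand-in – in production, each one is an integration with a real system (fax server, claims platform, work queue):

    # Hypothetical sketch: the integration work that surrounds the AI call.
    # The extraction step is the only part a personal tool also performs.

    def normalize(raw: bytes) -> str:
        """Stand-in for OCR/format conversion of faxed or emailed documents."""
        return raw.decode("utf-8", errors="replace")

    def ai_extract(text: str) -> dict:
        """Stand-in for the model call that pulls structured fields."""
        return {"claim_number": "CLM-1001", "diagnosis_code": "S72.001A"}

    def validate(extracted: dict, claim_file: dict) -> list[str]:
        """Compare extracted values against the system of record."""
        issues = []
        if extracted["claim_number"] != claim_file["claim_number"]:
            issues.append("claim number mismatch")
        return issues

    def process_document(raw: bytes, claim_file: dict) -> None:
        extracted = ai_extract(normalize(raw))
        issues = validate(extracted, claim_file)
        if issues:
            print("route to exception queue:", issues)      # human review path
        else:
            print("write to system of record:", extracted)  # automated path

    process_document(b"...medical report...", {"claim_number": "CLM-1001"})

The AI call is one line; the functions around it are the product.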

Operational Complexity

When someone uses ChatGPT to extract data from a document, they're the orchestration layer. They decide what to paste in, what to ask for, what order to work through it. If something looks wrong, they adjust and try again. That works when you're processing one document at a time.

At scale, software has to do that job instead. Documents need to be normalized into a usable format. Illegible or corrupted files need to be handled. Outputs need to be validated, exceptions routed, and results connected to downstream systems. When something fails halfway through, the system needs to know where it stopped, what succeeded, and how to recover.
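The recovery requirement in particular is worth making concrete. A hedged sketch, with invented names, of batch orchestration that tracks per-document state so a halfway failure is resumable rather than a restart:

    # Hypothetical sketch: per-document status lets a rerun skip successes
    # and retry failures instead of reprocessing (or losing) everything.

    status: dict[str, str] = {}  # doc_id -> "done"/"failed"; persist in production

    def extract_and_store(doc_id: str) -> None:
        """Stand-in for normalization, extraction, validation, and writeback."""
        if doc_id.endswith("corrupt"):
            raise ValueError("illegible file")

    def run_batch(doc_ids: list[str]) -> None:
        for doc_id in doc_ids:
            if status.get(doc_id) == "done":
                continue  # already succeeded in a prior run
            try:
                extract_and_store(doc_id)
                status[doc_id] = "done"
            except Exception as err:
                status[doc_id] = "failed"  # routed to exceptions, not lost
                print(f"{doc_id}: {err}")

    run_batch(["doc-1", "doc-2-corrupt", "doc-3"])
    run_batch(["doc-1", "doc-2-corrupt", "doc-3"])  # rerun retries only the failure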

There's also the question of proof. When a claim gets litigated or an auditor asks how a decision was made, "the AI said so" isn't an answer. You need to show exactly where in the document a value came from and why it was interpreted that way. Personal AI tools are black boxes. Enterprise systems build in traceability because insurance requires it.
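What traceability usually means in practice is storing provenance alongside every extracted value. A minimal, hypothetical shape:

    # Hypothetical sketch: an extracted value that can answer an auditor,
    # not just report what the AI said.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ExtractedValue:
        field: str           # e.g., "diagnosis_code"
        value: str
        source_page: int     # where in the claim file the value came from
        source_text: str     # the exact passage the model relied on
        model_version: str   # which model/prompt configuration produced it

    evidence = ExtractedValue(
        field="diagnosis_code",
        value="S72.001A",
        source_page=412,
        source_text="Dx: S72.001A, closed fracture of right femur",
        model_version="extractor-v3.2",
    )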

Finally, there is file size. A single claim file can run 10,000 pages. You can't paste that into ChatGPT. Personal AI tools have input limits that make documents like these impossible to process in a single pass. At that point, you're not using a tool. You need a product.
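Even the splitting step is engineering work. A rough sketch, with an assumed per-call page budget, of chunking a large file into overlapping passes:

    # Hypothetical sketch: split a claim file that exceeds the model's input
    # limit into overlapping page ranges processed across multiple calls.

    def chunk_pages(total_pages: int, pages_per_call: int = 50, overlap: int = 2):
        """Yield (start, end) page ranges; overlap keeps context from being cut."""
        start = 1
        while start <= total_pages:
            end = min(start + pages_per_call - 1, total_pages)
            yield (start, end)
            if end == total_pages:
                break
            start = end - overlap + 1

    # A 10,000-page file becomes roughly 200 model calls, whose outputs then
    # have to be merged, de-duplicated, and validated - more product work.
    print(list(chunk_pages(total_pages=10_000))[:3])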

Repeatability

Personal AI tools are inherently variable. Ask ChatGPT the same question twice and you'll get different answers. When drafting a strategy document, this can actually help. Running the same prompt multiple times gives you different angles to choose from.

At operational scale, variability becomes a liability. A diagnosis code extracted one way in the morning might come out differently in the afternoon. Tags get applied inconsistently. Provider names normalize differently across batches. A claim that gets flagged high-priority on Monday might score as routine on Tuesday. These inconsistencies create problems throughout downstream processes.

When outputs are unreliable, users lose trust. They start checking everything manually, which defeats the purpose. Enterprise implementations address this through standardization: controlled prompts, validated extraction logic, versioned models, systematic testing. When something changes, you know what changed and why. When something breaks, you can trace it back. This infrastructure is what makes production systems require real investment, but it's also what makes them suitable for production.
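Concretely, that standardization means pinning every variable a personal tool leaves loose. A hedged sketch with illustrative names (the exact configuration knobs vary by model provider):

    # Hypothetical sketch: the variables that drift in casual ChatGPT use
    # are pinned and versioned here.

    EXTRACTION_CONFIG = {
        "model": "doc-extractor-2024-06",  # pinned version, never "latest"
        "prompt_version": "v14",           # prompts versioned like code
        "temperature": 0.0,                # minimize sampling variability
    }

    # A golden set of known documents with expected outputs: rerun and diff
    # whenever the config changes, before anything reaches production.
    GOLDEN_SET = [
        ("sample_fracture_report.pdf", {"diagnosis_code": "S72.001A"}),
    ]

    def regression_check(extract) -> bool:
        """Return True only if every golden document still extracts as expected."""
        return all(extract(doc) == expected for doc, expected in GOLDEN_SET)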

When You Don't Need a Product

But the opposite mistake is also common. Not everything needs a product. If the task is infrequent and doesn't need to connect to anything, a person with ChatGPT can be the right answer.

A couple of times a year, a supervisor prepares for a mediation on a complex claim. They need to review the medical records, understand the treatment history, and build their argument. Someone sees that and thinks: we should build a tool for this. However, ChatGPT can help them work through it directly, surface key details, summarize sections. That's a tool making someone better at their job, not a workflow that needs to be automated.

The same applies outside of claims. Quarterly management presentations. Strategy preparation for a renewal. Evaluating a vendor. One-off policy research. These happen a few times a year, the output goes into a document or slide deck, and nobody needs the data anywhere else. Building automation around them solves a problem that doesn't exist.

The person doing the work already has what they need. They have the data, they understand the context, they'll review and edit whatever the AI produces. The value of personal AI tools is that they require no infrastructure. Let people use the tools directly, get useful output, and move on. Trying to systematize that just adds overhead without adding value.

Different Problems, Different Approaches

Personal productivity works because a human handles everything around the AI. They provide context, review outputs, catch errors, make decisions. For these use cases, give people access to AI tools and get out of their way.

Operational automation requires software to do what the human does in the personal productivity scenario. Integration with existing systems. Repeatable outputs. An application layer that makes the AI's work usable by others. That's a product.

The underlying AI might be identical in both cases. The difference is what surrounds it. If the task requires integration with other systems, that's a product problem. If it requires orchestration across large or complex inputs, that's a product problem. If it requires consistent, auditable outputs, that's a product problem. The more of these that apply, the further you are from something a person with ChatGPT can handle. If none of them apply, you probably don't need a product at all. Give someone the tool and let them work.

Document ingestion is one example, but the same pattern holds for triage, fraud detection, subrogation – anywhere AI moves from assisting one person to running inside a workflow. The question isn't whether AI can do the task. It's whether you need a tool or a product. Get that wrong and you'll spend six months discovering why the vendor charges what they do, or build a system for something that just needed a person with ChatGPT.