
Can We Please Tone Down All the 'Inflection Point' Talk?

I believe in creating a sense of urgency as much as the next guy, but it's just not right to say every part of the insurance industry is forever at a crossroads.

group in business meeting

With all the copy I read through every week as I decide which pieces to publish here at ITL, I'm noticing an odd trend. 

For the past couple of years, tons of the articles submitted to me gloried in the insurance industry's "transformation" and "disruption." In recent months, though, lots warn that insurance is at a "crossroads" or an "inflection point" — often dressed up with ominous adjectives so the situation becomes a "major" crossroads or a "crucial" inflection point.  

Why the doom and gloom? And is it justified?

The one issue that could potentially merit the inflection point talk for the whole industry is generative AI. In the three years since ChatGPT announced itself to the world, it has already created numerous opportunities for efficiency, and AI agents hold the prospect of far more profound change. If you can get to the point where you say to your AI, "Gather all the information I need for this claim by contacting all the relevant parties," you would, in fact, have a crossroads. Those who figured out how to take advantage of that sort of AI agent would go one way, toward paradise, while everyone else would head in the wrong direction.

But we aren't there yet, and I think it'll take time for us to get there. The Silicon Valley ethos may be to "move fast and break things," but insurance companies don't get to do that.  We're not allowed to break things. Too many people get hurt if we do. 

The insurance industry faces plenty of other big issues, too: the increased number and intensity of natural disasters, uncertainty and rising prices because of the on-again, off-again Trump tariffs, federal policy that is reducing aid to states following natural disasters and may mean the elimination of the Federal Emergency Management Agency (FEMA), and so on. 

But does that mean we're at an inflection point? I don't think so. I think those issues just show that insurance is a complex, dynamic world, of the sort we've been dealing with, mostly effectively, for a long time. The supply chain disruptions because of COVID certainly caused a crisis, for instance, but insurers have already recovered enough that I recently published an article with the title, "Are Auto Insurers Now TOO Profitable?"

Besides, most of the claims of impending crisis I see are about far less comprehensive issues than GenAI. They're about the need to update legacy systems, to clean data, to adopt some more efficient approach to underwriting or handling claims, and so on. 

I agree with all the points those thought leaders are making. I also understand the need for innovators to create what is often referred to as "a burning platform." At the Wall Street Journal, I covered IBM in the '80s and '90s, a period during which the very smart executive leaders knew they needed to change to keep up with the increasing pace of innovation in the industry but couldn't quite bring themselves to do anything radical, because IBM had been the most profitable company in the world for so long. Only once the company started taking multibillion-dollar writeoffs and laying off tens of thousands of people — having prided itself on never laying off a single person in its 80-year history — did the company have the burning platform that Lou Gerstner used so effectively to change the culture.

I even accept that some parts of the industry are at inflection points. For instance, I recently published a piece by Stephen Applebaum and Alan Demers, "Embedded Insurance Nears Tipping Point" — because they're right; embedded insurance has been percolating as a possibility for years now and may be about to have its breakout moment, especially in auto insurance. I even published a piece in September with the headline, "Insurance at an Inflection Point." That was before I started seeing the term so often that I became allergic to it, and I wouldn't use it in a headline today, but the article makes a smart point about a potential new business model for insurers. 

But we have to maintain our credibility, and we can't be deluding ourselves. We're likely failing on both counts if every fifth piece or so that I read claims the industry is at a crossroads/inflection point. (I recently opened a proposed article whose first sentence was, "The insurance industry is at an inflection point," and the next article began, "The insurance industry is at a critical inflection point.") 

There is loads of important change happening in the insurance industry, and GenAI will surely get us to an inflection point. But let's not oversell what's happening now. 

Not every problem is a make-or-break moment. Not every bit of progress is a game-changer.

Cheers,

Paul

P.S. I seem to need to cleanse my soul every six to 12 months with a piece like this. Here are some of my favorite previous rants, which I think hold up just fine: "Let's Stop With the Gibberish," "May I Rant for a Moment?" and "Two Words We Must Stop Using." 

I get riled up just rereading them. Please share with any colleague you think could use a nudge — or maybe a chuckle. 

Cyber Insurance Exclusions to Expect in 2026

Emerging cyber threats are driving insurers to expand policy exclusions, challenging traditional risk management.

Abstract red and blue blurs

Cyber insurance remains a cornerstone for managing digital risk, yet the market is evolving in ways that may surprise many organizations. By 2026, policies are expected to provide less certainty than policyholders have come to assume. Insurers are introducing new exclusions, enforcing stricter underwriting standards and responding to the rapid emergence of complex threats such as AI-driven vulnerabilities, zero-day exploits and connected Internet of Things exposures. 

For risk managers and insurance brokers, anticipating these exclusions and developing strategies to address coverage gaps is essential. Misalignment between perceived protection and actual policy coverage can expose organizations to significant operational disruption and financial loss. 

The next section examines why insurers are introducing these new exclusions and what drives their focus on high-uncertainty, potentially catastrophic exposures.

Why Exclusions Are Escalating

Claims metrics in 2025 show relative stability, with reports indicating that both the number and average severity of large cyber claims have remained largely unchanged compared with prior years. On the surface, this might suggest that insurers are not under pressure. However, the surge in exclusions is driven less by historical claims and more by emerging, high-uncertainty risks that could produce catastrophic losses. 

Insurers are increasingly concerned about exposures without established actuarial history, including AI-driven attacks, zero-day vulnerabilities, connected IoT systems and state-sponsored cyber operations, according to a 2025 report by Allianz.

Even isolated events, such as the 2024 CrowdStrike outage affecting multiple Fortune 500 companies, illustrate the accumulation risk insurers now face—where a single incident can affect numerous policyholders simultaneously. 

This combination of unquantified risk, potential for systemic loss and regulatory uncertainty has prompted insurers to tighten coverage and add exclusions to protect against scenarios that could produce outsized financial consequences.

Emerging Exclusions to Expect in 2026

Risk managers should anticipate new categories of exclusions that will redefine what traditional cyber insurance covers. Understanding the rationale behind each exclusion and its potential impact is critical for preparing organizations.

Artificial Intelligence Risks

Artificial intelligence is becoming ubiquitous, yet insurers are increasingly excluding claims linked to its use. Policies may deny coverage for errors or omissions in AI systems, misleading outputs or regulatory violations tied to AI implementation. 

A notable concern is the breadth of some exclusions, which may apply not only to a company's own AI systems but also to third-party platforms used in business operations. This expansive scope creates uncertainty about whether claims will be honored when AI played even a minor role. Risk managers must scrutinize AI-related language in policies and assess whether existing coverage aligns with emerging liabilities, according to an article in the Harvard Law School Forum on Corporate Governance and Financial Regulation.

State-Sponsored Cyberattacks

Following global geopolitical developments, insurers are expanding war or cyberwar exclusions to cover state-backed attacks, according to Mitigata. The impact can be profound, as even incidents occurring in peacetime may fall within the exclusion if a government is implicated. This is particularly significant for organizations operating in critical infrastructure sectors or with extensive international digital networks. Awareness of the scope and triggers of these exclusions is essential for preparing mitigation strategies and considering supplementary coverage.

Catastrophic and Widespread Events

Insurers are increasingly defining "widespread events" or "catastrophes" in ways that limit aggregate exposure from systemic incidents, according to an article by Chubb. These exclusions may restrict coverage when multiple policyholders are affected simultaneously, such as through a coordinated ransomware attack targeting a popular cloud provider. For organizations, this can mean delayed payouts or denied claims when the event's scale triggers a policy exclusion. Clear understanding of these terms is necessary to plan alternative risk strategies.

Web Tracking and Regulatory Liabilities

Policies are tightening language around website tracking, data privacy and compliance with evolving regulatory regimes. Failure to satisfy underwriter inquiries regarding tracking technologies can lead to broad exclusions. Similarly, coverage for fines, penalties and reputational harm is often limited. Organizations must ensure that their security posture, privacy practices and compliance measures are fully documented to avoid coverage gaps.

Enforcement of Existing Exclusions

Even long-standing exclusions are being applied more rigorously, the 2025 Allianz report found. Insurers are denying claims for failure to meet minimum security requirements, including missing multi-factor authentication, unpatched vulnerabilities or outdated incident response protocols. Insider threats, third-party vendor risks, contractual liabilities and regulatory fines are also increasingly scrutinized. For risk managers, this means that maintaining robust, documented controls is not optional but a condition for coverage.

Managing Exclusions

To navigate this tightening environment, organizations should align coverage with actual risk. Key actions include:

  • Implementing and documenting robust controls, including multi-factor authentication, endpoint detection and response systems and formal incident response readiness.
  • Being transparent during underwriting by accurately representing security posture and addressing known vulnerabilities.
  • Conducting regular risk assessments to ensure IT infrastructure aligns with coverage requirements.
  • Reviewing policy language closely, with attention to definitions for catastrophes, state-sponsored attacks and minimum security requirements.
  • Collaborating with specialized brokers who understand the nuances of cyber policies and can advocate for coverage clarity.

These measures help reduce the likelihood of denied claims and ensure policies reflect actual organizational risk. Insurance remains necessary, but it must be coupled with proactive risk management to be effective.

Filling Gaps with Alternative Risk Transfer

When traditional policies leave high-severity, low-frequency risks uncovered, alternative risk transfer solutions can provide supplementary protection.

Captive Insurance

A captive is a subsidiary insurance company established to underwrite risks for its parent organization. Captives allow coverage of exclusions such as state-backed cyberattacks, AI liabilities, or reputational loss. This approach enables customized protection, keeps premiums and underwriting profits within the organization and provides certainty where commercial markets may be constrained.

Parametric Insurance

Parametric policies pay out based on predefined triggers rather than measured losses. For example, a payout may be tied to a specific number of exposed records or a defined system downtime period. Parametric insurance ensures rapid access to capital for business interruption costs, even if the primary cyber policy contains restrictive exclusions.
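To make the mechanics concrete, here is a minimal Python sketch of a parametric trigger check. The thresholds, payout amount, and field names are illustrative assumptions, not terms from any actual policy:

```python
# Minimal sketch of a parametric cyber payout check.
# Thresholds and the payout amount are illustrative, not from any actual policy.

from dataclasses import dataclass

@dataclass
class ParametricTrigger:
    exposed_records_threshold: int   # payout if at least this many records are exposed
    downtime_hours_threshold: float  # payout if downtime meets or exceeds this
    payout_amount: float             # fixed payout when either trigger fires

def evaluate_payout(trigger: ParametricTrigger,
                    exposed_records: int,
                    downtime_hours: float) -> float:
    """Return the payout owed based solely on predefined triggers,
    not on adjusted or measured losses."""
    if (exposed_records >= trigger.exposed_records_threshold
            or downtime_hours >= trigger.downtime_hours_threshold):
        return trigger.payout_amount
    return 0.0

# Example: a breach exposing 250,000 records with 6 hours of downtime.
policy = ParametricTrigger(exposed_records_threshold=100_000,
                           downtime_hours_threshold=12.0,
                           payout_amount=1_000_000.0)
print(evaluate_payout(policy, exposed_records=250_000, downtime_hours=6.0))  # 1000000.0
```

The point of the structure is that the payout depends only on whether a predefined trigger fires, not on an adjuster's measurement of the loss, which is what makes the capital available quickly.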

Capital Market Solutions

Cyber risks can also be transferred to capital markets through insurance-linked securities such as catastrophe bonds. These instruments attract external capital to cover peak risks, including systemic cyber events, and can expand overall capacity for insuring niche exposures that traditional policies exclude.

Conclusion

Cyber insurance exclusions are expanding in response to evolving threats and the potential for catastrophic, systemic losses. By 2026, risk managers and brokers must recognize that traditional policies alone may not provide full coverage, particularly for AI-related liabilities, state-sponsored attacks and catastrophic events. Proactive strategies, including robust documentation, controls, regular risk assessments and complementary alternative risk transfer solutions, are essential to bridge coverage gaps. Aligning insurance with operational realities ensures that organizations maintain resilience, protect enterprise value, and respond effectively when cyber incidents occur.


Randy Sadler

Profile picture for user RandySadler

Randy Sadler

Randy Sadler is a principal with CIC Services, which manages more than 100 captives.

He started his career in risk management as an officer in the U.S. Army, where he was responsible for the training and safety of hundreds of soldiers and over 150 wheeled and tracked vehicles. He graduated from the U.S. Military Academy at West Point with a B.S. degree in international and strategic history, with a focus on U.S.–China relations in the 20th century. 

December 2025 ITL FOCUS: Workers' Comp

ITL FOCUS is a monthly initiative featuring topics related to innovation in risk management and insurance.

workers comp itl focus

 

FROM THE EDITOR

When people talk about how AI can make business operate more efficiently, they tend to think in terms of cutting costs, but I hear something else, too: speed.

Sure, policyholders want cost taken out of the claims process, knowing that the savings will eventually be passed along in the form of lower premiums, but what they really want is to have their claim resolved promptly and to receive payment quickly, so they can get on with their lives.

The need for speed is especially great in workers’ comp. A person is injured and may be isolated at home, which can be disorienting both financially and psychologically. The person needs (and deserves) to feel valued, which not only means prompt attention from a boss, the insurer and medical personnel but also means wrapping up all the details as soon as possible. The injured worker will sleep better once everything is resolved. Employers and insurers will, too, if they find themselves wrapping up more cases before the lawyers get involved.

Looking back at the articles on workers’ comp I’ve published over the years, I see loads of progress. Insurers have become much better at triaging cases, so they can spot those that are most likely to escalate into a courtroom—helping manage costs while giving more injured workers the attention they want. Insurers have also improved how they use technology to detect fraud, again cutting costs while helping workers and employers by reducing the need to increase premiums. More recently, technology has been enabling a move to a Predict & Prevent model: Innovators are, for instance, using cameras to monitor workplaces and construction sites and spot potential problems so managers can work with employees and head off accidents.

Now comes generative AI, which will take another whack at costs while benefiting everyone by expediting the handling of a worker’s claim.

Connor Atchison, CEO of Wisedocs, says in this month’s interview that AI is already cutting claims handling times in half, and the technology is just getting started. He does provide some words of caution, both based on his experience with AI and on a recent survey Wisedocs conducted. For instance, he says you have to focus not just on getting the right answers with your AI but have to work to build trust in those results. He says a survey found that keeping a human in the loop increases trust in the AI by 4X—but the human has to be in the loop at the right spot.

Whether you’re focused on using AI to cut costs or, like me, more interested in how much it can speed workers’ comp claims, I think you’ll find the conversation with Connor enlightening.

Cheers,

Paul

 
 
An Interview

How AI Can Transform Workers' Comp

Paul Carroll

At ITL, we've been encouraging the insurance industry to move to a Predict & Prevent model and away from the traditional repair-and-replace approach. Workers' compensation has been a poster child as organizations make remarkable strides in reducing workplace injuries. But there's significant complexity below the surface. What are the key challenges around volumes, documentation, staff shortages, and legacy systems?

Connor Atchison

I think you summed it up right there. It's the culmination of all of these things over decades that are making things slower and more cumbersome. We have gaps in knowledge as we strive for better care outcomes—to get that worker back to work and make sure we're spending the right amount of money on the right treatment to make that happen.

There are definitely issues around legacy systems. Workers' comp, even more than other insurance lines, is still a little bit behind. But they're catching up and adapting, and they're seeing the need, which is great.

read the full interview >
 

MORE ON WORKERS' COMP

The Future of Workers’ Comp

Workers' compensation systems need cloud-native transformation to address modern workforce challenges and rising claim severity.
Read More

 

Warehouse Tech Transforms Risk Models

Connected warehouse technology forces insurers to abandon static risk models for dynamic, data-driven assessments.
Read More

 

phones

When 2 Records Walk into a Claim…

Workers' comp systems designed to catch duplicate records miss 62% of them, creating costly inefficiencies.
Read More

 

hands in a meeting

Strategies to Fight Workers' Comp Fraud

Advanced AI and predictive fraud models transform workers' compensation fraud detection from costly burden into a strategic risk management advantage.
Read More

 

Reengineering Workers’ Comp Products With Agile

Workers' compensation insurers must shift from inflexible waterfall development to agile frameworks, which promise enhanced collaboration and responsiveness.
Read More

 

megaphones

What Medical Inflation Means for Workers’ Comp

Healthcare inflation surges past general price trends, pressuring P&C carriers to adopt data-driven claims strategies.
Read More

 

 
 

FEATURED THOUGHT LEADERS

James Benham
 
Ellie Gabel
Tiffany Norzagaray
Roberta Mercado
 
Lavanya Rajamani
Pragatee Dhakal
 

MORE FROM OUR SPONSOR

 

Rebuilding Trust in AI: Using Human-Assisted Intelligence for Enterprise Claims Transformation 

Sponsored by Wisedocs

The complexity of claims processing is rising, regulations are tightening, and AI solutions are common — but not all are built for the enterprise. Discover how leading carriers and claims organizations are transforming claims with a measured, compliant, human-assisted AI approach in Wisedocs' Enterprise Claims Guide.

Read More 


Insurance Thought Leadership

Profile picture for user Insurance Thought Leadership

Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Improving Insurance Data Quality

Insurance organizations are deploying AI and semantic ontologies to transform data quality challenges into competitive weapons.

Close up of Computer Hardware

For decades, data quality has been treated as a technical problem—something to be solved through better databases and more rigid validation rules. Yet data quality has become a competitive weapon. 

When data teams can ensure clean, consistent, and contextualized information flows through their organization, everything improves: underwriting decisions become sharper, fraud detection catches sophisticated schemes earlier, and claims get processed faster. Two powerful forces—artificial intelligence models and semantic ontologies—are rewriting what's possible for data teams willing to embrace them.

The Real Cost of Data Quality Problems in Insurance

Before diving into solutions, it's worth understanding just how expensive bad data becomes. The insurance industry processes enormous volumes of information daily, from application submissions to claims documentation to policyholder records. Each piece flows through multiple systems, passes through different hands, and gets interpreted by various teams. When data enters at a broker's desk—sometimes handwritten on paper that gets scanned—errors creep in quickly. These aren't just minor inconveniences. Poor data quality directly undermines the foundation that AI models depend on. When machine learning models train on flawed historical data, they learn to recognize the wrong patterns. They optimize for mistakes rather than truth. The consequence? Models make worse decisions, often confidently.

Consider the downstream damage. Inaccurate underwriting data leads to mispriced policies. Claims teams inherit messy customer histories and struggle to match new claims to existing policies. Fraud detection systems flag legitimate claims as suspicious because they can't reliably recognize patterns through the noise.

How AI Models Are Transforming Data Quality Assurance

Rather than viewing AI as yet another consumer of data, forward-thinking insurance organizations can deploy AI specifically to improve the data that other AI models will eventually use. This creates an interesting dynamic: machine learning becomes both problem and solution simultaneously.

Automated Data Profiling and Anomaly Detection

The first wave of improvement comes from automated systems that profile datasets at scale. Rather than manual spot-checking or waiting for problems to surface downstream, AI systems continuously scan data streams looking for deviations from expected patterns. These systems use various mathematical approaches—from classical statistical methods to modern neural networks—to understand what "normal" looks like within specific data domains. When new data arrives, it gets compared against these learned patterns. If something seems off—a claim amount 500% higher than average for that customer, a date that appears to be in the wrong format, or a relationship that doesn't align with historical context—the system flags it immediately.

What makes this different from traditional validation rules is the adaptability. A hard-coded rule might check "ensure claim amounts are between $0 and $1,000,000." This catches obvious errors but misses the subtle cases where everything looks valid but seems contextually wrong.
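As a rough illustration of that contextual checking, the sketch below (Python with pandas) flags a claim far above a customer's own historical average and a date that fails to parse. The column names and the 5x threshold are illustrative assumptions, not a production rule set:

```python
# A rough sketch of contextual anomaly flagging on an incoming claims feed.
# Column names and the 5x threshold are illustrative assumptions.

import pandas as pd

history = pd.DataFrame({
    "customer_id": ["C1", "C1", "C1", "C2", "C2"],
    "claim_amount": [1200.0, 900.0, 1100.0, 15000.0, 14000.0],
})

incoming = pd.DataFrame({
    "customer_id": ["C1", "C2"],
    "claim_amount": [6500.0, 16000.0],          # C1's claim is roughly 6x their average
    "loss_date": ["2025-11-02", "31/11/2025"],  # the second date is malformed
})

# Learn what "normal" looks like per customer from history.
baseline = (history.groupby("customer_id", as_index=False)["claim_amount"]
            .mean()
            .rename(columns={"claim_amount": "avg_amount"}))

checked = incoming.merge(baseline, on="customer_id", how="left")

# Flag claims far above the customer's own historical average (here, > 5x).
checked["amount_flag"] = checked["claim_amount"] > 5 * checked["avg_amount"]

# Flag dates that fail to parse in the expected ISO format.
checked["date_flag"] = pd.to_datetime(
    checked["loss_date"], format="%Y-%m-%d", errors="coerce"
).isna()

print(checked[["customer_id", "amount_flag", "date_flag"]])
```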

Real-Time Data Quality Rules Generation

Another emerging capability involves AI systems that actually generate the validation rules themselves, rather than requiring data stewards to manually write them. Generative AI models can analyze historical datasets and automatically create metadata and quality rules tailored to an organization's specific terminology and standards. This matters more than it might initially seem.

Many insurance organizations have legacy systems that lack proper metadata—documentation about what data means, where it came from, and what constraints should apply. Rather than spending months manually documenting these systems, organizations can point an AI system at the data and have it generate initial documentation and rule sets. Humans then review and refine these suggestions. The result? Metadata standards get created faster, and they're grounded in actual data patterns rather than abstract governance theory.
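A simplified stand-in for that idea, without the generative model itself, is to profile historical data and emit draft rules (value ranges, allowed categories, not-null checks) for a steward to review. The column names and rule format below are illustrative:

```python
# Minimal sketch: derive candidate data-quality rules from historical data.
# A simplified stand-in for the generative-AI approach described above;
# column names and the rule format are illustrative.

import pandas as pd

history = pd.DataFrame({
    "claim_amount": [1200.0, 900.0, 15000.0, 4300.0],
    "policy_status": ["ACTIVE", "ACTIVE", "LAPSED", "ACTIVE"],
})

def propose_rules(df: pd.DataFrame) -> list[dict]:
    """Profile each column and emit draft rules for human review."""
    rules = []
    for col in df.columns:
        series = df[col]
        if pd.api.types.is_numeric_dtype(series):
            rules.append({"column": col, "rule": "range",
                          "min": float(series.min()), "max": float(series.max())})
        else:
            rules.append({"column": col, "rule": "allowed_values",
                          "values": sorted(series.dropna().unique().tolist())})
        rules.append({"column": col, "rule": "not_null"})
    return rules

for rule in propose_rules(history):
    print(rule)
```

In practice the generated rules are only a starting point; the human review step is what keeps them grounded in business reality rather than in quirks of the historical data.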

Natural Language Processing for Unstructured Data

Insurance organizations have unstructured data everywhere: claims notes, adjuster observations, medical records, police reports, and customer communications. Traditional data quality approaches struggle here because they're designed for structured, tabular information. Natural language processing (NLP) changes this equation. NLP systems can read through thousands of claim descriptions and identify inconsistencies, flag unusual language patterns, extract structured facts from unstructured text, and even spot potential fraud signals hidden in prose.

One practical application: property damage claims often include written descriptions. NLP systems can extract key details (property type, damage description, estimated repair cost), compare these against the claim's structured fields, and flag mismatches automatically. If an adjuster describes "minor water damage" but the structured claim shows a $500,000 payout, that contradiction gets surfaced for immediate review.
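A toy version of that cross-check is sketched below. A real system would use an NLP model rather than keyword matching, and the severity terms and payout threshold are illustrative assumptions:

```python
# Minimal sketch: cross-check a free-text damage description against the
# structured payout field. A real system would use an NLP model; keyword
# matching here only illustrates the contradiction check.

MINOR_TERMS = {"minor", "superficial", "cosmetic"}
SEVERE_TERMS = {"total loss", "destroyed", "structural"}

def severity_from_text(description: str) -> str:
    text = description.lower()
    if any(term in text for term in SEVERE_TERMS):
        return "severe"
    if any(term in text for term in MINOR_TERMS):
        return "minor"
    return "unknown"

def flag_mismatch(description: str, payout: float, minor_cap: float = 50_000) -> bool:
    """Flag claims whose narrative says 'minor' but whose payout is large."""
    return severity_from_text(description) == "minor" and payout > minor_cap

print(flag_mismatch("Minor water damage to basement drywall", payout=500_000))  # True
print(flag_mismatch("Minor water damage to basement drywall", payout=8_000))    # False
```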

Ontologies and Semantics: Building the Language of Insurance Data

Data quality ultimately depends on shared understanding. The same term—"policyholder," "coverage," "claim"—might mean slightly different things across different systems, departments, or companies. This semantic ambiguity creates a ceiling on how much automation and AI can help. You can throw perfect algorithms at messy semantics, but the output remains limited. This is where business ontologies become transformative.

What Makes Ontologies Different from Traditional Data Models

An ontology is fundamentally different from a traditional data model or database schema. Where a schema defines table structures and fields, an ontology captures meaning. It specifies not just what fields exist, but what they mean, how they relate to business concepts, what synonyms matter, and what business rules should apply. In insurance, an ontology might define that "policyholder" connects to specific attributes (name, address, risk profile), that it relates to policies through an "owns" relationship, and that certain business rules apply (a policyholder must be of legal age, must have a valid address, etc.).
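A minimal sketch of that idea in plain Python follows. A production ontology would typically be expressed with OWL/RDF tooling, and the concept, attribute, and rule names here are illustrative:

```python
# Minimal sketch of ontology-style definitions in plain Python.
# Concept, attribute, and rule names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    attributes: list[str]
    synonyms: list[str] = field(default_factory=list)

@dataclass
class Relationship:
    subject: str
    predicate: str
    obj: str

# Concepts capture meaning, not just fields.
policyholder = Concept(
    name="Policyholder",
    attributes=["name", "address", "risk_profile"],
    synonyms=["insured", "customer", "member"],
)
policy = Concept(name="Policy", attributes=["policy_number", "policy_status"])

# Relationships connect concepts ("a policyholder owns a policy").
owns = Relationship(subject="Policyholder", predicate="owns", obj="Policy")

# Business rules attach to concepts rather than to any one system's schema.
def legal_age_and_address_rule(record: dict, minimum_age: int = 18) -> bool:
    """Ontology rule: a policyholder must be of legal age and have an address."""
    return record.get("age", 0) >= minimum_age and bool(record.get("address"))

print(legal_age_and_address_rule({"age": 34, "address": "12 Main St"}))  # True
```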

Ontology-Powered Data Integration

Here's where ontologies enable something previously difficult: intelligent data integration. When ingesting data from multiple systems, traditional approaches rely on explicit mappings—field A from system one maps to field B in the warehouse. If a new data source arrives, someone must manually create all new mappings. With semantic ontologies, different systems can describe their data in terms of common business concepts. A policy administration system might use field "POL_STAT" while a claims system uses "CLAIM_POLCY_STATUS," but both can be mapped to the ontology's "policy_status" concept. This semantic layer enables automatic discovery and integration.
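The sketch below shows what such a semantic layer can look like in miniature: source fields from two hypothetical systems are mapped to shared ontology concepts so downstream code queries a single vocabulary. The mapping table itself is illustrative:

```python
# Minimal sketch of a semantic layer: source fields from different systems are
# mapped to shared ontology concepts, so downstream code uses one vocabulary.
# The systems, field names, and mapping table are illustrative.

FIELD_TO_CONCEPT = {
    ("policy_admin", "POL_STAT"): "policy_status",
    ("claims_system", "CLAIM_POLCY_STATUS"): "policy_status",
    ("policy_admin", "POL_NO"): "policy_number",
    ("claims_system", "POLICY_REF"): "policy_number",
}

def to_canonical(system: str, record: dict) -> dict:
    """Translate a raw record into ontology terms, dropping unmapped fields."""
    canonical = {}
    for field_name, value in record.items():
        concept = FIELD_TO_CONCEPT.get((system, field_name))
        if concept is not None:
            canonical[concept] = value
    return canonical

print(to_canonical("policy_admin", {"POL_NO": "12345", "POL_STAT": "ACTIVE"}))
print(to_canonical("claims_system", {"POLICY_REF": "12345", "CLAIM_POLCY_STATUS": "ACTIVE"}))
# Both produce: {'policy_number': '12345', 'policy_status': 'ACTIVE'}
```

Onboarding a new source then becomes a matter of extending the mapping rather than building another point-to-point pipeline.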

The "Enterprise Brain": Knowledge Graphs Built on Ontologies

The most sophisticated implementations combine semantic ontologies with graph database technology to create what some describe as an "enterprise brain"—a knowledge graph that captures not just the data, but the meaning and relationships within the business domain. This goes far beyond traditional data warehouses. In a knowledge graph, entities (customers, policies, claims, agents, providers) become nodes, and relationships become edges. Rather than storing "John Smith has policy 12345," a knowledge graph stores this as a relationship with properties: John Smith (subject) — owns (relationship) — Policy 12345 (object).

The power becomes apparent in use cases. In claims processing, a knowledge graph can instantly answer complex questions: "Show me all claims filed by customers who have had five or more claims in the past two years AND live within 20 miles of a recent catastrophic event AND have files with identical repair cost estimates in the past six months." This type of query, which might take hours or days in traditional systems, executes in seconds against a well-designed knowledge graph.
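For a sense of how such a query works, here is a minimal in-memory sketch using Python and networkx rather than a production graph database. The node attributes, thresholds, and tiny data set are illustrative, and geographic distance is precomputed to keep the example self-contained:

```python
# Minimal sketch of the claims query above against an in-memory graph.
# Built with networkx; attributes, thresholds, and data are illustrative,
# and distance to the catastrophic event is precomputed for simplicity.

from collections import Counter
import networkx as nx

g = nx.MultiDiGraph()

# Entities become nodes with properties...
g.add_node("cust:john_smith", kind="customer", miles_from_cat_event=12)
for i, estimate in enumerate([4200, 4200, 9800, 4200, 7500], start=1):
    claim_id = f"claim:{i}"
    g.add_node(claim_id, kind="claim", repair_estimate=estimate, year=2025)
    # ...and relationships become edges.
    g.add_edge("cust:john_smith", claim_id, rel="filed")

def suspicious_customers(graph, min_claims=5, max_miles=20, min_identical=3):
    hits = []
    for node, attrs in graph.nodes(data=True):
        if attrs.get("kind") != "customer":
            continue
        claims = [graph.nodes[c] for _, c, d in graph.out_edges(node, data=True)
                  if d.get("rel") == "filed"]
        if len(claims) < min_claims or attrs.get("miles_from_cat_event", 1e9) > max_miles:
            continue
        estimate_counts = Counter(c["repair_estimate"] for c in claims)
        if max(estimate_counts.values()) >= min_identical:
            hits.append(node)
    return hits

print(suspicious_customers(g))  # ['cust:john_smith']
```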

Overcoming Implementation Challenges

The journey toward AI-enabled data quality and semantic ontologies isn't frictionless. Three categories of challenges emerge consistently: cultural, regulatory, and technical.

Culturally, data teams and business stakeholders don't always have the same priorities. Data governance teams focus on compliance and consistency. Business units want speed and flexibility. These incentives can conflict. The solution involves establishing cross-functional collaboration frameworks where compliance, risk, and business units align on shared governance structures and standardized communication. When they do, institutions achieve faster issue resolution, stronger controls, and smoother product delivery.

Regulatory challenges run deep. Regulators now scrutinize AI extensively, particularly around explainability. A "black box" model that makes decisions without showing its reasoning creates compliance risk, so organizations may need documentation across all models to address it.

Technically, many organizations face fragmented systems. Core data lives in legacy on-premises systems running alongside newer cloud platforms. Building semantic ontologies and knowledge graphs across this fragmented landscape requires careful architecture. The industry is gradually standardizing on cloud data platforms such as Snowflake, Databricks, Palantir, or BigQuery, which offer better scalability for knowledge graph implementations.

The Convergence: AI and Ontology Working Together

The most exciting developments emerge when AI and semantic ontologies combine. AI systems can learn from data at scale and identify patterns humans would miss. Semantic ontologies provide the business context that AI systems need to make those patterns meaningful. Together, they create a feedback loop: ontologies guide how AI models interpret data, and AI systems suggest refinements to ontologies based on what the data reveals. This is fundamentally more powerful than either approach alone and creates immense value for any data organization.


Pramod Misra

Profile picture for user PramodMisra

Pramod Misra

Pramod Misra is the director of data analytics and AI automation at Snellings Walters Insurance Agency.

He has over 20 years of hands-on experience in data analytics, artificial intelligence, and machine learning model development, including as chief AI officer at a healthtech startup and at Vodafone, Novartis, Takeda, P&G, Tune Protect Insurance, and Nuclear Power Corp.

How Auto Insurers Can Strengthen Retention

Rising premiums are driving auto carriers to transform roadside assistance into a strategic retention tool through technology partnerships.

Man Checking Car Engine

In today's volatile market, auto insurers and their policyholders are facing a shared challenge. Drivers are struggling with rising costs and shopping for auto insurance coverage at record rates. Insurers, feeling this pain, are searching for sustainable ways to earn and strengthen customer loyalty. In this environment, the key to retention is no longer just about price – it's about service.

While dealing with a major claim is rare for most drivers, a flat tire, dead battery, or lockout is a far more common and often stressful event. This presents a key opportunity for insurers as strategic roadside assistance evolves from a simple add-on to a powerful tool for brand-building. By delivering a reliable, positive experience to policyholders in a moment of need, insurers can build the trust required to keep policyholders happy and engaged with their brand, turning a moment of stress into a moment of truth.

A High-Impact Retention Tool

Policyholders may shop for price, but they stay for trusted service. According to J.D. Power, while high premiums can decrease satisfaction, these effects can be offset by a high level of trust that an insurer will deliver when needed. The impact on retention is profound: 90% of policyholders with high trust in their insurer are likely to renew.

Roadside assistance provides a tangible, low-cost opportunity to build that trust. Additional research from J.D. Power in the 2025 Insurance Playbook on Customer Retention has shown that customers who have roadside assistance with their auto insurer have a higher overall satisfaction score of 672 – nearly a 30-point increase over those who don't. For a nominal annual cost, often less than $20 per vehicle a year, an insurer can resolve a stressful and frustrating situation, directly influencing a client's perception of their value and offsetting negative sentiment from premium hikes. This transforms roadside assistance from serving as a cost center into a strategic asset for optimizing customer lifetime value.

A New Lens for Viewing Strategic Partners

To get the most out of roadside assistance, auto insurance carriers need to rethink the traditional vendor relationship. The old model of simply outsourcing the service is being replaced by something far more valuable: a deeper, strategic partnership that's dedicated to policyholder satisfaction. The goal is to find a partner who doesn't just handle calls but elevates your overall brand experience through a powerful combination of professional experts and an intelligent technology platform.

For instance, a modern dispatching platform can connect everyone – drivers, roadside service providers, and insurers – in a single, dynamic ecosystem. Such a platform is more than just a dispatching tool; it's a learning machine.

Every interaction, from a simple request to a completed service, feeds data back into the system. This information is used to constantly refine and improve every aspect of the roadside service operation. This process delivers two key advantages:

  1. Smarter, faster service: The platform learns the most efficient ways to operate, optimizing everything from matching the right provider to dispatch times. This ensures a consistently superior policyholder experience.
  2. Actionable carrier insights: The data provides a wealth of insights for auto insurance carriers. It helps you understand the driver's needs better, which can directly inform how you improve product offerings, boost retention, and even acquire new clients.

As more providers and insurers join this ecosystem, it creates a powerful "network effect." More users means more data to inform service. More data means a smarter system. And a smarter system delivers a better experience for everyone involved.

This approach transforms roadside assistance from a simple transactional cost into a deeply integrated, brand-building asset. It's about plugging into a partnership that leverages technology to create an unbeatable policyholder experience, every single time.

How Carriers Can Capitalize

As leading carriers recognize the opportunity to leverage roadside assistance as a powerful retention tool, they'll need a plan to unlock its value. Here are three ways a roadside assistance program can help transform their relationship with policyholders:

  1. Promote awareness to build loyalty. Don't let roadside assistance be a hidden benefit. These service events are frequent, low-stakes opportunities to connect with policyholders. Proactively feature this service in all communications – from welcome kits to digital portals. Use QR codes on insurance cards to link directly to your app, preventing risky DIY fixes and turning a moment of distress into a positive, loyalty-building experience.
  2. Deliver a modern, seamless experience. Today's consumers expect digital convenience. Offer a roadside assistance experience that mirrors ride-sharing apps, with intuitive service requests, real-time GPS tracking, and automated chatbots. A transparent, user-friendly digital platform not only solves a problem but also enhances your brand's reputation and deepens customer engagement with your entire digital ecosystem.
  3. Listen to what your customers are saying through your partners. A positive roadside service event is one of your most effective tools for increasing customer retention and your Net Promoter Score (NPS). By leveraging a powerful dispatching platform, you can capture immediate customer feedback with an impressive 50% survey response rate. This creates a direct pipeline for identifying brand promoters who can provide referrals and testimonials. More broadly, this data offers a goldmine of insight, revealing exactly what your customers value and helping you perfect your products and marketing strategies.

As the competitive insurance market continues to evolve, forward-thinking carriers are recognizing roadside assistance as more than an add-on service – it's a strategic differentiator that serves as their most powerful retention tool. By leveraging smart partnerships and technology-driven solutions, insurers can turn moments of stress into loyalty-building opportunities, ensuring policyholders see their insurer as the trusted partner they'll never want to leave.


Henry Stroup

Profile picture for user HenryStroup

Henry Stroup

Henry Stroup is vice president of client success at Agero.

He has more than 20 years of leadership experience in the roadside assistance industry. Before joining Agero, he led strategic planning and contact center operations for a specialty roadside assistance network serving the classic car and recreational vehicle markets.

He holds a bachelor of business administration from the University of Alaska Fairbanks and has served on the boards of the RVDA Education Foundation, the RVIA Taskforce for Quality Excellence, and the Young Presidents’ Organization (YPO).

Advice for Insurers as Ransomware Evolves

Ransomware's evolution from organized supply chains to fragmented chaos is rewriting insurers' assumptions.

Hand Holding a USB Flash Drive with Key Drawn on it

Ransomware has always been a moving target but is now entering a period of volatility unlike anything we've seen before. Tactics are shifting rapidly, tools are becoming more sophisticated and more widely available, and the threat-actor landscape is splintering into a chaotic mix of groups, affiliates and opportunistic newcomers. For insurers, this fragmentation and instability is rewriting assumptions about predictability, frequency and severity.

To understand why ransomware feels more volatile than ever, it's important to start with how organized these operations once were. Historically, major threat groups behaved with a degree of predictability. Their operations had a clear methodology and often resembled a supply chain. One group identified or acquired a zero-day vulnerability; another specialized in gaining credentials and access to victims' networks; a ransomware group purchased that access and deployed their malware; and another entity handled negotiations, payment facilitation and hosting on data-leak sites. While criminal, these actors operated within consistent roles.

Today, that methodology and structure have fractured. Law-enforcement pressure, internal disputes and simple profit incentives have splintered once-dominant ransomware groups. Nowadays there is not just one geography or one group doing everything from start to finish; it's now a combination of parties. Coupled with this, their tools, particularly the ransomware variants themselves, have leaked into the wild or been deliberately sold off. As a result, sophisticated malware that was once tightly controlled is now available to operators with minimal skill. Cheaper, less advanced variants such as Dharma and Crysis proliferate broadly, while more refined strains like Akira or LockBit remain selectively distributed, but even those find their way to multiple groups.

This "plug-and-play" ecosystem means that a threat actor with little technical capability can now operate at a level previously reserved for elite cybercriminals. The result is a wave of attacks that are increasingly unpredictable in both frequency and quality. Some are clumsy and quickly detected, while others unfold with alarming precision.

At the same time, attackers have become far more agile once inside an environment. Earlier ransomware operations often unraveled when attackers encountered unexpected security controls. Today, threat actors pivot rapidly. If endpoint detection and response (EDR) tools block one path, adversaries switch tactics, attempt to disable protections or even infiltrate the security tools themselves.

In a recent Akira-related incident, adversaries gained access to a victim's SonicWall EDR environment, used it to disable protections across the entire network and maintained persistent access. A lesser threat actor would have been stopped at the first hurdle. Today's operators adapt with remarkable speed.

This agility is compounded by AI-driven malware development. Threat actors are now capable of generating malware tailored to a victim's specific security gaps. By feeding reconnaissance data into AI coding engines, attackers can produce bespoke code that evades detection. As a result, EDR tools lose some of their efficacy, and traditional antivirus can become entirely ineffective.

AI-generated phishing is also affecting attacker capability. Previously, many phishing attempts were identified by grammar and spelling errors. Today, threat actors can generate credible, fluent communications that mimic native language use, making social engineering exponentially harder to detect. The potential for automated scaling, for example one threat actor deploying hundreds or thousands of simultaneous phishing attempts, also poses a challenge.

While tools and execution are evolving, so too are the extortion tactics, with threat actors now using multifaceted pressure strategies. When improved backups reduced victims' need for decryption keys, threat actors began stealing data and threatening to leak it to cause reputational harm. And when regulators and law enforcement discouraged companies from paying for data-deletion promises (promises criminals often broke anyway), attackers escalated further.

Recent incidents also show threat actors emailing victims' employees and customers directly, claiming the organization "does not care about your data," or triggering every printer in an organization to output ransom notes - ensuring employees, customers and potentially the media know about the breach. Even more concerning is a trend toward re-attacks, where threat actors revisit a network weeks after an incident to exploit newly discovered gaps and re-encrypt systems, leveraging continuing disruption as a negotiation tool and providing incentives to victims to pay the ransom.

This evolution raises the stakes for incident response and negotiation. Speed, visibility, and technical capability are more critical than ever - and so is insurer preparedness.

For insurance and risk professionals, several priorities stand out in this new environment.

1. Baseline controls are still non-negotiable

Multifactor authentication, managed EDR and reliable offline or immutable backups remain the strongest defenses against ransomware and help to ensure business continuity. These controls buy the time and visibility needed to detect intrusions early and recover without paying a ransom. But they must be properly managed. Too many insureds deploy security tools without the professional oversight required for them to function effectively, just to satisfy an underwriting requirement.

2. Deploy advanced protections

Beyond baseline controls, insureds should also adopt least-privilege models, zero-trust architectures and AI-enhanced security tools that dynamically detect "known good" and "known bad" behavior. Historically, organizations avoided these approaches due to complexity, but modern implementations are increasingly manageable and fill critical gaps left by traditional defenses.

3. Prepare for negotiation scenarios that are more aggressive and less predictable

Extortion is no longer a one-dimensional threat. Insurance companies must partner with response teams experienced in managing multi-vector pressure tactics, from public-facing harassment to second-wave attacks. These partners are capable of advising clients through highly fluid situations.

The ransomware landscape is transforming rapidly, driven by fragmentation, automation and unprecedented agility among threat actors. For insurers and their insureds, adaptability is now a core competence. Those who evolve their incident-response strategies alongside the threat landscape will be far better positioned to protect both their clients and their own business.

Embedded Insurance's Next Leap

Embedded insurance is transforming from a distribution tactic to a customer experience strategy as insurers prioritize seamless, friction-free protection.

A Person Using a Laptop Holding a Credit Card

Residents don't want to search high and low for protection; they expect it to appear where it's most relevant. Insurance has always been about confidence, but during a digital buying journey, confidence depends on timing, relevance, and the ease with which protection blends into the experience itself.

That expectation is reshaping strategy across the industry. The State Of Embedded Insurance 2024 found that 94% of insurers view embedded insurance as a critical part of their future strategy. It's clear that insurers are no longer treating embedded insurance as only a distribution tactic but are treating it as a customer experience (CX) function.

Embedded insurance isn't new. What is new is the maturity of the technology and partnerships behind it. The next step is deepening trust and reducing friction at the emotional peaks of the journey.

TWO MOMENTS THAT MATTER THE MOST

In CX, timing is everything. Embedded insurance delivers its biggest impacts at two places in the customer journey: at checkout and right after purchase.

At checkout, customers are already in decision mode. They're focused and ready to act. When protection is offered right there, without extra steps or redirects, it feels like a natural extension of the transaction, rather than a separate sale. Subtle integration is essential. Research from BCG found that "conversion rates for traditional insurers that have embraced this model are already higher than for separate insurance for the same products," reinforcing the power of being present at the right moment.

The second moment is right after purchase, when the customer starts using what they bought. That's when peace of mind kicks in and becomes tangible. Knowing they're covered from day one reduces post-purchase anxiety and builds trust between buyer and brand. This connection ties into measurable CX gains with higher engagement and improved retention.

These moments also help explain why embedded insurance is expanding so quickly. As smoother, better-timed experiences become the norm, adoption rises. The embedded insurance market is projected to grow from $143.88 billion in 2025 to more than $800 billion by 2032, a CAGR of 28%. This steep trajectory is fueled partly by higher conversion rates and growing customer preferences for protection that appears naturally within the journey.

INSURANCE THAT FEELS LIKE CARE, NOT COMMERCE

For embedded insurance to actually enhance the CX, it has to feel like part of the service. That starts with seamless integration: no pop-ups, no redirects, and no disruption. Protection should appear inside the same interface the customer already trusts.

Clarity matters just as much as placement, so straightforward pricing, quick activation, and simple-language explanations reduce the mental load that often accompanies insurance decisions. The experience also extends beyond the sale. Claims, renewals, and continuing support must feel as intuitive as the initial purchase; otherwise, trust gained in the beginning evaporates quickly.

Four levers determine whether embedded insurance feels like care:

  • Timing: Arriving at the ideal moment in the right emotional window. Too soon and it's irrelevant; too late, and the customer has already mentally moved on. But hit that perfect moment and attention can quickly become willingness.
  • Personalization: This revolves around contextual relevance and offering coverage that fits the user's situation without demographic stereotypes or generic add-ons.
  • Speed: Instant activation reinforces confidence; waiting undermines the very safety insurance is meant to provide.
  • Claims: The ultimate test. A smooth, low-effort claim can turn a customer into a word-of-mouth marketer.

For example, a tenant signs a new lease through a property management portal. They're immediately directed to a co-branded insurance portal to either purchase coverage or upload proof of an existing policy. The transition is simple. If purchasing in the insurance portal, the tenant can then select appropriate limits or choose coverage that protects their personal belongings. And if a pipe bursts after move-in, the tenant can upload a few photos through their digital account and submit a claim within minutes, guided through each step instead of navigating stressful paperwork alone.

These moments define the experience much more than policy language. When embedded insurance removes friction, both emotional and practical, it stops feeling like an upsell and starts feeling like protection. The impact is clear in customer metrics. A 2024 study found a 17-point increase in customer satisfaction with digital insurance claims, driven largely by improvements in the range of services offered on mobile apps and websites, as well as visual appeal. Clearly, showing up with the right design and at the right time can shape customer sentiment at critical moments.

CX LIVES OR DIES IN THE PARTNERSHIP LAYER

No insurer or platform can deliver embedded insurance on its own. And any embedded insurance experience can fall apart if the system behind it isn't prepared and aligned. CX is co-owned: the insurer, the distribution platform, and the underlying technology all shape the moment a customer is offered protection. The strongest partnerships don't feel like transactional business deals; they operate like shared problem-solving.

A BCG report says that "to make the most of their opportunities, insurers will need to support and collaborate extensively with their business partners to become the provider of choice." This means teams jointly determine where insurance should appear in a workflow and how it should feel when it does. Technology, design, and messaging must blend seamlessly with the platform's brand so that customers only see a single experience, not two companies stitched together.

All of this work happens long before the first customer sees an offer. During discovery, both sides typically map the data already available in the platform's journey, such as lease information and account details, to the minimum information an insurer needs to provide a quote. When this is done well, eligibility questions shrink, quoting steps become simpler, and drop-off decreases. Clearer language replaces legal jargon, and forms become shorter and more intuitive. This way, the partnership shapes the ease customers feel long before they think about making a claim.

Customers remember the experiences that remove fear, not the ones that add friction. So the next step for embedded insurance will come from insurers and platforms working in sync and designing for real human moments. The future of insurance hinges on making every step intuitive, predictable, and easy at every touchpoint.

Using Serial Acquisitions to Turbocharge Growth

With organic growth softening, insurance agencies are turning to serial acquisitions to accelerate expansion and build market dominance.

Skyscrapers in City Against Sky

One of the best ways for insurance agencies to grow is through acquiring or consolidating with another agency or book of business, especially since organic growth has softened. And many agency owners are finding that repeating the process – becoming serial acquirers – reaps strong benefits for their businesses if done correctly and effectively. Before heading down that path, it's important to understand the keys to making successful acquisitions and smart financing options.

General benefits of serial acquisitions

Regardless of the industry, serial acquisitions can provide the following advantages for the purchaser:

  • Economies of scale – Serial acquisition allows overhead costs to be spread over progressively larger revenue streams. This advantage grows as the number of acquired firms increases.
  • Instant revenue increases – With each new acquisition comes new revenue. These revenue increases can fund technology upgrades, enhanced marketing efforts, and additional acquisitions.
  • Higher company valuations – A Kearney study has shown that serial acquirers create greater shareholder value than companies pursuing fewer acquisitions. In addition, their success leads investors to assign higher values to serial acquirers.

Benefits specific to insurance professionals

In addition to the general advantages, there are several benefits of serial acquisition specific to the insurance industry:

  • Ability to reach higher carrier bonus levels and higher commission levels – Most carriers offer incentives for agencies to meet certain levels of premium. With serial acquisitions, an agency can reach higher levels faster than through organic growth alone.
  • Easier cross-selling and multi-lining – With multiple agencies connected through central ownership, clients can be offered new products and bundles, producing organic growth from inorganic growth.
  • Immediate cash-based revenue – A buying agency takes on the predictable cash flows of the purchased business. With these liquid assets on hand, the buyer has more choices regarding how to continue growth and expansion.
  • Access to talent – The shortage of new talent entering the insurance field is well documented. Serially acquiring other agencies allows a business to continually bring on experienced team members who can produce from day one. However, it's important that they also fit into the overall culture.
  • Access to specialized knowledge and technology – Serial acquisitions are a great way to build up an agency's tech portfolio and expand into new service areas. With a careful eye to each target company's unique strengths, a serial acquirer can assemble a formidable agency with the ability to provide a wider range of services to a growing clientele.

Steve DeLuca, founder and owner of the DeLuca Agency, has successfully acquired more than 10 agencies. He advises agency owners who are just getting started with acquisitions to look at smaller agencies. As his company started to grow, he didn’t want anything “too big that could change the culture of our current business, and something that was not too difficult to roll into our book of business at the time.” Other key factors for good acquisitions, according to DeLuca, are low loss ratios, profitability, and book rolls.

Seven keys to successful serial acquisitions

While there are many advantages of acquisitions (serial or individual), it's important to go into the process with one's eyes open. Not every acquisition opportunity is going to be a good fit, so it's wise to keep several points in mind when evaluating a potential acquisition target:

  1. Foster cultural alignment - Adding more agencies to a portfolio is most successful when both have a similar workplace environment and are built on the same principles and goals.
  2. Retain + maintain human capital - As mentioned previously, keeping talented employees on both sides of the acquisition should be a focus - those who can drive success and are flexible in a new environment.
  3. Maintain strategic focus – An acquisition only makes sense if it fits with the agency's overall strategic focus.
  4. Develop expertise – Acquisitions require specialized knowledge. If an agency is planning to pursue serial acquisitions, it's worthwhile to have a dedicated individual or team with the interest and knowledge to take the lead on investigating opportunities and structuring deals. DeLuca says: “You’ve got to have a mentor, someone that you can talk to, someone that can bring you through the process, because it can be very stressful. You’ve got to have a good attorney experienced in mergers and acquisitions. You’ve got to have your purchase agreements and all your forms in place, and you’ve got to have a good banker.”
  5. Be willing to walk away – A deal that sounds great at the start may not look so good as time goes on. An agency owner needs to have the discipline to walk away from a deal that throws up red flags or doesn't prove to match the buyer's objectives.
  6. Plan integration early – An acquisition is only successful if the companies involved integrate well after the sale. Plans for integration need to start early in the negotiations so there are no surprises when the companies come together.
  7. Have a goal – Acquisitions should be part of an overall strategy for growth. Setting a goal for growth – e.g., $1 million in revenue per year – can help a buyer evaluate potential targets. This strategy avoids wasting resources on deals too small to matter and keeps the buyer from becoming overwhelmed by targets too big to integrate successfully into the current business. (A minimal screening sketch follows this list.)
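To make the goal-setting point concrete, here is a minimal, hypothetical screening sketch in Python. The thresholds, field names, and candidate figures are illustrative assumptions, not DeLuca's actual criteria or any real agency's numbers; it simply filters candidate books by loss ratio, profitability, and size against a stated revenue goal.

```python
# Hypothetical acquisition-target screen. Thresholds and data are
# illustrative only; adjust to your own goals and due diligence criteria.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    annual_revenue: float   # commission revenue, USD
    loss_ratio: float       # incurred losses / earned premium
    ebitda_margin: float    # profitability proxy

def screen(candidates, revenue_goal=1_000_000, max_single_share=0.5,
           max_loss_ratio=0.60, min_margin=0.20):
    """Keep targets that are profitable, have low loss ratios, and are
    neither trivially small nor large enough to dominate the growth goal."""
    kept = []
    for c in candidates:
        too_big = c.annual_revenue > revenue_goal * max_single_share
        too_small = c.annual_revenue < revenue_goal * 0.05
        if (c.loss_ratio <= max_loss_ratio and c.ebitda_margin >= min_margin
                and not too_big and not too_small):
            kept.append(c)
    return kept

if __name__ == "__main__":
    book = [
        Candidate("Agency A", 450_000, 0.55, 0.25),
        Candidate("Agency B", 900_000, 0.48, 0.30),   # too big vs. goal
        Candidate("Agency C", 30_000, 0.40, 0.35),    # too small to matter
    ]
    for c in screen(book):
        print(f"{c.name}: revenue ${c.annual_revenue:,.0f}, "
              f"loss ratio {c.loss_ratio:.0%}")
```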
Financing options

Agencies with abundant liquid capital may choose to pay cash outright. However, that approach limits how much can be spent on other areas that drive growth (such as technology investments).

Loans from the Small Business Administration (SBA) are another option. With SBA loans, the borrower's personal assets are often used as collateral, and the paperwork for approval can be daunting. Even so, SBA loans can be a good option for borrowers whose credit is less than ideal.

Financing through specialty lenders who focus on the insurance industry is an appealing choice. These lenders – in contrast to most traditional banks – understand the nature of the insurance industry. They will often use the projected increase in cash flow as collateral for the loan, rather than encumbering other assets.
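To illustrate how cash-flow-based financing might be evaluated, here is a rough Python sketch. The interest rate, term, and cash flow figures are hypothetical assumptions, not quotes from any lender; it computes a standard amortized payment and checks whether the acquired book's projected cash flow covers debt service with a cushion.

```python
# Rough debt-service check for an acquisition loan. All figures are
# hypothetical; real underwriting by a lender will differ.
def monthly_payment(principal, annual_rate, years):
    """Standard amortized loan payment."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

def debt_service_coverage(acquired_annual_cash_flow, principal,
                          annual_rate=0.09, years=10):
    """DSCR = cash flow available / annual debt service."""
    annual_debt_service = monthly_payment(principal, annual_rate, years) * 12
    return acquired_annual_cash_flow / annual_debt_service

if __name__ == "__main__":
    # e.g., a $2M loan against a book projected to throw off $350K a year
    dscr = debt_service_coverage(350_000, 2_000_000)
    print(f"Projected DSCR: {dscr:.2f}")  # lenders often look for >= 1.25
```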

Summary

“In today's market, it's very hard to grow organically… so if you're going to grow, you've got to get into mergers and acquisitions,” DeLuca said.

Serial acquisition can be a powerful way to turbocharge an agency's growth. It requires research, focus, and planning, but it can provide a big payoff when well managed.


Rick Dennen

Rick Dennen is the founder and chief executive officer of Indianapolis-based Oak Street Funding, a First Financial Bank company.

The firm offers customized loan products and services for specialty lines of business, including certified public accountants, registered investment advisors and insurance agents nationwide.

Rate Filing Reimagined

Fragmented rate filing processes constrain P&C insurers, prompting data integration and GenAI solutions.

Accelerating P&C product and rate filing is critical to meet dynamic market demands and regulatory requirements. Traditional processes are constrained by manual handoffs, fragmented data, and slow approvals, resulting in delayed product launches and constrained profitability. This article explores how data, GenAI, and Agentic AI can transform rate filing—enabling parallel execution, automated testing, and intelligent workbenches for competitive analysis.

By adopting best practices in architecture, automation, and governance, insurers can compress cycle times, enhance pricing sophistication, and improve compliance. The approach outlined empowers carriers to respond swiftly to market shifts, optimize risk management, and gain a decisive edge.

Problem Statement: What?

Property and casualty (P&C) insurers in the United States face a complex and fragmented regulatory environment when filing new products or rates. The average time to approve rate filings increased by 40% nationwide between 2018 and 2024 for homeowners' products. The result is delayed market response, constrained profitability, and missed opportunities to reflect a changing risk posture (for example, California's Proposition 103 limits insurers to basing rates on historical losses rather than on current, forward-looking predictive models).

While these regulatory complexities add to the delays in rate approvals, insurers also face internal challenges. These are magnified by fragmented data assets affecting rate development and indications, weak or limited integration between policy administration systems and rating engines, manual scenario generation and validation across rating workflows, too many handoffs, and manual state filing preparation.

Understanding Regulatory Complexity in State Filing

The regulatory complexity arises from several related factors:

1. State-Based Regulation and Legal Diversity:

Insurance regulation is primarily state-based, with each state legislature enacting its own rating laws, standards, and filing requirements. These laws may be based on NAIC model laws (e.g., prior approval, file-and-use, use-and-file, flex rating), but significant variation persists in definitions, processes, and compliance expectations across states. Insurers must navigate a patchwork of statutes, administrative rules, and case law, often requiring tailored filings for each jurisdiction.

2. Multiplicity of Filing Types and Entities:

Filings may pertain to rates, rating rules, policy forms, underwriting rules, or combinations thereof. Entities making filings include insurers, advisory organizations, and third-party filers, each subject to different rules and authorities depending on the state and product category.

3. Rigorous Data and Actuarial Standards:

Regulators require extensive supporting data for rate filings, including historical premium and loss data, actuarial analysis, and justification for rating factors. Standards mandate that rates must not be excessive, inadequate, or unfairly discriminatory, but interpretations and required methodologies (e.g., loss ratio vs. pure premium methods, credibility standards, catastrophe modeling) vary by state. Data quality, segregation, and rate adjustment protocols are scrutinized, and regulators may require multi-year data, trend analyses, and loss development triangles.

4. Procedural Complexity and Review Process:

The filing process involves multiple steps and stakeholders: filers must ensure completeness and compliance with state-specific requirements, often using tools like SERFF for electronic submissions. Reviewers conduct detailed checks for statutory and regulatory compliance, issue objection letters for deficiencies, and may require hearings or amendments. The process is iterative, and delays often result from incomplete filings or back-and-forth correspondence.

5. Policy Form Review and Public Policy Considerations:

Beyond rate filings, policy forms are subject to rigorous review for compliance with mandated provisions, prohibited clauses, readability standards, and consistency with pricing memoranda. States may require additional documentation, such as actuarial memoranda or advertising materials, and enforce unique requirements for specific lines of business.

Internal Challenges in Rate Change Management

Rate change management in P&C insurance is challenged by fragmented data sources and by unclear, disjointed data and business requirements for rate development and analysis. Insurers must reconcile information from underwriting, claims/loss history, reinsurance, and market trends, which demands extensive data wrangling and preparation. Latency in accessing third-party data and manual handoffs between product, actuarial, and IT teams further slow the process, leading to rework and misalignment.

The absence of integrated platforms for hypothesis development, rate workups, and filing results in inefficiencies and extended cycle times. Compliance steps are repeated for each state, and technical requirements for integration are often relayed indirectly, compounding delays. Manual testing and architectural gaps—such as non-stateless rating engines and scattered product management logic—impede data-driven decision-making and actuarial rigor.

Dislocation analysis, a key actuarial process, is time-consuming because of sequential, repetitive workflows and limited automation. The challenge is to quickly identify segments with disrupted rates and adverse loss ratios, since variable-by-variable reviews are essential but slow. Without robust analytical capabilities, targeted adjustments are delayed, increasing regulatory risk and reducing pricing effectiveness.
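To make the idea concrete, here is a minimal Python sketch of segment-level dislocation screening. The factor tables, segments, and thresholds are hypothetical assumptions, not any insurer's actual rating plan; it simply reprices a small book under old and proposed factors and flags segments whose premium change or loss ratio exceeds chosen limits.

```python
# Minimal dislocation screen: reprice segments under old vs. proposed
# factors and flag outliers. All factors and thresholds are hypothetical.
OLD_FACTORS = {"territory_A": 1.00, "territory_B": 1.10, "territory_C": 1.25}
NEW_FACTORS = {"territory_A": 1.05, "territory_B": 1.02, "territory_C": 1.45}

# (segment, base premium, incurred losses) -- illustrative data only
BOOK = [
    ("territory_A", 1_200_000, 780_000),
    ("territory_B", 900_000, 700_000),
    ("territory_C", 600_000, 540_000),
]

def dislocation_report(max_change=0.10, max_loss_ratio=0.75):
    """Flag segments whose rate change or loss ratio breaches limits."""
    flagged = []
    for segment, base_premium, losses in BOOK:
        old_prem = base_premium * OLD_FACTORS[segment]
        new_prem = base_premium * NEW_FACTORS[segment]
        change = new_prem / old_prem - 1
        loss_ratio = losses / old_prem
        if abs(change) > max_change or loss_ratio > max_loss_ratio:
            flagged.append((segment, change, loss_ratio))
    return flagged

if __name__ == "__main__":
    for segment, change, lr in dislocation_report():
        print(f"{segment}: premium change {change:+.1%}, loss ratio {lr:.0%}")
```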

How to Bridge the Internal Challenges?

To accelerate and improve product/rate filing for personal auto and property, insurers must deploy targeted interventions across dimensions such as Planning & Communication, Platform/Architecture, Data Controls & Trust, Automated Validation, and Rate Filing Intelligence—ensuring each stage of the value chain is robust, data-driven, and responsive to market and regulatory demands.

• Planning & Communication: Product and rate filing correlates directly with business and product strategy. Given its significance and the complexity of the regulatory environment, it requires well-architected planning and execution. More often than not, challenges or delays stem from siloed interactions, lack of integration, gaps in business and IT/data requirements, and delayed communication across teams (product management, IT, data, actuarial, state filing, etc.). Creating a digitized and integrated master rate change plan (by state, LOB, change complexity, filing type, etc.), with workflow assignments and tracking, ensures timely communication, creates transparency around timelines and dependencies, and makes it easier to identify choke points and improve execution (a minimal tracking sketch appears after this list). For example, shift left the production IT activities related to configuration and build (i.e., before DOI approval or state filing, pre-deploy with future effective dates toggled off until approval). Use emergency change approvals for minor rate updates and enforce strict SLAs/OLAs for signoffs to cut internal wait times.

• Platform/Architecture: Significant data engineering and configuration effort is spent during dislocation analysis and post-approval implementation. Address this duplicated effort by choosing rating engines (e.g., Akur8, Earnix) that offer integration accelerators and are compatible with modern policy administration systems.

• Data Controls and Trust: Agility comes from an automated data pipeline that ingests third-party data on loss characteristics from near-real-time sources (telematics, IoT), from using CAT models in rate filings to assess risks like wildfire in California (as part of a sustainable insurance strategy) to aid rate factor selection, and from an Assumptions Data Hub that captures underwriting assumptions, pricing assumptions, loss data, and the like. Similarly, replacing legacy Excel-based rate filing models with Python or modern platforms such as hx Renew provides a central, version-controlled environment that improves collaboration, simplifies workflows, and drives more accurate filings.

• Automated Validation: Leverage pricing tools and platforms such as hx Renew to automate what-if scenario analysis, automate assessment of the impact of model and assumption changes, and apply automated validation rules. Pairing provisional rate implementation with automated regression and CI/CD improves response time via elastic rating engines and enhances rate monitoring, compliance, and traceability.

• Rate Filing Intelligence: Build and leverage rate filing intelligence powered by insights from SNL insurance product filing datasets (from S&P Global Market Intelligence) to understand market strategies and industry trends, analyze peer insurers' filings and factor changes, and gain insight into DOI objections and approval/response timelines. Harnessing these insights provides a feedback loop into product strategy, planning, execution, adaptation to market conditions, and decision-making.
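As a companion to the planning bullet above, here is a minimal, hypothetical sketch of a digitized rate change plan tracker. The states, stages, dates, and SLA values are illustrative assumptions; it simply records each filing's current stage and flags items that have sat in one stage beyond an agreed SLA, which is one simple way to surface choke points.

```python
# Toy rate-change plan tracker. States, stages, and SLAs are illustrative;
# a real master plan would live in a workflow tool, not a script.
from dataclasses import dataclass
from datetime import date

STAGES = ["rate_indication", "dislocation_review", "filing_prep",
          "doi_review", "post_approval_build"]

@dataclass
class FilingItem:
    state: str
    lob: str
    stage: str
    stage_entered: date
    sla_days: int = 30  # agreed SLA/OLA for the current stage

def choke_points(plan, today=None):
    """Return items that have exceeded the SLA for their current stage."""
    today = today or date.today()
    return [item for item in plan
            if (today - item.stage_entered).days > item.sla_days]

if __name__ == "__main__":
    plan = [
        FilingItem("OH", "personal_auto", "doi_review", date(2025, 5, 1)),
        FilingItem("TX", "homeowners", "filing_prep", date(2025, 7, 10), 14),
    ]
    for item in choke_points(plan, today=date(2025, 8, 1)):
        print(f"{item.state} {item.lob}: stuck in {item.stage} "
              f"since {item.stage_entered}")
```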

Potential Benefits

Adopting integrated interventions (master rate change plans and disciplined workflows, modern rating engines and platforms, reduction or elimination of Excel-based rating models, third-party data integrations, CI/CD and automated regression, market-aware rate filing intelligence, and effective change management) can significantly increase the throughput of rate changes, strengthen rating traceability, reduce refiling and rerating cycles, and leverage richer third-party data for more responsive pricing, improving conversion, loss ratio resilience, and agility in the face of market shifts.

The Way Forward

To accelerate rate filing and product launches, insurers should assess their implementation strategy across dimensions such as people, process, technology, and data to evaluate performance and outcomes. By operationalizing the relevant interventions listed above, insurers can compress cycle times, respond swiftly to market shifts, and optimize risk management. Now is the time for industry leaders to champion these changes and drive better outcomes.


Prathap Gokul

Prathap Gokul is head of insurance data and analytics with the data and analytics group in TCS’s banking, financial services and insurance (BFSI) business unit.

He has over 25 years of industry experience in commercial and personal insurance, life and retirement, and corporate functions.

Continuous Underwriting Wants to Scale  

Insurance premiums could fluctuate daily like stock prices, but regulation and reinsurance prevent the scaling of continuous underwriting.

Ten years ago—has it been that long?—I was working with the largest insurer of churches and religious institutions in the US when we discovered they were incurring an average of $70 million in annual losses from frozen pipes.

It makes sense. Many houses of worship sit empty most of the time, and in the northern half of the country—where most of this carrier's book was concentrated—a power outage or failing furnace leads to frozen pipes, burst lines, and substantial water damage claims.

So we built an IoT service that monitored furnace activity and water pipe temperatures, complete with a call center to alert policyholders before problems escalated. It worked so well that it survives today: insureds receive annual premium discounts for enrolling, and frozen pipe claims have dropped over one-third.
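The monitoring logic behind such a service can be quite simple. Here is a minimal, hypothetical Python sketch; the thresholds, sensor fields, and alerting path are illustrative assumptions, not the actual system described above. It checks recent readings for pipe temperatures trending toward freezing or a furnace that has stopped cycling, and raises an alert for a call center to act on.

```python
# Toy freeze-risk monitor. Thresholds and readings are illustrative only;
# the real service described above used its own sensors and call center.
from dataclasses import dataclass

@dataclass
class Reading:
    pipe_temp_f: float       # water pipe temperature, Fahrenheit
    furnace_running: bool    # did the furnace cycle in this interval?

def freeze_alerts(readings, temp_threshold_f=38.0, max_idle_intervals=6):
    """Alert if pipe temps approach freezing or the furnace stays idle."""
    alerts = []
    if any(r.pipe_temp_f <= temp_threshold_f for r in readings):
        alerts.append("Pipe temperature approaching freezing")
    idle = sum(1 for r in readings if not r.furnace_running)
    if idle >= max_idle_intervals:
        alerts.append("Furnace has not cycled recently")
    return alerts

if __name__ == "__main__":
    last_six_hours = [Reading(44.0, False), Reading(41.5, False),
                      Reading(39.0, False), Reading(37.8, False),
                      Reading(36.9, False), Reading(36.1, False)]
    for alert in freeze_alerts(last_six_hours):
        print("ALERT:", alert)   # in practice, route to the call center
```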

That experience in continuous risk management sparked my fascination with the next frontier: continuous underwriting. In my view, there's no reason insurance premiums shouldn't fluctuate daily—like stock prices or utility bills—as new risk data emerges.

Frustratingly, there are exactly two reasons they don't: regulation and reinsurance.

Tesla Insurance: A Case Study in Market Inertia

Tesla Insurance launched in 2019 in California, leveraging real-time telematics data from connected vehicles to offer up to 30% lower premiums through a Safety Score algorithm that tracks behaviors like hard braking and collision warnings. The system performs real-time scoring—true continuous underwriting—and adjusts premiums monthly.
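As a rough illustration of how behavior-based scoring can feed a premium adjustment, here is a toy Python sketch. The event weights, score formula, and discount curve are invented assumptions for illustration only; they are not Tesla's actual Safety Score or pricing.

```python
# Toy telematics score-to-premium sketch. Weights and the discount curve
# are invented; they do not reflect Tesla's actual Safety Score or rates.
def safety_score(hard_brakes_per_1k_mi, collision_warnings_per_1k_mi):
    """Map driving events to a 0-100 score (higher is safer)."""
    penalty = 4.0 * hard_brakes_per_1k_mi + 6.0 * collision_warnings_per_1k_mi
    return max(0.0, 100.0 - penalty)

def monthly_premium(base_premium, score, max_discount=0.30):
    """Scale a discount linearly with the score, capped at max_discount."""
    discount = max_discount * (score / 100.0)
    return base_premium * (1.0 - discount)

if __name__ == "__main__":
    score = safety_score(hard_brakes_per_1k_mi=2.5,
                         collision_warnings_per_1k_mi=1.0)
    print(f"Score: {score:.0f}, premium: ${monthly_premium(150.0, score):.2f}")
```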

Today, Tesla Insurance operates in just 12 states. Twelve states in six years represent a glacial pace for a company built on speed, underscoring how state-by-state regulatory approvals and legal roadblocks stifle algorithmic pricing scalability. Elon Musk has joked that SpaceX will reach Mars before Tesla Insurance writes business in all 50 states—a sadly ironic quip, since the technology for continuous underwriting already exists.

Then there's reinsurance. Earlier this year, Tesla accelerated its pivot toward vertical integration by launching full in-house underwriting for California policies, marking a strategic departure from third-party partners like State National Insurance (a Markel subsidiary). This move gives Tesla direct control over risk assessment, pricing, and policy issuance—despite California's Proposition 103 restrictions on dynamic telematics pricing.

This operational autonomy does two critical things: it eliminates reinsurance constraints—such as conservative loss ratio caps that previously stifled Tesla FSD-linked innovations—and positions the company for national expansion, with pilots already running in Texas and Illinois. By year-end, in-house underwriting will cover 40% of Tesla's $1.2 billion premium base.

Cyber Insurance: A Case Study in Market Necessity

Cyber underwriting has traditionally relied on static annual assessments, but accelerating threat velocity—in the first half of '25, incidents grew by 49% YoY—demands a shift to continuous underwriting. Real-time data from AI-driven tools like open-source intelligence (OSINT) scanning and attack surface risk management (ASRM) enables dynamic risk evaluation and premium adjustments.

Cyber insurtechs such as Cowbell are transforming underwriting from a snapshot into a living process. They report a threefold reduction in claims through proactive remediation and adaptive policies tied to evolving security postures.

These cyber insurtechs focus almost exclusively on the SME segment—businesses with less than $1 billion in revenue, fewer than 1,000 employees, and, crucially, simpler IT environments than large enterprises. They're also proactive. Cowbell, for instance, actively monitors and underwrites risk for over 31 million SME entities using continuous external attack-surface scanning (their Cowbell Factors), often before a quote is even requested. This makes them one of the clearest real-world examples of continuous underwriting operating at scale in the small-and-mid-market commercial segment.
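To show the shape of continuous, scan-driven underwriting, here is a minimal hypothetical sketch. The signals, weights, and re-quote trigger are assumptions for illustration, not Cowbell's actual Cowbell Factors or methodology; it recomputes a simple risk factor from external scan findings and flags the account for re-underwriting when the security posture shifts materially between scans.

```python
# Toy continuous-underwriting trigger based on external scan signals.
# Signal names, weights, and thresholds are invented for illustration;
# they are not Cowbell's actual factors or methodology.
def risk_factor(open_critical_ports, mfa_enforced, days_since_last_patch):
    """Combine a few external signals into a multiplicative risk factor."""
    factor = 1.0
    factor *= 1.0 + 0.05 * open_critical_ports
    factor *= 1.0 if mfa_enforced else 1.25
    factor *= 1.0 + min(days_since_last_patch, 180) / 1000.0
    return factor

def needs_requote(previous_factor, current_factor, tolerance=0.10):
    """Re-underwrite when the risk factor moves more than the tolerance."""
    return abs(current_factor / previous_factor - 1.0) > tolerance

if __name__ == "__main__":
    before = risk_factor(0, True, 20)
    after = risk_factor(3, False, 90)   # posture degraded between scans
    print(f"{before:.2f} -> {after:.2f}, requote: {needs_requote(before, after)}")
```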

Regulation is actually helping here, pressuring carriers to verify real-time adherence to baseline security standards like multi-factor authentication through tools such as Endpoint Detection and Response (EDR) and Managed Detection and Response (MDR).

Reinsurance innovation is providing capacity. Leaders like Munich Re and Swiss Re are investing in advanced modeling and proportional treaties that favor data-rich, quota-share structures—lowering capital needs while supporting AI-enhanced risk portfolios.

Continuous underwriting unlocks growth. Projected global cyber premiums are expected to more than double from $14 billion in 2023 to $29 billion by 2027.

The "Big" Fight to Scale

In this corner, the champ: Big Insurance and Big Legal (has anyone not heard of Morgan & Morgan?). They'll spend upwards of $200 million this year lobbying Washington to preserve the McCarran-Ferguson Act of 1945, keeping arcane insurance regulations frozen in place.

In that corner, the challenger: Big Tech. As continuous underwriting—by definition, fully automated—consumes AI data center capacity, the AI hyperscalers are throwing untold millions into the fray.

The majority of insurance consumers—per recent surveys—are rooting for the challenger.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.