Insurance organizations are deploying AI and semantic ontologies to transform data quality challenges into competitive weapons.
For decades, data quality has been treated as a technical problem—something to be solved through better databases and more rigid validation rules. Yet data quality has become a competitive weapon.
When data teams can ensure clean, consistent, and contextualized information flows through their organization, everything improves: underwriting decisions become sharper, fraud detection catches sophisticated schemes earlier, and claims get processed faster. Two powerful forces—artificial intelligence models and semantic ontologies—are rewriting what's possible for data teams willing to embrace them.
Before diving into solutions, it's worth understanding just how expensive bad data becomes. The insurance industry processes enormous volumes of information daily, from application submissions to claims documentation to policyholder records. Each piece flows through multiple systems, passes through different hands, and gets interpreted by various teams. When data enters at a broker's desk—sometimes handwritten on paper that gets scanned—errors creep in quickly. These aren't just minor inconveniences. Poor data quality directly undermines the foundation that AI models depend on. When machine learning models train on flawed historical data, they learn to recognize the wrong patterns. They optimize for mistakes rather than truth. The consequence? Models make worse decisions, often confidently.
Consider the downstream damage. Inaccurate underwriting data leads to mispriced policies. Claims teams inherit messy customer histories and struggle to match new claims to existing policies. Fraud detection systems flag legitimate claims as suspicious because they can't reliably recognize patterns through the noise.
Rather than viewing AI as yet another consumer of data, forward-thinking insurance organizations can deploy AI specifically to improve the data that other AI models will eventually use. This creates an interesting dynamic: machine learning becomes both problem and solution simultaneously.
Automated Data Profiling and Anomaly Detection
The first wave of improvement comes from automated systems that profile datasets at scale. Rather than manual spot-checking or waiting for problems to surface downstream, AI systems continuously scan data streams looking for deviations from expected patterns. These systems use various mathematical approaches—from classical statistical methods to modern neural networks—to understand what "normal" looks like within specific data domains. When new data arrives, it gets compared against these learned patterns. If something seems off—a claim amount 500% higher than average for that customer, a date that appears to be in the wrong format, or a relationship that doesn't align with historical context—the system flags it immediately.
What makes this different from traditional validation rules is the adaptability. A hard-coded rule might check that claim amounts fall between $0 and $1,000,000. This catches obvious errors but misses the subtler cases where every field looks valid in isolation yet the combination is contextually wrong.
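To make that contrast concrete, here is a minimal sketch of the adaptive approach using scikit-learn's IsolationForest; the two claim features, the synthetic history, and the contamination setting are illustrative assumptions, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical historical features per claim: [amount_usd, days_to_report]
history = np.column_stack([
    rng.lognormal(mean=8.0, sigma=0.5, size=1000),  # typical claim amounts
    rng.poisson(lam=5, size=1000),                  # typical reporting delays
])

# Learn what "normal" looks like in this domain instead of hard-coding bounds.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_claims = np.array([
    [3_000.0, 4],    # ordinary amount, prompt report
    [95_000.0, 60],  # large AND late: contextually suspicious together
])
for claim, label in zip(new_claims, model.predict(new_claims)):
    print(claim, "FLAG for review" if label == -1 else "ok")
```

Both claims would pass a naive $0 to $1,000,000 rule; only the learned model flags the second, because it is unusual on both dimensions at once.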
Real-Time Data Quality Rules Generation
Another emerging capability involves AI systems that actually generate the validation rules themselves, rather than requiring data stewards to manually write them. Generative AI models can analyze historical datasets and automatically create metadata and quality rules tailored to an organization's specific terminology and standards. This matters more than it might initially seem.
Many insurance organizations have legacy systems that lack proper metadata—documentation about what data means, where it came from, and what constraints should apply. Rather than spending months manually documenting these systems, organizations can point an AI system at the data and have it generate initial documentation and rule sets. Humans then review and refine these suggestions. The result? Metadata standards get created faster, and they're grounded in actual data patterns rather than abstract governance theory.
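As a simplified illustration of rule generation, the sketch below infers candidate rules by statistically profiling a pandas DataFrame. It is a stand-in for the generative approach described above (a real system would also draft human-readable metadata descriptions), and the quantile and cardinality thresholds are arbitrary choices for the example.

```python
import pandas as pd

def propose_rules(df: pd.DataFrame) -> list[str]:
    """Draft candidate validation rules from observed data patterns,
    for a human data steward to review and refine."""
    rules = []
    for col in df.columns:
        s = df[col].dropna()
        if pd.api.types.is_numeric_dtype(s):
            lo, hi = s.quantile(0.001), s.quantile(0.999)
            rules.append(f"{col}: expect values between {lo:,.2f} and {hi:,.2f}")
        elif s.nunique() <= 20:  # low cardinality suggests a code/category field
            rules.append(f"{col}: expect one of {sorted(s.unique())}")
        rules.append(f"{col}: null rate should stay near {df[col].isna().mean():.1%}")
    return rules

claims = pd.DataFrame({
    "claim_amount": [1200.0, 950.0, 30000.0, 4100.0],
    "policy_status": ["active", "active", "cancelled", "active"],
})
for rule in propose_rules(claims):
    print(rule)
```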
Natural Language Processing for Unstructured Data
Insurance organizations have unstructured data everywhere: claims notes, adjuster observations, medical records, police reports, and customer communications. Traditional data quality approaches struggle here because they're designed for structured, tabular information. Natural language processing (NLP) changes this equation. NLP systems can read through thousands of claim descriptions and identify inconsistencies, flag unusual language patterns, extract structured facts from unstructured text, and even spot potential fraud signals hidden in prose.
One practical application: property damage claims often include written descriptions. NLP systems can extract key details (property type, damage description, estimated repair cost), compare these against the claim's structured fields, and flag mismatches automatically. If an adjuster describes "minor water damage" but the structured claim shows a $500,000 payout, that contradiction gets surfaced for immediate review.
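A toy version of that cross-check appears below; it uses keyword cues as a stand-in for a full NLP pipeline, and the severity terms and dollar caps are invented for the example.

```python
# Map prose severity cues to a plausible maximum payout (illustrative values).
SEVERITY_CAPS = {
    "minor": 25_000,
    "superficial": 25_000,
    "moderate": 100_000,
}

def contradicts_payout(description: str, payout: float) -> bool:
    """Flag claims whose written description contradicts the structured payout."""
    text = description.lower()
    caps = [cap for cue, cap in SEVERITY_CAPS.items() if cue in text]
    return bool(caps) and payout > min(caps)

print(contradicts_payout("Minor water damage to basement drywall.", 500_000.0))  # True
print(contradicts_payout("Severe structural fire damage.", 500_000.0))           # False
```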
Data quality ultimately depends on shared understanding. The same term—"policyholder," "coverage," "claim"—might mean slightly different things across different systems, departments, or companies. This semantic ambiguity creates a ceiling on how much automation and AI can help. You can throw perfect algorithms at messy semantics, but the output remains limited. This is where business ontologies become transformative.
What Makes Ontologies Different from Traditional Data Models
An ontology is fundamentally different from a traditional data model or database schema. Where a schema defines table structures and fields, an ontology captures meaning. It specifies not just what fields exist, but what they mean, how they relate to business concepts, what synonyms matter, and what business rules should apply. In insurance, an ontology might define that "policyholder" connects to specific attributes (name, address, risk profile), that it relates to policies through an "owns" relationship, and that certain business rules apply (a policyholder must be of legal age, must have a valid address, etc.).
Ontology-Powered Data Integration
Here's where ontologies enable something previously difficult: intelligent data integration. When ingesting data from multiple systems, traditional approaches rely on explicit mappings—field A from system one maps to field B in the warehouse. If a new data source arrives, someone must manually create all new mappings. With semantic ontologies, different systems can describe their data in terms of common business concepts. A policy administration system might use field "POL_STAT" while a claims system uses "CLAIM_POLCY_STATUS," but both can be mapped to the ontology's "policy_status" concept. This semantic layer enables automatic discovery and integration.
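Here is a minimal sketch of that semantic layer in Python, reusing the field names from the example; the concept registry and the code mappings are hypothetical.

```python
# Each source field declares which ontology concept it carries, so a new
# system plugs in by binding its fields to concepts rather than by creating
# pairwise mappings to every other system.
ONTOLOGY_BINDINGS = {
    "policy_status": {
        "policy_admin.POL_STAT": {"A": "active", "C": "cancelled"},
        "claims.CLAIM_POLCY_STATUS": {"ACT": "active", "CXL": "cancelled"},
    },
}

def to_canonical(concept: str, source_field: str, raw_value: str) -> str:
    """Translate a source-system code into the ontology's canonical value."""
    return ONTOLOGY_BINDINGS[concept][source_field][raw_value]

print(to_canonical("policy_status", "policy_admin.POL_STAT", "A"))        # active
print(to_canonical("policy_status", "claims.CLAIM_POLCY_STATUS", "CXL"))  # cancelled
```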
The "Enterprise Brain": Knowledge Graphs Built on Ontologies
The most sophisticated implementations combine semantic ontologies with graph database technology to create what some describe as an "enterprise brain"—a knowledge graph that captures not just the data, but the meaning and relationships within the business domain. This goes far beyond traditional data warehouses. In a knowledge graph, entities (customers, policies, claims, agents, providers) become nodes, and relationships become edges. Rather than storing "John Smith has policy 12345," a knowledge graph stores this as a relationship with properties: John Smith (subject) — owns (relationship) — policy 12345 (object).
The power becomes apparent in use cases. In claims processing, a knowledge graph can instantly answer complex questions: "Show me all claims filed by customers who have had five or more claims in the past two years AND live within 20 miles of a recent catastrophic event AND have files with identical repair cost estimates in the past six months." This type of query, which might take hours or days in traditional systems, executes in seconds against a well-designed knowledge graph.
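A hedged sketch of such a query, using the official neo4j Python driver, follows. The graph schema (Customer and Claim nodes, a FILED relationship, a location property) is hypothetical, and only the first two conditions from the example are shown.

```python
from neo4j import GraphDatabase

# Neo4j 5 syntax; the connection details and schema are assumptions.
CYPHER = """
MATCH (c:Customer)-[:FILED]->(cl:Claim)
WHERE cl.filed_date >= date() - duration({years: 2})
WITH c, count(cl) AS claim_count
WHERE claim_count >= 5
  AND point.distance(c.location,
                     point({latitude: $lat, longitude: $lon})) < 32187  // ~20 miles
RETURN c.name AS customer, claim_count
ORDER BY claim_count DESC
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(CYPHER, lat=34.05, lon=-118.24):
        print(record["customer"], record["claim_count"])
driver.close()
```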
The journey toward AI-enabled data quality and semantic ontologies isn't frictionless. Three categories of challenges emerge consistently: cultural, regulatory, and technical.
Culturally, data teams and business stakeholders don't always have the same priorities. Data governance teams focus on compliance and consistency. Business units want speed and flexibility. These incentives can conflict. The solution involves establishing cross-functional collaboration frameworks where compliance, risk, and business units align on shared governance structures and standardized communication. When they do, institutions achieve faster issue resolution, stronger controls, and smoother product delivery.
Regulatory challenges run deep. Regulators now scrutinize AI extensively, particularly around explainability. A "black box" model that makes decisions without showing its reasoning creates compliance risk, so organizations may need explainability documentation across all models to address it.
Technically, many organizations face fragmented systems. Core data lives in legacy on-premises systems running alongside newer cloud platforms. Building semantic ontologies and knowledge graphs across this fragmented landscape requires careful architecture. The industry is gradually standardizing on cloud data platforms like Snowflake, Databricks, Palantir, or BigQuery, which offer better scalability for knowledge graph implementations.
The most exciting developments emerge when AI and semantic ontologies combine. AI systems can learn from data at scale and identify patterns humans would miss. Semantic ontologies provide the business context that AI systems need to make those patterns meaningful. Together, they create a feedback loop: ontologies guide how AI models interpret data, and AI systems suggest refinements to ontologies based on what the data reveals. This is fundamentally more powerful than either approach alone and creates immense value for any data organization.
Pramod Misra is the director of data analytics and AI automation at Snellings Walters Insurance Agency.
He has over 20 years of hands-on experience in data analytics, artificial intelligence, and machine learning model development, including as chief AI officer at a healthtech startup and at Vodafone, Novartis, Takeda, P&G, Tune Protect Insurance, and Nuclear Power Corp.
Rising premiums are driving auto carriers to transform roadside assistance into a strategic retention tool through technology partnerships.
In today's volatile market, auto insurers and their policyholders are facing a shared challenge. Drivers are struggling with rising costs and shopping for auto insurance coverage at record rates. Insurers, feeling this pain, are searching for sustainable ways to earn and strengthen customer loyalty. In this environment, the key to retention is no longer just about price – it's about service.
While dealing with a major claim is rare for most drivers, a flat tire, dead battery, or lockout is a far more common and often stressful event. This presents a key opportunity for insurers as strategic roadside assistance evolves from a simple add-on to a powerful tool for brand-building. By delivering a reliable, positive experience to policyholders in a moment of need, insurers can build the trust required to keep policyholders happy and engaged with their brand, turning a moment of stress into a moment of truth.
Policyholders may shop for price, but they stay for trusted service. According to J.D. Power, while high premiums can decrease satisfaction, these effects can be offset by a high level of trust that an insurer will deliver when needed. The impact on retention is profound: 90% of policyholders with high trust in their insurer are likely to renew.
Roadside assistance provides a tangible, low-cost opportunity to build that trust. Additional research from J.D. Power in the 2025 Insurance Playbook on Customer Retention has shown that customers who have roadside assistance with their auto insurer have a higher overall satisfaction score of 672 – nearly a 30-point increase over those who don't. For a nominal cost, often less than $20 per vehicle a year, an insurer can resolve a stressful and frustrating situation, directly influencing a client's perception of their value and offsetting negative sentiment from premium hikes. This transforms roadside assistance from a cost center into a strategic asset for optimizing customer lifetime value.
To get the most out of roadside assistance, auto insurance carriers need to rethink the traditional vendor relationship. The old model of simply outsourcing the service is being replaced by something far more valuable: a deeper, strategic partnership that's dedicated to policyholder satisfaction. The goal is to find a partner who doesn't just handle calls but elevates your overall brand experience through a powerful combination of professional experts and an intelligent technology platform.
For instance, a modern dispatching platform can connect everyone – drivers, roadside service providers, and insurers – in a single, dynamic ecosystem. Such a platform is more than just a dispatching tool; it's a learning machine.
Every interaction, from a simple request to a completed service, feeds data back into the system. This information is used to constantly refine and improve every aspect of the roadside service operation.
As more providers and insurers join this ecosystem, it creates a powerful "network effect." More users means more data to inform service. More data means a smarter system. And a smarter system delivers a better experience for everyone involved.
This approach transforms roadside assistance from a simple transactional cost into a deeply integrated, brand-building asset. It's about plugging into a partnership that leverages technology to create an unbeatable policyholder experience, every single time.
As leading carriers recognize the opportunity to leverage roadside assistance as a powerful retention tool, they'll need a plan to unlock its value.
As the competitive insurance market continues to evolve, forward-thinking carriers are recognizing roadside assistance as more than an add-on service – it's a strategic differentiator that serves as their most powerful retention tool. By leveraging smart partnerships and technology-driven solutions, insurers can turn moments of stress into loyalty-building opportunities, ensuring policyholders see their insurer as the trusted partner they'll never want to leave.
Henry Stroup is vice president of client success at Agero.
He has more than 20 years of leadership experience in the roadside assistance industry. Before joining Agero, he led strategic planning and contact center operations for a specialty roadside assistance network serving the classic car and recreational vehicle markets.
He holds a bachelor of business administration from the University of Alaska Fairbanks and has served on the boards of the RVDA Education Foundation, the RVIA Taskforce for Quality Excellence, and the Young Presidents’ Organization (YPO).
Ransomware's evolution from organized supply chains to fragmented chaos is rewriting insurers' assumptions.
Ransomware has always been a moving target but is now entering a period of volatility unlike anything we've seen before. Tactics are shifting rapidly, tools are becoming more sophisticated and more widely available, and the threat-actor landscape is splintering into a chaotic mix of groups, affiliates and opportunistic newcomers. For insurers, this fragmentation and instability are rewriting assumptions about predictability, frequency and severity.
To understand why ransomware feels more volatile than ever, it's important to start with how organized these operations once were. Historically, major threat groups behaved with a degree of predictability. Their operations had a clear methodology and often resembled a supply chain. One group identified or acquired a zero-day vulnerability; another specialized in gaining credentials and access to victims' networks; a ransomware group purchased that access and deployed their malware; and another entity handled negotiations, payment facilitation and hosting on data-leak sites. While criminal, these actors operated within consistent roles.
Today, that methodology and structure have fractured. Law-enforcement pressure, internal disputes and simple profit incentives have splintered once-dominant ransomware groups. No single group in a single geography now handles everything from start to finish; operations are carried out by shifting combinations of parties. Coupled with this, their tools, particularly the ransomware variants themselves, have leaked into the wild or been deliberately sold off. As a result, sophisticated malware that was once tightly controlled is now available to operators with minimal skill. Cheaper, less advanced variants such as Dharma and Crysis proliferate broadly, while more refined strains like Akira or LockBit remain selectively distributed, but even those find their way to multiple groups.
This "plug-and-play" ecosystem means that a threat actor with little technical capability can now operate at a level previously reserved for elite cybercriminals. The result is a wave of attacks that are increasingly unpredictable in both frequency and quality. Some are clumsy and quickly detected, while others unfold with alarming precision.
At the same time, attackers have become far more agile once inside an environment. Earlier ransomware operations often unraveled when attackers encountered unexpected security controls. Today, threat actors pivot rapidly. If endpoint detection and response (EDR) tools block one path, adversaries switch tactics, attempt to disable protections or even infiltrate the security tools themselves.
In a recent Akira-related incident, adversaries gained access to a victim's SonicWall EDR environment, used it to disable protections across the entire network and maintained persistent access. A lesser threat actor would have been stopped at the first hurdle. Today's operators adapt with remarkable speed.
This agility is compounded by AI-driven malware development. Threat actors are now capable of generating malware tailored to a victim's specific security gaps. By feeding reconnaissance data into AI coding engines, attackers can produce bespoke code that evades detection. As a result, EDR tools lose some of their efficacy, and traditional antivirus can become entirely ineffective.
AI-generated phishing is also expanding attacker capability. Previously, many phishing attempts were identified by grammar and spelling errors. Today, threat actors can generate credible, fluent communications that mimic native language use, making social engineering exponentially harder to detect. The potential for automated scaling, for example one threat actor deploying hundreds or thousands of simultaneous phishing attempts, also poses a challenge.
While tools and execution are evolving, so too are extortion tactics, with threat actors now using multifaceted pressure strategies. When improved backups reduced victims' need for decryption keys, threat actors began stealing data and threatening to leak it to cause reputational harm. And when regulators and law enforcement discouraged companies from paying for data deletion promises (promises criminals often broke anyway), attackers escalated further.
Recent incidents also show threat actors emailing victims' employees and customers directly, claiming the organization "does not care about your data," or triggering every printer in an organization to output ransom notes, ensuring employees, customers and potentially the media know about the breach. Even more concerning is a trend toward re-attacks, where threat actors revisit a network weeks after an incident to exploit newly discovered gaps and re-encrypt systems, leveraging the continuing disruption as a negotiation tool and adding pressure on victims to pay the ransom.
This evolution raises the stakes for incident response and negotiation. Speed, visibility, and technical capability are more critical than ever, and so is insurer preparedness.
For insurance and risk professionals, several priorities stand out in this new environment.
1. Baseline controls are still non-negotiable
Multifactor authentication, managed EDR and reliable offline or immutable backups remain the strongest defenses against ransomware and help to ensure business continuity. These controls buy the time and visibility needed to detect intrusions early and recover without paying a ransom. But they must be properly managed. Too many insureds deploy security tools without the professional oversight required for them to function effectively, just to satisfy an underwriting requirement.
2. Deploy advanced protections
Beyond baseline controls, insureds should also adopt least-privilege models, zero-trust architectures and AI-enhanced security tools that dynamically detect "known good" and "known bad" behavior. Historically, organizations avoided these approaches due to complexity, but modern implementations are increasingly manageable and fill critical gaps left by traditional defenses.
3. Prepare for negotiation scenarios that are more aggressive and less predictable
Extortion is no longer a one-dimensional threat. Insurance companies must partner with response teams experienced in managing multi-vector pressure tactics, from public-facing harassment to second-wave attacks. These partners are capable of advising clients through highly fluid situations.
The ransomware landscape is transforming rapidly, driven by fragmentation, automation and unprecedented agility among threat actors. For insurers and their insureds, adaptability is now a core competence. Those who evolve their incident-response strategies alongside the threat landscape will be far better positioned to protect both their clients and their own business.
Brent Riley is VP of digital forensics and incident response (NA) at CyXcel.
Embedded insurance is transforming from a distribution tactic to a customer experience strategy as insurers prioritize seamless, friction-free protection.
Residents don't want to search high and low for protection; they expect it to appear where it's most relevant. Insurance has always been about confidence, but during a digital buying journey, confidence depends on timing, relevance, and the ease with which protection blends into the experience itself.
That expectation is reshaping strategy across the industry. The State Of Embedded Insurance 2024 found that 94% of insurers view embedded insurance as a critical part of their future strategy. It's clear that insurers are no longer treating embedded insurance as only a distribution tactic but are treating it as a customer experience (CX) function.
Embedded insurance isn't new. What is new is the maturity of the technology and partnerships behind it. The next step is deepening trust and reducing friction at the emotional peaks of the journey.
In CX, timing is everything. Embedded insurance delivers its biggest impacts at two places in the customer journey: at checkout and right after purchase.
At checkout, customers are already in decision mode. They're focused and ready to act. When protection is offered right there, without extra steps or redirects, it feels like a natural extension of the transaction, rather than a separate sale. Subtle integration is essential. Research from BCG found that "conversion rates for traditional insurers that have embraced this model are already higher than for separate insurance for the same products," reinforcing the power of being present at the right moment.
The second moment is right after purchase, when the customer starts using what they bought. That's when peace of mind kicks in and becomes tangible. Knowing they're covered from day one reduces post-purchase anxiety and builds trust between buyer and brand. This connection ties into measurable CX gains with higher engagement and improved retention.
These moments also help explain why embedded insurance is expanding so quickly. As smoother, better-timed experiences become the norm, adoption rises. The embedded insurance market is projected to grow from $143.88 billion in 2025 to more than $800 billion by 2032, a CAGR of 28%. This steep trajectory is fueled partly by higher conversion rates and growing customer preferences for protection that appears naturally within the journey.
For embedded insurance to actually enhance the CX, it has to feel like part of the service. That starts with seamless integration: no pop-ups, no redirects, and no disruption. Protection should appear inside the same interface the customer already trusts.
Clarity matters just as much as placement, so straightforward pricing, quick activation, and simple-language explanations reduce the mental load that often accompanies insurance decisions. The experience also extends beyond the sale. Claims, renewals, and continuing support must feel as intuitive as the initial purchase; otherwise, trust gained in the beginning evaporates quickly.
Four levers determine whether embedded insurance feels like care: the timing of the offer, seamless integration, clarity of pricing and language, and support after the sale.
For example, a tenant signs a new lease through a property management portal. They're immediately directed to a co-branded insurance portal to either purchase coverage or upload proof of an existing policy. The transition is simple. If purchasing in the insurance portal, the tenant can then select appropriate limits or choose coverage that protects their personal belongings. And if a pipe bursts after move-in, the tenant can upload a few photos through their digital account and submit a claim within minutes, guided through each step instead of navigating stressful paperwork alone.
These moments define the experience much more than policy language. When embedded insurance removes friction, both emotional and practical, it stops feeling like an upsell and starts feeling like protection. The impact is clear in customer metrics. A 2024 study found a 17-point increase in customer satisfaction with digital insurance claims, driven largely by improvements in the range of services offered on mobile apps and websites, as well as visual appeal. Clearly, showing up with the right design and at the right time can shape customer sentiment at critical moments.
No insurer or platform can deliver embedded insurance on its own. And any embedded insurance experience can fall apart if the system behind it isn't prepared and aligned. CX is co-owned: the insurer, the distribution platform, and the underlying technology all shape the moment a customer is offered protection. The strongest partnerships don't feel like transactional business deals; they operate like shared problem-solving.
A BCG report says that "to make the most of their opportunities, insurers will need to support and collaborate extensively with their business partners to become the provider of choice." This means teams jointly determine where insurance should appear in a workflow and how it should feel when it does. Technology, design, and messaging must blend seamlessly with the platform's brand so that customers only see a single experience, not two companies stitched together.
All of this work happens long before the first customer sees an offer. During discovery, both sides typically map the data already available in the platform's journey, such as lease information and account details, to the minimum information an insurer needs to provide a quote. When this is done well, eligibility questions shrink, quoting steps become simpler, and drop-off decreases. Clearer language replaces legal jargon, and forms become shorter and more intuitive. This way, the partnership shapes the ease customers feel long before they think about making a claim.
Customers remember the experiences that remove fear, not the ones that add friction. So the next step for embedded insurance will come from insurers and platforms working in sync and designing for real human moments. The future of insurance hinges on making every step intuitive, predictable, and easy at every touchpoint.
Brandon Tobman is the chief executive officer of Get Covered.
With organic growth softening, insurance agencies are turning to serial acquisitions to accelerate expansion and build market dominance.
One of the best ways for insurance agencies to grow is through acquiring or consolidating with another agency or book of business, especially since organic growth has softened. And many agency owners are finding that repeating the process – becoming serial acquirers – reaps strong benefits for their businesses if done correctly and effectively. Before heading down that path, it's important to understand the keys to making successful acquisitions and smart financing options.
Regardless of the industry, serial acquisitions can provide significant general advantages for the purchaser, and several additional benefits are specific to the insurance industry.
Steve DeLuca, founder and owner of the DeLuca Agency, has successfully acquired more than 10 agencies. He advises agency owners who are just getting started with acquisitions to look at smaller agencies. As his company started to grow, he didn’t want anything “too big that could change the culture of our current business, and something that was not too difficult to roll into our book of business at the time.” Other key factors for good acquisitions, according to DeLuca, are low loss ratios, profitability, and book rolls.
While there are many advantages to acquisitions (serial or individual), it's important to go into the process with one's eyes open. Not every acquisition opportunity is going to be a good fit, so it's wise to evaluate each potential acquisition target carefully.
When it comes to financing a deal, agencies with abundant liquid capital may choose to pay cash outright. However, that approach limits how much can be spent on other areas that drive growth (such as technology investments).
Loans from the Small Business Administration (SBA) are another option. With SBA loans, the borrower's personal assets are often used as collateral, and the paperwork for approval can be daunting. Even so, SBA loans can be a good option for borrowers whose credit is less than ideal.
Financing through specialty lenders who focus on the insurance industry is an appealing choice. These lenders – in contrast to most traditional banks – understand the nature of the insurance industry. They will often use the projected increase in cash flow as collateral for the loan, rather than encumbering other assets.
“In today's market, it's very hard to grow organically… so if you're going to grow, you've got to get into mergers and acquisitions,” DeLuca said.
Serial acquisitions can be a powerful way to turbocharge an agency's growth. The strategy requires research, focus, and planning, but it can provide a big payoff when well-managed.
Rick Dennen is the founder and chief executive officer of Indianapolis-based Oak Street Funding, a First Financial Bank company.
The firm offers customized loan products and services for specialty lines of business, including certified public accountants, registered investment advisors and insurance agents nationwide.
Fragmented rate filing processes constrain P&C insurers, prompting data integration and GenAI solutions.
Accelerating P&C product and rate filing is critical to meet dynamic market demands and regulatory requirements. Traditional processes are constrained by manual handoffs, fragmented data, and slow approvals, resulting in delayed product launches and squeezed profitability. This article explores how data, GenAI, and agentic AI can transform rate filing, enabling parallel execution, automated testing, and intelligent workbenches for competitive analysis.
By adopting best practices in architecture, automation, and governance, insurers can compress cycle times, enhance pricing sophistication, and improve compliance. The approach outlined empowers carriers to respond swiftly to market shifts, optimize risk management, and gain a decisive edge.
Property and casualty (P&C) insurers in the United States face a complex and fragmented regulatory environment when filing new products or rates. The average time to approve rate filings increased by 40% nationwide between 2018 and 2024 for homeowners' products. The result is delayed market response, constrained profitability, and missed opportunities to reflect a changing risk posture (for example, California's Proposition 103 limits insurers to basing rates on historical losses rather than on current, forward-looking predictive models).
While these regulatory complexities add to rate approval delays, insurers also face internal challenges, magnified by fragmented data assets affecting rate development and indications, weak integration between policy administration systems and rating engines, manual scenario generation and validation across rating workflows, too many handoffs, and manual state filing preparation.
The regulatory complexity arises from several related factors:
1. State-Based Regulation and Legal Diversity:
Insurance regulation is primarily state-based, with each state legislature enacting its own rating laws, standards, and filing requirements. These laws may be based on NAIC model laws (e.g., prior approval, file-and-use, use-and-file, flex rating), but significant variation persists in definitions, processes, and compliance expectations across states. Insurers must navigate a patchwork of statutes, administrative rules, and case law, often requiring tailored filings for each jurisdiction.
2. Multiplicity of Filing Types and Entities:
Filings may pertain to rates, rating rules, policy forms, underwriting rules, or combinations thereof. Entities making filings include insurers, advisory organizations, and third-party filers, each subject to different rules and authorities depending on the state and product category.
3. Rigorous Data and Actuarial Standards:
Regulators require extensive supporting data for rate filings, including historical premium and loss data, actuarial analysis, and justification for rating factors. Standards mandate that rates must not be excessive, inadequate, or unfairly discriminatory, but interpretations and required methodologies (e.g., loss ratio vs. pure premium methods, credibility standards, catastrophe modeling) vary by state. Data quality, segregation, and rate adjustment protocols are scrutinized, and regulators may require multi-year data, trend analyses, and loss development triangles.
4. Procedural Complexity and Review Process:
The filing process involves multiple steps and stakeholders: filers must ensure completeness and compliance with state-specific requirements, often using tools like SERFF for electronic submissions. Reviewers conduct detailed checks for statutory and regulatory compliance, issue objection letters for deficiencies, and may require hearings or amendments. The process is iterative, and delays often result from incomplete filings or back-and-forth correspondence.
5. Policy Form Review and Public Policy Considerations:
Beyond rate filings, policy forms are subject to rigorous review for compliance with mandated provisions, prohibited clauses, readability standards, and consistency with pricing memoranda. States may require additional documentation, such as actuarial memoranda or advertising materials, and enforce unique requirements for specific lines of business.
Rate change management in P&C insurance is challenged by fragmented data sources and unclear, disjointed data and business requirements for rate development and analysis. Insurers must reconcile information from underwriting, claims and loss history, reinsurance, and market trends, which demands extensive data wrangling and preparation. Latency in accessing third-party data and manual handoffs between product, actuarial, and IT teams further slow the process, leading to rework and misalignment.
The absence of integrated platforms for hypothesis development, rate workups, and filing results in inefficiencies and extended cycle times. Compliance steps are repeated for each state, and technical requirements for integration are often relayed indirectly, compounding delays. Manual testing and architectural gaps—such as non-stateless rating engines and scattered product management logic—impede data-driven decision-making and actuarial rigor.
Dislocation analysis, a key actuarial process, is time-consuming due to sequential, repetitive workflows and limited automation. The challenge is to quickly identify segments with disrupted rates and adverse loss ratios, as variable-by-variable reviews are essential but slow. Without robust analytical capabilities, targeted adjustments are delayed, increasing regulatory risk and reducing pricing effectiveness.
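As a rough illustration of automating one slice of this, the sketch below summarizes dislocation for a single rating variable with pandas; the column names, the 10% dislocation threshold, and the 75% loss ratio cutoff are assumptions for the example.

```python
import pandas as pd

def dislocation_by_segment(policies: pd.DataFrame, variable: str,
                           change_threshold: float = 0.10,
                           lr_threshold: float = 0.75) -> pd.DataFrame:
    """Summarize rate dislocation for one rating variable and flag segments
    that pair heavy dislocation with adverse loss experience."""
    df = policies.copy()
    df["change"] = df["new_premium"] / df["old_premium"] - 1.0
    seg = df.groupby(variable).agg(
        avg_change=("change", "mean"),
        losses=("incurred_loss", "sum"),
        premium=("old_premium", "sum"),
    )
    seg["loss_ratio"] = seg["losses"] / seg["premium"]
    seg["needs_review"] = (seg["avg_change"].abs() > change_threshold) \
        & (seg["loss_ratio"] > lr_threshold)
    return seg.sort_values("avg_change", ascending=False)
```

Running a function like this over every rating variable in the plan replaces sequential, manual reviews with a single repeatable pass.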
To accelerate and improve product and rate filing for personal auto and property, insurers must deploy targeted interventions across dimensions such as Planning & Communication, Platform/Architecture, Data Controls & Trust, Automated Validation, and Rate Filing Intelligence, ensuring each stage of the value chain is robust, data-driven, and responsive to market and regulatory demands.
• Planning & Communication: Product and rate filing correlates directly with business and product strategy. Given its significance and the complexity of the regulatory environment, it requires well-architected planning and execution. More often than not, challenges and delays stem from siloed interactions, lack of integration, gaps in business and IT/data requirements, and delayed communication across teams (product management, IT, data, actuarial, state filing). Creating a digitized, integrated master rate change plan (by state, line of business, change complexity, filing type, and so on), with workflow assignments and tracking, ensures timely communication, makes timelines and dependencies transparent, and helps identify choke points in execution. For example, shift production IT configuration and build activities left (that is, pre-deploy before DOI approval or state filing, with future effective dates toggled off until approval). Use emergency change approvals for minor rate updates and enforce strict SLAs/OLAs for signoffs to cut internal wait times.
• Platform/Architecture: Significant data engineering and configuration effort is spent during dislocation analysis and post-approval implementation. Address this duplicated effort by choosing rating engines (e.g., Akur8, Earnix) that come with integration accelerators and are compatible with modern policy administration systems.
• Data Controls and Trust: An automated data pipeline that ingests third-party, near-real-time data (telematics, IoT) on loss characteristics; catastrophe models that assess risks such as California wildfire (as part of a sustainable insurance strategy) to aid rate factor selection; and an Assumptions Data Hub that captures underwriting assumptions, pricing assumptions, and loss data all help build agility. Similarly, replacing legacy Excel-based rate filing models with Python or modern platforms such as hx Renew provides a central, version-controlled environment, improving collaboration, simplifying workflows, and driving more accurate filings.
• Automated Validation: Leverage pricing platforms such as hx Renew to automate what-if scenario analysis, automate assessment of the impact of model and assumption changes, and enforce automated validation rules (a minimal sketch follows this list). Pairing provisional rate implementation with automated regression testing and CI/CD improves response time via elastic rating engines and enhances rate monitoring, compliance, and traceability.
• Rate Filing Intelligence: Build and leverage rate filing intelligence powered by insights from SNL insurance product filing datasets (from S&P Global Market Intelligence) to understand market strategies and industry trends, analyze peer insurers' filings and factor changes, and track DOI objections and approval/response timelines. These insights provide a feedback loop into product strategy, planning, and execution, supporting adaptation to market conditions and better decision-making.
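Here is the minimal validation sketch referenced in the Automated Validation item: re-rating a baseline book against a provisional rater and flagging premium movements beyond a tolerance, the kind of check a CI/CD pipeline could run automatically before filing. The rater, column names, and tolerance are illustrative.

```python
import pandas as pd

def regression_test_rates(rater, baseline: pd.DataFrame,
                          tolerance: float = 0.25) -> pd.DataFrame:
    """Re-rate each policy and return those whose premium moves beyond tolerance."""
    results = baseline.copy()
    results["new_premium"] = results.apply(rater, axis=1)
    results["delta"] = results["new_premium"] / results["baseline_premium"] - 1.0
    return results[results["delta"].abs() > tolerance]

# Hypothetical provisional rater: flat base rate times a territory factor.
factors = {"T1": 1.00, "T2": 1.18}
book = pd.DataFrame({
    "territory": ["T1", "T2"],
    "baseline_premium": [500.0, 450.0],
})
flagged = regression_test_rates(lambda row: 500.0 * factors[row["territory"]], book)
print(flagged)  # T2 moves about +31%, beyond the 25% tolerance, so it is flagged
```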
Adopting integrated interventions such as master rate change plans and disciplined workflows, modern rating engines and platforms, elimination of Excel-based rating models, third-party data integrations, CI/CD with automated regression, market-aware rate filing intelligence, and effective change management can significantly increase the throughput of rate changes, strengthen rating traceability, reduce refiling and rerating cycles, and leverage richer third-party data for more responsive pricing, improving conversion, loss ratio resilience, and agility in the face of market shifts.
To accelerate rate filing and product launches, insurers should assess their implementation strategy across dimensions such as people, process, technology, and data to evaluate performance and outcomes. By operationalizing the relevant interventions listed above, insurers can compress cycle times, respond swiftly to market shifts, and optimize risk management. Now is the time for industry leaders to champion these changes and drive better outcomes.
Prathap Gokul is head of insurance data and analytics with the data and analytics group in TCS’s banking, financial services and insurance (BFSI) business unit.
He has over 25 years of industry experience in commercial and personal insurance, life and retirement, and corporate functions.
Insurance premiums could fluctuate daily like stock prices, but regulation and reinsurance prevent the scaling of continuous underwriting.
Ten years ago—has it been that long?—I was working with the largest insurer of churches and religious institutions in the US when we discovered they were incurring an average of $70 million in annual losses from frozen pipes.
It makes sense. Many houses of worship sit empty most of the time, and in the northern half of the country—where most of this carrier's book was concentrated—a power outage or failing furnace leads to frozen pipes, burst lines, and substantial water damage claims.
So we built an IoT service that monitored furnace activity and water pipe temperatures, complete with a call center to alert policyholders before problems escalated. It worked so well that it survives today: insureds receive annual premium discounts for enrolling, and frozen pipe claims have dropped by over one-third.
That experience in continuous risk management sparked my fascination with the next frontier: continuous underwriting. In my view, there's no reason insurance premiums shouldn't fluctuate daily—like stock prices or utility bills—as new risk data emerges.
Frustratingly, there are exactly two reasons they don't: regulation and reinsurance.
Tesla Insurance launched in 2019 in California, leveraging real-time telematics data from connected vehicles to offer up to 30% lower premiums through a Safety Score algorithm that tracks behaviors like hard braking and collision warnings. The system performs real-time scoring—true continuous underwriting—and adjusts premiums monthly.
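For illustration only, here is a sketch of what score-based monthly repricing can look like; the event weights and the linear discount curve are invented for the example, not Tesla's actual algorithm.

```python
def safety_score(hard_brakes_per_100mi: float,
                 collision_warnings_per_100mi: float) -> float:
    """Fold telematics events into a 0-100 score (illustrative weights)."""
    score = 100.0 - 4.0 * hard_brakes_per_100mi - 7.0 * collision_warnings_per_100mi
    return max(score, 0.0)

def monthly_premium(base_premium: float, score: float) -> float:
    """A perfect score earns the full discount (30% here, matching the cap above)."""
    return round(base_premium * (1.0 - 0.30 * score / 100.0), 2)

score = safety_score(hard_brakes_per_100mi=1.2, collision_warnings_per_100mi=0.4)
print(score, monthly_premium(150.0, score))  # 92.4 -> 108.42
```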
Today, Tesla Insurance operates in just 12 states. Twelve states in six years represent a glacial pace for a company built on speed, underscoring how state-by-state regulatory approvals and legal roadblocks stifle algorithmic pricing scalability. Elon Musk has joked that SpaceX will reach Mars before Tesla Insurance writes business in all 50 states—a sadly ironic quip, since the technology for continuous underwriting already exists.
Then there's reinsurance. Earlier this year, Tesla accelerated its pivot toward vertical integration by launching full in-house underwriting for California policies, marking a strategic departure from third-party partners like State National Insurance (a Markel subsidiary). This move gives Tesla direct control over risk assessment, pricing, and policy issuance—despite California's Proposition 103 restrictions on dynamic telematics pricing.
This operational autonomy does two critical things: it eliminates reinsurance constraints—such as conservative loss ratio caps that previously stifled Tesla FSD-linked innovations—and positions the company for national expansion, with pilots already running in Texas and Illinois. By year-end, in-house underwriting will cover 40% of Tesla's $1.2 billion premium base.
Cyber underwriting has traditionally relied on static annual assessments, but accelerating threat velocity—in the first half of '25, incidents grew by 49% YoY—demands a shift to continuous underwriting. Real-time data from AI-driven tools like open-source intelligence (OSINT) scanning and attack surface risk management (ASRM) enables dynamic risk evaluation and premium adjustments.
Cyber insurtechs such as Cowbell are transforming underwriting from a snapshot into a living process. They report a threefold reduction in claims through proactive remediation and adaptive policies tied to evolving security postures.
These cyber insurtechs focus almost exclusively on the SME segment—businesses with less than $1 billion in revenue, fewer than 1,000 employees, and, crucially, simpler IT environments than large enterprises. They're also proactive. Cowbell, for instance, actively monitors and underwrites risk for over 31 million SME entities using continuous external attack-surface scanning (their Cowbell Factors), often before a quote is even requested. This makes them one of the clearest real-world examples of continuous underwriting operating at scale in the small-and-mid-market commercial segment.
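A hedged sketch of that continuous pattern, under assumed scan findings, weights, and pricing bands (none of which are Cowbell's actual factors): external findings update a risk score, which maps onto a pricing or remediation action.

```python
# External attack-surface findings nudge a 0-100 risk score (illustrative weights).
FINDING_WEIGHTS = {
    "mfa_disabled": 25,
    "exposed_rdp_port": 20,
    "stale_tls_certificate": 5,
}

def update_risk_score(base_score: int, findings: list[str]) -> int:
    return min(100, base_score + sum(FINDING_WEIGHTS.get(f, 0) for f in findings))

def premium_modifier(score: int) -> float:
    """Map the risk score onto a pricing action band."""
    if score < 40:
        return 0.90   # credit for a strong security posture
    if score < 70:
        return 1.00   # standard rate
    return 1.25       # surcharge, or a remediation requirement before renewal

score = update_risk_score(30, ["mfa_disabled", "exposed_rdp_port"])
print(score, premium_modifier(score))  # 75 -> 1.25
```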
Regulation is actually helping here, pressuring carriers to verify real-time adherence to baseline security standards like multi-factor authentication through tools such as Endpoint Detection and Response (EDR) and Managed Detection and Response (MDR).
Reinsurance innovation is providing capacity. Leaders like Munich Re and Swiss Re are investing in advanced modeling and proportional treaties that favor data-rich, quota-share structures—lowering capital needs while supporting AI-enhanced risk portfolios.
Continuous underwriting unlocks growth. Projected global cyber premiums are expected to more than double from $14 billion in 2023 to $29 billion by 2027.
In this corner, the champ: Big Insurance and Big Legal (has anyone not heard of Morgan & Morgan?). They'll spend upwards of $200 million this year lobbying Washington to preserve the McCarran-Ferguson Act of 1945, keeping arcane insurance regulations frozen in place.
In that corner, the challenger: Big Tech. As continuous underwriting—by definition, fully automated—consumes AI data center capacity, the AI hyperscalers are throwing untold millions into the fray.
The majority of insurance consumers—per recent surveys—are rooting for the challenger.
Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence.
He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.
Technology slowly replaces insurance professionals' systemic value rather than eliminating their jobs outright.
I'm not here to scare anyone by saying, "Tech will replace all insurance professionals." That line is boring now.
What I want to talk about is something else: Tech may not replace your job immediately, but it is slowly replacing your worth in the system.
We are entering a phase where some changes in insurance are no longer a choice. They are inevitable. I call this "inevitablism" in insurance.
Inevitablism in insurance refers to the mindset that certain industry shifts — such as automation, AI adoption, data-driven decision-making, and modernization of legacy systems — are not optional but unavoidable.
It's the belief that these changes will happen regardless of current comfort, resistance, or preparedness, and that insurers must adapt rather than delay, because the future will arrive with or without them.
There is no shortage of talent in insurance. The real problem is how that talent is being used.
Across the industry, many bright professionals spend their days on low-value tasks that keep legacy processes running.
They are capable of designing better products, rethinking portfolios, and solving complex risk problems. But because technology inside many insurers is underused or outdated, people become the "glue" holding legacy processes together.
And let's be honest — we all know insurance adopts technology at a speed of 0.1× compared with the rest of the world. When the world is moving toward no-code workflows, instant software creation, and autonomous systems, insurers are only now preparing to give GenAI controlled access to production environments.
The gap is not just between tech and customers; it is between tech and talent.
Instead of using technology to free people for higher-value work, we often use people to compensate for the lack of technology. That's where the fear of AI comes from. It's not just, "Will AI replace my work?" It's also, "Have we allowed our roles to become so basic that any decent system could replace them?"
We are already in a world shaped by Web 3.0, emerging platforms and decentralized technologies. Bitcoin's rise is just one signal of how digital value and infrastructure are shifting. On top of this, AI is accelerating innovation at a speed the industry has never seen before.
In this environment, insurers do not have the option to "wait and watch". They will be forced to adopt technology and create products that match how people actually live, work and transact today.
Innovation will not grow linearly; it will grow exponentially with the help of AI.
Automation will not be a luxury; it will be a necessity.
With open-source AI tools, startups can build, iterate and launch at a fraction of the cost and time. This new tech wave can easily create the next 10 major insurance players for the world—born digital, data-native and globally connected from day one.
In the future, most people will have their own AI agent helping them choose the right policies from hundreds of options. Most interactions—advice, onboarding, even parts of claims—could happen through VR or AR environments, especially for complex or high-value risks.
Behind the scenes, risk and portfolio decisions will rely on far more computing power than today, with advanced simulation and optimization. At the same time, connections between insurance and reinsurance will become more streamlined, with better data-sharing, real-time insights, and smarter capital allocation.
Some leaders still believe that sticking to legacy systems and old processes is the safest path. They focus on short-term stability, minimal change and being answerable upwards, rather than looking ahead.
And this creates another silent problem — there is no real plan to make the transition easier for the next generation of leaders. Very few leaders think 10 years ahead. They avoid solving foundational issues like unstructured data, fragmented systems, or outdated architecture. But if today's leaders don't streamline data, modernize infrastructure, and clean the technical debt, how will the next leader build, innovate, or scale?
Without this groundwork, every new initiative becomes a retrofit, every improvement becomes a patch.
On top of that, most organizations don't have a clear plan to upskill employees before introducing new technology. Instead of preparing talent for next-level work, new tools get dropped in suddenly. This creates anxiety, resistance, and the fear of being replaced. A thoughtful, long-term upskilling roadmap not only protects employees — it empowers them to drive the transition and elevate the organization to its next stage.
Others think long term. They understand that the next generation of executives will not just "manage operations" but will be expected to embrace innovation, work with AI and data fluently, and redesign how insurance is delivered.
The organizations that win will be the ones where leaders modernize foundations, upskill their people, and plan beyond their own tenure.
The choice is simple: either leadership shapes the transition, or the transition happens to them.
If we get this right, the future of insurance is not something to fear—it's a future no one will want to miss.
Insurance will work much more globally than it does today. Risks will not only be priced and held locally; they can be pooled globally, with capital, data and exposure flowing more smoothly across borders.
With the help of Web 3.0 and digital identity, we may see unique decentralized IDs created for individuals, businesses and even digital assets. These IDs can carry verified risk information, claims history, behavior patterns and coverage details in a secure, portable way. That means faster underwriting, smarter risk selection and better pricing for those who manage risk well.
For customers, protection becomes something that quietly works in the background—across countries, platforms and channels—instead of a one-time, paperwork-heavy transaction.
At the same time, insurers may rely on an entire army of AI agents to handle day-to-day tasks: answering queries, comparing products, monitoring exposures, flagging anomalies, and triggering workflows. These agents will effectively act on behalf of both the insurer and the customer.
That raises a new question for the industry: we won't just be insuring people and organizations — we will also need to think about how to insure the agents and the risks created by their decisions, errors, or failures.
As more processes are automated and more intelligence is built into systems, something important happens on the human side: we actually get more time and space to think.
More time to rethink products, question assumptions, and work on the complex risk problems that demand human judgment.
AI, automation and advanced computing handle the volume and speed. Humans handle the nuance and direction.
The future will not argue with any of us.
We can continue to debate whether AI will really reach certain capabilities, whether regulators will permit specific models, or whether customers will fully trust automated decisions. Many of these discussions are valid and necessary.
But some trends do not wait for our full intellectual comfort. They advance quietly through small projects, pilot programs and incremental upgrades.
The future does not require large numbers of people to keep legacy processes alive. It requires fewer people doing higher-value work, supported by smarter tools and more connected systems.
Manjunath Krishna is a property and casualty underwriting consultant at Accenture.
He has nearly a decade of experience supporting global underwriters and carriers. He holds CPCU, AU, AINS, and AIS designations.
Satellite technology transforms agricultural insurance, enabling parametric solutions that protect entire supply chains, not just farmers.
Farmers have been managing risks to productivity throughout human history, for example by selecting the most appropriate crop to plant according to the state of soil moisture at the time. This is efficient, dynamic risk management the old-fashioned way.
From the late 19th century, the traditional way of protecting against the risks of perils, including hail, drought, flood, frost, heatwave and windstorm, has been indemnity insurance.
But just as farming techniques have evolved, farmers today benefit from new sources of data and technology, combined with alternative risk transfer options, to better protect their interests. What's more, these alternative solutions allow supply chain partners, from processors and manufacturers to retailers, to protect their particular interest in the primary inputs to the global food and beverage industries.
Traditional crop insurances rely on accurate measurements taken at field or farm level. However, visiting farms and fields, often in remote locations, can prove time-consuming and may not give farmers the payouts they need to recover from losses when they need them.
Also, if a loss event is widespread, affecting many growers at the same time, there may not be enough experienced individuals to carry out the necessary loss evaluation work fast enough.
That's when alternative insurance arrangements, such as parametric solutions, can benefit both farmers and their supply chain partners.
Parametric solutions differ from more traditional, indemnity-based insurance contracts. They don't rely on on-the-ground loss adjustment because there is no need to prove loss, as in indemnity insurance. Instead, the insurance contract provides a payment based on a threshold being met on a pre-agreed scale or index. Such an index may be quite simple, for example the millimeters of rainfall recorded during the growing season or a critical part of it. Indices can also be temperature based: how many hot (or cold) days at prescribed temperatures are recorded.
Parametric insurance also differs from traditional insurance because payments are made automatically when contract terms have been met, without any need to 'claim' in the conventional sense. While the index will have been calibrated to reflect conditions that are likely to have caused a crop loss, the actual condition of the crop and resulting harvest are not considered when the payment is calculated.
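To make the mechanics concrete, here is a toy payout function for a seasonal rainfall index with a trigger and an exit level; the thresholds and limit are illustrative, not drawn from any real contract.

```python
def parametric_payout(rainfall_mm: float, trigger_mm: float = 300.0,
                      exit_mm: float = 150.0, limit: float = 1_000_000.0) -> float:
    """Pay nothing at or above the trigger, the full limit at or below the
    exit level, and a linear share in between (no loss adjuster required)."""
    if rainfall_mm >= trigger_mm:
        return 0.0
    if rainfall_mm <= exit_mm:
        return limit
    return limit * (trigger_mm - rainfall_mm) / (trigger_mm - exit_mm)

print(parametric_payout(225.0))  # halfway between trigger and exit -> 500000.0
```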
Parametric solutions can be applied in varying forms and to address distinct risks that affect the supply chain, including cropping (both annual and perennial) and also livestock, aquaculture and forestry.
The routine availability of remotely observed data from satellite sources removes the need for insurers to visit the location of the insured assets for either risk or loss assessment. Such data sources let insurers measure vegetation health and evidence of burning remotely.
Such data, when combined with parametric insurance arrangements, enables interested parties up and down the food chain to protect their interests. If your business relies, for example, on the successful harvest of coffee in Brazil but you're not the grower of that coffee crop, you can still protect your interest with an appropriately designed parametric contract.
Traditional contracts of insurance are typically regulated so that the policyholder must have an 'insurable interest' and, in the event of a claim against the policy, must show 'proof of loss.' Parametric contracts can operate outside these traditional constraints. This flexibility enables partners across supply chains to achieve a broader range of risk management objectives.
Parametric insurance may sound complicated and sophisticated, but, in practice, almost the reverse may be true. While it may take the careful input of highly skilled experts to construct such products and to ensure they are fit for purpose, for the end user they should be easy to understand with payments, when due, being swiftly settled.
If you're a farmer or would like to explore protecting an agricultural supply chain partner with parametric insurance, your first step would be to assess your supply chains and their vulnerabilities. Geospatial analytical tools, for example, can help you quantify the likelihood and severity of multiple perils across global supply chains.
Julian Roberts is managing director of risk & analytics, alternative risk transfer solutions at WTW.