
First, AI Slop. Now, 'AI Beige.'

AI slop is just weird. "AI beige" is more insidious because it can deceive you into thinking you're smart when you're just being bland. 


Ever since the word "content" began to be used as a generic description of all the video, audio and writing that people like me do, I've not-so-quietly seethed about the leveling that word connotes. Nobody sets out to write the Great American Content. Authors aspire to write the Great American Novel. I don't write Six Things so I can email some "content" to you. I try to provide some perspective, some useful insight. 

"Content" springs to mind because generative AI sure is producing a lot of it, and much of it is as bad as the word suggests. To this point, the concern has mostly been about AI slop--slapdash writing and oddly formed images. But there's another, more insidious type of material that AI is producing: what I think of as "AI beige." 

It's not as clearly off as those pictures where a stray bit of an arm is floating in midair or a hand has six fingers. The problem is that you can easily convince yourself that your AI is generating smart visuals and writing, when it's actually producing a forgettable beige that leaves you at a competitive disadvantage.

I'll explain.

My realization about the danger of AI beige began when my older daughter wrote an article for Quartz about what AI claimed it could do for online dating. She wrote:

"Generative everything — bios, prompts, openers — risks pushing profiles toward a smooth, samey median, making it harder to tell whether you like someone or just their autocomplete. Profile refiners can make dating apps worse by sanding off the idiosyncrasies that signal real, human compatibility....

"What happens when two people send each other messages with a chatbot?

"Do the chatbots fall in love?"

More recently, EY produced a report that explained what it called "the sameness trap." EY wrote about conducting an exercise hundreds of times across the globe, in which people used AI to develop a brand image. Everybody seemed to find the exercise fun and inspiring, and "each team believed it had created something novel, [but] collectively they had created the same thing."

(Image: assorted matcha snack packages, including chocolate bites, bars, and matcha latte bites.)

Imagine doing all the work to go to market with one of those brands and finding the other two on the shelf right next to you. Differentiation is out the window.

AI will often produce results like that, because the models work in the same way, drawing on the same data (having all basically Hoovered up everything on the internet) and trying to develop the same best practices. 

AI can still be plenty useful and help with creativity, but you have to use it right. You won't get a Think Different or Just Do It slogan by asking an AI to home in on a single recommendation, but you might get the start of a very different, innovative sort of brand if you ask the AI to get a bit wild, or even very wild. You'd have to brainstorm from there and let the humans take over, but the AI can help broaden the range of ideas you consider.

EY suggests putting AI at the end of the process. Don't let the AI "speak" first on a topic, because it carries a high-tech cachet that makes it come across as the smartest in the room, and people become reluctant to voice ideas once the oracle has spoken. EY says to frame AI in an adversarial position:

"AI brings the patterns and the data of what has already happened. The human takes that intelligence and forms a position. Then we ask AI to challenge it. Tell us what we’re missing. Generate the counterfactual. The argument we haven’t considered. What would someone who disagreed with us say that isn’t in here?"

In either case, at the front end or the back end, you want to be aware that your competitors are using AI, too, and are probably being steered in the same direction you are. 

It's common for businesses to pay too little attention to what others are doing. Long before AI became a factor, in the 1980s every computer company told me, "We don't sell boxes; we sell solutions." In the 1990s, every startup began its presentation to me with a PowerPoint slide that read, "We have the best people." And so on.

In some parts of the business, insurers don't need to worry about having their AIs produce differentiated results. In communications with customers, for instance, if you come across as concerned and professional, the customer isn't going to pull up your email and compare it with a competitor's on the same topic. So having an AI guide you toward best practices is fine, however beige those practices might be.

But when it comes to branding, sales pitches and strategy, you need to be sure to Think Different.

Just Do It.

Cheers,

Paul

Uninsured Driver Problem Isn't What You Think

Non-standard auto insurers' fee structures may be producing the very uninsured population they're designed to avoid.


One in five. That's roughly how many drivers in states like Florida get behind the wheel without insurance, according to the Insurance Research Council's most recent data. The standard explanation is economic: Coverage often costs too much, so some people go without. The policy response follows: steeper penalties, higher surcharges for lapsed drivers trying to come back.

The diagnosis is not wrong, exactly. But it is incomplete in one critical respect: It treats the uninsured rate as something that happens to the insurance industry, rather than something the insurance industry has, in meaningful part, produced. I'd argue that a clear look at how non-standard auto products are designed in Florida suggests the latter, and that the implication for those of us who build these products is more uncomfortable than the industry has typically been willing to acknowledge.

The Fee Cascade

Picture a driver who has been paying premiums faithfully for months. Then one paycheck comes up short. One missed installment. What happens next isn't bad luck; it's a sequence that was designed. Many non-standard carriers respond to a missed payment by assessing a Late Payment Fee. That fee gets added to the arrears, inflating what's already owed. If the swollen balance tips the driver over the edge, the policy cancels. Then comes the Reinstatement Fee. Now the driver is staring down up to four compounding obligations at once: the original missed amount, the late fee, the reinstatement fee, and potentially a catch-up payment to get back in good standing.
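The arithmetic of that cascade is easy to sketch. A minimal illustration in Python, with every dollar amount hypothetical rather than drawn from any carrier's actual fee schedule:

```python
# Sketch of the fee cascade described above. Every dollar amount here is
# hypothetical; actual fee schedules vary by carrier and state.

def cascade_total(missed_installment, late_fee=15.0,
                  reinstatement_fee=25.0, catch_up=0.0):
    """Total a lapsed driver must clear to get back in good standing:
    the missed amount, plus a late fee, plus a reinstatement fee,
    plus any catch-up installment that has since come due."""
    return missed_installment + late_fee + reinstatement_fee + catch_up

# A $120 missed installment balloons to $280 once fees and the next
# installment stack on top of it.
print(cascade_total(120.0, catch_up=120.0))  # 280.0
```

The point of the sketch is simply that each fee is levied at the moment the household is least able to pay, so the total owed grows fastest exactly when ability to pay is lowest.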

For a household running on variable income, that cascade is often the breaking point. Not a choice. Not a misunderstanding of consequences. The product made recovery too expensive at exactly the moment financial strain was most acute. This isn't an edge case. It's the mechanism by which the non-standard market, in aggregate, produces and sustains a meaningful share of the uninsured population.

The Price of Re-Entry

The compounding doesn't end at cancellation. When a lapsed driver's financial position stabilizes and they try to get back on the road legally, the industry often greets them with a surcharge. The lapse, the very outcome the fee structure helped produce, is now a rating factor. Re-entry premiums are higher than they were before cancellation. Down payments may be steeper. Carriers often treat the interrupted tenure as a non-payment risk signal, so the customer who couldn't clear a compounded reinstatement balance may now face a bigger first-payment obligation than they would have had they never lapsed at all.

The cycle sustains itself. Fee structures, reinstatement terms, and rating factors are deliberate product choices, not features that emerged without anyone's involvement. The uninsured rate is, among other things, a record of their cumulative effects.

A Different Product Design

When we built Clearcover's non-standard product in Florida, we started from a different premise: The fee cascade isn't an inevitable cost of serving a financially volatile segment. It's a design choice, and design choices can be remade.

We replaced the typical compounding structure with a single, knowable charge that doesn't grow during periods of financial strain. Paired with payment flexibility built around the income variability that defines much of the non-standard segment, the goal is straightforward: design products for the reality of how customers in this market actually manage money, and price the risk accordingly.

We're not arguing this is the only way to design a non-standard product. We're just saying it's a way worth trying, and that the early signal is promising enough to invite the broader segment to keep experimenting too.

The Honest Reckoning

Product design isn't the only reason drivers go uninsured. But honest reckoning requires acknowledging that the industry's fee structures and rating rules have not been neutral. They have worked, systematically, to make re-entry harder for the drivers most likely to lapse, compounding financial strain in a population that had already demonstrated it was operating at the margin. That's not an accident. It's a policy choice, and it has consequences that show up in uninsured rate data every year.

The philosophical shift the moment calls for isn't complicated, even if the execution is. As an industry, we need to stop designing products that treat a missed payment as a fee opportunity and start building them for the reality of how customers in this segment manage money. The uninsured driver problem isn't a compliance problem to be resolved through enforcement. It is the predictable output of product decisions this industry has made, and we have the capacity to remake those decisions intentionally.


Seth Henderson


Seth Henderson serves as the senior vice president of insurance product and growth at Clearcover.

Prior to Clearcover, Henderson held key roles at The Hartford and GEICO, where he contributed to the development and refinement of rating programs across both auto and home lines of business.

He holds a bachelor’s degree in history from Kennesaw State University.

 

Platform Modernization in Insurance: Why Now Is the Time to Accelerate

AI is transforming the way platforms are built. Open integration, flexible data structures, and meeting partners where they are will define the next market leaders.


Consider agriculture. It is one of the oldest industries in human history, and among the last you might expect artificial intelligence (AI) to meaningfully reshape. Yet precision agriculture is doing exactly that. Satellite imagery, soil sensors, weather models and other tools are being integrated and synthesized by a new generation of AI models to guide planting decisions, predict yield variability, and optimize irrigation at the individual acre level. Closer to home, crop insurance underwriting, once driven almost entirely by historical loss tables and weather averages, is being rewritten around real-time field data that only machine learning models can interpret at scale. An industry defined by tradition and seasonality is being transformed by technology faster than some financial services firms have updated their customer portals.

The insurance industry is at a similar turning point. For years, insurers have orbited platform modernization, making small improvements and then pulling back due to operational risks. Legacy systems have kept organizations in a holding pattern: stable enough to operate, but less agile in adapting to the pace the market now demands.

That dynamic is shifting. AI is fundamentally transforming the way insurance platforms are built and run, turning modernization from a long-term goal into an immediate strategic priority. This is no longer only about small efficiency gains. Platform modernization now takes center stage in competitiveness, partnerships and making better decisions at scale.

Why legacy platforms keep insurers grounded

Many insurers operate within monolithic core systems that integrate policy administration, billing, claims, underwriting and reporting within a tightly coupled environment. Often customized over decades, these systems are deeply embedded in daily operations. As a result, modernization can feel less like a technology upgrade and more like open-heart surgery.

The limitation is not age but adaptability, and at a more fundamental level, the design philosophy of what a core transaction system should be. Legacy platforms were not architected to be open. They are walled gardens with narrow access, mostly through user interfaces, built to control entire workflows and departments within a single environment. This philosophy benefits software vendors but limits an insurer’s ability to customize, adapt and integrate AI capabilities. The issues go deeper than closed systems: Many use data models that evolved haphazardly over time, which hinders external integration, limits automation, and makes large-scale changes slower and more costly than organizations would like.

This creates a frustrating paradox. To leverage AI-assisted development or intelligent automation, insurers must first invest in foundational data cleanup and restructuring. These efforts are costly, time-consuming and out of sync with the pace of innovation today. For technology leaders, the question is no longer whether to modernize, but how to sequence it without destabilizing the business.

The data mindset that determines success

Modern, open systems help deliver faster underwriting, improved claims outcomes, sharper risk selection and scalable automation. However, these outcomes depend heavily on the quality of the underlying data, which, for many insurers, is the main limitation.

Specialty insurers working with diverse distribution networks across many lines of business encounter partners spanning a wide spectrum of technical maturity. From small, focused underwriters with spreadsheet-based toolsets to large organizations with dedicated engineering teams, each engagement brings its own data structures, conventions and integration requirements. The challenge is not only ingesting that data, but normalizing and validating it to support actuarial analysis, financial reporting and program oversight across a complex book of business.

When data foundations are weak, the consequences appear across everyday operations:

  • Program onboarding processes stall because agents and brokers cannot quickly answer questions that existing data should already resolve.
  • Claims adjudication is fragmented, with processes and details scattered across systems and inaccessible to all stakeholders in real time.
  • Bordereau files remain the standard, with limited adoption of modern data integration methods such as APIs, leaving validation manual and error-prone.
  • Reporting remains rigid, depending on static PDFs and IT assistance for even minor updates.

These are not merely edge cases; they are the natural result of platforms built before today’s data and integration requirements fully took shape.

Forward-thinking insurers are already addressing these issues by validating data earlier in the submission flow, streamlining ingestion pipelines, and offering program-level analytics that improve transparency for distribution partners. The ability to exchange accurate, timely data is becoming a meaningful competitive differentiator.
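To make "validating data earlier in the submission flow" concrete, here is a minimal sketch of row-level bordereau validation at intake. The column names and rules are illustrative assumptions, not any carrier's actual schema:

```python
# Minimal sketch of validating bordereau rows at intake rather than days
# into the processing cycle. Column names and rules are hypothetical.
import csv
import io

REQUIRED = ["policy_number", "effective_date", "premium"]

def validate_row(row):
    """Return a list of problems with one bordereau row (empty if clean)."""
    errors = [f"missing {field}" for field in REQUIRED if not row.get(field)]
    try:
        if float(row.get("premium", "")) < 0:
            errors.append("negative premium")
    except ValueError:
        errors.append("premium not numeric")
    return errors

def ingest(raw_csv):
    """Split rows into accepted and rejected, keeping errors for feedback."""
    accepted, rejected = [], []
    for line_no, row in enumerate(csv.DictReader(io.StringIO(raw_csv)), start=2):
        errors = validate_row(row)
        (rejected if errors else accepted).append((line_no, row, errors))
    return accepted, rejected

sample = ("policy_number,effective_date,premium\n"
          "P-1001,2025-01-01,150.00\n"
          ",2025-01-01,abc\n")
accepted, rejected = ingest(sample)
print(len(accepted), len(rejected))  # 1 1
```

The design point is that the rejected rows come back with specific, per-row errors the submitting partner can act on immediately, instead of a whole file bouncing days later.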

Knowing where, and where not, to apply AI

One of the most consequential decisions technology leaders face during modernization is not which AI tools to adopt, but where to deploy them. AI delivers outsized returns in specific contexts and introduces risk when applied in the wrong ones.

The highest-value, lowest-risk applications tend to cluster around workflows and customer interactions: automating bordereau validation, surfacing claims anomalies, generating underwriting summaries, accelerating document review, or guiding agents through submission requirements. These are areas where AI augments human judgment, reduces friction, and operates alongside existing systems without requiring those systems to change.

Replacing core transaction systems is a different conversation. Policy administration, billing, and claims settlement involve regulatory compliance, audit trails and financial integrity requirements that demand extreme care. Applying AI directly to these systems, without strong data governance and testing frameworks, introduces risk that often outweighs the short-term gain. The better path is typically to modernize the underlying architecture first, then build AI capabilities on a stable foundation.

Organizations that conflate “apply AI everywhere” with a modernization strategy often find themselves with sophisticated models sitting on unreliable data, or automated workflows breaking at the points where legacy systems assert themselves. Discipline about where AI creates value, and where foundational work must come first, is what separates effective transformation from expensive experimentation.

How AI changes the modernization equation

AI is not only speeding up platform modernization in insurance; it is transforming how it occurs. In the past, transformation has often been seen as a large-scale, multi-year project to replace core systems. For platforms handling high transaction volumes, the cost, complexity and operational risk of this “big bang” method often outweighed the advantages.

AI shifts that calculation in two distinct but complementary ways: how new applications and tools are built and deployed, and how AI is embedded directly into workflows to support and automate decisions. These are not the same thing, and conflating them leads to poorly sequenced investments.

AI development tools: Building and deploying faster

The first wave of AI impact for most technology organizations is on the build side: using AI-assisted development tools to compress the time it takes to design, build, test and ship new internal applications. Tools that generate code, write tests, scaffold architectures and accelerate documentation review are not marginal productivity improvements. They are changing what a small team of engineers can deliver in a quarter.

For insurers, this means that internal tools, which previously required months or years of development, in addition to a vendor and system integrator relationship, can now be prototyped in weeks by a small internal team: a partner portal that consolidates program reporting, a claims intake tool that pre-populates fields from submitted documents, and a bordereau ingestion utility that catches data errors at intake rather than surfacing them days into the processing cycle. These applications do not require replacing the core system; they sit alongside it, connect via APIs, and deliver immediate operational value, if the core system supports it.

Technology teams that embrace AI development tooling can reclaim capabilities that have historically required large vendor programs or costly system integrators. They can move faster, iterate based on user feedback, and build institutional knowledge rather than external dependency. The organizations deploying these tools today are already compressing timelines that once seemed fixed.

Embedding AI in workflows: decisions at scale

The second wave is more fundamental: embedding AI directly into operational workflows to improve and automate the decisions that drive the business. This is where the economic case for modernization becomes clearest, and where the data foundation matters most.

Workflow-embedded AI is not a tool a user opens and closes. It is judgment built into the process itself:

  • An underwriting workflow that scores submission quality before a human reviews it;
  • A claims triage model that routes cases by complexity and coverage signals in real time; and
  • A renewal pricing engine that incorporates loss history, external data, and portfolio exposure without requiring manual assembly.

These are structural changes to how decisions get made, not incremental improvements to existing processes.
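As a rough sketch of what one such embedded decision looks like in practice, here is a hypothetical claims-triage rule. The fields, weights, thresholds, and queue names are all illustrative assumptions, not a production model:

```python
# Hypothetical sketch of a triage rule embedded in a claims workflow:
# score complexity from a few signals, then route the claim before a
# human touches the file. Fields, weights, and queues are illustrative.

def triage(claim):
    """Route a claim to a queue based on a simple complexity score."""
    score = 0
    if claim.get("attorney_involved"):
        score += 2
    if claim.get("injury_severity", 0) >= 3:
        score += 2
    if claim.get("coverage_disputed"):
        score += 1
    if score >= 3:
        return "senior_adjuster"   # complex: experienced human first
    if score >= 1:
        return "standard_queue"    # routine human review
    return "straight_through"      # low complexity: auto-process

print(triage({"injury_severity": 1}))                            # straight_through
print(triage({"attorney_involved": True, "injury_severity": 4})) # senior_adjuster
```

In a real deployment the hand-coded weights would be a learned model, but the structural point is the same: the routing decision happens inside the workflow, not in a tool someone remembers to open. And as the surrounding text notes, a model like this is only as reliable as the coding of the fields it reads.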

 The distinction between these two modes matters for sequencing. AI development tools can deliver value relatively quickly, even in environments with imperfect data, because they accelerate human work rather than depend on it. Workflow-embedded AI, by contrast, is only as reliable as the data it operates on. A claims-routing model built on incomplete or inconsistently coded data will produce inconsistent decisions. Getting the data foundation right is a prerequisite for this second wave, not a parallel workstream.

Together, these shifts fundamentally change the economics of modernization, lowering barriers to entry and expanding what is possible for more organizations.

Choosing the right retirement strategy for legacy systems

How an organization exits its legacy systems matters as much as what it builds next. The right strategy depends on transaction volume, regulatory complexity, partner dependencies and appetite for operational risk. Three patterns emerge repeatedly in practice.

The strangler pattern

Rather than replacing a legacy system wholesale, new functionality is built alongside it. The modern system gradually takes over individual capabilities (a microservice here, an API layer there) until the legacy platform is functionally surrounded and can be decommissioned without a disruptive cutover. This approach minimizes operational risk and is particularly effective for large, tightly coupled systems where a full replacement is not feasible.
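One way to picture the strangler pattern is as a thin routing layer in front of both systems. This sketch is illustrative only; the service URLs and capability names are hypothetical:

```python
# Sketch of the strangler pattern as a thin routing layer: per capability,
# decide whether the modern system or the legacy system serves a request.
# Service URLs and capability names here are illustrative assumptions.

LEGACY = "https://legacy.example.internal/api"
MODERN = "https://modern.example.internal/api"

# Capabilities rebuilt so far. This set grows release by release until
# the legacy platform is functionally surrounded and can be retired.
MIGRATED = {"document-generation", "claims-intake"}

def route(capability):
    """Return the base URL that should serve this capability."""
    return MODERN if capability in MIGRATED else LEGACY

print(route("claims-intake"))  # modern service handles it
print(route("billing"))        # still falls through to legacy
```

Each migration is then a one-line change to the routing table rather than a big-bang cutover, which is what makes the pattern low-risk.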

Microservicing and modular decomposition

Some organizations carve specific domains out of a monolithic system and rebuild them as independent, API-driven services, such as claims intake, document generation, or rating, while leaving the core transaction engine intact for now. This creates optionality: Each domain can evolve independently, integrations become cleaner, and the organization builds modern engineering capability without betting the business on a single transformation program.

Sunsetting and runoff

For legacy systems supporting books of business with relatively short policy periods, managed wind-down is often the most pragmatic answer. New business moves to the modern platform immediately; the legacy system is maintained, but not invested in, for the life of the in-force policies. This approach is less visible than transformation but is frequently the most cost-effective and operationally sound path for systems that are not worth rebuilding around.

A mature modernization strategy typically combines elements of all three: strangling core transaction systems, decomposing specific domains into services, and sunsetting legacy platforms that no longer justify investment. Recognizing which pattern applies where is itself a strategic discipline.

The right conditions for change

Since the insurance ecosystem will never be entirely uniform, achieving complete alignment across platforms or data models is neither practical nor essential.

What is achievable is better data exchange. More interactive, near-real-time data integration can deliver measurable value without requiring a complete system overhaul. Progress depends as much on collaboration as on technology, emphasizing the need for open, practical discussions about current data flows and how they can be enhanced for the future.

Ultimately, success will not be measured by who creates the most advanced platform, but by who develops the most adaptable one. Open integration, flexible data structures, and the ability to meet partners where they are will define the next wave of market leaders. The industry has spent years addressing this challenge. With the right tools, patterns, and organizational discipline now in place, the conditions for meaningful change are finally within reach.

About the author

Joe Lettween is Chief Innovation, Data Science, and Technology Officer for global specialty insurer Fortegra.

 

Sponsored by: Fortegra


Fortegra


An industry leader for more than 45 years, we help businesses and individuals manage risk by creating and delivering reliable insurance and risk management solutions. Learn more about who we are.  

May 2026 ITL FOCUS: Workers' Comp

ITL FOCUS is a monthly initiative featuring topics related to innovation in risk management and insurance.


FROM THE EDITOR

Workers' compensation has always been a line of business defined by complexity — rising medical costs, shifting workforce dynamics, mounting litigation, and an ever-changing regulatory landscape. But a new force is reshaping how carriers approach every piece of that puzzle: generative AI.

For many insurers, especially state-affiliated funds shifting to mutual models, the pressure to grow and differentiate has never been greater. The old playbook — focused, single-state, single-line — is no longer enough. Carriers are sitting on significant capital while their core books contract, and the question on everyone's mind is: what's next?

This month, we explore that question through a conversation with Tirath Desai, PwC's insurance core transformation and AI lead, about where GenAI is already delivering real advantage — and where the road ahead still requires careful navigation.

From reimagining the claims experience for injured workers, to streamlining fragmented payment processes, to using AI-powered visual data to prevent accidents, Desai lays out a vision of workers' comp that is faster, smarter, and — crucially — more human-centered. He also tackles the ecosystem question head-on: No carrier can build everything alone, and the winners will be those who know where to invest and where to collaborate.

Whether your organization is just beginning to explore AI or looking to move beyond isolated pilots, Desai's advice is clear: think bigger, build governance first, and get your data house in order. Read the full interview to find out how to position your organization for what's next.

 
 
An Interview

GenAI Reshapes Workers' Comp

Paul Carroll

GenAI is reshaping insurance. Let’s start there—what’s changing in workers’ compensation?

Tirath Desai

It’s becoming a central conversation. Carriers are asking a fundamental question: what’s next? Many are coming out of a soft market and rethinking growth. Workers’ compensation insurers across the globe continue to navigate common issues related to the changing nature of work, rising medical costs, changing workforce, increasing litigation and regulatory changes.
 
That’s especially true for state-affiliated funds transitioning into mutual models. Historically, they’ve been focused—single state, single line. Now growth is harder to find. That creates pressure.
 
Besides competition, there is a need for expanded capabilities. Differentiation in a crowded market. So, the questions shift. How do we grow? Where do we collaborate? What makes us stand out? AI is at the center of that discussion. Not the only answer—but a critical one.

read the full interview >
 

MORE ON WORKERS' COMP

AI Transforms Workers Comp for Brokers

by Adam Price

AI enables overwhelmed workers' comp brokers to shift from transactional quoting to strategic risk advisory relationships that employers increasingly demand.

Read More

 

Gig Workers Reshape Insurance Market

by Michael Giusti

As gig workers untether from employer-sponsored benefits, insurers must reimagine underwriting and distribution for a decentralized workforce.

Read More

 

The Future of Workers’ Comp

by James Benham

Workers' compensation systems need cloud-native transformation to address modern workforce challenges and rising claim severity.

Read More

 


Uncovering Hidden Fraud Networks

by Marty Ellingsworth, Jay Mullen

Sophisticated fraud thrives in fragmented data. Entity resolution, knowledge graphs, and geospatial analytics can unite disparate records and expose hidden networks.

Read More

 

Strategies to Fight Workers' Comp Fraud

by Roberta Mercado

Advanced AI and predictive fraud models transform workers' compensation fraud detection from costly burden into a strategic risk management advantage.
Read More
 

What Medical Inflation Means for Workers’ Comp

by Pragatee Dhakal

Healthcare inflation surges past general price trends, pressuring P&C carriers to adopt data-driven claims strategies.
Read More
 
 
 

MORE FROM OUR SPONSOR

Reimagining Workers' Compensation in the Age of Generative AI

Sponsored by PwC

While workers' comp has seen improved performance over the past decade, the sector faces mounting pressures—from medical cost inflation and rising mental health claims to litigation exposure and evolving workplace dynamics. This paper from PwC and Guidewire examines how GenAI, one of the fastest-adopted technologies in history, can help insurers navigate these challenges.
Read More

Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Underwriting Fundamentals Are Key Before AI

Insurers rushing to adopt AI risk missing a crucial step: building the underwriting fundamentals that make technology effective.


Everyone is talking about AI, automation, and how fast insurance needs to move. Those are important conversations to have. In my role, none of that matters if the fundamentals are not in place.

That was one of the central themes in Send's recent INFUSE webinar, "Getting the Foundations Right: Building an Underwriting Engine for 2026," in which I participated. The discussion brought together perspectives across technology, consulting, and underwriting, from me, Matt Carter from Altus Consulting and Daryn Upil of The Hartford. What stood out most was how often we came back to the same point: If you want underwriting to be faster, smarter, and more scalable, you must get the foundation right first.

That foundation is not just about technology. It is about underwriting discipline, clarity of decision-making, trustworthy data, and making sure the organization is focused on the things that create value.

The market is changing, and underwriting must respond

Underwriting has always evolved alongside the market, but the pace of change now feels different. In the U.S., we are dealing with increasingly complex catastrophe exposures and continuing legal system abuse. Across markets, customer expectations are rising, data is expanding, and the pressure to make better decisions faster continues to grow. That creates real opportunity, but it also creates risk.

It is easy to get caught up in the promise of new tools. Every insurer is hearing about what AI can do, what automation can unlock, and how quickly operations can be transformed. But underwriting is not improved by technology alone. It improves when technology strengthens sound decision-making.

That, to me, is the key distinction. We should absolutely be looking at how to use AI, data, and automation to build an underwriting engine. But the engine only works when it is built on a solid underwriting foundation.

Technology can accelerate decisions, but it cannot replace underwriting fundamentals

Great underwriting starts with the fundamentals. Do you have clear underwriting rules? Do you understand your appetite? Do your people know what good business looks like? Can you make consistent decisions and explain why they were made?

If those things are not in place, adding more technology does not solve the problem. It just speeds up the wrong process.

That is why the conversation around modernization should begin with the underlying operating model, not just the tools. We need to ask whether our processes are designed the right way, whether the data is useful, and whether the outputs we are generating are trusted by the people making decisions every day.

There is a lot of noise and urgency in the market right now. Leadership teams are asked every day how they are using AI and how quickly they can implement it. We need to remember that underwriting is a balance of art and science; the balance can shift back and forth, but the art will never go away. We need to stay focused on what matters most to the business and build from there.

Legacy systems are not just a technology problem

Legacy systems remain a challenge for many carriers, both operationally and from a people perspective.

A lot of insurance organizations, especially long-established carriers, have systems that have been around for decades. In many cases, those systems have done exactly what they were designed to do. But the industry is now asking more from them: more data, more insight, more integration, and more flexibility in how we deliver underwriting. This is where the strain starts to show.

We also can't ignore the talent piece. When new people enter the industry, they are used to modern technology in almost every part of their lives. If they join a company and immediately work on outdated systems that feel disconnected from how they expect technology to function, it creates friction from day one. Modernization is not just about efficiency. It is also about creating an environment where talented people can do their best work.

Data should support decisions, not create distractions

Data came up repeatedly during the webinar, and better data is one of the biggest unlocks for underwriting.

The goal is not to collect as much data as possible. The goal is to have the right data to support better decisions.

As underwriters, we have more technical information at our fingertips than ever before. We can find out what a building is made of, when it was built, whether it is in a hail zone, or whether it sits in a higher-crime area. Those, and other risk indicators that may apply, are incredibly valuable. They help us work faster and with greater precision.

We can get all the technical data and risk indicators about a property and still not know enough about the person or business behind it. You may not know how seriously that business owner takes safety. You may not know the quality of their management practices. You may not know how they operate day to day. Those things still matter. They are often what separates an acceptable risk from a great one.

That is why I don't believe technology will replace underwriters. I see it changing where they spend their time. The more we can automate routine tasks and surface technical data quickly, the more valuable underwriters become in the areas where judgment, conversation, and commercial understanding matter most.

The future of underwriting is still human

There is understandable concern in the market about what AI means for the underwriting profession. My view is that the role is not disappearing; it is evolving.

The science side of underwriting is going to become stronger, faster, and more accessible. We will have better tools, broader data sources, and more intelligent workflows helping us evaluate risk.

But underwriting is still a business of judgment. It still requires negotiation, relationship management, pattern recognition, and the ability to see beyond what is immediately visible in the data.

The human element is not going away. As the technical aspects of underwriting become more automated, the softer skills will become even more important. Underwriters will need to ask better questions, challenge assumptions, interpret signals, and make thoughtful decisions in situations where there is no perfect answer. That is not something you can simply hand over to a model.

Leadership has to create focus

One of the questions raised during the webinar was how leaders make time to understand the real problem when there is so much pressure to move quickly. I think the answer comes back to focus.

Every leadership team today has more opportunities than they can pursue at one time, so they need to prioritize and decide what matters most to give comfort and confidence to their teams.

For some, the differentiator could be service; for others, underwriting expertise, product design, or distribution. Technology should help strengthen those advantages, not distract from them.

Leadership should always encourage innovation but must keep the organization aligned around the right kind of innovation.

Foundations create flexibility

My main takeaway from the webinar is that building an underwriting engine for 2026 and beyond starts with getting the foundations right. Without them, technology will just add complexity. This is an exciting time to be in the industry, and we all need to stay focused, prioritize, and bring people along on the journey.

GenAI Reshapes Workers' Comp

GenAI is transforming workers' compensation strategy as insurers navigate rising costs, market pressures, and demands for differentiation.

An Interview with Tirath Desai

Paul Carroll

GenAI is reshaping insurance. Let’s start there—what’s changing in workers’ compensation?

Tirath Desai

It’s becoming a central conversation. Carriers are asking a fundamental question: what’s next? Many are coming out of a soft market and rethinking growth. Workers’ compensation insurers across the globe continue to navigate common issues: the changing nature of work, rising medical costs, a shifting workforce, increasing litigation, and regulatory change.

That’s especially true for state-affiliated funds transitioning into mutual models. Historically, they’ve been focused—single state, single line. Now growth is harder to find. That creates pressure. 

Besides competition, there is a need for expanded capabilities. Differentiation in a crowded market. So, the questions shift. How do we grow? Where do we collaborate? What makes us stand out? AI is at the center of that discussion. Not the only answer—but a critical one.

Paul Carroll

Workers’ comp has long relied on predict-and-prevent strategies. Now we’re seeing new pressures—medical costs, social inflation. What’s changing?

Tirath Desai

Pressure is built on multiple fronts. Costs are rising. Risk is harder to manage, and expectations are shifting. Many carriers have operated within defined regulatory frameworks for years. Now they’re expanding—into larger risks, more complex products, newer distribution models. They’re asking practical questions. Can we improve fraud detection? Strengthen medical management? Deliver a better experience? Reach new channels? 

At the same time, many are holding significant capital while their core book contracts. That tension—capital available; growth constrained—is driving urgency.

Paul Carroll

GenAI clearly improves efficiency. But where does it create real advantage beyond cost?

Tirath Desai

It starts with better decisions. Stronger underwriting. Earlier fraud detection. Faster, more consistent claims handling. Take claims processing. Today, it’s still heavily manual. Notes, documentation, back-and-forth across multiple parties. It slows everything down. AI changes that. It can extract and synthesize information in real time. Build a clearer view of a claimant’s history. Support faster, more informed decisions. 

Payments are another example. Complex. Fragmented. Often difficult to track. With the proper technology, you can streamline that process end-to-end. Fewer delays. More visibility. So yes—efficiency improves. But the bigger shift is quality. Better outcomes. Better experiences.

Paul Carroll

You’ve spoken about a more worker-centric model. What makes that a shift?

Tirath Desai

Today’s experience isn’t built around the worker. Start with reporting an injury claim. Awareness isn’t always there. The process can feel unclear, slow, and disconnected. Now imagine something different. A digital entry point where a worker can report an incident, check eligibility, upload information, and track claim status and payments—all in one centralized location. That data flows directly into core systems. It’s confirmed, summarized, and ready to act on. Compare that to today. Phone calls. Manual entry. Multiple handoffs. Delays at every step.

We can help remove a lot of that friction. And we can go further. Real-time guidance. Instant answers to simple questions. Support without always needing human intervention. That’s a meaningful shift—for both the worker and the carrier.

Paul Carroll

What happens when you improve responsiveness for injured workers?

Tirath Desai

You can reduce friction. And that matters. Delays and poor communication often cause dissatisfaction. Dissatisfaction can lead to disputes. And disputes can escalate to litigation. More responsive, more transparent interactions help change that dynamic. Now, AI isn’t a holistic solution. It still requires oversight. Judgment. Human involvement still matters. But it can remove many of the pain points in the process.

Faster responses. Clearer communication. More consistent experiences. That’s where the real value shows up.

Paul Carroll

Can AI help prevent accidents?

Tirath Desai

There’s potential—but it’s nuanced. Workplace monitoring isn’t new. What’s changing is how data is captured and used. Some approaches rely on wearable devices. Adoption can be a challenge. Over time, employees may resist if it feels intrusive. Other approaches are less invasive. For example, using existing visual data—images or video—to help identify risks. Detect unsafe conditions. Trigger alerts before an incident occurs. That’s promising. But the results are still evolving. 

Many organizations are still working to define the return on investment. So, the opportunity is real. But it requires balance—between insight and trust.

Paul Carroll

Does GenAI accelerate collaborations and ecosystems?

Tirath Desai

Absolutely. No carrier can—or should—build everything alone. The pace of change is too fast. We’re seeing more ecosystem-driven models. Carriers combining internal capabilities with external innovation. Selecting targeted solutions where they can add greater value.

For example, some organizations are building their own AI capabilities. But in areas like litigation support or document processing, they may choose to integrate external solutions instead. It’s about focus. Invest where it differentiates you. Collaborate where it accelerates you. That’s how you can scale effectively.

Paul Carroll

What’s your advice for carriers getting started with AI?

Tirath Desai

Start broader. Not smaller. Many organizations began with isolated use cases. That made sense early on. Now it’s time to step back. Ask a bigger question: How does AI fit across the value chain and a holistic lifecycle—underwriting, claims, billing? Then build from there. 

Three priorities stand out. First, governance. Clear frameworks. Responsible use. Defined accountability. Second, technology. Flexible platforms that can evolve. Integrate new tools. Adapt quickly. Third, data. This is often the hardest part. Many organizations still lack a unified view of their data. Without that, progress slows.

There’s a real opportunity here. But you don’t need to do everything at once. The focus should be clear. Build a road map. Move with intent. Position your organization for what’s next.

Paul Carroll

Thanks.

About Tirath Desai


Tirath Desai is a seasoned leader in the insurance technology space, with deep expertise in insurance core platforms, digital solutions, and large-scale transformation programs. As PwC’s Insurance Core Transformation and Digital Leader, he partners with carriers to modernize policy, billing, and claims operations, enhance agent distribution, and implement innovative cloud and AI-driven solutions. 

With over two decades of consulting experience, Tirath has led numerous large-scale transformations, particularly within workers’ compensation and commercial lines. His work emphasizes strong IT service management practices to drive service reliability, governance, and continual improvement across the enterprise. His track record includes delivering end-to-end transformation strategies that generate measurable business value, accelerate speed-to-market, and improve operational efficiency and customer experience. Tirath is particularly focused on integrating artificial intelligence into operational frameworks—leveraging predictive analytics, intelligent automation, and machine learning to optimize claims management, enhance decision-making, and proactively manage risk in workers’ compensation. By combining structured service management methodologies with AI innovation, he helps insurers build resilient, scalable, and future-ready operating models. 


Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Mobility Data Transforms Auto Insurance Territorial Pricing

As driving patterns outpace traditional claims data, mobility data enables auto insurers to price territorial risk more accurately.

A sleek white car speeding down an urban road.

Why do drivers in Louisiana pay an average of $4,180 annually for full-coverage car insurance while Vermont drivers pay only $1,504? The answer is simple: territorial ratemaking.

Traditionally, auto insurers have used a policyholder's geographic location as a core input in determining premiums. Variables like historical claims losses, traffic density, and weather patterns are used to estimate the risk profile of a given territory, which, in turn, determines pricing.

However, driving patterns now shift faster and vary more locally than the traditional signals used in pricing decisions. Many of the data sources used for territorial ratemaking update too slowly to spot emerging risk shifts and enable timely corrective rate action. At the same time, auto insurers often miss meaningful variations in driving behavior at the ZIP code level due to limited claims information.

In other words, the importance of territory hasn't changed, but the nature of the risk it's meant to represent has.

To more confidently model risk and set accurate rates, auto insurers need a current, granular view of how people in specific ZIP codes actually drive today — not how they drove months or years ago.

Why traditional data alone can't fully reflect today's driving risk

Auto insurers rely heavily on historical claims and loss data to assess territorial risk, but this data is inherently backward-looking and often takes months or years to reflect changes in driving behavior.

This lag is problematic due to the fluid nature of driving patterns. For example, Arity research found that after rising 30% from 2019 to 2023, overall rates of distracted driving declined in 2024 and early 2025.

Driving behavior also varies significantly among ZIP codes within the same state, or even the same county. Consider a residential neighborhood versus a busy commercial area. While the residential area may have steady, low-volume traffic, the commercial area may be a hot spot for stop-and-go driving.

When analyzed at the ZIP code level, claims data alone is often too sparse to produce statistically credible insights. As a result, auto insurers may fail to detect localized differences, grouping drivers from the same territory into a single risk profile and potentially overcharging safer customers.

The issue isn't territorial ratemaking itself, but rather the limitations of the data used to inform it. With greater access to driving behavior signals, auto insurers can capture dimensions of risk that many traditional ratemaking factors weren't designed to observe at a territorial level.

How mobility data can transform territorial ratemaking

As driving behavior continues to shift across geographies, auto insurers can't rely on static historical data alone — and fortunately, they don't have to.

With mobility data, insurers can use driving behavior signals like braking, speeding, phone distraction, and time-of-day exposure mapped to specific ZIP codes to enhance territorial pricing strategies.

For actuarial and pricing leaders, this shift does more than introduce a new rating factor. It helps close the visibility gap between how risk is priced and how people are actually driving today.

  1. Strengthen data credibility in low-volume areas

    Because claims are relatively infrequent events, data at the ZIP code level is often too sparse to be statistically credible. Likewise, commonly used third-party proxies, like surveys or census data, are updated infrequently and may not reflect the most current driving conditions.

    These blind spots affect model accuracy, along with file-and-use confidence, competitive pricing decisions, and how defensible a carrier's territorial assumptions are to regulators.

    In contrast, mobility data enables auto insurers to identify local changes in risk before they aggregate to state-level loss trends. This can help supplement sparse loss experience, especially for regional carriers with more limited data.

    By incorporating a regularly refreshed dataset that captures current driving patterns mapped to ZIP codes, auto insurers can identify misalignment with historical territorial assumptions and build a more accurate view of risk.
     
  2. Increase pricing precision at the local level

    Driving behavior is becoming increasingly variable across ZIP codes within the same state or rating territory. Consider developments like return-to-office mandates that affect roadway usage and reshape how, when, and where people drive.

    When auto insurers rely exclusively on inputs like third-party data and claims and loss ratios, pricing decisions may not accurately reflect current risk trends. In contrast, mobility data offers context on how driving behavior is evolving, providing an additional layer that helps validate whether similarly priced territories actually share similar risk profiles.

    With ZIP codes serving as a practical and familiar linking key, auto insurers can integrate these insights into existing models and workflows, making it easier to adjust segmentation as needed.
     
  3. Identify emerging risks to improve rate responsiveness

    The use of historical claims data to assess risk introduces a time lag, since changes in driving behavior often take a year or more to appear in loss experience. This delay limits auto insurers' ability to respond in step with evolving driving behavior, leaving them to react after the fact.

    Mobility data supports more proactive decision-making by capturing risk shifts as they develop. Because driving behavior is continuously observed and regularly refreshed, it can serve as an early indicator of emerging risk, supporting timely rate decisions without forcing insurers to react to short-term noise.

    Additionally, teams can spot emerging risk shifts by tracking year-over-year changes in driving behavior. Those insights can then be built into actuarial narratives, giving pricing decisions and regulatory filings more current, data-backed support.
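To make the sparse-data point concrete, here is a minimal sketch of the classical actuarial approach to thin ZIP-level experience: credibility weighting, which blends a territory's own loss rate with a broader complement (such as a statewide rate), and which could let a mobility-derived index adjust that complement. The field values, the credibility constant `k`, and the `mobility_index` parameter are all illustrative assumptions, not Arity's actual methodology.

```python
def credibility_weight(exposure: float, k: float = 5000.0) -> float:
    """Buhlmann-style credibility: Z = n / (n + k).

    With little exposure, Z is near 0 and the estimate leans on the
    complement; with lots of exposure, Z approaches 1.
    """
    return exposure / (exposure + k)


def blended_zip_rate(zip_losses: float, zip_exposure: float,
                     state_rate: float, mobility_index: float = 1.0,
                     k: float = 5000.0) -> float:
    """Credibility-weighted loss rate for one ZIP code.

    mobility_index (hypothetical) scales the statewide complement up or
    down based on observed driving behavior in that ZIP, e.g. 1.1 for a
    ZIP showing 10% riskier braking/speeding/distraction signals.
    """
    complement = state_rate * mobility_index
    if zip_exposure <= 0:
        return complement
    z = credibility_weight(zip_exposure, k)
    return z * (zip_losses / zip_exposure) + (1 - z) * complement
```

Under these assumed numbers, a ZIP with 200 car-years of exposure gets a weight of about 0.04, so its own noisy experience barely moves the rate, while a ZIP with 50,000 car-years gets about 0.91. The mobility index is the hook that lets current driving behavior inform pricing even where claims are sparse.
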

The future of territorial pricing

Territorial ratemaking has always depended on the quality of the data behind it. But as variability across ZIP codes increases, carriers that rely solely on historical signals risk falling behind trends that competitors can already see.

The gap between auto insurers' geographic risk assessments and actual driver behavior will only widen unless pricing and actuarial teams adapt their approach.

Going forward, auto insurers that embrace mobility data to supplement traditional rating factors can strengthen their territorial models, make more confident pricing decisions, and better identify emerging pockets of risk before shifts appear in claims or loss ratios.


Henry Kowal


Henry Kowal is director, outbound product management, insurance solutions, at Arity, an Allstate subsidiary that tackles underwriting uncertainty with data, data, and more data about driving behavior gathered via telematics.

Regulators' Scary Demand on Insurance AI

Regulators aren't asking if your AI works—they're asking which named human was accountable when it didn't. If there isn't one, the person on the hook may be you.

Close-up of a man intensely focused, working indoors in an office environment.

Picture the call.

A state insurance commissioner's office. Your legal team. A customer's attorney. An AI-generated claim denial that affected someone's home, their health, and their livelihood. The question on the table is not whether your model was accurate. The question is who in your organization reviewed that specific decision, what they actually checked, and where the documentation is.

You look around the room.

The data science team points to the risk function. The risk function points at the business unit. The business unit points at the model. The model has no name. The model cannot be deposed. The model's directors and officers (D&O) liability policy does not exist.

Yours does.

The question moving through every insurance boardroom right now is not whether your AI works. It is whether you can prove a human being — a named, accountable, documentable human being — was genuinely in the loop when it didn't.

I have spent two decades working inside financial services organizations across North America, Asia Pacific, and EMEA — in insurance, banking, and enterprise technology. I have been in the rooms where this question lands. The silence it produces is not incompetence. It is the sound of an industry that built extraordinary AI capability and forgot to build the accountability architecture around it.

That silence is becoming expensive.

Your Accuracy Dashboard Is Not a Defense

Here is what your AI governance documentation almost certainly shows: model performance metrics. Accuracy rates. Loss ratios. Straight-through processing volumes. Fraud detection rates. These numbers are real, and the investment behind them is genuine.

Here is what your AI governance documentation almost certainly does not show: the name of the human who reviewed the decision that is now in dispute. What they were trained to look for. How long they spent on it. Whether they had the authority — and the actual expectation — to override the model's recommendation.

Those are two entirely different documents. Most insurers have the first. Almost none have the second.

Under the EU AI Act, OSFI B-15, and SR 11-7, the second document is what matters. Regulators are not asking whether your model performs well in aggregate. They are asking whether a specific decision — the one in front of them — had meaningful human oversight. Meaningful. Not ceremonial. Not a click-through.

Accuracy metrics tell you how often the AI is right. They tell you nothing about whether the human in the loop actually understood what they were approving.

Most insurers have the checkbox. Very few have a defensible record. That gap — between the checkbox and the defensible record — is where the liability lives.

What Happened in the Netherlands Will Happen Here

In 2020, the Dutch government's benefits AI flagged 26,000 families for suspected fraud. Most were innocent. The algorithm ran for years. The humans trusted it. No one built a mechanism for those humans to meaningfully question what the system was telling them.

By the time the full picture emerged, families had lost homes. Children had been taken into care. Careers had been destroyed. The prime minister resigned. The government fell.

Not because the AI was malicious, but because no one could name the human responsible for any specific decision. The accountability architecture was missing. And when it was missing at scale — across 26,000 families — there was no one to hold accountable except the institution itself.

That story is not a European warning. It is a preview.

The same structural failure exists in US healthcare AI, in automated claims systems, in credit decision making, and in hiring algorithms. The technology performs as designed. The human layer — the named, documented, trained, empowered human layer — is absent or ceremonial. When something goes wrong at scale, the institution absorbs the liability because no individual can be identified as responsible.

Unfair AI doesn't just break trust between a customer and a machine. It collapses trust across your entire organization — retroactively. And the collapse travels up the chain until it finds someone with a name.

That name will be on your org chart. It may be yours.

Run This Test Before You Read the Next Section

Pull three recent AI-denied claims from your system. Any three.

For each one, answer these questions: Who is the named human reviewer in the audit trail? What specific aspects of the AI recommendation did they evaluate? Is there documentation showing they genuinely interrogated the output — not just approved it?

If you can produce complete, defensible answers for all three in under 10 minutes, your AI governance is in reasonable shape.

If you cannot — if the trail goes cold at "the system flagged it" or "the team reviewed it" — you have just identified your exposure. That is not a criticism. It is a diagnostic. It is also, increasingly, what plaintiff attorneys run on insurers before they file. What D&O underwriters are beginning to check at renewal. What state insurance commissioners are starting to request in market conduct examinations.

The gap you just found is the gap this article is about.
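The three-claim pull above is, in effect, an audit-trail completeness check, and a first-pass version is easy to automate. Here is a minimal sketch under a hypothetical record shape; the field names (`reviewer_name`, `review_notes`, and so on) are illustrative, not any carrier's actual schema.

```python
REQUIRED_FIELDS = ("reviewer_name", "review_notes", "reviewed_at", "override_authority")

# Values that name a system or a group rather than an accountable individual.
NON_NAMES = {"the system", "claims team", "underwriting", "risk function"}


def audit_gaps(claim: dict) -> list[str]:
    """Return the audit-trail gaps for one AI-assisted decision record."""
    gaps = [f for f in REQUIRED_FIELDS if claim.get(f) in (None, "", [])]
    reviewer = str(claim.get("reviewer_name", "")).strip().lower()
    if reviewer and reviewer in NON_NAMES:
        gaps.append("reviewer_name: a team or system, not a named individual")
    return gaps


def run_diagnostic(claims: list[dict]) -> dict:
    """Apply the check to a sample of claims (e.g., three recent AI denials)."""
    return {c.get("claim_id", "?"): audit_gaps(c) for c in claims}
```

A clean result is an empty gap list for every sampled claim; anything else marks exactly the exposure described above: a trail that goes cold at "the system flagged it."
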

Three Ways to Close the Gap — Before Someone Closes It for You

Name the human — in the system, in the record, in the audit trail. Every high-stakes AI decision — claim denial, underwriting declination, fraud escalation, pricing exception — needs a named individual reviewer, not a team, not a role, not a function. A person. Because when the commissioner's office calls, they will ask for that person. If you cannot produce a name, you cannot produce a defense.

Build the authority to say no — and document when it is used. The difference between meaningful oversight and rubber-stamping is whether your reviewers have explicit authority to override the AI, training to know when they should, and time to exercise that judgment. If your straight-through processing rates are above 95%, ask yourself honestly: is that efficiency, or is it the absence of human judgment? Regulators are beginning to ask the same question.

Audit fairness separately from accuracy. Your model validation process measures performance. It does not measure whether the outcomes your AI produces are perceived as fair by the people affected. Consistency of treatment across demographics. Accessibility of recourse. Clarity of explanation. These are legitimacy measures and they require a different audit. The insurers who build this capability now will be positioned as leaders. The ones who wait will be building it during an investigation.

The Verdict Is Already Being Written

The insurance industry did not get here through negligence. It got here through speed. AI capability moved faster than governance frameworks. Deployment timelines outran accountability infrastructure. The checkbox appeared because it was faster than the defensible record. None of that was malicious.

But 2025 is not 2019. The EU AI Act is not a distant concern — it is setting the global documentation standard, and US regulators are actively incorporating its logic. D&O underwriters are beginning to ask about AI governance at renewal. State insurance commissioners are starting to include AI decision audit trails in market conduct examinations. Class action attorneys are looking for patterns in AI-driven denials.

The verdict on your AI governance is being written right now — by regulators, by courts, by customers who received a decision they couldn't understand or challenge. It is being written in the audit trails you do or do not have. In the names you can or cannot produce. In the documentation that proves a human being was genuinely, meaningfully in the loop.

The algorithm will not appear in that verdict. It cannot be deposed. It cannot be held accountable. It does not have a name.

You do.

"The algorithm decided" is not a name. It's a future deposition headline.


Rachel Hor


Rachel Hor is a doctoral candidate at Saint Mary's University, where her research focuses on how trust fractures when AI, human judgment, and institutional systems collide in insurance. 

She has nearly two decades of industry experience at IBM, Accenture, and Cognizant. 

Time for Some Pet Peeves

Weak writing undermines the insurance industry's messages. I have suggestions. 

Image
Green and Yellow Lit Up Squares

Given my education, experience and, I'll admit, personality, mistakes in writing jump up and bite me on the nose. Once, as I flipped through a book, I stopped because something felt vaguely wrong. I read the page I had just glanced at and found a typo about two-thirds of the way down.

Given how much copy I see every day, I see a lot of mistakes, and I think some patterns are worth pointing out. Today I'll focus on the repetition that creeps into our phrasing (no, you shouldn't say people "mutually agree"; by definition, any agreement has to be mutual) and undercuts the crisp confidence we want to project.

These aren't the kinds of mistakes that spellcheck or even Grammarly, in most cases, will flag for you, but they're like termites in a wooden structure. They weaken our writing, while insurance needs to be projecting competence and strength.

Let's have a look.

To me, phrases such as "mutually agree" are like a record with a scratch in it. The phrases quickly repeat themselves, and they hit me with the same sort of screech that a record player can. I realize my reaction is unusually harsh — an occupational hazard and perhaps a personality defect — but such phrases are still worth purging. When you say people mutually agreed to do something, you sound defensive — "Honest, when I say we agreed, I meant it. Really." In fact, in a lot of cases, "mutual agreement" is a euphemism. A coach "mutually agreed" with a team that it was time to part? Yeah, he was fired. Just say "agreed" and get on with it. Your readers will sense your confidence, even if they don't react as viscerally to language as I do. 

If you look a bit, I think you'll mutually agree that there are a lot of such screechy phrases. Here are just some that have crossed my desk since I started keeping a list a couple of weeks ago:

  • Two people share a common trait. If you share a trait with someone, you have that trait in common, by definition.
  • Some number of different people. Why different? You can't have more than one of the same person. But I see "different people," "different businesses," "different" this, "different" that.
  • Closely scrutinize. To scrutinize is to look closely at something. You can't look closely closely.
  • Major crisis, major catastrophe, major disaster. Can there be a crisis/catastrophe/disaster that isn't major?
  • Advance warning. Warning after the fact isn't actually warning.
  • Pre-planned. Planning after the fact isn't actually planning.
  • Proactive risk management. Reactive risk management isn't actually risk management, at least not for whatever loss you just suffered.
  • Someone successfully accomplished something. If you accomplished something, you succeeded. There are many variants of this issue. A New York Times column yesterday, for instance, redundantly said that something "successfully came to fruition" — a new one for me. "Successfully" gets sprinkled into articles and bios like fairy dust. Some aren't inherently repetitive. For instance, bios often say that someone "successfully launched" a product or business. It's certainly possible to launch a product or business that flops, but you wouldn't be telling us about a flop. "Success" is overrated. The word feels needy.
  • Speaking of being used like fairy dust, I'll re-up my disdain for new, which I've expressed in earlier rants on language. I appreciate the temptation. We're trying to stir up excitement and move the industry forward, but not everything is new, and things that aren't shouldn't be labeled as such. The most common (mis)usage I see is "created a new" something (as though you can create an old something). The phrases that most set my teeth on edge are "new record" (as though you could set an old record) and "new innovations" (the root of "innovation" is "-nov-," which means new). Talking about new innovations makes us sound like an old late-night commercial — This product "is new, new, all new. And wait... there's more!"
  • Proven track record. The whole point of a track record is that it's proven. It's written down. It's verifiable. You don't need to trust what the tout is telling you about a horse. You can see the track record for yourself.
  • Most-well-known. This isn't a redundancy, but it's bizarre, and I'm seeing it a lot, so I'm tossing it in here. The progression goes "good," "better," "best." It doesn't go "good," "better," "most well." So why would the progression about how famous something or someone is go "known," "better-known," "most-well-known"? It doesn't. Yes, "well-known" is a legitimate phrase, but "most well" isn't a thing, so "most-well-known" surely isn't. I think people chicken out because "best" seems like an endorsement. They don't want to use "best" in connection with, say, a notorious criminal, but the only superlative available to you is "best-known." "Most well" simply doesn't exist in the English language, not even if you're describing how done you want your steak to be.

You get the idea. You probably already practice the sort of self-editing I'm suggesting. You were probably harangued in elementary school to avoid the passive voice and may have been counseled to delete "very" every time you used it. I'm merely suggesting adding something to your to-don't list.

Your writing will come across as more confident if you eliminate the weak redundancies I've listed — and the million others you'll spot once you start looking.

Fixing these redundancy issues may feel like a small thing, and even a grump like me will acknowledge that the changes will fly under the radar for most people, but I'm reminded of a saying that was my mantra when I used to take long bicycle trips and was packing: "If you take care of the ounces, the pounds will take care of themselves." Customers are demanding that insurance become more understandable, even friendlier. No more of the "whereofs" and "wherefores" in arcane documents that only a lawyer could love. So I don't think it's possible to pay too much attention to the language we use. Every little thing we do becomes part of how customers perceive us.

You now have your advance warning. You can proceed with your proactive pre-planning.

Cheers,

Paul

P.S. Here are some of my favorite previous rants on language: "Can We Please Tone Down All the 'Inflection Point' Talk?"; "Let's Stop With the Gibberish"; "May I Rant for a Moment?"; and "Two Words We Must Stop Using."

 

Long-Term Impact of Today's Oil Crisis

Even once the war in Iran ends, vehicle demand will shift toward EVs, and auto insurance costs will rise sharply.


For some reason, most Americans seem to think that when the U.S.-Iran conflict comes to an end, oil prices and the broader economy will quickly bounce back to normal. Unfortunately, that is just not realistic, and the longer-term damage is already set in motion. Subject matter experts are predicting a 12- to 18-month correction period once the situation stabilizes. The backup of oil tankers in the Strait of Hormuz will take at least a year to clear.

A year‑long oil crisis would hit both automobile sales and auto insurance in ways that go far beyond just higher gas prices. The short version: vehicle demand would likely shift sharply toward fuel‑efficient and electric models, overall sales could soften, and auto insurance costs would almost certainly rise due to inflation, repair costs, and economic stress. Below is a structured breakdown grounded in recent reporting and economic analysis.

Impact on Automobile Sales

Demand will shift toward fuel‑efficient and electric vehicles. When fuel becomes expensive for a long period, consumers rethink what they drive. Economic theory treats vehicles and gasoline as complementary goods, meaning high fuel prices suppress demand for gas‑heavy vehicles. Buyers tend to move away from trucks and large SUVs and toward smaller, more efficient cars or EVs.

Overall auto sales could decline. A prolonged oil crisis raises household expenses across the board. With budgets squeezed, many consumers delay big purchases like cars. This effect is amplified if the crisis also disrupts supply chains or raises production costs—both of which are likely when oil prices stay high for months.

Higher vehicle prices due to supply chain strain. Geopolitical disruptions tied to oil crises often spill into shipping and parts availability. Recent reporting shows that conflicts affecting oil supply also cause shipping delays, higher transport costs, and production cuts by major automakers. Toyota, for example, has already reduced output in response to Middle East instability. Fewer cars produced means higher prices for both new and used vehicles, further dampening sales.

Impact on Auto Insurance

Rising premiums driven by inflation and repair costs. Auto insurers are already facing a "severity crisis": repair costs have surged due to inflation, supply chain issues, and the increasing complexity of modern vehicles. A prolonged oil crisis would worsen these pressures by raising transportation and parts costs. Insurers have been "racing to take rate," and pessimistic outlooks suggest continued premium increases.

Higher replacement costs due to vehicle shortages. If automakers produce fewer vehicles because of high energy costs or supply disruptions, replacement vehicles become more expensive. Insurers must pay more for totaled cars, which pushes premiums higher. This dynamic has already been observed during labor strikes and supply chain disruptions.

Changes in customer retention because of increased financial stress. When households face sustained high fuel costs, they may struggle to keep up with insurance payments. Analysts warn that squeezed budgets can lead to policy lapses, reduced coverage levels, or shopping for cheaper (and sometimes inadequate) policies.

More accidents in stressed industries. In sectors tied to oil and gas, worker shortages and fatigue have historically increased accident rates, which in turn raise liability claims and insurance costs. While this is industry‑specific, it contributes to overall market pressure.

The Big Picture

If the oil crisis lasts a year or more, the most likely outcome is:

  • Automobile sales soften overall, with a strong shift toward efficient and electric models.
  • Large SUVs and trucks lose market share, unless essential for work.
  • Vehicle prices rise due to supply chain strain and higher transport costs.
  • Auto insurance premiums continue climbing, driven by inflation, repair costs, and higher replacement values.
  • Consumers face financial strain, leading to more lapses, reduced coverage, and slower sales cycles.

Reality bites, but understanding these outcomes and challenges will enable all participants to plan and adjust accordingly.


Stephen Applebaum


Stephen Applebaum, managing partner, Insurance Solutions Group, is a subject matter expert and thought leader providing consulting, advisory, research and strategic M&A services to participants across the entire North American property/casualty insurance ecosystem.


Alan Demers


Alan Demers is founder of InsurTech Consulting, with 30 years of P&C insurance claims experience, providing consultative services focused on innovating claims.