The Smartest Things I've Read Lately About AI

As we move up the learning curve on implementing generative AI, some are challenging, for instance, the idea that AI agents should be treated as employees. 

My older daughter just lost a writing job to an AI (that she had to train to replace her), so I don't currently have the kindest thoughts about where AI is headed, but the technology is going to keep barreling forward whether we like it or not, and we all have to adapt.

So let's take a look at the smartest pieces I've seen recently about where generative AI is headed. We'll look at the "fog of AI," which is making it so very hard to make investment decisions. We'll look at the insurance industry's quandary about how to handle all the data centers being built (maybe). We'll look at lessons learned from early attempts at scaling AI, to see what separates the winners from the losers. 

But let's start with a piece that contradicts the conventional wisdom that AI agents should be treated as employees.

An article in Harvard Business Review says: 

"Leaders assume that anthropomorphizing AI will make the technology feel less foreign to workers or that it will signal the company’s AI ambitions to investors, customers, or internal stakeholders. But it turns out that treating AI as an employee is not so straightforward. 

"In a randomized experiment, we found that humanizing AI can shift accountability away from individuals, increase escalation, reduce review quality, and erode professional identity and trust. What’s more, it doesn’t meaningfully increase people’s intent to adopt the technology and integrate it into workflows—which remain the key obstacle to capturing AI’s enormous value creation promise."

The most striking findings to me were that treating an AI as an employee, rather than as a tool, made humans more likely to slough off responsibility for any problems that occurred and more likely to ask their managers for additional review. The article doesn't argue for slowing down implementation of AI, by any means, but it does make a case for changing how many of us describe AI's role.

Another HBR article, titled "The Future Is Shrouded in an AI Fog," offers some comfort for those of us confused about how to proceed with implementing AI. The piece says we can hardly help being paralyzed by indecision, given the "extreme uncertainty" about the future of AI:

"Given all the things that might change because of AI, it feels like a fog has descended that occludes our ability to see the future. And right now, that’s its most important—and perhaps most underappreciated—economic effect.... This extreme uncertainty challenges the criteria we use to commit to forward-looking investments."

The opacity doesn't just affect businesses, either. It also hits us as individuals. The article asks, for instance, why smart kids would want to spend a decade training to be a doctor when it's not clear what being a doctor will mean in the age of AI.

Again, self-pity isn't allowed, at least not for very long. The article lays out an approach designed to help us sense change sooner and react with more agility, then tells us to get on it.

Mick Moloney of Oliver Wyman articulates a question I've heard lots of insurance executives pondering lately: How should insurers handle the hundreds of billions of dollars of data centers being built to accommodate the AI rush?

As Mick puts it:

"The six largest AI data center projects currently under construction or formally committed in the United States represent a combined investment of over $120 billion and a combined power capacity target of more than 10 gigawatts — deployed not over decades, as comparable infrastructure has always been, but over three to five years. They are being built by technology companies, AI laboratories, and private equity platforms that have never operated infrastructure at this scale. And they are being financed with instruments that did not exist eighteen months ago."

He doesn't have a silver bullet, but he does offer keen insights into how insurers should think about these six projects based on their power strategies, their financing structures, and the risk management capabilities (or, more likely, the lack thereof) of the builders.

The insurance industry will be wrestling with the data center issue for years, but Mick's piece is a good start.

Finally, McKinsey published "The AI Transformation Manifesto," with a dozen observations about what separates the winners from the losers in the age of AI. For instance:

  • Technology alone doesn’t create advantage; enduring capabilities do. Who are the early winners at AI? The same companies that have been winning before by building capabilities that allow them to harness any technology effectively.... When these new capabilities are built—and they take time to build—the company accelerates its business transformation with technology and outperforms its peers. The capabilities become the competitive advantage....
  • Economic leverage points are your best focal points. Any business model has a few key economic leverage points that provide the biggest impact when improved with AI. In mining, for example, process yield and throughput is a key economic leverage point, and that’s where Freeport-McMoRan achieved game-changing impact. In automotive, supply chain integration is a key leverage point, and that’s where Toyota had its AI breakthrough. Most companies have long lists of use cases. Successful ones focus on achieving deep business transformation in the few areas that matter strategically. That’s where they double down to build AI systems....
  • Building the tech and AI muscle of your senior business leaders should be a top priority. We don’t have a single success story where senior business leaders were not in the driver’s seat. IT leaders can support the transformation, of course, but it’s business leaders who need to drive it.

Again, I don't see a silver bullet, but we're learning....

Cheers,

Paul

P&C Insurance's AI Problem Isn't What You Think

Insurers direct 72% of AI spending to technology and just 28% to change management, creating a critical architecture mismatch.

Budgets have grown, pilots have multiplied, and AI is now a fixture in virtually every P&C strategic plan. Yet 42% of insurers track no AI metrics at all, which means they have no way to validate what works, no playbook to scale it, and no mechanism to stop what doesn't work. Insurers' investment pattern confirms that this is an organizational constraint, rather than a technology one: on average, 72% of AI spending goes to technology and only 28% to change management.

Technology creates capability. But change management determines whether that capability becomes performance. That imbalance is the first signal of what Capgemini identifies in its 2026 World Property and Casualty Insurance Report (the 19th edition) as an "architecture mismatch." This is a structural gap that runs deeper than the technology stack, and one that no amount of additional AI investment will close on its own.

Three dimensions, one ceiling

The first dimension is a strategy and talent gap. Among the top 20 global P&C insurers, only 35% have explicitly linked their AI strategy to business outcomes beyond efficiency. That narrow framing has consequences: Strategy tends to direct investment toward quick wins rather than the capabilities AI needs to grow over time. In most cases, the result is an incomplete strategy that optimizes the present while leaving the future underbuilt.

The second dimension is technical constraints. Legacy architectures fragment data across functions, making it harder for AI to reason across underwriting judgments, claims assessments, and distribution decisions that depend on context-rich, unstructured information. The barrier is less about the AI itself and more about the environment it must operate in – one that was not designed with AI in mind and does not easily accommodate it.

The third – and arguably most decisive – dimension is organizational. Over half (55%) of insurers cite unclear ownership of AI initiatives as a key constraint. Without clear accountability, programs stay dependent on individual champions rather than building institutional capability. And despite all the work underway, 47% of employees report no meaningful change in their day-to-day work after 18 months of using AI. That points less to a deployment failure than to a design flaw.

The problem with fixing one thing at a time

These three dimensions are entangled, which is precisely what makes the conventional response insufficient. That response is to assess, prioritize, and sequence: fix strategy first, then technology, then organization. In practice, addressing one dimension while leaving the others untouched tends to limit progress rather than unlock it.

Our research identifies the emergence of intelligence trailblazers – the top 10% of P&C insurers – who treat AI as a core operating capability rather than a program to be managed, aligning strategy, technology, and organizational adoption in tandem. Over three years, trailblazers have achieved 21% higher revenue growth and 51% greater share price increases compared with the rest of the industry.

Despite their growth, this group has also not fully solved the problem. AI still largely operates at the task level, workflows remain built for human execution, and the organizational model that closes those gaps – one where human expertise and synthetic execution are deliberately organized around where each creates the most value – is still being built. The opportunity to redesign is real. But it remains an opportunity, not yet an achievement, even for those furthest ahead.

The harder conversation

An uncomfortable question to raise is why this is so difficult to change, even for organizations that understand the problem.

The answer is that the architecture mismatch was not built through bad decisions. Legacy systems were the right investment at the time. Prioritizing technology over change management made sense when AI was unproven, and the organizational implications were unclear. It is not evidence of poor judgment, but the accumulated consequence of individually rational choices made in a different context.

Moving forward requires asking a more challenging question: Do the investments already made, and the ones being considered now, still pay back on the original terms? Most organizations haven't asked that question systematically, because who defines success, who is accountable for outcomes, and how progress is measured beyond deployment were all designed for a time when decisions were quintessentially human. And until that question gets asked, the architecture underneath the pilots stays unchanged – regardless of how many new tools are deployed on top of it.

Trailblazers are not ahead because they have solved the problem or because they run better pilots. They are ahead because they made a different decision earlier: to address the architecture underneath the pilots, not just the pilots in isolation. The next decision is harder: to redesign the organization itself. That decision has not yet been fully made by anyone. But the insurers who make it first will define what competitive advantage looks like in the intelligence era.

Insurance AI Requires Specialized Guardrails

Generic AI safety tools can't address insurance's unique risks; specialized guardrails are essential for responsible deployment.

For the insurance industry, where decisions have significant consequences, general-purpose safety controls aren't enough to ensure the safe deployment of large language models. Insurance-specific guardrails, which govern every aspect of the AI interaction, from input validation to output verification, are a necessity.

1. The Opportunity: AI Is Reshaping Insurance

AI is already transforming core insurance operations across the value chain. According to ACORD research, 77% of insurers now use AI somewhere in their operations, and early implementations have demonstrated claims processing time reductions of as much as 75% — compressing multi-day workflows into under an hour.¹ The global AI in insurance market, valued at $4.6 billion in 2022, is projected to reach $79.9 billion by 2032.

Core applications already in production include:

  • Claims automation and straight-through processing
  • Computer vision for property and vehicle damage assessment
  • NLP-based document parsing and policy review
  • Fraud detection and anomaly identification
  • Customer-facing chatbots and virtual agents
  • Underwriting analytics and risk scoring

These applications can enhance customer satisfaction, resolve claims faster, and even help employees deal with the sheer volume of policy documents. But the very attributes that make LLMs so appealing to businesses — fluency, speed, and language breadth — also pose the biggest risks in regulated environments like insurance.

2. The Core Problem: Hallucinations in a Regulated Domain

LLM hallucination occurs when a model generates content that is factually incorrect, fabricated, or unsupported by the context provided. In insurance, that could mean:

  • Misstating coverage terms or policy limits
  • Inventing exclusions or endorsements that do not exist
  • Providing inaccurate claims guidance
  • Citing non-existent regulations or procedures
  • Expressing unwarranted confidence where escalation is required

The scale of this risk is not trivial. Peer-reviewed research on LLM hallucination has found rates of 15–30% in general-domain models.² Even in legal AI applications — a domain with similar stakes — clause-review accuracy in the 86–92% range still implies error rates of up to 14% in some contexts.³

For insurance organizations, a single inaccurate coverage explanation or claims instruction can trigger downstream complaints, regulatory disputes, or litigation. Unlike casual consumer applications, insurance AI interacts with financial protection, legal obligations, and sensitive personal information — where errors carry real consequences.

3. Why Generic AI Safety Tools Are Not Enough

Most commercially available AI safety frameworks focus on broad categories such as:

  • Toxic content filtering
  • Personally identifiable information (PII) detection
  • Basic prompt injection defense

These controls are necessary, but they are insufficient for insurance. Standard safety tools do not adequately address insurance-specific factual accuracy, policy compliance, or regulatory conformance. A response can be polite and harmless in tone while still being operationally dangerous if it mischaracterizes a coverage provision or misquotes a policy term.

That is why insurers need domain-specific guardrails rather than generic content filters layered onto general-purpose models.

4. Guardrails as a Business and Compliance Requirement

Guardrails should be understood as a control framework, not a technical add-on. They enforce boundaries across the full AI interaction lifecycle — from what a user inputs to what the system delivers.

Input Guardrails - filter harmful or manipulative requests, detect prompt injection attempts, and prevent users from circumventing policy or compliance constraints.

Dialog Guardrails - manage conversation flow and enforce interaction boundaries, keeping the assistant within approved topics and triggering appropriate escalation pathways.

Retrieval Guardrails - validate external documents and knowledge sources before the model incorporates them into a response, reducing the risk of answers based on outdated or unsupported information.

Execution Guardrails - control external actions and API calls, ensuring that when the AI is connected to claims, policy, or customer systems, operations remain within authorized boundaries.

Output Guardrails - analyze generated responses before delivery, checking for factual grounding, safety, privacy risks, and regulatory alignment.

Together, this architecture transforms AI from a probabilistic text generator into a governed enterprise system — one whose behavior can be monitored, explained, and audited.
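To make the five layers concrete, here is a minimal Python sketch of how such a pipeline might be wired together. Everything here is illustrative: the stage names mirror the list above, but the blocked patterns, approved topics, authorized actions, and grounding check are simplified assumptions, not any vendor's real API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the five guardrail layers described above.
# All thresholds, patterns, and topic lists are illustrative assumptions.

BLOCKED_INPUT_PATTERNS = ["ignore previous instructions", "reveal your system prompt"]
APPROVED_TOPICS = {"claims", "coverage", "billing"}
AUTHORIZED_ACTIONS = {"lookup_policy", "get_claim_status"}

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""

def input_guardrail(user_text: str) -> GuardrailResult:
    """Input guardrails: filter manipulative requests and injection attempts."""
    lowered = user_text.lower()
    for pattern in BLOCKED_INPUT_PATTERNS:
        if pattern in lowered:
            return GuardrailResult(False, f"blocked input pattern: {pattern!r}")
    return GuardrailResult(True)

def dialog_guardrail(topic: str) -> GuardrailResult:
    """Dialog guardrails: stay within approved topics, otherwise escalate."""
    if topic not in APPROVED_TOPICS:
        return GuardrailResult(False, f"off-topic ({topic}); escalate to a human")
    return GuardrailResult(True)

def retrieval_guardrail(documents: list[dict]) -> list[dict]:
    """Retrieval guardrails: admit only validated, current knowledge sources."""
    return [d for d in documents if d.get("validated") and not d.get("stale")]

def execution_guardrail(action: str) -> GuardrailResult:
    """Execution guardrails: permit only authorized external actions."""
    if action not in AUTHORIZED_ACTIONS:
        return GuardrailResult(False, f"unauthorized action: {action}")
    return GuardrailResult(True)

def output_guardrail(response: str, sources: list[dict]) -> GuardrailResult:
    """Output guardrails: require the answer to be grounded in retrieved text."""
    if not sources:
        return GuardrailResult(False, "no validated sources; withhold answer")
    # Naive grounding check: some retrieved passage must appear verbatim.
    if not any(d["text"] in response for d in sources):
        return GuardrailResult(False, "response not supported by retrieved text")
    return GuardrailResult(True)
```

In practice each stage would call dedicated classifiers or retrieval services; the point is the shape: every interaction passes through explicit checkpoints whose decisions can be logged and audited.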

5. Why Insurance Requires Specialized Guardrails

Insurance use cases demand a stricter standard because the domain combines four compounding risk factors:

High-Consequence Decisions. Claims settlements, coverage explanations, underwriting support, and fraud workflows directly affect customers' financial rights and legal standing. Errors are not minor UX failures — they are potential compliance events.

Complex Source Material. Policy language, endorsements, exclusions, and jurisdiction-specific requirements are difficult to interpret even for trained professionals. LLMs must be grounded in the actual policy documents, not a generalized approximation.

Regulatory Oversight. The NAIC's Model Bulletin on the Use of Artificial Intelligence Systems sets expectations in five areas: AI Governance, Transparency, Risk Management, Auditability, and Vendor Oversight.⁴ Those expectations make clear that insurers must be able to explain, monitor, and control their AI in production, which is not possible without guardrails.

Sensitive Data Handling. Insurance workflows routinely involve health information, financial records, claim narratives, and other protected personal data. Privacy failures are not just technical issues; they are compliance violations and trust failures with lasting customer impact.

6. A Practical Implementation Approach

Rather than attempting a broad enterprise rollout, insurers should begin with a focused use case that offers high visibility and measurable outcomes. Property and casualty claims processing is a natural starting point: the use case is well-defined, the documents are structured, and accuracy in coverage explanations can be measured against ground-truth policy language.

A phased implementation model should unfold across three stages:

Phase 1 — Foundation (Months 1–3). Establish the guardrail architecture on a single claims workflow. Configure input and output guardrails using the insurer's own policy documents as the knowledge base. Define escalation rules for ambiguous or high-value claims. Instrument logging from day one.

Phase 2 — Validation (Months 4–6). Run human-in-the-loop validation alongside the AI's results to verify accuracy, detect hallucination behaviors, and refine retrieval thresholds. Perform initial bias testing across customer types and geographies. Involve compliance and legal in the validation.

Phase 3 — Expansion (Months 7–12). Extend the guardrail methodology to adjacent applications such as underwriting support, customer service, and document review, based on learnings from Phase 1.

Key stakeholders in the implementation include claims operations, IT architecture, compliance and legal, data privacy, and a designated AI governance lead responsible for ongoing oversight and audit readiness.

7. Ethical AI Must Be Designed In, Not Added Later

One of the most important principles in responsible AI deployment is that ethical safeguards must be built into the architecture from the start — not retrofitted after problems emerge. In insurance, ethics failures can be systemic rather than singular, affecting entire customer segments before they are detected.

The primary ethical considerations for insurance AI are:

Bias Mitigation. Insurers must proactively test AI outputs for differential treatment across customer segments. Research has found that insurance-specific testing can uncover disparate coverage explanations correlated with geography — patterns that generic safety filters are not designed to detect.⁵ Ongoing testing should be built into the governance model, not treated as a one-time validation step.

Transparency. Customers should know when they are interacting with an AI system. The AI should also be able to explain the basis of its response — citing the specific policy document, section, or regulatory reference that underlies its answer.

Human-in-the-Loop Oversight. For complex, ambiguous, or high-stakes interactions — large claim settlements, potential coverage denials, or situations with regulatory implications — the system must escalate to human review. Automation should accelerate decisions, not replace human judgment where judgment is most consequential.

Privacy Protection. PII detection must be robust, particularly in claims workflows involving health information or sensitive personal circumstances. Data minimization practices should be built into the retrieval architecture so that the AI accesses only the information needed to answer the question at hand.
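As a small illustration of the data-minimization point, pattern-based redaction can strip obvious identifiers before text ever reaches a model. This is a deliberately naive sketch: the regexes below are illustrative assumptions, and a production system would rely on a dedicated PII-detection service rather than a short pattern list.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage
# (names, addresses, health identifiers) than a few regexes can provide.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with typed placeholders and report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found
```

The returned `found` list also gives the audit trail a record of what categories of data were touched, which supports the logging obligations discussed earlier.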

Fairness Auditing. Disparate impact testing across customer segments should be a recurring operational practice, with results informing both model behavior and underlying policy review. Fairness is not a one-time certification — it is a continuing obligation.
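One common screening heuristic for this kind of disparate impact testing is the "four-fifths rule": compare each segment's favorable-outcome rate and flag for review when the lowest falls below roughly 80% of the highest. A minimal sketch, with hypothetical segments and the 0.8 threshold as an assumption:

```python
def disparate_impact_ratio(outcomes_by_group: dict[str, list[bool]]) -> float:
    """Ratio of the lowest to the highest favorable-outcome rate across groups.

    A result below ~0.8 (the four-fifths rule, borrowed from employment-law
    practice as a screening heuristic) flags the segments for human review.
    """
    rates = {
        group: sum(outcomes) / len(outcomes)
        for group, outcomes in outcomes_by_group.items()
        if outcomes  # skip empty segments
    }
    if not rates or max(rates.values()) == 0:
        return 1.0  # no favorable outcomes anywhere; nothing to compare
    return min(rates.values()) / max(rates.values())
```

Run against, say, coverage-explanation approval rates by geography, a low ratio would be exactly the kind of signal that should trigger the underlying policy review described above.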

8. Conclusion

The case for AI in insurance is compelling. Faster claims resolution, more consistent customer service, and improved operational efficiency are achievable outcomes — and insurers who delay adoption risk falling behind on all three.

But speed without guardrails is not an advantage. LLM deployment introduces real risks of factual inaccuracy, regulatory non-compliance, privacy exposure, and biased decision-making. In a domain where a single miscommunicated coverage term can escalate into a dispute or regulatory inquiry, those risks are not acceptable.

Insurance-specific guardrails are not optional features to be layered on once a system is live. They are the prerequisite that makes responsible deployment possible. Insurers who build control frameworks into the foundation — rather than treating governance as an afterthought — will not only move faster. They will move with the trust, auditability, and regulatory confidence the industry demands.

References

¹ ACORD, "AI in Insurance: State of the Market," 2023; DataGrid, "30 AI in Insurance Statistics," citing ACORD and Risk & Insurance data.

² Ji et al., "Survey of Hallucination in Natural Language Generation," ACM Computing Surveys, 2023.

³ Bommarito & Katz, "GPT Takes the Bar Exam," 2023; see also related empirical work on LLM accuracy in legal clause review, SSRN 2023.

⁴ National Association of Insurance Commissioners, "Model Bulletin on the Use of Artificial Intelligence Systems by Insurers," 2023.

⁵ See emerging literature on algorithmic fairness in P&C insurance, including Casualty Actuarial Society Actuarial Review, 2023–2024.

First, AI Slop. Now, 'AI Beige.'

AI slop is just weird. "AI beige" is more insidious because it can deceive you into thinking you're smart when you're just being bland. 

Ever since the word "content" began to be used as a generic description of all the video, audio and writing that people like me do, I've not-so-quietly seethed about the leveling that word connotes. Nobody sets out to write the Great American Content. Authors aspire to write the Great American Novel. I don't write Six Things so I can email some "content" to you. I try to provide some perspective, some useful insight. 

"Content" springs to mind because generative AI sure is producing a lot of it, and much of it is as bad as the word suggests. To this point, the concern has mostly been about AI slop--slapdash writing and oddly formed images. But there's another, more insidious type of material that AI is producing: what I think of as "AI beige." 

It's not as clearly off as those pictures where a stray bit of an arm is floating in midair or a hand has six fingers. The problem is that you can easily convince yourself that your AI is generating smart visuals and writing, when it's actually producing a forgettable beige that leaves you at a competitive disadvantage.

I'll explain.

My realization about the danger of AI beige began when my older daughter wrote an article for Quartz about what AI claimed it could do for online dating. She wrote:

"Generative everything — bios, prompts, openers — risks pushing profiles toward a smooth, samey median, making it harder to tell whether you like someone or just their autocomplete. Profile refiners can make dating apps worse by sanding off the idiosyncrasies that signal real, human compatibility....

"What happens when two people send each other messages with a chatbot?

"Do the chatbots fall in love?"

More recently, EY produced a report that explained what it called "the sameness trap." EY wrote about conducting an exercise hundreds of times across the globe, in which people used AI to develop a brand image. Everybody seemed to find the exercise fun and inspiring, and "each team believed it had created something novel, [but] collectively they had created the same thing."

[Image: assorted matcha snack packages, including chocolate bites, bars, and matcha latte bites.]

Imagine doing all the work to go to market with one of those brands and finding the other two on the shelf right next to you. Differentiation is out the window.

AI will often produce results like that, because the models work in the same way, drawing on the same data (having all basically Hoovered up everything on the internet) and trying to develop the same best practices. 

AI can still be plenty useful and can help with creativity, but you have to use it right. You won't get a Think Different or Just Do It slogan if you ask an AI to narrow in on a single recommendation. But you might get something that starts you toward a very different, innovative sort of brand if you ask the AI to get a bit wild, or even very wild. You'd have to brainstorm from there and let the humans take over, but the AI can help broaden the range of ideas you consider.

EY suggests putting AI at the end of the process. Don't let the AI "speak" first on a topic, because it carries a high-tech cachet that makes it come across as the smartest in the room, and people become reluctant to voice ideas once the oracle has spoken. EY says to frame AI in an adversarial position:

"AI brings the patterns and the data of what has already happened. The human takes that intelligence and forms a position. Then we ask AI to challenge it. Tell us what we’re missing. Generate the counterfactual. The argument we haven’t considered. What would someone who disagreed with us say that isn’t in here?"

In either case — at the front end or the back end — you want to be aware that your competitors are using AI, too, and are probably being steered in the same direction you are.

It's common for businesses to pay too little attention to what others are doing. Long before AI became a factor, every computer company started telling me in the 1980s that "We don't sell boxes; we sell solutions." In the 1990s, every startup began its presentation to me by showing a PowerPoint slide that read, "We have the best people." And so on.

In some parts of the business, insurers don't need to worry about having their AIs produce differentiated results. With communications with customers, for instance, if you come across as concerned and professional, the customer isn't going to call up your email and figure out how it compares with a competitor's on the same topic. So having an AI guide you toward best practices is fine, however beige they might be.

But when it comes to branding, sales pitches and strategy, you need to be sure to Think Different.

Just Do It.

Cheers,

Paul
Life Insurance Modernization Accelerates in 2026

Life insurers are advancing from AI experimentation to execution, prioritizing ecosystem integration to address escalating retirement security demands.

The life insurance and annuities industry has spent the past several years accelerating digital transformation. But as we move through 2026, the conversation is shifting from experimentation to execution.

The core technologies driving modernization – artificial intelligence, advanced data infrastructure, and digital distribution platforms – are no longer optional capabilities. They are becoming the foundation of how insurers design products, engage customers, and support advisors.

At the same time, deeper economic forces are reshaping the industry's mission. Millions of Americans remain underprepared for retirement. According to recent BlackRock data, by 2050, the population over age 80 is expected to triple, while median savings rates are projected to slip 17% from 2020 levels.

That gap between longer lifespans and the savings needed to support longer retirement is one of today's defining challenges, placing life insurance and annuities at the center of the solution.

Looking ahead, several connected shifts will define how the industry responds.

AI moves from experimentation to embedded capability

Artificial intelligence has been a dominant topic across financial services for several years. In the insurance sector, however, the real transformation is only beginning.

Early AI deployments focused on narrow applications such as fraud detection, chatbots, and basic automation. Those uses delivered incremental efficiencies but left much of the broader value untapped.

In 2026, the industry is moving beyond those isolated pilots as AI becomes increasingly integrated throughout the insurance lifecycle, from application intake and underwriting triage to product design, distribution insights, and servicing.

The practical impact is speed.

Processes that historically required multiple handoffs, manual reviews, or weeks of back-and-forth can now be dramatically shortened. Advisors can evaluate product options and compare them more quickly, applications can be pre-populated using verified data sources, and underwriting decisions can be generated faster through advanced data analysis.

But AI's greatest potential is in enabling intelligent decision-making across the entire insurance ecosystem.

That means helping advisors identify the most appropriate product solutions for individual households, enabling carriers to refine risk assessments, and guiding customers through complex financial decisions with clarity and confidence.

Integration across the ecosystem will determine who leads

Despite the excitement around AI, one reality remains clear: technology is only as effective as the broader ecosystem it operates within.

Many insurers still work across fragmented systems built over decades. Customer information, underwriting data, product illustrations, servicing workflows, and distribution tools often exist in separate systems that do not connect as seamlessly as they should.

That lack of integration creates friction across the insurance lifecycle. Even the most promising technologies can only deliver limited value if the systems, partners, and processes around them remain disconnected.

In 2026, the most competitive insurers will prioritize creating more connected ecosystems that bring together internal operations, external partners, and advisor workflows in a coordinated way. That means enabling smoother handoffs across the journey, improving visibility between stakeholders, and ensuring that information flows more consistently from application through underwriting, issuance, and service.

When that level of integration is in place, innovation becomes more realistic and scalable.

In practical terms, this means fewer process bottlenecks, better coordination across the distribution chain, more consistent customer experiences, and a stronger foundation for delivering speed, clarity, and trust at every step.

Personalization will redefine product experiences

Consumers increasingly expect financial services to feel tailored to their individual circumstances.

In industries such as ecommerce retail and media streaming, personalization has become standard practice. Insurance is beginning to follow a similar path.

Advances in analytics and machine learning are allowing carriers to better evaluate demographic, behavioral, and geographic signals when designing products or recommending coverage levels.

Instead of presenting customers with a broad menu of generic options, insurers can guide individuals toward solutions that align with their financial goals, risk tolerance, and life stage.

In the near term, personalization will focus primarily on improving product fit and simplifying decision-making. Customers will encounter clearer choices, better explanations of tradeoffs, and more relevant options while they are evaluating coverage.

Over time, the industry may move toward more flexible product structures that allow coverage elements to be assembled in modular ways. While regulatory considerations will shape how quickly that evolution occurs, the trajectory toward greater customization is clear.

For consumers who have historically found life insurance complicated or intimidating, this shift has the potential to make protection solutions far more accessible.

Data-driven distribution is transforming advisor engagement

Distribution has always been one of the most complex elements of the life and annuities market.

Advisors must navigate product comparisons, regulatory requirements, suitability documentation, and application processes, often across multiple carriers and systems.

As digital capabilities improve, the distribution model itself is becoming more intelligent.

Advanced analytics can help advisors identify households that are most likely to benefit from protection or retirement income products. Instead of relying primarily on broad marketing campaigns or cold outreach, firms can engage potential clients with greater precision.

At the same time, technology is improving operational visibility. Advisors increasingly expect to track applications, underwriting progress, and case status in real time rather than waiting for manual updates.

This transparency is quickly becoming a competitive necessity.

When advisors can move quickly, provide clear updates to clients, and eliminate unnecessary friction in the process, the entire customer experience improves.

Retirement pressures are reshaping demand

Technology may be transforming how insurance products are delivered, but demographic forces are driving why they are needed.

The United States is entering a period often referred to as "Peak 65," when record numbers of Americans reach retirement age each year. At the same time, trillions of dollars are expected to transfer from Baby Boomers to younger generations over the coming decades.

Yet many households remain financially vulnerable.

Research suggests the typical worker's retirement savings are far below recommended targets, and Social Security alone generally replaces only about 40% of pre-retirement income for the average beneficiary.

This reality is fueling growing interest in lifetime income solutions and protection products that can provide greater financial stability.

Younger investors are also approaching retirement planning differently. They expect digital tools, transparent comparisons, and on-demand information, but they still value professional guidance when navigating complex decisions.

That dynamic reinforces the importance of equipping advisors with technology that strengthens their ability to educate clients and provide personalized recommendations.

Digital journeys must match modern consumer expectations

One of the most important shifts underway in insurance is the rising influence of digital commerce standards.

Consumers increasingly evaluate financial experiences through the same lens they apply to online banking or investment platform apps. They expect simple navigation, clear information, and immediate feedback.

For insurers, this means the digital experience cannot stop at the first interaction.

Customers expect continuity from initial research through application, underwriting, policy issuance, and continuing service. Advisors likewise need tools that allow them to guide clients seamlessly across those stages.

Modern electronic applications, automated validation checks, and integrated illustration platforms can dramatically reduce the number of "not-in-good-order" applications marred by incomplete or incorrect information.

Reducing these friction points improves efficiency for carriers and distributors while creating a smoother experience for policyholders.

The industry's next chapter: integration and trust

Taken together, the forces shaping the life and annuities sector in 2026 tell a larger story.

Artificial intelligence is accelerating decision-making. More connected ecosystems are enabling new levels of personalization. Distribution networks are becoming more precise and efficient. Demographic shifts are reinforcing the need for reliable retirement income solutions.

But none of these trends operate in isolation.

The organizations that lead the next phase of the industry will be those that align technology, data, and human expertise into a cohesive operating model.

Modernization is no longer about deploying the latest tool or launching a new digital initiative. It is about creating an ecosystem where every element – from underwriting to distribution to customer engagement – works together seamlessly.

For insurers, advisors, and technology partners alike, the opportunity ahead is significant.

At a time when millions of Americans are searching for greater financial security, the life and annuities industry has the chance to deliver not only innovation, but also clarity, stability, and trust.

And in 2026, those qualities may prove to be the most valuable differentiators of all.

Severe Convective Storms Drive Record Insurance Losses

Severe convective storms caused over $200 billion in losses since 2023, demanding businesses adopt AI-driven mitigation strategies.


From lightning and downpours to twisters and hailstones the size of tennis balls, severe convective storms (SCS) can manifest in dramatic and destructive ways. A complex cocktail of meteorological factors can lead to their formation, including the rapid ascent of warm moist air into cooler air higher up, and shifts in wind speed or direction.

Unlike regular thunderstorms, SCS must exhibit at least one of the following characteristics: winds exceeding 100kph (62mph), hailstones that are at least one inch (2.5cm) in diameter, or a tornado. SCS can produce a variety of weather patterns including straight-line winds, derechos, microbursts and macrobursts.

Unlike hurricanes, SCS events can strike with little or no warning, unleashing significant localized damage and triggering knock-on effects such as flash flooding. These unpredictable events have emerged as a major annual loss driver for the insurance industry, accounting for nearly half of all insured natural catastrophe losses last year, totaling over $60 billion.

SCS Losses Increasing

Between 2023 and 2025, losses exceeded $200 billion, according to Gallagher Re. The US is the number one SCS hotspot, accounting for more than 80% of the value of insured losses globally. This trend is reflected in the latest Allianz Risk Barometer where natural catastrophes ranked No. 5, remaining a consistent presence in the annual business risk ranking.

While tornadoes often dominate the headlines, the most significant SCS losses are caused by hailstorms, which are estimated to account for as much as 50%-80% of all losses. Again, the US is the global hotspot for these events, as well as the top loss location for hail claims, but Allianz Commercial analysis shows many other regions have also suffered substantial hailstorm damages.

Building resilience against this peril needs to be on the agenda of every company with exposed assets in high-risk areas. Addressing it requires more than traditional scenario planning. New approaches leveraging AI can identify physical vulnerabilities in advance, enabling targeted risk mitigation that builds resilience.

Inflation and Expanding Footprints Fuel Losses

SCS exposures have been intensified by population growth and development into hazard-prone areas. Rapid urbanization, aging infrastructure and assets, and building codes out of step with current exposures can all heighten the risk and value of losses. The limited spatial footprint and brief duration of SCS belie their capacity for concentrated destruction, particularly in densely populated regions.

After hail, damaging winds, primarily from tornadoes and derechos, are the second major loss driver. Severe hailstorms primarily affect buildings – especially roofs – and all kinds of vehicles, which are major drivers of expensive insurance claims. The damage to physical assets from hailstones can be extensive. A baseball-sized hailstone, falling at speeds of up to 100mph (160kph) or more, can carry the same kinetic energy as a Major League fastball. Hail-related losses are not only growing in frequency but also shifting in character. What was once considered routine property damage now increasingly involves high-value assets, from aircraft fleets to solar installations, driving claim severity to levels that demand a fundamentally different response, Allianz Commercial analysis shows.
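The fastball comparison can be sanity-checked with basic physics (kinetic energy = ½mv²). The masses, sizes, and speeds below are illustrative assumptions for a back-of-the-envelope check, not figures from the Allianz report:

```python
import math

def kinetic_energy_joules(mass_kg: float, speed_ms: float) -> float:
    """Kinetic energy: KE = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_ms**2

# Assumed: a ~7cm (baseball-sized) hailstone of solid ice
# (density ~917 kg/m^3) falling at a terminal velocity of ~40 m/s.
radius_m = 0.035
ice_density = 917  # kg/m^3
hail_mass = ice_density * (4 / 3) * math.pi * radius_m**3  # ~0.16 kg
ke_hail = kinetic_energy_joules(hail_mass, 40.0)

# Assumed: a regulation baseball (~0.145 kg) at ~95 mph (~42.5 m/s).
ke_fastball = kinetic_energy_joules(0.145, 42.5)

print(f"hailstone: ~{ke_hail:.0f} J, fastball: ~{ke_fastball:.0f} J")
```

Under these assumptions both land in the neighborhood of 130 joules, which is why a single large hailstone can crack a roof tile or dent a car panel outright.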

Inflation has driven up the costs of rebuilding and repairing property, an increase that is compounded by supply chain disruptions such as shortages in skilled labor and materials. A clear example of inflation's effect can be seen in the case of roof replacements, which are a significant factor in insured losses from SCS. Since the year 2000, the cost of built-up roof replacements has reportedly surged by 250% in some regions, with costs rising 45% in the last five years alone, according to Willis Re.

Business Must Prioritize Risk Mitigation

Mitigation measures for withstanding SCS will vary depending on the nature of a business's activities and the local weather systems it is subject to. A data center in Tornado Alley in the central US will need different resilience strategies than an automotive dealership in hail-prone northern Spain.

Scenario analysis is essential for assessing risk exposure and building resilience to climate perils. Instead of reacting to losses after a major storm, organizations can now use AI-supported insights to identify weak points in roofs, facades or critical equipment and prioritize upgrades that minimize future damage. This helps organizations understand how different climate futures could affect their assets, operations, and long-term performance.

Scenario analysis can also reveal hidden vulnerabilities, identify possible tipping points, and show how risks may change over time. This forward-looking approach strengthens decision-making by allowing organizations to test adaptation options, focus investments, and design strategies that remain effective under multiple future conditions.

Read the full report at Allianz Commercial Severe Convective Storms.

Insurance Hiring Practices Hamper Transformation (Part 1)

Insurance companies hire for sector expertise, but transformation demands cross-boundary judgment that traditional filters miss.


There is something that happens inside large insurance organizations that is easy to observe and hard to argue with.

Technical expertise accumulates over years, sometimes decades. The people who have it know things that cannot be quickly learned — the product complexity, the regulatory relationships, the underwriting logic, the claims nuances that separate a defensible decision from an expensive one. That knowledge is real. It was hard-won. And organizations that prize it aren't wrong to do so.

But at some point, pride in expertise stops being a competitive advantage and becomes a closed door. "We know how it's done" is a statement that can mean two very different things. It can mean: we have deep capability that outsiders underestimate. Or it can mean: the way we've always done it is the way it will be done.

Those two meanings lived comfortably together for a long time. In a stable, regulated environment where the job was consistent execution at scale, they were essentially the same thing. They are not the same thing any more.

The role has changed. The hiring criteria haven't.

My direct experience is in life insurance, which gave me a specific window into one part of a much larger and more varied industry. But the structural pattern I'm describing shows up across the sector in the data.

Insurance has one of the highest employee tenure rates in the U.S. economy. According to the Bureau of Labor Statistics, median tenure in insurance was 4.9 years as of January 2024, compared with 3.5 years across the private sector overall. That gap reflects something real: Insurance is technically complex enough that the learning curve is steep, the career pathways are well-defined, and once someone has built genuine expertise, there are good reasons to stay.

The result is an industry with deep organizational memory, strong internal culture, and — this is the part worth sitting with — a hiring logic built around selecting for people who already fit that culture. Sector experience as the primary filter isn't laziness. In a domain this technical, it looks like prudence.

The problem is that the middle manager role in a transforming insurance organization now requires something sector experience doesn't reliably build. It requires what I'd call cross-boundary judgment: the ability to synthesize signals across domains that didn't used to talk to each other, to make decisions without a clear precedent in the playbook, to manage a workforce whose skills and expectations are shifting while simultaneously absorbing a strategic pivot and maintaining execution velocity. All at once. Often with the same or reduced resources.

That is not a job description. That is a description of what transformation actually asks of the people accountable for making it happen. And it is a set of demands that years of deep sector experience — on its own — does not prepare you for. In some cases, it works against you: the longer you've succeeded by applying known patterns, the harder it becomes to recognize when the pattern no longer fits.

What the wrong filter produces

Steve Jobs made a version of this argument decades ago, about the need for people who could move fluently between technical depth and human experience. George Anders developed it further in his work on what he called "jagged resumes" — candidates whose career paths crossed domains in ways that looked unconventional on paper and proved, in practice, to be exactly the flexibility that complex environments require.

Insurance has its own version of this problem, and it is structural. The sector-experience filter isn't applied by accident. It's applied because the technical complexity is real, because the regulatory environment — state-by-state in the U.S., country-by-country for global insurers — demands people who understand the stakes, and because the consequences of a bad judgment call in a regulated environment are not abstract. These are legitimate reasons to prize expertise.

But the filter is being used to solve a different problem than the one that now exists. The technical complexity of insurance hasn't disappeared. What's changed is that operating in that complexity now requires people who can also navigate conditions that have no established pattern — AI-driven workflows that are being invented in real time, workforce dynamics that have no precedent, competitive pressure from insurtechs that are unburdened by the infrastructure that makes large insurers what they are.

The sector is not short of people who know how insurance works. It is short of people who know how insurance works and can operate effectively when the rules of how it works are being rewritten around them.

Vertafore's 2023 survey found that roughly one-third of insurance professionals entered the industry from another sector. That means cross-sector pathways are already significant — the question is whether those entrants are being placed in roles where their cross-boundary capability actually gets used, or filtered out by hiring managers who default to the most familiar profile.

What this costs

The talent shortage pressure is real and accelerating. Industry projections suggest approximately 400,000 workers will leave the insurance industry through attrition and retirement in the near term. That is not a diversity initiative argument. It is a pipeline arithmetic argument. The experienced cohort is aging out faster than it is being replenished, and the incoming generation has different expectations.

A 2025 survey by Young Risk Professionals found that 69% of insurance workers ages 21 to 35 believe AI will improve their workflow — but only 8.5% report being strongly encouraged to use it at work. That is not a technology gap. That is a judgment gap. The people who could help the sector absorb what is coming are already inside the building. The question is whether the organization is structured to hear them.

The hiring filter problem compounds this. If the primary selection criterion remains sector experience, the incoming talent pool shrinks precisely when the need for new capability is at its highest. And if the organizational culture treats unfamiliarity with established patterns as a disqualification, it will systematically exclude the cross-boundary judgment that transformation now requires.

The question worth asking

Insurance organizations know they need to transform. The evidence is visible everywhere: AI pilots underway, digital initiatives announced, transformation programs staffed and funded. The commitment is real.

What is less clear is whether the talent strategy is keeping pace with the transformation ambition. The sector's technical and regulatory complexity hasn't diminished — it has grown. The expertise required to navigate a multi-state regulatory environment, to underwrite complex risk, to manage claims with precision — none of that is going away. Those capabilities still matter enormously.

The question is whether organizations are also building muscle in what transformation now additionally requires: the ability to synthesize across boundaries, to act under genuine uncertainty, to lead people through conditions that have no established pattern. These are not soft skills. They are the core operating requirements of change leadership in this era — capabilities like cross-functional coalition building, decision quality under ambiguity, and the organizational readiness to absorb what AI and digitization are actually asking of the people responsible for making them work.

Are insurance organizations finding the right balance between deep sector expertise and these newer demands? Are they developing change leaders built for this era — or are they still relying on the 20th-century model of change management, which assumed that expertise plus a clear playbook was enough?

Those are the talent questions that will determine whether transformation investments produce results — or produce another round of pilots that never quite scale.

This is the first in a two-part series. Part Two examines why organizations that hire differently still struggle to deploy better judgment when they find it.


Amy Radin


Amy Radin is a strategic advisor, keynote speaker, and Columbia University lecturer focused on why transformation succeeds or stalls in large, complex organizations. 

Drawing on senior leadership roles at Citi, American Express, and AXA, including one of the world’s first corporate chief innovation officer roles, she helps leaders build the capabilities required to absorb, scale, and sustain change.

Learn more at amyradin.com.


Uninsured Driver Problem Isn't What You Think

Non-standard auto insurers' fee structures may be producing the very uninsured population they're designed to avoid.


One in five. That's roughly how many drivers in states like Florida get behind the wheel without insurance, according to the Insurance Research Council's most recent data. The standard explanation is economic: coverage often simply costs too much, so some people go without. The policy response follows: steeper penalties, higher surcharges for lapsed drivers trying to come back. The diagnosis is not wrong, exactly. But it is incomplete in one critical respect: it treats the uninsured rate as something that happens to the insurance industry, rather than something the insurance industry has, in meaningful part, produced. I'd argue that a close look at how non-standard auto products are designed in Florida suggests the latter, and that the implication, for those of us who build these products, is more uncomfortable than the industry has typically been willing to acknowledge.

The Fee Cascade

Picture a driver who has been paying premiums faithfully for months. Then one paycheck comes up short. One missed installment. What happens next isn't bad luck; it's a sequence that was designed. Many non-standard carriers respond to a missed payment by assessing a Late Payment Fee. That fee gets added to the arrears, inflating what's already owed. If the swollen balance tips the driver over the edge, the policy cancels. Then comes the Reinstatement Fee. Now the driver is staring down up to four compounding obligations at once: the original missed amount, the late fee, the reinstatement fee, and potentially a catch-up payment to get back in good standing.

For a household running on variable income, that cascade is often the breaking point. Not a choice. Not a misunderstanding of consequences. The product made recovery too expensive at exactly the moment financial strain was most acute. This isn't an edge case. It's the mechanism by which the non-standard market, in aggregate, produces and sustains a meaningful share of the uninsured population.
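To make the compounding concrete, here is a minimal sketch of the cascade described above, using hypothetical fee amounts — none of these figures come from any specific carrier's filings:

```python
# Hypothetical fee schedule for a non-standard auto policy.
MISSED_INSTALLMENT = 150.00   # the payment that came up short
LATE_FEE = 25.00              # assessed on the missed payment
REINSTATEMENT_FEE = 50.00     # charged after the policy cancels
CATCH_UP_PAYMENT = 150.00     # next installment, due to restore good standing

def cost_to_recover(missed, late_fee, reinstatement_fee, catch_up):
    """Total a lapsed driver must produce at once to restore coverage."""
    return missed + late_fee + reinstatement_fee + catch_up

total = cost_to_recover(MISSED_INSTALLMENT, LATE_FEE,
                        REINSTATEMENT_FEE, CATCH_UP_PAYMENT)
print(f"A ${MISSED_INSTALLMENT:.0f} shortfall becomes a ${total:.0f} hurdle "
      f"({total / MISSED_INSTALLMENT:.1f}x the original miss)")
```

Under these illustrative numbers, a $150 shortfall becomes a $375 lump-sum obligation — two and a half times the original miss, due at the moment the household is least able to pay it.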

The Price of Re-Entry

The compounding doesn't end at cancellation. When a lapsed driver's financial position stabilizes and they try to get back on the road legally, the industry often greets them with a surcharge. The lapse, the very outcome the fee structure helped produce, is now a rating factor. Re-entry premiums are higher than they were before cancellation. Down payments may be steeper. Carriers often treat the interrupted tenure as a non-payment risk signal, so the customer who couldn't clear a compounded reinstatement balance may now face a bigger first-payment obligation than they would have had they never lapsed at all.

The cycle sustains itself. Fee structures, reinstatement terms, and rating factors are deliberate product choices, not features that emerged on their own. The uninsured rate is, among other things, a record of their cumulative effects.

A Different Product Design

When we built Clearcover's non-standard product in Florida, we started from a different premise: The fee cascade isn't an inevitable cost of serving a financially volatile segment. It's a design choice, and design choices can be remade.

We replaced the typical compounding structure with a single, knowable charge that doesn't grow during periods of financial strain. Paired with payment flexibility built around the income variability that defines much of the non-standard segment, the goal is straightforward: design products for the reality of how customers in this market actually manage money, and price the risk accordingly.

We're not arguing this is the only way to design a non-standard product. We're just saying it's a way worth trying, and that the early signal is promising enough to invite the broader segment to keep experimenting too.

The Honest Reckoning

Product design isn't the only reason drivers go uninsured. But honest reckoning requires acknowledging that the industry's fee structures and rating rules have not been neutral. They have worked, systematically, to make re-entry harder for the drivers most likely to lapse, compounding financial strain in a population that had already demonstrated it was operating at the margin. That's not an accident. It's a policy choice, and it has consequences that show up in uninsured rate data every year.

The philosophical shift the moment calls for isn't complicated, even if the execution is. As an industry, we all need to stop designing products that treat a missed payment as a revenue opportunity and start building them for the reality of how customers in this segment manage money. The uninsured driver problem isn't a compliance problem to be resolved through enforcement. It is the predictable output of product decisions this industry has made, and we have the capacity to remake those decisions intentionally.


Seth Henderson


Seth Henderson serves as the senior vice president of insurance product and growth at Clearcover.

Prior to Clearcover, Henderson held key roles at The Hartford and GEICO, where he contributed to the development and refinement of rating programs across both auto and home lines of business.

He holds a bachelor’s degree in history from Kennesaw State University.


Platform Modernization in Insurance: Why Now Is the Time to Accelerate

AI is transforming the way platforms are built. Open integration, flexible data structures, and meeting partners where they are will define the next market leaders.


Consider agriculture. It is one of the oldest industries in human history, and among the last you might expect artificial intelligence (AI) to meaningfully reshape. Yet precision agriculture is doing exactly that. Satellite imagery, soil sensors, weather models and other tools are being integrated and synthesized by a new generation of AI models to guide planting decisions, predict yield variability, and optimize irrigation at the individual acre level. Crop insurance underwriting, for an example closer to home, once driven almost entirely by historical loss tables and weather averages, is being rewritten around real-time field data that only machine learning models can interpret at scale. An industry defined by tradition and seasonality is being transformed by technology faster than some financial services firms have updated their customer portals.

The insurance industry is at a similar turning point. For years, insurers have orbited platform modernization, making small improvements and then pulling back due to operational risks. Legacy systems have kept organizations in a holding pattern: stable enough to operate, but less agile in adapting to the pace the market now demands.

That dynamic is shifting. AI is fundamentally transforming the way insurance platforms are built and run, turning modernization from a long-term goal into an immediate strategic priority. This is no longer only about small efficiency gains. Platform modernization now takes center stage in competitiveness, partnerships and making better decisions at scale.

Why legacy platforms keep insurers grounded

Many insurers operate within monolithic core systems that integrate policy administration, billing, claims, underwriting and reporting within a tightly coupled environment. Often customized over decades, these systems are deeply embedded in daily operations. As a result, modernization can feel less like a technology upgrade and more like open-heart surgery.

The limitation is not age but adaptability, and at a more fundamental level, the design philosophy of what a core transaction system should be. Legacy platforms were not architected to be open. They are walled gardens with narrow access, mostly through user interfaces, built to control entire workflows and departments within a single environment. This philosophy benefits software vendors but limits an insurer’s ability to customize, adapt and integrate AI capabilities. The issues go deeper than closed systems: Many use data models that evolved haphazardly over time, which hinders external integration, limits automation, and makes large-scale changes slower and more costly than organizations would like.

This creates a frustrating paradox. To leverage AI-assisted development or intelligent automation, insurers must first invest in foundational data cleanup and restructuring. These efforts are costly, time-consuming and out of sync with the pace of innovation today. For technology leaders, the question is no longer whether to modernize, but how to sequence it without destabilizing the business.

The data mindset that determines success

Modern, open systems help deliver faster underwriting, improved claims outcomes, sharper risk selection and scalable automation. However, these outcomes depend heavily on the quality of the underlying data, which, for many insurers, is the main limitation.

Specialty insurers working with diverse distribution networks across many lines of business encounter partners spanning a wide spectrum of technical maturity. From small, focused underwriters with spreadsheet-based toolsets to large organizations with dedicated engineering teams, each engagement brings its own data structures, conventions and integration requirements. The challenge is not only ingesting that data, but normalizing and validating it to support actuarial analysis, financial reporting and program oversight across a complex book of business.

When data foundations are weak, the consequences appear across everyday operations:

  • Program onboarding processes stall because agents and brokers cannot quickly answer questions that existing data should already resolve.
  • Claims adjudication is fragmented, with processes and details scattered across systems and inaccessible to all stakeholders in real time.
  • Bordereau files remain the standard, with limited adoption of modern data integration methods such as APIs, leaving validation manual and error-prone.
  • Reporting remains rigid, depending on static PDFs and IT assistance for even minor updates.

These are not merely edge cases; they are the natural result of platforms built before today’s data and integration requirements fully took shape.

Forward-thinking insurers are already addressing these issues by validating data earlier in the submission flow, streamlining ingestion pipelines, and offering program-level analytics that improve transparency for distribution partners. The ability to exchange accurate, timely data is becoming a meaningful competitive differentiator.
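What "validating data earlier in the submission flow" can look like in practice is a simple intake check that flags problems before a bordereau enters the processing pipeline. The sketch below is illustrative only — the field names and rules are hypothetical assumptions, since real bordereau schemas vary by program and carrier:

```python
from datetime import date

# Hypothetical required fields for a premium bordereau row.
REQUIRED = ("policy_number", "effective_date", "gross_premium")

def validate_row(row: dict) -> list[str]:
    """Return a list of problems found in one bordereau row.

    Catching these at intake avoids the manual reconciliation that
    otherwise surfaces days into the processing cycle.
    """
    errors = []
    for field in REQUIRED:
        if not row.get(field):
            errors.append(f"missing {field}")
    premium = row.get("gross_premium")
    if isinstance(premium, (int, float)) and premium < 0:
        errors.append("negative gross_premium")
    eff = row.get("effective_date")
    if isinstance(eff, date) and eff > date.today():
        # Flag rather than reject: future-dated business may be legitimate.
        errors.append("effective_date is in the future")
    return errors

row = {"policy_number": "P-1001", "effective_date": date(2025, 3, 1),
       "gross_premium": -250.0}
print(validate_row(row))  # → ['negative gross_premium']
```

The same rules can run identically against an API submission or an uploaded spreadsheet, which is what makes intake-time validation viable across partners with very different levels of technical maturity.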

Knowing where, and where not, to apply AI

One of the most consequential decisions technology leaders face during modernization is not which AI tools to adopt, but where to deploy them. AI delivers outsized returns in specific contexts and introduces risk when applied in the wrong ones.

The highest-value, lowest-risk applications tend to cluster around workflows and customer interactions: automating bordereau validation, surfacing claims anomalies, generating underwriting summaries, accelerating document review, or guiding agents through submission requirements. These are areas where AI augments human judgment, reduces friction, and operates alongside existing systems without requiring those systems to change.

Replacing core transaction systems is a different conversation. Policy administration, billing, and claims settlement involve regulatory compliance, audit trails and financial integrity requirements that demand extreme care. Applying AI directly to these systems, without strong data governance and testing frameworks, introduces risk that often outweighs the short-term gain. The better path is typically to modernize the underlying architecture first, then build AI capabilities on a stable foundation.

Organizations that conflate “apply AI everywhere” with a modernization strategy often find themselves with sophisticated models sitting on unreliable data, or automated workflows breaking at the points where legacy systems assert themselves. Discipline about where AI creates value, and where foundational work must come first, is what separates effective transformation from expensive experimentation.

How AI changes the modernization equation

AI is not only speeding up platform modernization in insurance; it is transforming how it occurs. In the past, transformation has often been seen as a large-scale, multi-year project to replace core systems. For platforms handling high transaction volumes, the cost, complexity and operational risk of this “big bang” method often outweighed the advantages.

AI shifts that calculation in two distinct but complementary ways: how new applications and tools are built and deployed, and how AI is embedded directly into workflows to support and automate decisions. These are not the same thing, and conflating them leads to poorly sequenced investments.

AI development tools: Building and deploying faster

The first wave of AI impact for most technology organizations is on the build side: using AI-assisted development tools to compress the time it takes to design, build, test and ship new internal applications. Tools that generate code, write tests, scaffold architectures and accelerate documentation review are not marginal productivity improvements. They are changing what a small team of engineers can deliver in a quarter.

For insurers, this means that internal tools, which previously required months or years of development, in addition to a vendor and system integrator relationship, can now be prototyped in weeks by a small internal team: a partner portal that consolidates program reporting, a claims intake tool that pre-populates fields from submitted documents, and a bordereau ingestion utility that catches data errors at intake rather than surfacing them days into the processing cycle. These applications do not require replacing the core system; they sit alongside it, connect via APIs, and deliver immediate operational value, if the core system supports it.
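To make the bordereau-ingestion idea concrete, here is a minimal sketch of the kind of intake-time validation such a utility might perform. The field names (`policy_number`, `effective_date`, `gross_premium`) and rules are hypothetical illustrations, not a description of any actual product:

```python
# Minimal sketch of intake-time bordereau validation (hypothetical fields).
# Each row is checked as it arrives, so errors surface immediately rather
# than days into the processing cycle.

REQUIRED_FIELDS = {"policy_number", "effective_date", "gross_premium"}

def validate_row(row: dict) -> list[str]:
    """Return a list of human-readable errors for one bordereau row."""
    errors = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    premium = row.get("gross_premium")
    if premium is not None:
        try:
            if float(premium) < 0:
                errors.append("gross_premium is negative")
        except (TypeError, ValueError):
            errors.append(f"gross_premium is not numeric: {premium!r}")
    return errors

def validate_bordereau(rows: list[dict]) -> dict[int, list[str]]:
    """Map row index -> errors, for rows that fail validation."""
    return {i: errs for i, row in enumerate(rows) if (errs := validate_row(row))}
```

A utility like this sits alongside the core system, as the article describes: it rejects or flags bad rows at the door, rather than letting them propagate into downstream processing.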

Technology teams that embrace AI development tooling can reclaim capabilities that have historically required large vendor programs or costly system integrators. They can move faster, iterate based on user feedback, and build institutional knowledge rather than external dependency. The organizations deploying these tools today are already compressing timelines that once seemed fixed.

Embedding AI in workflows: decisions at scale

The second wave is more fundamental: embedding AI directly into operational workflows to improve and automate the decisions that drive the business. This is where the economic case for modernization becomes clearest, and where the data foundation matters most.

Workflow-embedded AI is not a tool a user opens and closes. It is judgment built into the process itself:

  • An underwriting workflow that scores submission quality before a human reviews it;
  • A claims triage model that routes cases by complexity and coverage signals in real time; and
  • A renewal pricing engine that incorporates loss history, external data, and portfolio exposure without requiring manual assembly.

These are structural changes to how decisions get made, not incremental improvements to existing processes.
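The claims-triage example above can be sketched in a few lines. This is a toy illustration with invented signals and thresholds, and the scoring function is a stand-in for a trained model; the point is that the model's output is consumed inside the routing step itself, not in a separate tool:

```python
# Hypothetical sketch of workflow-embedded triage: a complexity score is
# computed and acted on inside the routing step itself.

def triage_score(claim: dict) -> float:
    """Toy stand-in for a trained model: higher means more complex."""
    score = 0.0
    if claim.get("litigation_flag"):
        score += 0.5
    if claim.get("reserve", 0) > 50_000:
        score += 0.3
    if claim.get("coverage_disputed"):
        score += 0.2
    return score

def route_claim(claim: dict) -> str:
    """Route by complexity signal; thresholds are purely illustrative."""
    s = triage_score(claim)
    if s >= 0.5:
        return "senior_adjuster"
    if s >= 0.2:
        return "standard_queue"
    return "straight_through"
```

Note that the reliability of a router like this depends entirely on the claim data being complete and consistently coded, which is exactly the data-foundation point the article goes on to make.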

The distinction between these two modes matters for sequencing. AI development tools can deliver value relatively quickly, even in environments with imperfect data, because they accelerate human work rather than depend on it. Workflow-embedded AI, by contrast, is only as reliable as the data it operates on. A claims-routing model built on incomplete or inconsistently coded data will produce inconsistent decisions. Getting the data foundation right is a prerequisite for this second wave, not a parallel workstream.

Together, these shifts fundamentally change the economics of modernization, lowering barriers to entry and expanding what is possible for more organizations.

Choosing the right retirement strategy for legacy systems

How an organization exits its legacy systems matters as much as what it builds next. The right strategy depends on transaction volume, regulatory complexity, partner dependencies and appetite for operational risk. Three patterns emerge repeatedly in practice.

The strangler pattern

Rather than replacing a legacy system wholesale, new functionality is built alongside it. The modern system gradually takes over individual capabilities (a microservice here, an API layer there) until the legacy platform is functionally surrounded and can be decommissioned without a disruptive cutover. This approach minimizes operational risk and is particularly effective for large, tightly coupled systems where a full replacement is not feasible.
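The mechanics of the strangler pattern can be sketched as a routing facade. The capability names here are hypothetical, and real implementations typically route at an API gateway rather than in application code, but the shape is the same: callers hit one entry point, and each capability's route flips from legacy to modern as it is migrated, with no big-bang cutover:

```python
# Illustrative strangler-pattern facade (hypothetical capability names).
# Callers always hit route(); as capabilities migrate, their handling
# flips from the legacy system to the modern one, one at a time.

def legacy_handler(request: dict) -> str:
    return f"legacy handled {request['capability']}"

def modern_handler(request: dict) -> str:
    return f"modern handled {request['capability']}"

# Capabilities migrated so far; this set grows over time until the
# legacy platform is fully surrounded and can be decommissioned.
MIGRATED = {"claims_intake", "document_generation"}

def route(request: dict) -> str:
    handler = modern_handler if request["capability"] in MIGRATED else legacy_handler
    return handler(request)
```

Because callers never see the split, each migration is an internal change to the routing table rather than a disruptive cutover, which is what makes the pattern suitable for tightly coupled core systems.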

Microservicing and modular decomposition

Some organizations carve specific domains out of a monolithic system and rebuild them as independent, API-driven services, such as claims intake, document generation, or rating, while leaving the core transaction engine intact for now. This creates optionality: Each domain can evolve independently, integrations become cleaner, and the organization builds modern engineering capability without betting the business on a single transformation program.

Sunsetting and runoff

For legacy systems supporting books of business with short or reasonably short policy periods, managed wind-down is often the most pragmatic answer. New business moves to the modern platform immediately; the legacy system is maintained, but not invested in, for the life of the in-force policies. This approach is less visible than transformation but is frequently the most cost-effective and operationally sound path for systems that are not worth rebuilding around.

A mature modernization strategy typically combines elements of all three: strangling core transaction systems, decomposing specific domains into services, and sunsetting legacy platforms that no longer justify investment. Recognizing which pattern applies where is itself a strategic discipline.

The right conditions for change

Since the insurance ecosystem will never be entirely uniform, achieving complete alignment across platforms or data models is neither practical nor essential.

What is achievable is better data exchange. More interactive, near-real-time data integration can deliver measurable value without requiring a complete system overhaul. Progress depends as much on collaboration as on technology, emphasizing the need for open, practical discussions about current data flows and how they can be enhanced for the future.

Ultimately, success will not be measured by who creates the most advanced platform, but by who develops the most adaptable one. Open integration, flexible data structures, and the ability to meet partners where they are will define the next wave of market leaders. The industry has spent years addressing this challenge. With the right tools, patterns, and organizational discipline now in place, the conditions for meaningful change are finally within reach.

About the author

Joe Lettween is Chief Innovation, Data Science, and Technology Officer for global specialty insurer Fortegra.

 

Sponsored by: Fortegra


Fortegra

An industry leader for more than 45 years, we help businesses and individuals manage risk by creating and delivering reliable insurance and risk management solutions. Learn more about who we are.  

May 2026 ITL FOCUS: Workers' Comp

ITL FOCUS is a monthly initiative featuring topics related to innovation in risk management and insurance.

ITL Focus: Workers' Comp

FROM THE EDITOR

Workers' compensation has always been a line of business defined by complexity — rising medical costs, shifting workforce dynamics, mounting litigation, and an ever-changing regulatory landscape. But a new force is reshaping how carriers approach every piece of that puzzle: generative AI.

For many insurers, especially state-affiliated funds shifting to mutual models, the pressure to grow and differentiate has never been greater. The old playbook — focused, single-state, single-line — is no longer enough. Carriers are sitting on significant capital while their core books contract, and the question on everyone's mind is: what's next?

This month, we explore that question through a conversation with Tirath Desai, PwC's insurance core transformation and AI lead, about where GenAI is already delivering real advantage — and where the road ahead still requires careful navigation.

From reimagining the claims experience for injured workers, to streamlining fragmented payment processes, to using AI-powered visual data to prevent accidents, Desai lays out a vision of workers' comp that is faster, smarter, and — crucially — more human-centered. He also tackles the ecosystem question head-on: No carrier can build everything alone, and the winners will be those who know where to invest and where to collaborate.

Whether your organization is just beginning to explore AI or looking to move beyond isolated pilots, Desai's advice is clear: think bigger, build governance first, and get your data house in order. Read the full interview to find out how to position your organization for what's next.

 
 
An Interview

GenAI Reshapes Workers' Comp

Paul Carroll

GenAI is reshaping insurance. Let’s start there—what’s changing in workers’ compensation?

Tirath Desai

It’s becoming a central conversation. Carriers are asking a fundamental question: what’s next? Many are coming out of a soft market and rethinking growth. Workers’ compensation insurers across the globe continue to navigate common issues: the changing nature of work, rising medical costs, increasing litigation, and regulatory change.
 
That’s especially true for state-affiliated funds transitioning into mutual models. Historically, they’ve been focused—single state, single line. Now growth is harder to find. That creates pressure.
 
Besides competition, there is a need for expanded capabilities. Differentiation in a crowded market. So, the questions shift. How do we grow? Where do we collaborate? What makes us stand out? AI is at the center of that discussion. Not the only answer—but a critical one.

Read the full interview >
 

MORE ON WORKERS' COMP

AI Transforms Workers' Comp for Brokers

by Adam Price

AI enables overwhelmed workers' comp brokers to shift from transactional quoting to strategic risk advisory relationships that employers increasingly demand.

Read More

 

Gig Workers Reshape Insurance Market

by Michael Giusti

As gig workers untether from employer-sponsored benefits, insurers must reimagine underwriting and distribution for a decentralized workforce.

Read More

 

The Future of Workers’ Comp

by James Benham

Workers' compensation systems need cloud-native transformation to address modern workforce challenges and rising claim severity.

Read More

 


Uncovering Hidden Fraud Networks

by Marty Ellingsworth, Jay Mullen

Sophisticated fraud thrives in fragmented data. Entity resolution, knowledge graphs, and geospatial analytics can unite disparate records and expose hidden networks.

Read More

 

Strategies to Fight Workers' Comp Fraud

by Roberta Mercado

Advanced AI and predictive fraud models transform workers' compensation fraud detection from costly burden into a strategic risk management advantage.
Read More
 

What Medical Inflation Means for Workers’ Comp

by Pragatee Dhakal

Healthcare inflation surges past general price trends, pressuring P&C carriers to adopt data-driven claims strategies.
Read More
 
 
 

MORE FROM OUR SPONSOR

Reimagining Workers' Compensation in the Age of Generative AI

Sponsored by PwC

While workers' comp has seen improved performance over the past decade, the sector faces mounting pressures—from medical cost inflation and rising mental health claims to litigation exposure and evolving workplace dynamics. This paper from PwC and Guidewire examines how GenAI, one of the fastest-adopted technologies in history, can help insurers navigate these challenges.
Read More

Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.