
Carriers Lose Millions on Manual Claims

Insurers hemorrhage millions on manual claims processing; targeted automation slashes costs by half within months.


Here is a number that should keep insurance executives up at night: The average cycle time for a property claim is still hovering around 30 days. It's not because the damage itself is complicated; it's because the process is.

From what I've seen across the industry, carriers are burning between $7 and $15 just to manually process a single claim document. Given that the average claim generates 15 to 25 documents, you're looking at up to $375 in administrative overhead before an adjuster even makes a coverage determination. Multiply that by a mid-sized carrier processing 50,000 claims a year, and you're hemorrhaging $10 million to $18 million annually – spent strictly on "paper-pushing."
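The arithmetic above fits in a few lines. The per-document and per-claim figures come from the text; pairing the midpoints to reach the roughly $10 million floor is my own assumption:

```python
# Back-of-envelope model using the figures cited above (illustrative only).
claims_per_year = 50_000
cost_per_doc = (7, 15)       # USD per manually processed document
docs_per_claim = (15, 25)    # documents generated per claim

per_claim_max = cost_per_doc[1] * docs_per_claim[1]   # $375 of overhead
annual_mid = 10 * 20 * claims_per_year                # midpoint assumptions
annual_max = per_claim_max * claims_per_year

print(f"Per claim: up to ${per_claim_max}")
print(f"Annual overhead: ~${annual_mid/1e6:.0f}M to ${annual_max/1e6:.1f}M")
```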

In my 20-plus years working with insurers, I've watched this problem snowball. The root cause is almost always the same: legacy workflows where "paper" was the default are now buckling under the weight of digital data they were never designed to handle.

Where is the Money Actually Leaking?

Let's break down the anatomy of a manual claim. When a file hits the system, the path is typical: a document arrives – PDF, photo, scan, or email attachment. Someone opens it, reads it, and manually keys the data into the claims management system (CMS). A second person validates that data. A third checks it against policy terms. Only then does it reach an adjuster's desk.

Every handoff is a delay, and every delay is a line item. Your people aren't the problem. The process is. You are forcing high-value specialists to waste their bandwidth on tasks a machine could execute in seconds.

Then there's the "invisible" cost that never makes it onto the spreadsheet: customer churn. A 2025 J.D. Power study found that claim satisfaction drops by 15% for every additional week of processing time. Dissatisfied claimants are 2.5 times more likely to switch carriers at renewal. Manual processes aren't just expensive; they are actively driving your book of business to the competition.

Why Automation Projects Keep Failing

If the solution were as simple as "automate everything," every carrier would have done it by now. The reality is that most initiatives fail because firms try to "boil the ocean."

I've seen carriers sink $2 million and 18 months into building a "total automation" platform, only to find it handles 30% of their claim types while the rest still require a manual "workaround." The project is branded a failure, and the organization develops an allergy to the word "automation" for the next three years.

The mistake is treating claims automation as one monolithic project instead of a series of targeted strikes. You don't need to automate the entire lifecycle on Day One. You need to identify the specific bottlenecks where manual labor drives the highest cost and tackle those first.

What Actually Works?

The carriers successfully modernizing their operations follow a specific blueprint. They start with document ingestion – not because it's the "sexiest" problem, but because it's the costliest.

Intelligent document processing (IDP) powered by large language models (LLMs) can now extract structured data from unstructured sources – medical records, repair estimates, police reports, and invoices – with 90%+ accuracy. It doesn't have to be perfect. Outliers are flagged for human review, while 80–85% of standard documents flow through the system automatically.
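A minimal sketch of that triage logic, assuming a generic extraction result and an illustrative 0.90 confidence threshold; no specific IDP product's API is implied:

```python
# Hypothetical routing for IDP output: the threshold, class, and field
# names are illustrative, not any particular vendor's interface.
from dataclasses import dataclass

@dataclass
class Extraction:
    doc_id: str
    fields: dict        # extracted key/value pairs
    confidence: float   # model confidence, 0.0-1.0

AUTO_THRESHOLD = 0.90   # assumption: tune per document type

def route(extraction: Extraction) -> str:
    """Straight-through-process confident extractions; queue the rest."""
    if extraction.confidence >= AUTO_THRESHOLD:
        return "auto"          # flows into the CMS untouched
    return "human_review"      # outlier flagged for a person

print(route(Extraction("inv-001", {"total": "1240.00"}, 0.97)))  # auto
```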

The second step is externalizing business rules. If every change to your adjudication logic requires a developer and a release cycle, you'll never move fast enough. Modern firms pull business logic out of the core system and into dedicated rule engines. When a regulation changes or a new fraud pattern emerges, a business analyst updates the rule directly – no IT project required.
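One common way to externalize rules is to express them as data the application interprets, so an analyst can change the logic without a release cycle. A toy sketch, with invented rule names:

```python
# Hypothetical externalized adjudication rules: the logic lives in data an
# analyst can edit, not in application code. Rule names are invented.
RULES = [
    {"name": "cat_season_surge", "field": "claim_amount", "op": "gt",
     "value": 50_000, "action": "escalate"},
    {"name": "duplicate_invoice", "field": "invoice_count", "op": "gt",
     "value": 1, "action": "flag_fraud"},
]

OPS = {"gt": lambda a, b: a > b, "eq": lambda a, b: a == b}

def evaluate(claim: dict) -> list:
    """Return every action triggered by a claim; updating RULES needs no deploy."""
    return [r["action"] for r in RULES
            if OPS[r["op"]](claim.get(r["field"], 0), r["value"])]

print(evaluate({"claim_amount": 60_000, "invoice_count": 1}))  # ['escalate']
```

Production rule engines add versioning, audit trails, and testing, but the design principle is the same: rules are configuration, not code.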

The third step – the one most often missed – is building a feedback loop. The system should learn from every decision. Which document types require the most manual corrections? Which rules trigger the most exceptions? That data is gold, but most carriers throw it away because their systems weren't designed to capture it.

The Math Driving the Decision

Let's run a simulation for a mid-market P&C (property & casualty) insurer:

  • Manual processing (50,000 claims/year): ~$12 million in direct OpEx. Add in indirect costs – churn, leakage, and overtime during CAT (catastrophe) season – and you're nearing $18 million.
  • Phased modernization program: Typically costs $500,000 to $1.2 million (implementation) plus $200,000–$400,000 in annual maintenance.
  • The Result: Optimization through automation typically reduces OpEx by 40–60%.

Even with a conservative 40% savings, that carrier keeps nearly $5 million a year in their pocket. The tech investment pays for itself in just three to four months. Unlike many "moonshot" tech plays, this ROI is driven by hard cost savings, not speculative revenue growth.
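Using the conservative end of the ranges above (40% savings, top-of-range costs), the payback works out as:

```python
# Payback sketch; all inputs are the figures cited in the text.
direct_opex = 12_000_000      # annual manual-processing cost
savings_rate = 0.40           # low end of the 40-60% range
implementation = 1_200_000    # top of the implementation range
maintenance = 400_000         # top of the annual maintenance range

annual_savings = direct_opex * savings_rate            # $4.8M/year
net_monthly = (annual_savings - maintenance) / 12      # savings net of upkeep
payback_months = implementation / net_monthly

print(f"Payback period: {payback_months:.1f} months")  # ~3.3, i.e. 3-4 months
```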

The Question You Should Be Asking

The cost of manual claims isn't just what you're spending today. It's what you lose every quarter you delay: in direct costs, in customer NPS, and in market share.

At your next board meeting, I suggest bringing one calculation: the fully loaded cost per claim, including document handling, validation, rework, and churn impact. Benchmark it against these figures. If there's a gap, the business case writes itself.

Don't try to flip the switch on everything at once. Start with document ingestion. Prove the value. Then scale. The winners in this industry won't be the ones with the biggest IT budgets; they'll be the ones who stopped treating claims as a cost center and started seeing it as a competitive edge.

Moving to the Cloud Poses New Risk

Insurers moving to the cloud face a governance challenge best addressed through a shared responsibility model.


Insurers increasingly operate in hybrid and multi-cloud environments. These environments have enabled operational agility and advanced data modeling, but they have also introduced a governance challenge: Accountability doesn't migrate just because infrastructure does.

Insurers need to adopt a shared responsibility model (SRM), a strategic risk governance model that has direct implications for regulatory exposure, underwriting integrity, third-party risk management, and board-level oversight.

Why the Shared Responsibility Model Matters at the Executive Level

The SRM defines how security and operational responsibilities are divided between a cloud service provider (CSP) and the enterprise customer. The provider secures the infrastructure of the cloud, and the client remains accountable for what happens in the cloud.

This distinction is operational, but it also shapes your enterprise risk posture. For insurers, the consequences of misunderstanding this model extend beyond cybersecurity incidents. They can affect your financial solvency, regulatory compliance, customer trust, and enterprise valuation.

Governance and Board-Level Accountability

Insurance boards are increasingly expected to demonstrate oversight of cyber and operational resilience. Regulators and rating agencies now view cyber governance as a component of enterprise risk management (ERM), not a standalone IT function.

Delegating infrastructure to a CSP does not eliminate fiduciary responsibility. If policyholder data is exposed due to misconfigured access controls or weak identity governance, accountability rests with the insurer, not the cloud provider.

Security teams must ensure:

  • Clear ownership of cloud-related risks within ERM frameworks
  • Defined reporting lines between IT, risk, compliance, and the board
  • Periodic review of cloud security posture at the governance level
  • Integration of SRM responsibilities into internal control structures

The SRM becomes a tool for governance clarity and helps boards understand where operational responsibility ends and strategic accountability remains.

Regulatory Exposure in a Cloud-Dependent Environment

Insurance is one of the most heavily regulated industries globally. Whether operating under state insurance departments, NAIC guidance, international solvency frameworks, or emerging cyber regulations, insurers must demonstrate control over customer data and operational systems.

Cloud providers may hold certifications, but regulators evaluate how insurers configure, monitor, and govern their own environments.

From an executive perspective, this raises crucial questions:

  • Who validates that cloud configurations meet regulatory requirements?
  • How are audit logs retained and reviewed?
  • What controls govern privileged access?
  • How is compliance continuously monitored in dynamic cloud environments?

As regulatory scrutiny intensifies, insurers should also assess whether their cloud governance aligns with control frameworks like SOC, ISO, or HITRUST, particularly when handling sensitive policyholder and claims data.

The SRM clarifies that compliance responsibility for data handling, access management, and reporting obligations remains with the client. Misunderstanding this boundary can result in fines, enforcement actions, and reputational damage.

Third-Party and Vendor Risk Increases

Cloud adoption heightens traditional vendor risk. Historically, insurers outsourced discrete services. Now, they embed core operations into cloud ecosystems, creating layered dependencies: cloud infrastructure providers, SaaS vendors, analytics platforms, and API integrations. Each additional layer expands the attack surface and complicates accountability.

Executives should view SRM as a foundational element of third-party risk management:

  • Are contractual agreements aligned with actual responsibility boundaries?
  • Do vendor assessments account for the "in-the-cloud" obligations retained internally?
  • Are incident response roles clearly defined between parties?
  • Is there transparency into subcontractors within the cloud supply chain?

Assuming responsibility shifts entirely to vendors is one of the most dangerous misconceptions in modern enterprise environments.

Implications for Underwriting and Risk Transfer Strategy

Understanding the SRM is essential for insurers underwriting cyber policies, because it directly affects risk assessment.

Policyholders frequently misunderstand their own cloud responsibilities. This creates underwriting blind spots if insurers fail to evaluate how insured organizations manage identity, access, configuration, and monitoring within cloud environments.

Executives overseeing underwriting strategy should consider:

  • Incorporating SRM awareness into cyber risk questionnaires
  • Assessing insureds' cloud governance maturity
  • Evaluating reliance on shared services within documented control frameworks
  • Adjusting pricing or exclusions based on configuration risk

Internally, insurers have to recognize that their own cyber risk profile influences capital allocation, reinsurance negotiations, and rating agency assessments. The SRM affects both sides of the balance sheet: operational risk and underwriting exposure.

Operational Resilience and Business Continuity

Cloud platforms promise resilience, but resilience is not automatic. Clients are still responsible for:

  • Backup validation and recovery testing
  • Access segregation
  • Configuration management
  • Application-layer security

Executives should require periodic assurance that cloud resilience assumptions are validated through testing, not just vendor documentation. Operational disruption during claims processing or policy administration can create financial and reputational consequences that exceed the cost of the original cyber event.

Strategic Moves for Insurance Leadership

For insurance executives, the SRM is less about technology than about disciplined accountability. It's a governance discipline that directly affects enterprise value, regulatory standing, and underwriting performance.

Cloud adoption changes how risk is distributed, but it doesn't change who is accountable. Leadership teams have to ensure that responsibility boundaries are clearly understood, contractually aligned, and operationally enforced.

The executive agenda should include several strategic priorities:

  • Embed SRM clarity into enterprise risk management frameworks.
  • Align cloud governance with regulatory compliance oversight.
  • Strengthen third-party risk assessments to reflect real accountability boundaries.
  • Integrate SRM awareness into cyber underwriting practices.
  • Elevate cloud security discussions to the board level as part of fiduciary duty.

Strengthen Security with the Shared Responsibility Model

Cloud transformation will continue to accelerate across many aspects of the insurance industry, including underwriting, claims automation, AI-driven analytics, and customer engagement platforms. The insurers that succeed will not be those who outsource responsibility but those who understand where it remains.

Insurers Must Fix Enterprise Design to Use AI Right

Insurers remain trapped in AI pilot purgatory by layering technology over fractured legacy systems instead of solving core enterprise design problems.


Insurance's value takes many forms. So do insurers' problems.

We can't move without insurance, yet we don't trust it and often don't value it, either. It's a cost, a necessary evil, essentially a direct debit on the balance sheet of our lives and businesses we would rather not have. 

Here we are at the tipping point where math and neurons can think for us, and at levels of "intelligence" we are often told we can't even comprehend. Despite this, most of what we are artificially trying to make more intelligent is simply what we do today. And to many of us, this doesn't seem right at all.

The issue for strategic thinkers remains "value chain" thinking, where we focus on minimizing costs and maximizing distribution (channels, coverage, capacity). This puts us at a permanent disadvantage, where new value, through new working models in new technology, is pushed aside for cost savings and efficiency. Worse, when we try to do this with prediction token engines, we are constantly backpedaling because we live in an industry that needs us to be highly deterministic. This is one of the key reasons we remain in pilot purgatory with AI far too often.

We need to solve the meaningful problems we face and start to evolve our business and technology architectures into ecosystems capable of maximizing the knowledge of a customer (and their risks) and acting on this as near to real time as needed.

To do this, we have to address major issues or misperceptions:

  • Many insurers are building houses on sand by layering AI over a "messy middle" of fragmented data and customer-blind legacy processes. AI isn't a repair kit for insurers' broken business models.
  • If we apply AI to a fractured, policy-centric design, we just get fractured, policy-centric mistakes - at scale and at speed. We are simply automating the friction, industrializing the silos, and alienating the customer faster than ever before.
  • The insurance industry is obsessed with plugging in AI, but it's still in pilot purgatory. And that's because layering GenAI over outdated data structures and silos means we aren't innovating; we're building a house on quicksand.

Framing the answer to this paradoxical state

This is, therefore, an enterprise design problem, where policy-centric architectures have to give way to customer-centric enterprises.

Building AI into this new model is vital, but so is building in risk, regulations, compliance, auditing and legal. If things move in real time and intelligently, so will all these things as well.

We need to move from a "data & AI" strategic frame where these things become almost self-serving toward an "intelligent" business model, where data is seen as a perishable asset, constantly mined for insight and acted on as close to real time as is needed, but in a controlled, deterministic and responsible way.

To make this possible, we need to deal with the messy middle. That's because operations in insurance are the big unlock - where the magic (or the misery) happens. If the middle is a black box of manual hand-offs and disconnected spreadsheets, AI will choke on it anyway.

Insurance is a process-heavy industry, one where simply making a claim also means the insurer understands the wider context we are in, that it will focus communications on the best resolution path, that other communications or needs are sympathetically managed in this context, like a repairer, and so on. It's multi-faceted, and the operations, customer experience, and data that weave it together need to be symbiotic. We are at the point now where operational efficiencies and better customer experiences are mutually beneficial, and not the opposing forces they are all too often seen as.

To get to the end state where AI actually works and starts to create new value, we need an evolutionary model to aim for. And we need to clean up this messy middle and orchestrate the flow of outcomes more intelligently - I tend to call this intelligent orchestration. Systems of intelligence are hyped and relevant, but systems of outcome are needed to make them count.

In conclusion

Foundationally, insurers need a robust data orchestration layer (not more data storage) and a unified data model built around the customer. Data should be fluid, so events are available and usable when they need to be.

Insurers need to be able to interoperate agents, with telemetry across their estates, all the way into employee and customer use. And they need a deterministic framework that harnesses agentic solutions and ensures human intervention. But it also needs to be deliberately designed to maximize human interaction when it's needed.

AI is an outcome, not the goal, and once insurers solve the enterprise design problem and move from policy-centric to customer-centric via intelligent orchestration, AI likely becomes the hero. A hero they can control, manage the risk of, and interoperate and adapt at will.


Rory Yates

Rory Yates is strategic adviser for insurance at Synechron, a digital transformation consulting firm.

He previously was the SVP of corporate strategy at EIS, a core technology platform provider for the insurance sector.

Lemonade Throws Down the Gauntlet

The 10-year-old insurtech carrier claims it has an insurmountable lead in AI — an overly bold assertion, but one that deserves a hard look. 


For a 10-year-old carrier that still has a combined ratio far above 100, Lemonade has never been reluctant about dissing its established competitors or about patting itself on the back. In that vein, CEO Daniel Schreiber recently published a manifesto titled, "Why Incumbents Won't Catch Up." 

The cheeky claim is that Lemonade was founded as an AI-native and thus has a 10-year head start on State Farm, Allstate, Progressive, GEICO, et al. Schreiber says the incumbents are "optimized for yesterday," while Lemonade is "designed for the world as it’s becoming." He argues that Lemonade's advantage will keep growing. 

Schreiber's argument doesn't make me want to rush out and buy stock in Lemonade, which, after some years in the wilderness, has recently surged and now carries a hefty $5.1 billion market valuation. But I don't dismiss his argument, either. He's certainly right that early movers like Lemonade have an advantage that incumbents need to reckon with. He also poses three measures for AI adoption that all insurance companies should test themselves on.

Let's have a look. 

Schreiber writes that "companies who slap technology on top of their legacy businesses are not changing their DNA: their incentives, capital allocation logic, talent mix, data architecture, distribution dependencies, brand promise, investor expectations, and legacy stacks. Those systems and processes co-evolved over many decades. They cannot be reengineered piecemeal; and untangling them is laborious and risky."

He says Lemonade began as an AI-native: 

"The result is a different cost structure. A faster clock speed. A compounding feedback loop that continuously improves underwriting, customer experience, and efficiency.

"The question, then, is not whether incumbents can “use AI.” Of course they can. And they should. The question is whether they can re-architect themselves to close the gap to Lemonade. 

"That seems unlikely."

To buttress his argument, he suggests three tests for whether an insurer is adopting AI at its core. All three, of course, show Lemonade outpacing incumbents. 

The first is what Schreiber calls The Scaling Quotient. You look at how fast you're growing, by whatever measure you use. You then divide that growth rate by the rate at which your headcount is increasing. If you're growing, say, your policies in force far faster than you're adding people, you're winning. If not, not. 

Second is Loss Adjustment Expense Ratio. You take your loss adjustment expenses and divide by your gross earned premium. If you're spending a lower percentage than the industry average, and the percentage is declining, you're winning. If not, not. 

Third is what Schreiber calls Structural Precision. This involves two calculations of gross profit. First is gross profit divided by your exposure — you want as high a profit as you can get based on the risk you're taking on. Second is gross profit divided by your sales and marketing expenses — you want to acquire customers as efficiently as possible. You add the two calculations, then compare yourself to the industry over time. 
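The three tests above reduce to simple ratios. A sketch, with function and variable names of my own choosing:

```python
# Sketch of the three tests described above; names are mine, not Schreiber's.
def scaling_quotient(growth_rate, headcount_growth):
    """Growth (e.g., in policies in force) divided by headcount growth.
    Well above 1 means you scale without hiring in lockstep."""
    return growth_rate / headcount_growth

def lae_ratio(loss_adjustment_expense, gross_earned_premium):
    """Below the industry average, and falling, is winning."""
    return loss_adjustment_expense / gross_earned_premium

def structural_precision(gross_profit, exposure, sales_and_marketing):
    """Profit per unit of risk taken on, plus profit per acquisition dollar;
    compare the sum against the industry over time."""
    return gross_profit / exposure + gross_profit / sales_and_marketing
```

Each metric only becomes meaningful as a trend line benchmarked against peers, which is the point of Schreiber's framing.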

Those all strike me as fair enough measures of efficiency for any carrier, and AI is certainly the main driver these days. I think his approach can be extended to other players in the insurance industry, not just carriers. Agencies, for instance, can measure whether AI is making them more efficient in winning clients, in processing renewals and so on. 

If you take Schreiber's piece as a wake-up call for incumbents, I can get behind that, too. They can't just be tacking on bits of AI to become slightly more efficient, and they can't just wait and see. The carriers developed their cultures over decades, and changing them will take many years. People don't change overnight even if the technology does. Incumbents have to be thinking big — NOW — and experimenting with ways to allow for radical change. That may even mean new service-based business models, such as Predict & Prevent, or very different distribution channels, such as through embedded insurance. 

Schreiber can certainly point to lots of industries where upstarts with a head start and momentum overcame incumbent behemoths — look at Kodak, Blockbuster, Nokia and Blackberry, city taxi monopolies and Sears (as well as every other company in Amazon's path).

Now to quibble.

For one thing, Schreiber is focusing almost entirely on overhead, which accounts for maybe 20% of every premium dollar, while claims in P&C account for north of 60%. You can be as efficient as you want in processing claims, but if you're taking on bad risks you're still going to lose — and even after years in the business, Lemonade's combined ratio in the fourth quarter was 139.

In addition, as Simon Torrance writes in this thorough analysis, the sort of AI that will really matter in the long run is AI agents, and the competition is just beginning in that phase. He says:

"The genuine compounding asset — the one that cannot be replicated by purchasing the same technology at a later date — is not automated claims processing. It is what happens [when] deliberative agentic teams capture structured reasoning with every decision, build institutional memory that compounds across thousands of cases, and encode expert judgment that persists independently of the individuals who generated it. This is Intelligence Capital. The question Lemonade's investors should be asking is whether their architecture has built this — or whether it has built a more efficient version of what every insurer will have by 2027."

Lemonade might also want to be careful about lecturing incumbents just yet, given that it is still small and has so many ways it could slip up as it expands into new lines of business and new geographies. (Here is a good analysis of its opportunities and challenges.)

But I suppose being cheeky is in the company's DNA at least as much as AI is. 

I hope the rest of us take the Lemonade manifesto for what it's worth — and devise real metrics that accurately measure our progress with AI (or lack thereof), think boldly about where AI agents can change everything about our businesses and start reshaping our cultures for, as Schreiber put it, "the world as it's becoming."

Cheers,

Paul

 

Colorectal Cancer Challenges Life Insurers

A 30% rise in colorectal cancer among adults under 50 is forcing life insurers to rethink age-based underwriting models.


Colorectal cancer has long been viewed as a condition primarily affecting older adults, but that assumption is rapidly becoming outdated. Over the past two decades, a marked increase in colorectal cancer diagnoses among people under 50 years old has emerged as one of the most concerning epidemiologic shifts confronting both the medical community and the insurance industry. For life insurers, this rise in early-onset colorectal cancer (EOCRC) brings far-reaching implications, from underwriting and pricing to product development and wellness strategy.

A rising trend with industry-level consequences

Early-onset colorectal cancer, defined as diagnosis before age 50, has grown steadily, with incidence climbing by roughly 30% in the last two decades. Although overall case counts remain lower than in older populations, the rate of increase underscores an unsettling trajectory.

Studies now show an approximate 2% annual rise in diagnoses for adults aged 20-50.

For insurers, this change disrupts longstanding mortality expectations built on age-driven risk curves. Younger applicants have traditionally been priced favorably due to low expected cancer incidence. But the rapid emergence of EOCRC means traditional age-based risk assumptions no longer fully capture early-life cancer risk. Compounding this challenge, younger patients often present with more advanced disease. Symptoms — such as abdominal discomfort, rectal bleeding, or shifting digestive patterns — frequently mimic benign conditions, delaying diagnosis and worsening outcomes.

As a result, underwriting models built around the idea that cancer risk accelerates mainly after age 50 must be reassessed.

Understanding the drivers: Lifestyle, genetics, and environmental factors

The rise in EOCRC stems from a complex interplay of behavioral, genetic, and environmental forces. Lifestyle shifts, including diets high in processed meats and low in fiber, reduced consumption of fruits and vegetables, and increased sedentary behavior, appear to play substantial roles. The parallel rise in obesity adds another layer of risk, amplifying inflammatory and hormonal pathways associated with colorectal tumor development.

Genetic risk, while present in a smaller segment of the population, carries significant consequences. Inherited conditions, such as Lynch syndrome or familial adenomatous polyposis, sharply elevate lifetime risk. Mutations in genes including NTHL1, POLE, POLD1, and RNF43 also contribute to susceptibility, and a family history of colorectal or endometrial cancer is a consistent red flag.

Environmental and medical exposures may also be contributors. Frequent antibiotic use can disrupt the gut microbiome, potentially altering protective bacterial profiles. Long-term inflammatory disorders, such as inflammatory bowel disease, create chronic tissue stress that elevates cancer likelihood.

For insurers, recognizing how these variables interact is essential. Incorporating lifestyle, familial, and clinical risk indicators into modern underwriting frameworks helps ensure high-risk younger applicants are identified earlier and more accurately than age-based approaches alone allow.

Screening guidelines shift — and insurers must follow

One of the clearest responses to rising EOCRC has come in the form of revised screening guidelines. The U.S. Preventive Services Task Force and the American Cancer Society now both advise routine colorectal cancer screening beginning at age 45 for average-risk adults — a notable reduction from the longstanding threshold of age 50. In certain high-risk populations, earlier screening may be warranted. Some European health networks are already exploring screening initiation at age 40.

As screening recommendations evolve, early detection will likely improve, which is particularly crucial for younger adults who tend to present later in the disease process. This shift presents an opportunity for insurers to align underwriting expectations with modern preventive care standards and encourage applicants to stay current with screenings.

Advances in screening and diagnostic technology

Beyond guideline changes, screening technologies are rapidly advancing. While colonoscopy remains the most definitive method, emerging modalities are increasingly accessible and appealing to younger adults who may be reluctant to undergo invasive procedures.

Noninvasive stool-based tests, such as fecal immunochemical tests (FIT) and multitarget stool DNA tests (mt-sDNA), offer convenient at-home screening with promising detection capabilities. Frequent use of these tests tends to boost adherence — an important advantage for younger populations.

CT colonography, or virtual colonoscopy, offers a radiologic alternative, while capsule endoscopy provides a swallowable camera platform with future potential for broader colorectal screening use.

Perhaps most transformative is the rise of blood-based biomarker testing, including liquid biopsies that detect circulating tumor DNA or methylated DNA fragments. Machine-learning-enhanced platforms now combine methylation signatures with DNA fragment analysis to pick up cancer indicators at minimal concentrations. Meanwhile, germline multigene panel testing is uncovering meaningful hereditary risks in approximately 14% of colorectal cancer patients, prompting universal recommendations for genetic testing in EOCRC cases.

For insurers, keeping pace with the strengths, limitations, and cost profiles of each screening approach can inform more accurate underwriting guidelines and create opportunities to promote early detection among policyholders.

Underwriting implications: Rethinking risk in younger applicants

The shifts in incidence and screening warrant a reevaluation of underwriting practices. Traditional risk assessments centered heavily on age must now incorporate:

  • More sophisticated risk stratification, combining family history, lifestyle indicators, and screening adherence.
  • Adjusted premium models that account for elevated risk in younger demographics while rewarding proactive health behaviors.
  • Integration of new data sources, such as medical records, wearables, and — in jurisdictions that allow it — genetic testing results to capture emerging risk more precisely.

However, insurers must also guard against anti-selection, as applicants aware of personal risk may seek coverage before formal diagnosis or symptoms emerge. Balancing comprehensive risk assessment with regulatory and ethical constraints will be crucial.

Product innovation: A strategic opportunity

While EOCRC presents clear challenges, it also invites innovation. Insurers can differentiate themselves by designing products that integrate early detection, lifestyle engagement, and preventive health participation. Potential avenues include:

  • Policy discounts or riders tied to completion of recommended screenings
  • Wellness incentives for maintaining healthy diet and exercise habits
  • Educational programs that inform younger customers about cancer warning signs and the value of screenings

Such initiatives not only enhance customer loyalty but also reduce long-term claims exposure by facilitating earlier diagnosis and intervention.

Challenges ahead

Implementing EOCRC-aligned underwriting and product strategies is not without obstacles. Privacy concerns must be properly managed as the use of genetic or personal health data increases. Evolving screening technology may outpace underwriting updates, creating a lag between best medical practice and insurance assessment. Operationally, insurers must invest in training, systems modernization, and compliance oversight to ensure new processes are implemented safely and efficiently.

Conclusion

Early-onset colorectal cancer represents a fast-emerging risk that the life insurance industry can no longer overlook. By aligning underwriting models with modern epidemiology, embracing new screening technologies, and developing products that encourage proactive health behaviors, insurers can both mitigate risk and empower policyholders. Those who adapt early will not only strengthen market competitiveness but also play a meaningful role in improving health outcomes for a generation facing rising cancer risk far sooner than expected.


Russell Hide

Dr. Russell Hide is a medical advisor with RGA.

He specializes in underwriting and claims assessment support for South Africa and the EMEA region. He has more than 25 years of experience in the insurance and reinsurance sectors, as well as a clinical background in general practice. 

He holds an MBBCh degree from the University of the Witwatersrand.

Coder Cannibalism

Developers who automated other industries now face AI displacement themselves, as technical certifications prove less valuable than human judgment and accountability.


Most of my friends are coders—and, disclosure, I used to be one. Smart people. Good people. People who spent years mastering arcane syntax, memorizing AWS service catalogs, stacking certifications like frequent flyer miles, and genuinely believing—with some justification—that they were the high priests of the modern economy.

They automated the travel agents. The paralegals. The loan officers, the radiologists, the customer service reps, even the truckers—at least in theory. And they did all of it with a clear conscience because, hey, that's capitalism, baby. Creative destruction. If we can do it better, faster, cheaper, then by the immutable laws of the market, we should.

They were not wrong. And they were not unkind people. They just never believed, not really, not in their gut, that the logic had a return address.

It does.

Amazon just laid off a cohort of developers whose primary offense was building something that worked. The system they constructed—on AI, with AI, as a monument to AI—became, upon completion, the argument for their own termination. The product was the pink slip. You couldn't script a better parable. These weren't junior button-pushers. Some of them held AWS Solutions Architect certifications. Professional level. The kind of credential that used to mean something in a job interview, that used to justify a salary band, that used to make a hiring manager feel confident they were buying proven expertise.

What they were actually buying, it turns out, was structured knowledge retrieval. Which is a very polite way of saying: a human being who had memorized a lot of things and learned to pattern-match against them quickly. And if there is one thing—one single thing—that large language models do better than humans, it is exactly that. The machine doesn't need a certification. It doesn't need a salary. It doesn't get defensive when you change the requirements at 11 p.m.

So here we are. The hue and cry from the coding community is structurally identical to every argument that was dismissed when the travel agents and the paralegals and the loan officers were in the crosshairs: "This is different. This requires real skill. You don't understand the complexity."

Brother, Sister, those whose jobs you automated said the same thing. You just didn't listen because you were the one holding the compiler.

The real question—the one worth asking these days—is, what skills actually don't have a shelf life problem? Some of them seem obvious in retrospect, and most of them aren't technical.

Regulatory judgment under uncertainty is one. Not knowing what a rule says—AI can read the Federal Register faster than any human—but knowing what it means when a specific auditor in a specific regional office has been interpreting it a certain way for three years. That's pattern recognition built from exposure and consequence, not training data. A friend of mine who works in healthcare private equity says the top three risks related to any deal are regulatory in nature—gray area, subjective.

Organizational power mapping is another. Every failed technology implementation in history failed for the same reason: someone built the right thing for the wrong power structure. The CMO thinks she controls the data. The CFO controls the budget. The VP of operations controls the workflow. The IT director controls the timeline through "security review." No AI maps this. No certification covers it. This is human intelligence in the original meaning of the phrase.

Cross-domain translation may be the rarest and most durable skill of all. The ability to stand in a room and make a CMS actuary, an Epic build team, and a 55-year-old case manager all feel heard, and then synthesize what they need into something that actually ships—that's not a technical skill. It never was. We just told ourselves it was adjacent to technical skill so the coders could claim it.

And finally, accountability. The willingness to put your name on a recommendation and mean it. AI is a brilliant, tireless, unaccountable collaborator. In regulated industries—healthcare, insurance, finance, law—where the downside of being wrong is measured in dollars with a lot of zeroes or people with actual problems, someone has to own the outcome. That someone is still a human being with a name and a reputation and something to lose.

The coders who survive this aren't the ones who fight the AI. They're the ones who understand that the job was never really about the code. It's about the judgment surrounding the code. Which explains why Stanford CS grads can't find jobs—while McKinsey is hiring liberal arts majors again. Coders just got away with charging for the code because nobody had built the machine yet.

Now somebody has.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.   

5 Operational Shifts for Scaling Insurance AI

Insurance AI is shifting from the wow factor of innovation to the how factor of sustaining automation at scale.


AI is moving well beyond experimentation and into everyday insurance operations. As this happens, the wow factor of introducing new forms of automation to insurance use cases is giving way to the how factor of sustaining these innovations at scale. Once AI influences underwriting decisions and claims outcomes in a heavily regulated environment, success depends far less on the sophistication of models and far more on the operational systems that support them.

Earlier phases of AI adoption proved that insurers can deploy advanced models. The priority now is to embed those models into the deeply regulated, process-driven realities of underwriting, claims, and distribution without creating new friction or risk. All this must happen while accounting for what may be an outdated back-office tech stack, and with a level of integration that doesn't simply create the next looming problem: agent sprawl. Here are five operational trends that are emerging as the differentiators between AI programs that compound value over time and those that stall under complexity:

Treat document intelligence as foundational infrastructure, not a point solution

Document intelligence is a prime focus for AI modernization, yet many organizations still approach it as a tactical automation limited to intake. At scale, this narrow view leaves significant value unrealized. Documents and work items remain central to underwriting, claims adjudication, and compliance. Manual handling introduces delay, inconsistency, and risk at every handoff. As AI adoption matures, document intelligence and rigorous contextualization functions should exist as shared operational infrastructure embedded directly into workflows, rather than bolted on at the edges. This shift reduces cycle times, improves data quality, and strengthens auditability; and it further informs future agentic capabilities stemming from those same work items. That's why insurers that move fastest stop treating document intelligence as an isolated capability and start treating it as a prerequisite for operational scale.

Make AI governance an enterprise operating model

As AI becomes embedded in decision-making, the ability to maintain explainability, accountability, and auditability of AI systems must be designed into processes from the outset, not retrofitted after systems are already in production. At scale, this allows insurers to deploy AI confidently across regions, lines of business, and regulatory regimes without fragmenting their operating model. This enterprise-wide discipline of clear ownership, transparent decision logic, and consistent oversight of machine processes helps position AI governance as a C-suite priority that strengthens risk posture, customer trust, and long-term resilience.

Keep humans in the loop strategically

When human involvement is applied too broadly, productivity gains erode and trust in automation declines. Human-in-the-loop AI is most effective when experienced underwriters or claims professionals are only pulled into cases where their judgment, oversight, and exception handling add the most value in assessing complex risks, edge cases, and decisions with material financial or regulatory impact. Emerging governance models increasingly reinforce this principle. For instance, Singapore's IMDA Model AI Governance Framework on agentic systems describes a spectrum of oversight that includes human-in-the-loop, on-the-loop, and over-the-loop to help selectively scale automation while preserving accountability and control.

Connect underwriting and claims workflows end-to-end

Siloed workflows are increasingly untenable as customer expectations rise and loss events grow more complex and costly. End-to-end visibility from first notice of loss through settlement, or from submission through bind, enables AI to coordinate decisions across the full lifecycle, rather than optimizing individual steps in isolation. This coordination reduces cycle times, improves broker/agent/customer experience, and strengthens risk selection and pricing accuracy. It also provides the transparency needed to support governance, oversight, and continuous improvement. AI delivers its greatest operational value when it serves as a connective layer across workflows, aligning data, decisions, and actions inside of a process.

Modernize legacy integrations iteratively

Best-in-class agents and tools cannot operate in a silo and must take into consideration the complex legacy systems that remain a reality for most insurers. Because large-scale replacements often span multiple years, waiting for perfect conditions before deploying AI is rarely viable; yet fragmented pilots that never scale introduce their own risks. Insurers that maximize their AI investments at scale focus on incremental modernizations that deliver early operational value while progressively addressing data and system complexity. This approach avoids the trap of pilots that prove concepts yet fail to translate into production impact with quantifiable benefit. By modernizing iteratively, insurers can improve workflows, connect disparate systems, and strengthen data foundations without discarding prior investments.

Conclusion

As AI becomes embedded in core insurance operations, the conversation is shifting from capability to durability. Most insurers now understand what AI can do. The more consequential question is whether it can be integrated into underwriting, claims, and compliance in ways that improve performance without eroding trust, operational integrity, or regulatory standing. As such, sustaining AI at scale is a matter of organization-wide discipline. It requires aligning automation with real insurance cycles, protecting scarce expert judgment, and ensuring transparency as non-deterministic agentic-driven decisions expand. Insurers that approach AI through this lens position themselves not just to automate faster, but to operate smarter, more resiliently, and with greater confidence in the outcomes their systems produce.


Jake Sloan

Jake Sloan is vice president, global insurance, at Appian.

He has held senior operations roles with Farmers Insurance, including front-line insurance/licensed field operations, and served as CIO of Aon National Flood Services. 

Sloan volunteers as a mentor to the Global Insurance Accelerator, holds an MBA from Baker University and is a graduate of the Advanced Management Program (AMP) of Harvard Business School.

2026 Commercial Market Outlook

Prepare for Renewals and Manage Costs in a Changing Market


After years of disruption, the commercial insurance market is showing signs of moderation—but risks remain. Catastrophe losses, social inflation, and regulatory scrutiny continue to challenge organizations.

Zywave’s 2026 Outlook breaks down what insurance professionals and business leaders need to know to prepare for renewals, manage costs, and position programs for success.

Key Takeaways for 2026
  • Property Insurance: After years of a hard market, property insurance is stabilizing thanks to improved capacity and reinsurance strength. However, catastrophe losses, valuation scrutiny, and climate risks continue to challenge underwriting. Parametric solutions and resilience measures are gaining traction—organizations with accurate valuations and proactive risk controls will benefit most.
  • Casualty Insurance: Litigation trends and social inflation keep pressure on casualty lines, especially commercial auto and umbrella liability. Nuclear verdicts and expanded litigation funding drive severity, while technologies such as telematics and AI-powered safety tools are becoming key differentiators for favorable outcomes.
  • Professional & Executive Liability: Competition is improving, but emerging risks tied to AI adoption and regulatory scrutiny are reshaping underwriting. Cyber events increasingly overlap with management liability, making strong governance and compliance essential for broader coverage and stable pricing.
Access the Full Outlook Today

Get expert insights into market forces and strategies for success. Download the full 77-page report now.


Sponsored by ITL Partner: Zywave


ITL Partner: Zywave

Zywave delivers AI-powered growth engines for the insurance industry, enabling carriers, MGAs, agencies, and brokers to grow profitably, strengthen risk assessment, enhance client relationships, and streamline operations. Its intelligent, AI-driven platform acts as a performance multiplier for more than 160,000 insurance professionals worldwide, across all major segments. By combining automation, data insights, and best practices, Zywave helps organizations stay competitive and efficient in today’s fast-changing risk environment—empowering them to adapt quickly, scale effectively, and achieve sustainable growth.

For more information, visit zywave.com.

Additional Resources

Zywave recognized as a Leader in The Forrester Wave™: Insurance Agency Management Systems, Q4 2025 


A Problem With Renters Insurance

Half of property owners fail to verify active renters insurance, leaving multifamily portfolios exposed to entirely preventable losses.


In a recent survey of real estate investors and property owners, roughly half admitted that they don't verify whether their residents maintain renters insurance throughout the lease term. Those who do rely on a mix of manual checks, carrier notifications, or loosely integrated property management tools to track coverage. Both approaches leave portfolios exposed in ways that remain invisible — until a loss turns hidden risk into a real cost.

When a resident without an active policy causes a fire from an unattended candle or a faulty space heater, for example, the exposure falls on the operator. There's no clear recovery path. Just a loss, a dispute and a difficult conversation with ownership about how a lapsed policy went undetected for six months while the portfolio assumed it was covered.

This outcome is entirely preventable. When enforcing renters insurance is treated as a formality rather than an operational safeguard, multifamily owners and operators are exposed to significant risk and potentially expensive repairs. Renters insurance should be managed as part of the portfolio's overall risk strategy with the same consistency and oversight applied to any other source of financial exposure.

When Scale Creates Blind Spots in Protection

Survey data found that landlords with smaller portfolios of one to four units were more likely to require and enforce renters insurance. In contrast, those with larger portfolios of 20 or more units were significantly less likely to do so.

As multifamily portfolios grow, managing renters insurance enforcement becomes complex, and manual audits quickly become a liability. That risk compounds in ways that aren't always apparent until there's a loss or dispute.

At scale, compliance begins to break down in three key ways.

  • Inconsistent compliance enforcement across properties. Liability requirements in a portfolio lose force when individual sites enforce compliance differently. If one site grants exceptions but another follows a much stricter protocol, this inconsistency creates operational confusion. Plus, property staff turnover can create knowledge gaps and process changes. This erosion of compliance discipline increases the likelihood that a preventable lapse will become a reportable event.
  • Documentation gaps. It's not enough to have a renters insurance requirement in the lease. Operators must be able to show how the requirement was communicated to each resident and when. In multi-state portfolios, documentation is the difference between a defensible policy and avoidable liability.
  • Technology stack drift. As portfolios grow, systems rarely remain uniform. Variations in property management system configurations, workflows and tracking methods across a portfolio make it more likely that policy lapses, missed renewals and incomplete documentation will go unnoticed. Fragmented data also limits oversight. If verification data lives in multiple places — or worse, in email inboxes and spreadsheets — operators can't see portfolio-wide status in real time. During an audit or post-loss review, inconsistent records make it difficult to demonstrate that coverage was consistently monitored.

Inconsistent enforcement, weak documentation and fragmented systems create exposure that is entirely preventable.

Strengthening Oversight at the Portfolio Level

Closing these gaps requires consistent management, but three practical shifts can make all the difference:

  1. Standardize enforcement protocols across every property. Portfolio-wide protection requires portfolio-wide consistency. That means the same requirements, exceptions process and documentation standards must be applied uniformly across every site.
  2. Automate verification and treat it as a continuing process, not just a move-in checkbox. Technology can help operators track the receipt, processing and review of certificates of insurance at the start of a lease. But confirming coverage at lease signing is only the starting point.

    Up to 40% of renters cancel their policies mid-lease, meaning a portfolio that only verifies at move-in is operating with a false sense of protection for a significant portion of its residents. At scale, that volume of documentation and continuing monitoring can only be managed reliably with technology. Automated tools can help operators continuously track coverage, flag lapses and prompt residents to reinstate when needed. Technology removes pressure on on-site staff to catch what falls through the cracks and creates a consistent, auditable record across the portfolio.

  3. Implement a tech-enabled solution to monitor resident coverage and auto-enroll residents with lapsed or canceled policies in a waiver program. Even in well-run portfolios, not all residents will obtain or maintain coverage. A property damage liability waiver program addresses this directly. When a resident's individual policy lapses or was never obtained, auto-enrollment in a waiver program ensures the resident isn't held personally liable for negligently causing certain damage to the unit, and that the property isn't on the line for the cost of the damage, either.

    The strongest programs also monitor certificates of insurance for residents who carry their own policy, processing renewals, flagging lapses and prompting reinstatement before gaps occur. Owners and operators should seek waiver programs that include 24/7 monitoring: a flood caused by an overflowing sink, even one day after a policy lapses, can result in the same costly damage as one that happens six months in. Continuous monitoring may be the difference between a covered loss and a big payout.

    Beyond protection, these programs can also generate revenue. Residents enrolled in a waiver program pay a fee, and operators can retain a portion of that fee after paying for the underlying insurance policy issued to the property and any third-party administrative costs. What starts as a compliance backstop can become a revenue line.
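The monitoring-and-waiver loop described in the steps above can be sketched roughly as follows. The data model, field names, and enrollment rule are illustrative assumptions, not a description of any vendor's system:

```python
# Hypothetical sketch of continuous coverage monitoring with waiver
# auto-enrollment; the data model and rules are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ResidentCoverage:
    unit: str
    policy_expires: Optional[date]   # None = certificate never provided
    waiver_enrolled: bool = False

def reconcile(residents: list, today: date) -> list:
    """Flag lapsed or missing coverage and auto-enroll in the waiver program.

    Returns audit-log lines so the portfolio keeps a consistent,
    reviewable record of every coverage-state change.
    """
    log = []
    for r in residents:
        lapsed = r.policy_expires is None or r.policy_expires < today
        if lapsed and not r.waiver_enrolled:
            r.waiver_enrolled = True
            log.append(f"{r.unit}: coverage lapsed/missing -> waiver enrolled")
        elif not lapsed and r.waiver_enrolled:
            r.waiver_enrolled = False
            log.append(f"{r.unit}: certificate reinstated -> waiver removed")
    return log

roster = [
    ResidentCoverage("4B", date(2025, 6, 30)),    # expired mid-lease
    ResidentCoverage("7A", None),                 # never verified
    ResidentCoverage("2C", date(2026, 12, 31)),   # active, no action needed
]
for line in reconcile(roster, date(2026, 1, 15)):
    print(line)
```

Run daily against the portfolio, a loop like this is what turns verification from a move-in checkbox into the continuing process the article argues for: lapses are caught within a day, not discovered six months later during a post-loss review.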

Keeping Gaps from Becoming Losses

For growing multifamily portfolios, renters insurance compliance is easy to underestimate. Risk stays quiet until something goes wrong. But once a policy lapses and a loss occurs, the financial and operational impact is immediate. Operators end up making repairs, managing disputes and absorbing costs that could have been avoided.

Real protection requires clear requirements, documented processes and continuing verification to reduce preventable losses and make recovery more predictable. At scale, where small compliance gaps cost real dollars, managing renters insurance compliance intentionally keeps those gaps from becoming losses.


Kelli Stiles

Kelli Stiles is chief legal and insurance officer at Foxen.

Before joining Foxen, she spent eight years at Nationwide Insurance, most recently as AVP, associate general counsel for property & casualty development and distribution legal. Earlier in her career, she practiced for nearly 11 years at Jones Day, representing Fortune 500 companies in complex litigation and regulatory matters.

Enterprise Connectivity Is Becoming Critical

In 2026, fragmented systems and siloed workflows are no longer inefficiencies but competitive liabilities that constrain workforce adaptability.


In 2026, a clear pattern is emerging across industries: disconnected systems, fragmented teams, and siloed workflows are no longer tolerable inefficiencies. They are now competitive liabilities. The next phase of enterprise transformation will not be defined by which company adopts the most tools or deploys the most AI pilots. It will be defined by which organizations can connect people, platforms, and processes into a coherent operating model that actually works at scale.

For years, digital transformation efforts focused on modernization. Cloud migrations, workflow automation, and analytics platforms promised efficiency and speed. Many delivered incremental gains. Yet few addressed the structural problem underneath: enterprises built digital layers on top of operational silos. As a result, employees still bounce between systems, data remains fragmented, and decision-making slows when it should accelerate.

The Hidden Cost of Disconnected Work

Most organizations underestimate how much friction disconnected systems create for their employees. Knowledge workers routinely spend hours each week navigating multiple platforms, re-entering data, and chasing approvals across departments. The result is not just lost productivity. It is cognitive overload. Employees are forced to manage the complexity of systems instead of focusing on higher-value work.

This friction has broader implications for retention and engagement. When work feels unnecessarily complicated, burnout accelerates. High-performing employees expect modern environments that enable them to collaborate seamlessly and move quickly. Organizations that fail to deliver this experience will struggle to attract and keep talent, particularly as younger professionals enter leadership pipelines with higher expectations for digital fluency and workflow simplicity.

Connectivity, in this context, is about removing obstacles between people and outcomes. It is about creating environments where information flows naturally, tasks move forward without constant manual intervention, and teams can operate with clarity.

Why Connectivity Is Now a Leadership Issue

Traditionally, integration efforts sat within IT departments. Leaders approved budgets, but execution was often isolated from business strategy. That approach no longer works.

As enterprises adopt more AI-driven tools, automation platforms, and distributed work models, the complexity of the environment increases. Without intentional orchestration, organizations risk creating ecosystems that are powerful on paper but unusable in practice.

Leaders in 2026 must treat connectivity as a core management responsibility. This means asking different questions: Are workflows designed around how employees actually work? Do teams have a single source of truth for critical data? Can new hires onboard without weeks of system training? Are frontline employees empowered with the same digital capabilities as corporate teams?

These are not technical considerations alone. They are cultural and operational decisions that shape how work happens every day.

Workforce Adaptability Depends on System Design

Adaptability is often framed as a human skill set. We talk about reskilling, upskilling, and continuous learning. While these remain important, adaptability is also shaped by the environment people operate within.

When systems are connected, employees can respond faster to change. They can access real-time information, collaborate across departments, and adjust workflows without waiting for manual handoffs. When systems are fragmented, even the most capable workforce becomes constrained.

In 2026, the most resilient organizations will be those that design their digital infrastructure to support rapid adaptation. This includes enabling cross-functional collaboration, reducing dependency on specialized gatekeepers, and allowing teams to reconfigure processes as business needs evolve.

Adaptability is not about working harder. It is about removing structural barriers that prevent people from working smarter.

Moving From Tool Accumulation to Platform Thinking

One of the biggest mistakes enterprises continue to make is equating progress with tool adoption. New platforms are added to solve specific problems, but rarely integrated into a broader operational framework. Over time, this creates digital sprawl that increases complexity instead of reducing it.

Platform thinking requires a shift in mindset. Rather than asking which tool to buy next, leaders must ask how systems interact, where data flows, and how users experience the entire environment. This approach prioritizes interoperability, standardized workflows, and shared data models.

It also requires governance that balances flexibility with structure. Teams should have autonomy to innovate, but within a connected framework that prevents fragmentation. The goal is not uniformity. It is coherence.

The Human Side of Enterprise Connectivity

Technology alone will not solve connectivity challenges. Organizations must invest in change management, communication, and leadership alignment. Employees need clarity on why new systems are being introduced and how they improve daily work. Managers need training to lead in connected environments where visibility increases and workflows become more transparent.

Trust plays a critical role. When systems are connected, performance data becomes more accessible. Used thoughtfully, this creates accountability and improvement. Used poorly, it creates surveillance and resistance. Leaders must establish norms that prioritize support, not control.

Why Insurance Cannot Afford Fragmentation in 2026

Nowhere is the cost of disconnected systems more visible than in the insurance sector. Carriers and brokers operate across policy administration platforms, claims systems, underwriting tools, CRM environments, and regulatory reporting frameworks that rarely communicate cleanly with one another. The result is delayed claims resolution, inconsistent customer experiences, manual reconciliation work, and increased operational risk.

As insurers adopt AI for fraud detection, pricing optimization, and customer service automation, fragmentation becomes even more dangerous. AI models depend on clean, connected data flows. Without unified infrastructure, insurers risk amplifying errors instead of improving outcomes. In 2026, competitive insurers will be those that connect underwriting, claims, compliance, and customer engagement into a single operational ecosystem that supports speed, transparency, and regulatory confidence at scale.

The Road Ahead

As 2026 unfolds, enterprises face a simple but demanding reality: disconnected operations cannot keep pace with the speed of modern business. Markets move faster. Customer expectations evolve rapidly. Talent demands better work environments.

Connectivity is the foundation for workforce adaptability, operational resilience, and sustainable growth.

Organizations that embrace this will create environments where people can focus on meaningful work instead of navigating complexity. Those that ignore it will find themselves constrained by systems that no longer serve their ambitions.