
From Documents to Decisions: Why Claims Needs a New Operating Model

While claims technology has improved for decades, too little has been done to leverage it. It's time to move beyond document storage and into effective decision-making.


The insurance claims industry sits at an inflection point. Medical records are more complex, nuclear verdicts are rising, and the workforce is changing faster than most organizations can adapt. AI promises to help — but most implementations have fallen short. We sat down with Mark Tainton, senior vice president of data solutions at Wisedocs, to talk about what's actually working, what isn't, and why the industry needs to move from document management to true decision intelligence.

Paul Carroll

The insurance claims industry has been talking about digital transformation for years. What's actually changed in the last 18 to 24 months, and what's still stuck?

Mark Tainton

Having worked in the insurance industry for over 30 years at the intersection of technology and claims operations, I've certainly seen infrastructure change. But the bigger question now is the operating model that can actually leverage that infrastructure. And the operating model is not so much around storing documents in claims management systems or document management systems—it's about how we take advantage of that data asset. We’re essentially moving from document storage into effective decision-making.

Over the last five years, there has been an acceleration in the technology, in particular with large language models. Technology is not the problem.

It's really about taking advantage of the individual pieces of information in the world of unstructured data. That's the next wave we should be focusing on: How do we operationalize the assets so they’re part of the DNA of insurance processes?

Paul Carroll

Medical record review is at the heart of so many claims decisions, yet it still appears remarkably manual at most organizations.

Mark Tainton

I’ve certainly seen large carriers that have introduced AI but haven't introduced the process changes or changed how people can take advantage of the insights as the claim goes through its lifecycle. Carriers are still using ineffective decision-making approaches that mirror what we saw 10, 15, 20 years ago.

There needs to be a conversation around how adjusters work, especially because of the change in their age demographic. New people coming into the claims industry consume data completely differently. We have to adjust. 

You also have to understand the psychosocial aspects of the workforce, where COVID accelerated change. You need to cut across multiple claims at any given time and look for triggers that are prevalent with a particular treatment provider, or for at-risk indicators that suggest psychosocial issues—they are top of mind for a lot of claims teams right now.

Paul Carroll

There's always a tension between speed and defensibility in claims, especially given the high stakes. How do insurers resolve that tension?

Mark Tainton

Claims are getting more complex, and we've seen a lot of legislation that makes it very clear that if someone's making a decision solely based off AI output with no human in the loop, that's going to be a problem.

When you tie that concern into the expansion of traditional fraud and increases in nuclear verdicts, the defensibility question becomes critical. There needs to be a human in the loop.

Several states are already drawing that line legislatively. California's SB 574 and a growing number of AI governance frameworks now require that AI-assisted decisions in insurance and legal contexts be documented, auditable, and explainable. That is not a future concern; it is a present operating requirement for carriers doing business in those jurisdictions. The organizations that build defensibility infrastructure now will not be scrambling to retrofit it later.

Paul Carroll

There are a lot of solutions out there these days, but they seem to largely be point solutions—summarization tools, triage tools, document processors, and so forth. What's missing from the point solution approach?

Mark Tainton

First, they don't fit into the ecosystems of clients and large carriers. They don't work alongside platforms like Guidewire where they can function as a module and help make those decisions effective.

The point solutions also aren’t really end-to-end. They're focusing on a point in time on a particular claim. That produces what I call a silent failure. The AI processes the document and returns a summary, and the claim moves forward. But the anomaly that should have triggered a flag, the treatment pattern that does not match the diagnosis, the billing inconsistency that signals a problem: None of that surfaces, because the tool was never designed to look across the lifecycle. The claim does not fail loudly. It just quietly travels in the wrong direction for months.

Think about first notice of injury as a claim goes through the life cycle, and all of a sudden you get a demand package or a treatment package coming in. What are the decisions you want the adjuster to make?

You need intelligence that cuts across the full lifecycle of the claim in terms of other claims with certain characteristics. And I think that's where point solutions really come up short.

Paul Carroll

I assume that thinking is why you took a platform approach with WiseShare.

Mark Tainton

Very much so. We have the sorting and summarization solution that we just renamed WisePrep. It includes WiseChat, where users can save all the insights they generate from a large language model. We've introduced WiseInsights, which looks at litigation trends, at treatment patterns and how they develop, and across claims that an adjuster with a workload of 200 or 300 claims cannot identify on their own. These insights reveal similar characteristics across claims. For example, we looked at one portfolio and identified that a particular treatment provider, over a 12-week program, consistently prescribed a stronger, more severe medication at the four-week mark.
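The kind of cross-claim, provider-level pattern described above can be sketched with a simple portfolio aggregation. This is an illustrative mock-up, not Wisedocs' actual method: the column names, the 1–10 severity scale, and the 1.5x flagging threshold are all assumptions for the example.

```python
# Hypothetical sketch of a cross-claim pattern check: flag treatment
# providers whose prescribed medication severity at a given program week
# sits well above the portfolio norm. All columns, severity scores, and
# the threshold are illustrative assumptions, not a real schema.
import pandas as pd

rx = pd.DataFrame({
    "provider": ["A", "A", "A", "B", "B", "B"],
    "claim_id": [1, 2, 3, 4, 5, 6],
    "week":     [4, 4, 4, 4, 4, 4],       # week of the treatment program
    "severity": [8, 8, 8, 2, 2, 2],       # e.g. 1 (mild) .. 10 (severe)
})

# Average week-4 severity per provider vs. the portfolio-wide average.
portfolio_avg = rx["severity"].mean()
by_provider = rx.groupby("provider")["severity"].mean()

# Flag providers escalating far beyond the norm (1.5x is an arbitrary cutoff).
flagged = by_provider[by_provider > portfolio_avg * 1.5].index.tolist()
print(flagged)
```

An adjuster reviewing one claim at a time cannot see this; it only emerges when the whole portfolio is put side by side, which is the point Tainton is making about lifecycle-wide intelligence.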

WiseShare is important, too.  Far too often, a summarized document gets passed from the adjuster to inside counsel, then to external counsel, and eventually to an IME [independent medical examiner]. A lot of the time, we see slip-ups—documents go missing, misinterpretations occur, different versions of the truth emerge. WiseShare brings everything together into one consolidated environment where all of those entities can actually share, review, and export the claim file. 

From a legal defensibility standpoint, that consolidation is not a convenience; it is a chain-of-custody argument. The defense bar needs to see a complete, unbroken record: the medical record chronology, the time series of decisions made, and documented consistency in how AI processed the underlying materials. When a claim ends up in litigation, the question is not just what decision was made; it is whether that decision can be reconstructed, sourced, and defended at deposition. WiseShare is built for that standard.

You have to be able to wrap intelligence around a decision, and that requires a platform. 

Decision intelligence needs to be comparative. You have to be able to see the claim you're dealing with in the context of other claims. The intelligence also needs to be sequential. Are we seeing similar patterns starting to develop on other claims in certain jurisdictions? Are we starting to see certain seasonal trends? Are we starting to see different types of treatment coming through? Finally, the intelligence must provide accountability. Is every inference sourced and every decision point documented? 

The defense bar needs to see that audit trail. They need to see the medical record chronology, the time series, and the consistency in terms of best practices for how AI actually processes documents and insights for better outcomes. From 2023 to 2024, nuclear verdicts rose 52%. Thermonuclear verdicts are up 81%, and overall verdicts are up 116%. 

You need one single environment where you store the materials, one single process that's consistent across an organization.

Bottom line: if you can't show defensibility, you're in a world of trouble.

Paul Carroll

There's discussion about AI replacing many human workers in the insurance industry. What is your perspective?

Mark Tainton 

There's this notion that AI is going to replace people at the desk. From my perspective, that's totally inaccurate. And I think that mindset sets back adoption.

But here's the inflection point: We're dealing with an aging workforce. Insurers and TPAs are struggling to attract talent. Why? Because some of the tools and technology have not evolved as quickly as in other industries. When you can walk hand in hand with AI and the person at the desk and show them all the benefits, that’s exciting. 

Paul Carroll

If you could change one thing about how the insurance industry is currently approaching AI adoption in claims, what would it be?

Mark Tainton

For me, it's what I call the evolution framework. AI is a journey, not a one-time event. Far too often, what I've seen is organizations—large, mid-tier, tier two, tier three—treating this as basically an implementation. It's almost like they're going in, turning the light switch on, and walking out.

I spend quite a bit of time working with clients all the way from inception to asking: Where are we actually going to implement this? What's the impact we're expecting? How does this align with strategic objectives? What are some of the key measurements we want to see in terms of adoption, change, and, ultimately, having the AI start to hit the hard dollars—reduction in litigation, average duration, and things like that.

I'll give you an example. I worked with a large carrier that wanted to implement AI across the entire organization. But they have an aging demographic in certain lines, and getting them to adopt AI would be difficult. They've also captured a lot of information very poorly in their systems—it's very much in their heads.

I said, Let's focus on the younger generation. They’ll adopt AI, and we’ll create a best practice, one that we can use when we bring in new talent. So we built a three-year program focused on them. Ultimately, the program was so successful that the older generation said, We want to be part of that, too. 

For me, the next window for anyone embarking on an AI journey is to focus on embedding it upfront—knowing, of course, that the process will evolve over time. 

Begin with what we call an EDA—exploratory data analysis—to determine what the baseline is. That way, you can prove that you’re opening and closing claims far more quickly and can see the change quarter over quarter. That data helps sell the journey. We've also done quite a bit of work around what we call data quality programs, where we assess the quality and change behavior at the desk in terms of how people are capturing data—all the way from structured to unstructured and, more importantly, in the adjuster call notes. That program embeds the solution into the fabric of the organization.
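The baseline EDA described above can be made concrete with a small calculation: measure claim cycle time per quarter so quarter-over-quarter change is visible after an AI rollout. This is a minimal sketch under assumed data; the column names and dates are invented for illustration.

```python
# Hypothetical baseline EDA sketch: claim cycle time (open to close) per
# quarter, plus quarter-over-quarter change. Column names and dates are
# illustrative assumptions, not any carrier's real schema.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": [1, 2, 3, 4, 5, 6],
    "opened": pd.to_datetime(["2024-01-10", "2024-02-01", "2024-04-15",
                              "2024-05-02", "2024-07-08", "2024-08-20"]),
    "closed": pd.to_datetime(["2024-03-01", "2024-03-15", "2024-06-01",
                              "2024-06-20", "2024-08-15", "2024-09-30"]),
})

claims["cycle_days"] = (claims["closed"] - claims["opened"]).dt.days
claims["quarter"] = claims["closed"].dt.to_period("Q")

# Baseline: average cycle time per closing quarter...
baseline = claims.groupby("quarter")["cycle_days"].mean()
# ...and the quarter-over-quarter change against that baseline.
qoq_change = baseline.pct_change()

print(baseline)
print(qoq_change)
```

A falling `qoq_change` after deployment is the kind of "hard dollars" evidence Tainton describes using to sell the journey internally.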

I think that's the next wave. 

Paul Carroll

Thanks, Mark.


Sponsored by Wisedocs

About Mark Tainton


Mark Tainton is the SVP of Data Solutions at Wisedocs, bringing over 30 years of AI, data and analytics transformation expertise in insurance and financial services. Having served as Chief Data Officer at multiple leading organizations, Mark understands the critical intersection of medical intelligence, litigation strategy, and claims outcomes. He advises Wisedocs on data and product strategy, go-to-market positioning, and the deployment of AI-powered solutions that address the most pressing challenges facing claims and legal professionals today.


Insurance Thought Leadership


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.


Wisedocs


Wisedocs

Wisedocs is an AI-powered claims documentation platform purpose-built for insurance and medical record processing. Trained on over 100 million claim documents, the platform delivers structured, defensible outputs, from summaries to insights, all with expert human oversight. Wisedocs empowers enterprise carriers, government agencies, legal defense teams, and medical experts to improve operational efficiency, reduce administrative burden, and enhance decision accuracy. Visit www.wisedocs.ai to learn more.

Stop Defending, Start Anchoring

It's time to stop simply reacting to plaintiffs' counsel and to become more aggressive through data-driven counter-anchoring.


Brute force has been the corporate response to the normalization of nine-figure payouts—build taller insurance towers. But by 2026, we've reached the breaking point of that strategy. Adding more capacity is no longer a hedge; it's a target. Leaders who continue adhering to a "wait-and-see" strategy will likely hand over their negotiating power to plaintiffs' counsel. It's time to stop reacting and shift to a more aggressive tactic of data-driven litigation counter-anchoring, a tactical maneuver that uses historical benchmarks and hard modeling to ground a case's valuation.

The Psychology of the First Number

Refusing to name a number isn't a denial of liability; it's a tactical surrender. When we stay silent and treat it as a problem for later, we leave a vacuum that the plaintiff is only too happy to fill. This is the psychology of anchoring: the first number heard becomes the mental hook upon which all subsequent negotiations hang. If the opening bid is a $100 million "lottery ticket," even a successful defense that cuts it in half results in a $50 million disaster.

Counter-anchoring disrupts this by providing a grounded alternative before the plaintiff's number can take root. This isn't a guess; it is a calculated figure backed by historical industry benchmarks and internal safety data. By presenting a credible, data-backed valuation early, we offer juries a "safe harbor."

Most jurors are actually overwhelmed by the emotional volatility of nuclear-risk cases; they want to be fair, but they lack a yardstick. When the defense provides that yardstick—derived from logic rather than emotion—it grants the jury the permission they need to reject an inflated demand without feeling they are dismissing the injury itself.

Deployment: When to Anchor (and When to Pivot)

Counter-anchoring is most effective in "gray area" liability cases—scenarios where the question isn't if the company is responsible, but for how much. In these high-value moments, the goal is to cap the ceiling before it vanishes. By introducing a data-backed valuation early in mediation, you effectively narrow the range between "reasonable" and "astronomical."

However, data is a double-edged sword. The greatest risk in this strategy is the "Cold Corporation" trap. If your counter-anchor looks like a sterile spreadsheet in the face of a human tragedy, you don't just lose the argument; you lose the jury.

There is a razor-thin line between being "grounded in reality" and being "callous to suffering." The math must be the foundation, but the delivery must be human. If the jury perceives your data as a tool to devalue a life rather than a method to find a fair resolution, the anchor will drag your defense to the bottom.

When executed with empathy, speed becomes your primary weapon. By removing the "valuation fog" early in the process, counter-anchoring forces both sides to deal with reality. It strips away the performative inflation of the discovery phase and gets to the heart of the settlement, often shaving months—and millions—off the litigation lifecycle.

The 2026 Toolkit: Credibility Over Calculation

In 2026, a spreadsheet is not a strategy. While internal loss runs are necessary, they are rarely sufficient to move a jury. To make an anchor stick, you must look beyond internal data. A jury will instinctively view a company's own historical figures as self-serving; to achieve true "safe harbor" status, your numbers must be validated against industry cohorts. Credibility is built on external benchmarks—proving that your valuation isn't just what you want to pay but what the broader market defines as objective reality.

The most critical hurdle, however, is the communication gap. Raw modeling is the foundation, but the courtroom narrative trumps all. If you cannot translate a complex actuarial model into a story about fairness and community standards, the data will be dismissed as "corporate math." The numbers provide the boundaries, but the narrative provides the "why."

Finally, this strategy demands a collapse of the traditional corporate silo. We are seeing the rise of the general counsel/risk manager nexus. In the past, Risk bought the insurance, and Legal fought the claims. Today, these two must merge their datasets well before a summons is served. By aligning on valuation models during the underwriting phase, the defense is armed and ready on Day 1 to set the anchor before the ink on the complaint is even dry.

The Underwriting Reality: From Defense to Differentiation

Adopting a counter-anchoring strategy does more than win cases; it fundamentally shifts the power dynamic at the renewal table. In the 2026 market, excess underwriters are no longer just looking at loss history—they are scrutinizing a firm's "litigation maturity." When you can demonstrate a repeatable, data-backed method for suppressing social inflation, you move from being a commodity risk to a "preferred risk."

The conversation with underwriters changes the moment you move beyond passive risk transfer. Instead of simply presenting a tower of limits, you are presenting a proactive defense framework. Underwriters are tired of "blank check" litigation; showing them that you have the tools to anchor damages early provides them with something they value more than anything: predictability. By proving you can cap the ceiling of a potential nuclear verdict, you provide the actuarial certainty that justifies lower attachments or more competitive pricing.

The ultimate result is a stronger strategic partnership with your carrier. You aren't just buying paper to cover a potential disaster; you are demonstrating a sophisticated operational control that protects the carrier's capital as much as your own balance sheet. In an era of escalating awards, the companies that thrive will be those that prove they aren't just insured against the storm—they have the data to ground the lightning.

A Grounded Future

The era of "buying our way out" of litigation risk is over. In a 2026 landscape where $100 million is the new baseline for a nuclear verdict, silence on damages is a luxury no risk team can afford. By embracing data-driven counter-anchoring, general counsels and risk managers can reclaim the narrative, providing juries and mediators with a logical "safe harbor" before the emotional tide takes over.

Success now requires a fusion of math and empathy—a strategy where the data is the foundation, but the story is the house. Ultimately, those who anchor early won't just lower their payouts; they will redefine what it means to be a resilient, data-forward organization in an age of outsized expectations.

What Insurers Will Learn About Trust... the Hard Way

Banks lost customers' trust one automated interaction at a time. Insurers are making the same mistakes. 


In 1979, Gallup asked Americans how much confidence they had in banks. Sixty percent said a great deal or quite a lot. Banks ranked second out of nine institutions — behind only the church.

Today that number is 26%.

The collapse didn't happen because of one crisis or one bad actor. It happened over 40-plus years, one automated interaction at a time. ATMs that replaced tellers. Interactive voice response systems that replaced those ATMs. Digital channels that replaced the IVR. And now AI-driven decisions replacing the digital channel that replaced the thing that replaced the person who used to know your name.

Each wave came with a business case. And each wave, when it touched the moments that actually matter to customers — a confusing charge, a decision that needed explanation, the thing that went wrong at the worst possible time — quietly withdrew a small deposit from an account that doesn't show up on any balance sheet.

That account is trust. And trust, it turns out, is an organizational capability problem — not a sentiment problem.

The Moment That Reveals Everything

Here's what I observed working inside a global bank during those automation waves: the technology worked. The process was faster. The costs came down. And customers were fine — until they weren't.

When something went wrong, people didn't want a faster process. They wanted a person who understood the situation, had the authority to act on it, and demonstrated that the institution they'd trusted actually cared what happened to them. What they got, too often, was a system designed for the average case, handling something that wasn't average at all.

What struck me wasn't the technology failure. It was the organizational failure underneath it. The leaders driving automation were making efficiency decisions. Nobody was accountable for the capability question: Does this organization know how to rebuild trust when the automated system fails a real person? The answer, in most cases, was no — because that capability had never been built. It had been assumed.

That pattern — confusing an efficiency decision for a capability decision and discovering the difference too late — is what eroded four decades of public confidence in banking. And it's the pattern insurers are now repeating.

This Is Now Insurers' Problem

Insurers are making the same bet banks made, in the same places banks made it.

Claims. Denials. Coverage decisions. Underwriting. These are not commodity interactions. They are, almost by definition, the moments when a policyholder is most vulnerable — a damaged home, a health crisis, a business interruption, a death. They are the moments that test whether the relationship the insurer sold is real.

The industry is automating them anyway. With AI systems that make faster decisions, with chatbots that handle first contact, with models that assess claims before a human ever sees them. The business case is real. The efficiency gains are real. The risk is also real — and it is being systematically underestimated.

Here's what gets missed in most of these conversations: The risk isn't primarily in the technology. It's in the organizational capability gaps the technology exposes. Does this organization have the judgment infrastructure to know when a claim needs a human? Does it have the change leadership — not change management, but genuine leadership capability — to ensure that the people still in the room when it matters are empowered to act? Can it tell the difference between a process that's working and a relationship that's quietly eroding?

Most organizations can't answer yes to all three. Not yet.

What Happens to the Humans Left in the Room

Here is the part the business case doesn't model: what automation does to the agents and claims professionals who remain.

When an organization systematically automates the high-stakes moments, it doesn't just remove humans from those interactions. It degrades the humans who stay. Authority gets stripped. Judgment gets overridden. The agent or adjuster who once had the latitude to assess a situation and act on it becomes an escalation path for complaints the system couldn't handle — without the context, the tools, or the organizational backing to actually resolve them.

This matters because the agent is still the face of the insurer when the policyholder calls. The claims handler is still the voice on the other end when the denial needs explaining.

The data on this dynamic in financial services is stark. An Eagle Hill Consulting survey of more than 500 U.S. financial services employees found that 62% say their organizations have prioritized improving the customer experience over the employee experience — yet those same employees report that their own work experience directly affects their ability to serve clients. Dissatisfied employees are more than three times as likely to report that their negative feelings about work reduce their willingness to help others.

Deloitte's research adds another dimension: When AI tools are introduced without careful design and change leadership, employees perceive their organizations as nearly two times less empathetic and human. That dynamic doesn't stay inside the organization. It travels. Policyholders feel it.

For insurers that rely on independent agents — professionals whose loyalty is earned, not owned — the stakes are even higher. Think of independent agents as the community bankers of insurance: For decades, they've translated corporate rules into human terms, sitting across the table from policyholders at the moments that matter most. J.D. Power's independent agent satisfaction research consistently finds that scores are dramatically higher — by hundreds of points — when carriers make agents easier to work with: faster quotes, transparent claims status, access to a human on complex cases. When AI becomes a black box agents can't explain to a policyholder, that advantage reverses. An agent who can't get a straight answer on a claim denial, or can't reach a human on an exception, doesn't complain to the carrier. They quietly shift their next piece of business elsewhere. The trust problem isn't just with policyholders. It runs through the entire distribution chain.

The Balance Sheet Doesn't Show the Problem — Until It Does

What makes this dynamic particularly dangerous is that trust erosion is invisible on a quarterly basis.

The banking sector learned this the hard way in early 2023. When Silicon Valley Bank failed, uninsured deposits left the broader banking system at the fastest rate recorded since the FDIC began tracking data in 1984 — an 8.2% quarterly decline, industry-wide, in a single quarter. The FDIC noted that SVB's deposits were "remarkably quick to run" precisely because they were concentrated among depositors whose trust, once shaken, had no friction to slow it.

Insurers don't face bank runs. But they face their own version: policy non-renewals, lapse rates, coverage migration, claims disputes that become regulatory attention, and the slow erosion of the trusted advisor position that has historically made insurance a relationship business.

The erosion rarely announces itself. It accumulates in policyholder satisfaction scores that drift, in agent feedback that doesn't make it up the chain, in claims handling data that gets read as operational variance rather than relationship signal. By the time it's visible on the balance sheet, the capability gap that caused it has been open for years.

This Is a Capability Problem. Capability Can Be Built.

The research on AI deployment in financial services confirms what the banking experience suggests. McKinsey finds that AI high performers are more than 1.5 times as likely to have changed their standard operating procedures and talent practices — not just deployed tools. MIT CISR shows that firms stuck in the pilot stage financially underperform their industries, while those that have embedded AI into their operating models significantly outperform.

What those numbers describe, underneath the data, is an organizational capability gap. The high performers aren't distinguished by better technology. They're distinguished by having built the mindsets, the skillsets, and the operating conditions — the governance, the decision rights, the human judgment infrastructure — that allow them to absorb what the technology makes possible without losing what made them trustworthy.

That's the real lesson from banking. The institutions that automated their way into a trust deficit weren't led by people who didn't care about customers. They were led by people who treated trust as a communications challenge rather than a capability one. They managed it. They didn't build it.

Insurers now face a choice that banks didn't get to make deliberately. Insurers can design AI deployments that preserve human judgment at the moments that matter most. They can build the change leadership and workforce capability that determines whether AI enhances the relationship or quietly erodes it. They can treat trust not as a sentiment to be managed after the fact but as an organizational capability to be built before the moment of truth arrives.

Or they can assume their situation is different from banking.

Banks assumed that, too.


Amy Radin


Amy Radin

Amy Radin is a strategic advisor, keynote speaker, and Columbia University lecturer focused on why transformation succeeds or stalls in large, complex organizations. 

Drawing on senior leadership roles at Citi, American Express, and AXA, including one of the world’s first corporate chief innovation officer roles, she helps leaders build the capabilities required to absorb, scale, and sustain change.


College Wrestling's Lessons for AI Innovation

The just-concluded NCAA Wrestling Championships showcased the sort of thorough competitive advantage that can come from early success with AI.


As the Penn State wrestling team won yet another Division I title over the weekend--its 13th of the past 16 awarded--and did so in overwhelming fashion, I realized there is a deeper competitive advantage at play than exists even in other sports. 

College wrestling dominance requires a layer that goes beyond the normal advantages that come from having a great coach and a roster of superb college athletes. Penn State-level dominance in wrestling requires an additional, self-reinforcing factor--of the sort I think can come from early success with AI, as it builds and builds and builds on itself.

I'll explain. 

To understand that self-reinforcing factor, you need to look at the Penn State coach and at the coach whose record of 15 NCAA wrestling titles in 21 seasons Penn State is now approaching. 

The Penn State coach is Cael Sanderson, arguably the best college wrestler ever. He was undefeated in college, winning 159 matches, and won four NCAA individual titles. He also won a gold medal at the 2004 Olympics. 

The man he's chasing, Dan Gable, who coached the University of Iowa from 1976 through 1997, ranks even higher in the wrestling pantheon. He not only won two NCAA individual titles (in an era when freshmen weren't allowed in the tournament) but took the gold medal at the 1971 world championships and at the 1972 Olympics. In those tournaments, Gable won each of his six matches without giving up a point--a preposterous achievement given how scoring works in international wrestling.

Sanderson's and Gable's credentials are so impressive that they naturally attracted top recruits -- and started to build that self-reinforcing layer. 

Wrestling differs from most college sports because the very best tend to pursue international careers after graduating but don't have any affiliation akin to what other athletes take on in professional leagues. Post-college wrestlers need a home. They need a wrestling room. And the best go to the best room, making it even better... and on and on we go.

Penn State has easily the best roster of collegiate talent at the moment -- six wrestlers made it to the NCAA finals among the 10 weight classes last weekend, tying the record, and four won titles. And Penn State has even better talent among the international wrestlers, who bring with them scores of NCAA titles and medals from world championships and the Olympics. In the finals of the 190-pound weight class at the U.S. trials for the 2024 Olympics, two wrestlers from that room went up against each other and had an epic battle -- which qualified as just another day in the life of Penn State wrestling.

The insurance industry should, I think, draw a lesson because AI can create a flywheel effect similar to what's happening at Penn State and what happened under Dan Gable at Iowa in the '80s and '90s. 

Adopting AI won't happen overnight. Using it is an unnatural act for many people, especially older ones, so you need to find ways to help people get comfortable with it. You need to produce successes you can use to evangelize for AI. You need to create rock stars who, while not at the level of a Sanderson or Gable, can attract talented people who want to take on more ambitious projects. And you need to keep testing and feeling your way toward more aspirational business models, going beyond efficiencies to, perhaps, embedding insurance in other companies' sales processes or developing services that predict and prevent losses before they occur.

In fact, early successes with AI can generate savings that you can pump into future projects, so you just keep accelerating.

(I realize I made more or less this point about a flywheel in last week's commentary on Lemonade, but I think it's so important that it's worth reinforcing, and college wrestling turns out to be an even better example than Lemonade.)

No competitive advantage lasts forever. Gable retired at age 48 -- coaches often mix it up with their wrestlers, and even an all-time great eventually wears down. The Iowa program, while still strong, has drifted in the decades since. Sanderson is now 46, and maybe he'll tire out one of these days, too. Meanwhile, David Taylor, a just-retired big name, has set up camp at Oklahoma State, which had four wrestlers make the NCAA finals. Three won. All four are freshmen. So another cauldron of a wrestling room may be taking shape.

But I'll bet any insurer would be happy with an advantage on AI of the sort that Sanderson has produced at Penn State and that Gable developed at Iowa before him.

Cheers,

Paul

The Fraud Window Opens at Death

Deceased policyholders' digital accounts remain accessible to fraudsters but locked to legitimate beneficiaries, creating costly exposure for life insurers.

Man Placing a Bunch of Flowers on a Grave

Policyholders are dying with dozens of open digital accounts, no record of what they own, and no plan for what happens to any of it. When that happens, a fraud window opens. That gap has a cost, and insurers are absorbing it. Life insurance is where the stakes concentrate and the exposure is most acute.

Sandra filed the life insurance claim four days after her husband's death. She had everything she was supposed to have: the policy number, the death certificate, executor authority. Her insurer faced 17 unverifiable digital accounts, a death record that hadn't yet reached the broker databases, and a fraud window that had been open since the obituary ran.

That's the default condition for life insurance claims today.

The scale of the problem

Policyholders maintain dozens of active digital accounts - financial, medical, cloud storage, subscriptions, social media. Many hold documentation directly relevant to estate and insurance administration. Death doesn't close those accounts; it severs access to them.

Only 36% of Americans use password managers, meaning most policyholders leave no systematic record of what they own digitally or how to reach it. Most major platforms offer some form of legacy contact or digital will feature, but adoption remains low. Death leaves a scattered, largely inaccessible digital estate, one that intersects directly with claims management processes.

Where the cost lands

This is where the exposure becomes the insurer's problem, and that immediate exposure is fraud. After a death, a gap opens between when the death certificate is issued and when that record propagates to the commercial databases that underpin identity verification. During that window, the deceased's digital accounts remain accessible to anyone who can answer a few security questions, questions drawn from the same broker records that haven't been updated yet.

Thieves target recently deceased identities, while life insurers absorb the cost - fraudulent claims, delayed payouts to legitimate beneficiaries, reputational harm when carriers pay bad actors.

There's a legal dimension too. Most platform terms of service were not written with estate law in mind. Even where the Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA) gives executors legal access to digital accounts, platforms often don't honor it in practice. The beneficiary has a legal right that the platform won't act on. The adjuster has no clean path forward.

Health insurance and workers' compensation face the same fragmentation - medical records, employer portals, and benefit accounts scattered across systems that don't communicate. But life insurance sits at the sharp end of the problem, where the industry's exposure is most acute.

The verification gap

The infrastructure for verifying identity after death has a gap built into it. Deceased individuals' records persist in commercial data broker databases indefinitely, with no real-time connection to official death records. Verification systems that rely on those databases can't distinguish between a living person and a recently deceased one. The fraud window is a consequence of infrastructure that was never designed to handle life transitions.
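The verification gap can be made concrete with a toy sketch. The database names, fields, and security question below are hypothetical illustrations, not any vendor's actual system; the point is that knowledge-based verification against a broker database with no deceased flag keeps passing after death, while a check against an authoritative death registry would not.

```python
# Toy illustration of the post-death verification gap. All data and field
# names are invented for demonstration.

# Commercial broker database: persists indefinitely, no deceased flag.
broker_db = {"jdoe": {"mother_maiden_name": "Smith"}}

# Hypothetical authoritative death registry with real-time records.
death_registry = {"jdoe"}

def kba_verify(user: str, answer: str) -> bool:
    """Knowledge-based authentication against broker records only."""
    return broker_db.get(user, {}).get("mother_maiden_name") == answer

def verify_with_registry(user: str, answer: str) -> bool:
    """Same check, but cross-referenced against the death registry first."""
    return user not in death_registry and kba_verify(user, answer)

print(kba_verify("jdoe", "Smith"))            # True  -- the fraud window
print(verify_with_registry("jdoe", "Smith"))  # False -- the window closed
```

The first check is roughly what today's infrastructure does; the second is what a real-time connection to official death records would enable.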

Sandra's experience illustrates both sides of that gap. She couldn't get to her husband's financial accounts. Platforms that held documentation she needed for the claim locked her out despite her legal authority as executor. While she was fighting for access, the fraud window that had opened at his death was available to anyone with enough of his personal history to answer a few questions. The accounts she couldn't reach to support her claim were simultaneously drainable by strangers.

AI as accelerant

Voice cloning and deepfake technology now allow a bad actor to reconstruct a deceased person's voice or likeness from publicly available material, and use it to defeat authentication systems that were never designed with post-death scenarios in mind. As a result, the cost of perpetrating this type of fraud is falling and the risk is rising.

No standard consent or identity framework currently governs the use of a deceased person's biometric data. No enforceable mechanism exists for people to specify how their likeness can be used after death, and insurers have no protection against the claims that follow.

The limits of individual planning

Those who use password managers are ahead of their peers, but individual preparation has a ceiling. Even the most organized policyholder can't force their bank, their cloud provider, and their insurer to exchange data in a standardized way after their death. That requires infrastructure that doesn't yet exist.

The question is: Who shapes that infrastructure? And will the sectors with the most to lose have a seat at the table when the standards are written?

A call for industry engagement

The Death and the Digital Estate (DADE) Community Group at the OpenID Foundation, which I co-chair, recently published a white paper and a planning guide laying out the problem and recommendations for addressing it. Developing interoperable standards for the full lifecycle of digital estate management will require expertise from every affected sector; the insurance industry's knowledge of fraud vectors, claims complexity, and regulatory exposure is specifically what's missing from this conversation.

The groundwork for those standards is being laid now. The sectors that engage early will shape the agenda before the formal process begins. If your organization has a stake in how they get built - and insurers clearly do - the DADE Community Group welcomes participation.


Eve Maler


Eve Maler is the founder and president of Venn Factory and co-chair of the Death and the Digital Estate (DADE) Community Group at the OpenID Foundation. 

She led identity innovation at Sun Microsystems and ForgeRock, serving as ForgeRock's CTO through Series E, IPO, and acquisition. 

Healthcare Requires a New System Design

Making healthcare affordable requires rethinking system design through financial protection, cost discipline and shared digital infrastructure, not just pricing fixes.

Doctor in a white coat with a stethoscope around her neck looking at a screen against a white office background

Healthcare affordability is often treated as a pricing problem. Let us reexamine affordable healthcare as a system design problem - with clear measurement metrics, shared infrastructure and practical adoption pathways.

I am borrowing a "grounded futurism" mindset similar to Dario Amodei's Machines of Loving Grace to make the vision concrete, identify leverage points, acknowledge adoption frictions and build pathways that can learn and adapt to societal needs.

In healthcare, the leverage points are clear and practical: a) protect households from financial shocks, b) control system costs through purchasing and delivery design, and c) build shared digital and data infrastructure so improvements can scale beyond pilots and be extensible.

What is affordable healthcare?

"Affordable" doesn't mean cheap. It means access to needed care without financial hardship. The most useful global yardstick is SDG indicator 3.8.2, revised in 2025 to better capture hardship among poorer households. It tracks the proportion of the population whose positive out-of-pocket (OOP) health spending exceeds 40% of the household's discretionary budget (measured relative to a societal poverty line).
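As a rough illustration of how such an indicator is computed, the sketch below flags households whose positive OOP spending exceeds 40% of a discretionary budget and reports the flagged share. This is a simplification of the official WHO/World Bank methodology; the function names, the survey layout, and the treatment of "discretionary budget" are assumptions for demonstration only.

```python
# Hedged sketch of a catastrophic-OOP-spending share, in the spirit of
# revised SDG 3.8.2. Not the official methodology; illustrative only.

def is_catastrophic(oop_health_spend: float, discretionary_budget: float,
                    threshold: float = 0.40) -> bool:
    """Flag a household whose positive OOP health spending exceeds
    the threshold share (40% by default) of its discretionary budget."""
    return (oop_health_spend > 0
            and oop_health_spend > threshold * discretionary_budget)

def catastrophic_share(households: list[tuple[float, float]]) -> float:
    """Share of surveyed households flagged catastrophic (a proxy for
    the population share). Each tuple: (oop_spend, discretionary_budget)."""
    flagged = sum(is_catastrophic(oop, budget) for oop, budget in households)
    return flagged / len(households)

# Three hypothetical households; only the second exceeds the 40% threshold.
survey = [(100.0, 1_000.0), (500.0, 1_000.0), (0.0, 800.0)]
print(catastrophic_share(survey))  # 1 in 3 households flagged
```

Publishing a baseline for this share, then tracking it over time, is what step 1 of the roadmap below amounts to in practice.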

Why does affordability look different across countries?

The challenges vary by fiscal capacity, health system maturity, and implementation capability — i.e., the ability to coordinate providers, payers, and supply chains. This is why WHO's global digital health strategy emphasizes institutionalizing digital health through an integrated approach to financial, organizational, human and technological resources. It is also where affordability can be operationalized via shared infrastructure (identity, registries, exchange standards, claims rails, supply chain visibility, etc.).

What works (transferable design patterns), and why is data the key denominator?

Countries that sustain affordability tend to combine financial protection, cost discipline and organized delivery. Thailand's Universal Coverage Scheme (UCS) pairs coverage with explicit cost controls, including capitation for outpatient care and diagnosis-related groups (DRGs) under the country's budget for inpatient care, and positions its purchaser (NHSO) as an "active" manager of budgets and payments. NHSO's responsibilities include registration of beneficiaries and providers, establishing a claims and reimbursement process and using a standard dataset and APIs for claims flows — i.e., affordability reinforced through systems and not only policy.

India's ABDM (National Health Stack) reflects the same principle via a modern digital public infrastructure (DPI). It is built from Health IDs (ABHA), provider and facility registries (HPR/HFR), and a consent manager enabling consented exchange in a federated architecture, designed to support continuity of care and interoperability across a diverse ecosystem.

These examples imply that you cannot scale affordability without building country/state/region-specific datasets as public utilities, as targeting, purchasing, and delivery of health services (including AI) all depend on them.

The Affordable Healthcare Replication Stack: Systems View (three pillars)

The learnings from those transferable design patterns lend themselves to the systems view below for affordability.

1. Financial protection (prepayment + pooling + subsidies + safety nets)

Goal: Reduce household hardship, measured using revised SDG 3.8.2 (2025) and complementary impoverishment measures.

Required datasets:
  • Household financial protection dataset: OOP spending and consumption/income, captured via household surveys.
  • Beneficiary and entitlement dataset: eligibility, enrollment and benefit rules, captured as part of beneficiary registration and entitlement management (as by Thailand's NHSO).

AI acceleration: AI can improve eligibility verification, detect anomalous enrollment patterns, and optimize outreach (renewals, maternal/NCD reminders), but only once entitlement datasets are reliable and governance is in place.

2. Cost discipline + access (strategic purchasing + primary care-first delivery)

Goal: Keep care affordable for the system and accessible for patients by shaping incentives and shifting care upstream. Thailand illustrates how provider payment design (capitation + DRG/budget) can contain costs while scaling coverage.

Required datasets:
  • Provider and facility registry: who is licensed, where they operate and what services they offer. ABDM's HPR/HFR are direct analogs of this "registry layer."
  • Utilization and case-mix dataset: outpatient visits, inpatient episodes, DRG groupers.
  • Referral pathway and primary care dataset: catchment areas, referral rules, appointment and follow-up flows.

AI acceleration: AI copilots can reduce clinical burden and expand capacity, especially for documentation and decision support.

3. Digital rails for scale (health DPI + claims rails)

Goal: Make affordability scalable and auditable by reducing fragmentation, duplication and payment friction. ABDM is a working reference for federated, consent-based exchange, with registries and a gateway model for interoperable services.

Required datasets:
  • Longitudinal health record pointers and metadata: discoverable, consented references to clinical history.
  • Claims and payment status dataset: standardized, machine-readable claims for adjudication and auditing, enabled by the National Health Claims Exchange (NHCX).

AI acceleration: AI reduces leakage and delay when claims and registries are machine-readable.
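To make "standardized, machine-readable claims" concrete, here is a minimal sketch of the kind of record a claims rail could carry. The field names and status values are illustrative assumptions, not the NHCX schema; the point is that a single structured record supports automated adjudication and auditing in a way free-text claims cannot.

```python
# Minimal sketch of a machine-readable claim record for a claims rail.
# Field names are hypothetical, not the NHCX specification.
from dataclasses import dataclass, asdict
import json

@dataclass
class Claim:
    claim_id: str
    beneficiary_id: str        # e.g., an ABHA-style health ID
    facility_id: str           # from the facility registry (HFR analog)
    drg_code: str              # case-mix grouping for inpatient episodes
    billed_amount: float
    status: str = "submitted"  # submitted -> adjudicated -> paid

claim = Claim("CLM-001", "ABHA-12345", "HFR-9876", "DRG-470", 25_000.0)

# Serializable end to end, so every hop on the rail can parse and audit it.
print(json.dumps(asdict(claim)))
```

Because each field maps to a registry (beneficiary, facility, case-mix), anomalies such as a claim from an unregistered facility can be caught mechanically.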

A "living lab" archetype for creating datasets - A powerful way to build datasets from the ground up is to start in a region with real operational constraints and build end-to-end connectivity. Kuppam, Andhra Pradesh (India) demonstrates this via Tata's Digital Nerve Centre (DiNC): digitizing personal medical records, connecting an area hospital with 13 primary health centers (PHCs) and 92 village health centers, and enabling continuous monitoring, timely diagnosis and virtual consultations. DiNC integrates public health facilities through digital tools and protocols to improve coordination and patient convenience.

Supply chain resiliency and affordability - Affordability depends not only on financing and care delivery but also on the reliability and cost of diagnostics and supply chains, especially during shocks. C-CAMP's Indigenisation of Diagnostics (InDx) program, launched to build molecular diagnostics capacity and supply chain networks during COVID, connects indigenous manufacturers, suppliers, service providers and health agencies to improve supply chain visibility and accountability. It can be leveraged as a "diagnostics and supply chain data rail" when connected to public procurement and primary care diagnostic needs.

A pragmatic roadmap of affordable healthcare for developing economies

Here's a practical sequence that acknowledges adoption frictions and delivers services:

  1. Adopt revised SDG 3.8.2 (2025) metric and publish baselines/targets for financial protection.
  2. Establish or strengthen an active purchaser function and implement payment discipline
  3. Build health DPI early - India's ABDM provides a working reference architecture
  4. Digitize claims via claims rails (similar to National Health Claims Exchange) to reduce friction
  5. Use district "living labs" for social datasets, connected PHCs to harden workflows and enable scaling and outreach
  6. Strengthen diagnostics and supply resiliency with InDx-like marketplaces
  7. Deploy AI where it delivers value in the safest and most responsible way - tele-triage, imaging, clinician co-pilots, claims, etc.

Affordable healthcare is not achieved by one reform or one model. It is a continuous journey in which financial protection, cost discipline and digital rails evolve together - and in which AI is used to reduce burden and extend scarce expertise, reinforced by responsible policies, controls and effective governance for social good.

Time for action is NOW

If you had to start tomorrow, what would you build first in your state/country and why?

  1. Entitlement + benefit registry
  2. Provider/facility registry + service directory
  3. Digital public infrastructure
  4. Claims rails
  5. Diagnostics supply chain visibility

Prathap Gokul


Prathap Gokul is head of insurance data and analytics with the data and analytics group in TCS’s banking, financial services and insurance (BFSI) business unit.

He has over 25 years of industry experience in commercial and personal insurance, life and retirement, and corporate functions.

Insurers Must Fix Enterprise Design to Use AI Right

Insurers remain trapped in AI pilot purgatory by layering technology over fractured legacy systems instead of solving core enterprise design problems.

White frequency lines and dots across a gradient purple background

Insurance's value is many things. Insurers' problems are, too.

We can't move without insurance, yet we don't trust it and often don't value it, either. It's a cost, a necessary evil, essentially a direct debit on the balance sheet of our lives and businesses we would rather not have. 

Here we are at the tipping point where math and neurons can think for us, and at levels of "intelligence" we are often told we can't even comprehend. Despite this, most of what we are artificially trying to make more intelligent is simply what we do today. And to many of us, this doesn't seem right at all.

The issue for strategic thinkers remains "value chain" thinking, where we focus on minimizing costs and maximizing distribution (channels, coverage, capacity). This puts us at a permanent disadvantage, where new value, through new working models in new technology, is pushed aside for cost savings and efficiency. Worse, when we try to do this with prediction token engines, we are constantly backpedaling because we live in an industry that needs us to be highly deterministic. This is one of the key reasons we remain in pilot purgatory with AI far too often.

We need to solve the meaningful problems we face and start to evolve our business and technology architectures into ecosystems capable of maximizing the knowledge of a customer (and their risks) and acting on this as near to real time as needed.

To do this, we have to address major issues or misperceptions:

  • Many insurers are building houses on sand by layering AI over a "messy middle" of fragmented data and customer-blind legacy processes. AI isn't a repair kit for insurers' broken business models.
  • If we apply AI to a fractured, policy-centric design, we just get fractured, policy-centric mistakes - at scale and at speed. We are simply automating the friction, industrializing the silos, and alienating the customer faster than ever before.
  • The insurance industry is obsessed with plugging in AI, but it's still in pilot purgatory. That's because layering GenAI over outdated data structures and silos means we aren't innovating; we're building a house on quicksand.

Framing the answer to this paradoxical state

This is, therefore, an enterprise design problem, where policy-centric architectures have to give way to customer-centric enterprises.

Building AI into this new model is vital, but so is building in risk, regulations, compliance, auditing and legal. If things move in real time and intelligently, so will all these things as well.

We need to move from a "data & AI" strategic frame where these things become almost self-serving toward an "intelligent" business model, where data is seen as a perishable asset, constantly mined for insight and acted on as close to real time as is needed, but in a controlled, deterministic and responsible way.

To make this possible, we need to deal with the messy middle. That's because operations in insurance are the big unlock - where the magic (or the misery) happens. If the middle is a black box of manual hand-offs and disconnected spreadsheets, AI will choke on it anyway.

Insurance is a process-heavy industry. Simply making a claim should also mean the insurer understands the wider context the customer is in, focuses communications on the best resolution path, and sympathetically manages other communications and needs in that context - coordinating a repairer, for example. It's multi-faceted, and the operations, customer experience, and data that weave it together need to be symbiotic. We are at the point now where operational efficiencies and better customer experiences are mutually beneficial, not the opposing forces they are all too often seen as.

To get to the end state where AI actually works and starts to create new value, we need an evolutionary model to aim for. And we need to clean up this messy middle and orchestrate the flow of outcomes more intelligently - I tend to call this intelligent orchestration. Systems of intelligence are hyped and relevant, but systems of outcome are needed to make them count.

In conclusion

Foundationally, insurers need a robust data orchestration layer (not more data storage) and a unified data model built around the customer. Data should be fluid, so events are available and usable when they need to be.

Insurers need to be able to interoperate agents, with telemetry across their estates, all the way into employee and customer use. And they need a deterministic framework that harnesses agentic solutions, ensures human oversight, and is deliberately designed to maximize human interaction when it's needed.

AI is an outcome, not the goal, and once insurers solve the enterprise design problem and move from policy-centric to customer-centric via intelligent orchestration, AI likely becomes the hero. A hero they can control, manage the risk of, and interoperate and adapt at will.


Rory Yates


Rory Yates is strategic adviser for insurance at Synechron, a digital transformation consulting firm.

He previously was the SVP of corporate strategy at EIS, a core technology platform provider for the insurance sector.

Lemonade Throws Down the Gauntlet

The 10-year-old insurtech carrier claims it has an insurmountable lead in AI — an overly bold assertion, but one that deserves a hard look. 

Robots Using Laptops

For a 10-year-old carrier that still has a combined ratio far above 100, Lemonade has never been reluctant about dissing its established competitors or about patting itself on the back. In that vein, CEO Daniel Schreiber recently published a manifesto titled, "Why Incumbents Won't Catch Up." 

The cheeky claim is that Lemonade was founded as an AI-native and thus has a 10-year head start on State Farm, Allstate, Progressive, GEICO, et al. Schreiber says the incumbents are "optimized for yesterday," while Lemonade is "designed for the world as it’s becoming." He argues that Lemonade's advantage will keep growing. 

Schreiber's argument doesn't make me want to rush out and buy stock in Lemonade, which, after some years in the wilderness, has recently surged and now carries a hefty $5.1 billion market valuation. But I don't dismiss his argument, either. He's certainly right that early movers like Lemonade have an advantage that incumbents need to reckon with. He also poses three measures for AI adoption that all insurance companies should test themselves on.

Let's have a look. 

Schreiber writes that "companies who slap technology on top of their legacy businesses are not changing their DNA: their incentives, capital allocation logic, talent mix, data architecture, distribution dependencies, brand promise, investor expectations, and legacy stacks. Those systems and processes co-evolved over many decades. They cannot be reengineered piecemeal; and untangling them is laborious and risky."

He says Lemonade began as an AI-native: 

"The result is a different cost structure. A faster clock speed. A compounding feedback loop that continuously improves underwriting, customer experience, and efficiency.

"The question, then, is not whether incumbents can “use AI.” Of course they can. And they should. The question is whether they can re-architect themselves to close the gap to Lemonade. 

"That seems unlikely."

To buttress his argument, he suggests three tests for whether an insurer is adopting AI at its core. All three, of course, show Lemonade outpacing incumbents. 

The first is what Schreiber calls The Scaling Quotient. You look at how fast you're growing, by whatever measure you use. You then divide that growth rate by the rate at which your headcount is increasing. If you're growing, say, your policies in force far faster than you're adding people, you're winning. If not, not. 

Second is Loss Adjustment Expense Ratio. You take your loss adjustment expenses and divide by your gross earned premium. If you're spending a lower percentage than the industry average, and the percentage is declining, you're winning. If not, not. 

Third is what Schreiber calls Structural Precision. This involves two calculations of gross profit. First is gross profit divided by your exposure — you want as high a profit as you can get based on the risk you're taking on. Second is gross profit divided by your sales and marketing expenses — you want to acquire customers as efficiently as possible. You add the two calculations, then compare yourself to the industry over time. 
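The three tests reduce to simple arithmetic, so they are easy to run against your own books. The sketch below uses made-up figures for a hypothetical carrier, not Lemonade's or anyone else's numbers.

```python
# Schreiber's three tests as plain arithmetic. All inputs are invented
# figures for a hypothetical carrier, purely for illustration.

def scaling_quotient(growth_rate: float, headcount_growth_rate: float) -> float:
    """Growth (e.g., policies in force) divided by headcount growth.
    Values well above 1.0 suggest growth is decoupled from hiring."""
    return growth_rate / headcount_growth_rate

def lae_ratio(loss_adjustment_expense: float, gross_earned_premium: float) -> float:
    """Loss adjustment expenses as a share of gross earned premium.
    Below industry average, and declining, is 'winning'."""
    return loss_adjustment_expense / gross_earned_premium

def structural_precision(gross_profit: float, exposure: float,
                         sales_marketing_expense: float) -> float:
    """Profit per unit of risk plus profit per sales-and-marketing dollar,
    to be compared with the industry over time."""
    return gross_profit / exposure + gross_profit / sales_marketing_expense

# Hypothetical carrier: 30% premium growth on 5% headcount growth,
# $40M of LAE on $500M earned premium, $150M gross profit on
# $1B of exposure and $100M of sales and marketing spend.
print(scaling_quotient(0.30, 0.05))           # approx. 6.0
print(lae_ratio(40, 500))                     # approx. 0.08
print(structural_precision(150, 1000, 100))   # approx. 1.65
```

As Schreiber frames them, none of these numbers means much in isolation; the signal is in the trend and in the comparison against industry averages.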

Those all strike me as fair enough measures of efficiency for any carrier, and AI is certainly the main driver these days. I think his approach can be extended to other players in the insurance industry, not just carriers. Agencies, for instance, can measure whether AI is making them more efficient in winning clients, in processing renewals and so on. 

If you take Schreiber's piece as a wake-up call for incumbents, I can get behind that, too. They can't just be tacking on bits of AI to become slightly more efficient, and they can't just wait and see. The carriers developed their cultures over decades, and changing them will take many years. People don't change overnight even if the technology does. Incumbents have to be thinking big — NOW — and experimenting with ways to allow for radical change. That may even mean new service-based business models, such as Predict & Prevent, or very different distribution channels, such as through embedded insurance. 

Schreiber can certainly point to lots of industries where upstarts with a head start and momentum overcame incumbent behemoths — consider what happened to Kodak, Blockbuster, Nokia and BlackBerry, to city taxi monopolies and to Sears (as well as every other company in Amazon's path).

Now to quibble.

For one thing, Schreiber is focusing almost entirely on overhead, which accounts for maybe 20% of every premium dollar, while claims in P&C account for north of 60%. You can be as efficient as you want in processing claims, but if you're taking on bad risks you're still going to lose — and even after years in the business, Lemonade's combined ratio in the fourth quarter was 139.

In addition, as Simon Torrance writes in this thorough analysis, the sort of AI that will really matter in the long run is AI agents, and the competition is just beginning in that phase. He says:

"The genuine compounding asset — the one that cannot be replicated by purchasing the same technology at a later date — is not automated claims processing. It is what happens [when] deliberative agentic teams capture structured reasoning with every decision, build institutional memory that compounds across thousands of cases, and encode expert judgment that persists independently of the individuals who generated it. This is Intelligence Capital. The question Lemonade's investors should be asking is whether their architecture has built this — or whether it has built a more efficient version of what every insurer will have by 2027."

Lemonade might also want to be careful about lecturing incumbents just yet, given that it is still small and has so many ways it could slip up as it expands into new lines of business and new geographies. (Here is a good analysis of its opportunities and challenges.)

But I suppose being cheeky is in the company's DNA at least as much as AI is. 

I hope the rest of us take the Lemonade manifesto for what it's worth — and devise real metrics that accurately measure our progress with AI (or lack thereof), think boldly about where AI agents can change everything about our businesses and start reshaping our cultures for, as Schreiber put it, "the world as it's becoming."

Cheers,

Paul

 

Colorectal Cancer Challenges Life Insurers

A 30% rise in colorectal cancer among adults under 50 is forcing life insurers to rethink age-based underwriting models.

Woman in White Scrub Suit Wearing Black and Gray Stethoscope

Colorectal cancer has long been viewed as a condition primarily affecting older adults, but that assumption is rapidly becoming outdated. Over the past two decades, a marked increase in colorectal cancer diagnoses among people under 50 years old has emerged as one of the most concerning epidemiologic shifts confronting both the medical community and the insurance industry. For life insurers, this rise in early-onset colorectal cancer (EOCRC) brings far-reaching implications, from underwriting and pricing to product development and wellness strategy.

A rising trend with industry-level consequences

Early-onset colorectal cancer, defined as diagnosis before age 50, has grown steadily, with incidence climbing by roughly 30% in the last two decades. Although overall case counts remain lower than in older populations, the rate of increase underscores an unsettling trajectory.

Studies now show an approximate 2% annual rise in diagnoses for adults aged 20-50.

For insurers, this change disrupts longstanding mortality expectations built on age-driven risk curves. Younger applicants have traditionally been priced favorably due to low expected cancer incidence. But the rapid emergence of EOCRC means traditional age-based risk assumptions no longer fully capture early-life cancer risk. Compounding this challenge, younger patients often present with more advanced disease. Symptoms — such as abdominal discomfort, rectal bleeding, or shifting digestive patterns — frequently mimic benign conditions, delaying diagnosis and worsening outcomes.

As a result, underwriting models built around the idea that cancer risk accelerates mainly after age 50 must be reassessed.

Understanding the drivers: Lifestyle, genetics, and environmental factors

The rise in EOCRC stems from a complex interplay of behavioral, genetic, and environmental forces. Lifestyle shifts — including diets high in processed meats and low in fiber, reduced consumption of fruits and vegetables, and increased sedentary behavior — appear to play substantial roles. The parallel rise in obesity adds another layer of risk, amplifying inflammatory and hormonal pathways associated with colorectal tumor development.

Genetic risk, while present in a smaller segment of the population, carries significant consequences. Inherited conditions, such as Lynch syndrome or familial adenomatous polyposis, sharply elevate lifetime risk. Mutations in genes such as NTHL1, POLE, POLD1, and RNF43 also contribute to susceptibility, and a family history of colorectal or endometrial cancer is a consistent red flag.

Environmental and medical exposures may also be contributors. Frequent antibiotic use can disrupt the gut microbiome, potentially altering protective bacterial profiles. Long-term inflammatory disorders, such as inflammatory bowel disease, create chronic tissue stress that elevates cancer likelihood.

For insurers, recognizing how these variables interact is essential. Incorporating lifestyle, familial, and clinical risk indicators into modern underwriting frameworks helps ensure high-risk younger applicants are identified earlier and more accurately than age-based approaches alone allow.

Screening guidelines shift — and insurers must follow

One of the clearest responses to rising EOCRC has come in the form of revised screening guidelines. The U.S. Preventive Services Task Force and the American Cancer Society now both advise routine colorectal cancer screening beginning at age 45 for average-risk adults — a notable reduction from the longstanding threshold of age 50. In certain high-risk populations, earlier screening may be warranted. Some European health networks are already exploring screening initiation at age 40.

As screening recommendations evolve, early detection will likely improve, which is particularly crucial for younger adults who tend to present later in the disease process. This shift presents an opportunity for insurers to align underwriting expectations with modern preventive care standards and encourage applicants to stay current with screenings.

Advances in screening and diagnostic technology

Beyond guideline changes, screening technologies are rapidly advancing. While colonoscopy remains the most definitive method, emerging modalities are increasingly accessible and appealing to younger adults who may be reluctant to undergo invasive procedures.

Noninvasive stool-based tests, such as fecal immunochemical tests (FIT) and multitarget stool DNA tests (mt-sDNA), offer convenient at-home screening with promising detection capabilities. Because they can be repeated easily at home, these tests tend to boost screening adherence — an important advantage for younger populations.

CT colonography, or virtual colonoscopy, offers a radiologic alternative, while capsule endoscopy provides a swallowable camera platform with future potential for broader colorectal screening use.

Perhaps most transformative is the rise of blood-based biomarker testing, including liquid biopsies that detect circulating tumor DNA or methylated DNA fragments. Machine-learning-enhanced platforms now combine methylation signatures with DNA fragment analysis to pick up cancer indicators at minimal concentrations. Meanwhile, germline multigene panel testing is uncovering meaningful hereditary risks in approximately 14% of colorectal cancer patients, prompting universal recommendations for genetic testing in EOCRC cases.

For insurers, keeping pace with the strengths, limitations, and cost profiles of each screening approach can inform more accurate underwriting guidelines and create opportunities to promote early detection among policyholders.

Underwriting implications: Rethinking risk in younger applicants

The shifts in incidence and screening warrant a reevaluation of underwriting practices. Traditional risk assessments centered heavily on age must now incorporate:

  • More sophisticated risk stratification, combining family history, lifestyle indicators, and screening adherence.
  • Adjusted premium models that account for elevated risk in younger demographics while rewarding proactive health behaviors.
  • Integration of new data sources, such as medical records, wearables, and — in jurisdictions that allow it — genetic testing results to capture emerging risk more precisely.

However, insurers must also guard against anti-selection, as applicants aware of personal risk may seek coverage before formal diagnosis or symptoms emerge. Balancing comprehensive risk assessment with regulatory and ethical constraints will be crucial.
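To make the multi-factor stratification idea concrete, here is a minimal sketch of an additive risk score combining family history, lifestyle indicators, and screening adherence. Every factor name, weight, threshold, and band label is a hypothetical illustration for discussion purposes, not an actuarial value from any insurer's actual model.

```python
# Toy EOCRC risk-stratification sketch for younger applicants.
# All factors, weights, and thresholds below are hypothetical
# illustrations, not actuarial values from any real underwriting model.

def eocrc_risk_score(age: int,
                     family_history: bool,
                     hereditary_syndrome: bool,
                     high_processed_meat_diet: bool,
                     sedentary: bool,
                     obese: bool,
                     screening_up_to_date: bool) -> int:
    """Return an illustrative additive risk score (higher = riskier)."""
    score = 0
    if family_history:
        score += 3   # e.g., colorectal or endometrial cancer in a close relative
    if hereditary_syndrome:
        score += 5   # e.g., Lynch syndrome or familial adenomatous polyposis
    if high_processed_meat_diet:
        score += 1
    if sedentary:
        score += 1
    if obese:
        score += 1
    if screening_up_to_date and age >= 45:
        score -= 2   # reward adherence to current screening guidelines
    return score

def risk_band(score: int) -> str:
    """Map the illustrative score to a hypothetical underwriting band."""
    if score >= 5:
        return "refer to medical underwriter"
    if score >= 2:
        return "standard plus"
    return "standard"
```

A real model would of course be multivariate and calibrated against claims experience; the point of the sketch is only that screening adherence can offset, and hereditary risk can dominate, the age term that traditional curves rely on.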

Product innovation: A strategic opportunity

While EOCRC presents clear challenges, it also invites innovation. Insurers can differentiate themselves by designing products that integrate early detection, lifestyle engagement, and preventive health participation. Potential avenues include:

  • Policy discounts or riders tied to completion of recommended screenings
  • Wellness incentives for maintaining healthy diet and exercise habits
  • Educational programs that inform younger customers about cancer warning signs and the value of screenings

Such initiatives not only enhance customer loyalty but also reduce long-term claims exposure by facilitating earlier diagnosis and intervention.

Challenges ahead

Implementing EOCRC-aligned underwriting and product strategies is not without obstacles. Privacy concerns must be properly managed as the use of genetic or personal health data increases. Evolving screening technology may outpace underwriting updates, creating a lag between best medical practice and insurance assessment. Operationally, insurers must invest in training, systems modernization, and compliance oversight to ensure new processes are implemented safely and efficiently.

Conclusion

Early-onset colorectal cancer represents a fast-emerging risk that the life insurance industry can no longer overlook. By aligning underwriting models with modern epidemiology, embracing new screening technologies, and developing products that encourage proactive health behaviors, insurers can both mitigate risk and empower policyholders. Those who adapt early will not only strengthen market competitiveness but also play a meaningful role in improving health outcomes for a generation facing rising cancer risk far sooner than expected.


Russell Hide


Dr. Russell Hide is a medical advisor with RGA.

He specializes in underwriting and claims assessment support for South Africa and the EMEA region. He has more than 25 years of experience in the insurance and reinsurance sectors, as well as a clinical background in general practice. 

He holds an MBBCh degree from the University of the Witwatersrand.

Coder Cannibalism

Developers who automated other industries now face AI displacement themselves, as technical certifications prove less valuable than human judgment and accountability.


Most of my friends are coders—and, disclosure, I used to be one. Smart people. Good people. People who spent years mastering arcane syntax, memorizing AWS service catalogs, stacking certifications like frequent flyer miles, and genuinely believing—with some justification—that they were the high priests of the modern economy.

They automated the travel agents. The paralegals. The loan officers, the radiologists, the customer service reps, even the truckers—at least in theory. And they did all of it with a clear conscience because, hey, that's capitalism, baby. Creative destruction. If we can do it better, faster, cheaper, then by the immutable laws of the market, we should.

They were not wrong. And they were not unkind people. They just never believed, not really, not in their gut, that the logic had a return address.

It does.

Amazon just laid off a cohort of developers whose primary offense was building something that worked. The system they constructed—on AI, with AI, as a monument to AI—became, upon completion, the argument for their own termination. The product was the pink slip. You couldn't script a better parable. These weren't junior button-pushers. Some of them held AWS Solutions Architect certifications. Professional level. The kind of credential that used to mean something in a job interview, that used to justify a salary band, that used to make a hiring manager feel confident they were buying proven expertise.

What they were actually buying, it turns out, was structured knowledge retrieval. Which is a very polite way of saying: a human being who had memorized a lot of things and learned to pattern-match against them quickly. And if there is one thing—one single thing—that large language models do better than humans, it is exactly that. The machine doesn't need a certification. It doesn't need a salary. It doesn't get defensive when you change the requirements at 11 p.m.

So here we are. The hue and cry from the coding community is structurally identical to every argument that was dismissed when the travel agents and the paralegals and the loan officers were in the crosshairs. This is different. This requires real skill. You don't understand the complexity. 

Brother, Sister, those whose jobs you automated said the same thing. You just didn't listen because you were the one holding the compiler.

The real question—the one worth asking these days—is, what skills actually don't have a shelf life problem? Some of them seem obvious in retrospect, and most of them aren't technical.

Regulatory judgment under uncertainty is one. Not knowing what a rule says—AI can read the Federal Register faster than any human—but knowing what it means when a specific auditor in a specific regional office has been interpreting it a certain way for three years. That's pattern recognition built from exposure and consequence, not training data. A friend of mine who works in healthcare private equity says the top three risks related to any deal are regulatory in nature—gray area, subjective.

Organizational power mapping is another. Every failed technology implementation in history failed for the same reason: someone built the right thing for the wrong power structure. The CMO thinks she controls the data. The CFO controls the budget. The VP of operations controls the workflow. The IT director controls the timeline through "security review." No AI maps this. No certification covers it. This is human intelligence in the original meaning of the phrase.

Cross-domain translation may be the rarest and most durable skill of all. The ability to stand in a room and make a CMS actuary, an Epic build team, and a 55-year-old case manager all feel heard, and then synthesize what they need into something that actually ships—that's not a technical skill. It never was. We just told ourselves it was adjacent to technical skill so the coders could claim it.

And finally, accountability. The willingness to put your name on a recommendation and mean it. AI is a brilliant, tireless, unaccountable collaborator. In regulated industries—healthcare, insurance, finance, law—where the downside of being wrong is measured in dollars with a lot of zeroes or people with actual problems, someone has to own the outcome. That someone is still a human being with a name and a reputation and something to lose.

The coders who survive this aren't the ones who fight the AI. They're the ones who understand that the job was never really about the code. It's about the judgment surrounding the code. Which explains why Stanford CS grads can't find jobs—while McKinsey is hiring liberal arts majors again. Coders just got away with charging for the code because nobody had built the machine yet.

Now somebody has.


Tom Bobrowski


Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.