
College Wrestling's Lessons for AI Innovation

The just-concluded NCAA Wrestling Championships showcased the sort of compounding competitive advantage that can come from early success with AI.


As the Penn State wrestling team won yet another Division I title over the weekend--its 13th of the past 16 awarded--and did so in overwhelming fashion, I realized there is a deeper competitive advantage at play than exists even in other sports.

College wrestling dominance requires a layer that goes beyond the normal advantages that come from having a great coach and a roster of superb college athletes. Penn State-level dominance in wrestling requires an additional, self-reinforcing factor--of the sort I think can come from early success with AI, as it builds and builds and builds on itself.

I'll explain. 

To understand that self-reinforcing factor, you need to look at the Penn State coach and at the coach whose record of 15 NCAA wrestling titles in 21 seasons Penn State is now approaching. 

The Penn State coach is Cael Sanderson, arguably the best college wrestler ever. He was undefeated in college, winning 159 matches, and won four NCAA individual titles. He also won a gold medal at the 2004 Olympics. 

The man he's chasing, Dan Gable, who coached the University of Iowa from 1976 through 1997, ranks even higher in the wrestling pantheon. He not only won two NCAA individual titles (in an era when freshmen weren't allowed in the tournament) but took the gold medal at the 1971 world championships and at the 1972 Olympics. In those tournaments, Gable won each of his six matches without giving up a point--a preposterous achievement given how scoring works in international wrestling.

Sanderson's and Gable's credentials are so impressive that they naturally attracted top recruits -- and started to build that self-reinforcing layer. 

Wrestling differs from most college sports because the very best tend to pursue international careers after graduating but have no professional league to join, no affiliation akin to what other athletes take on. Post-college wrestlers need a home. They need a wrestling room. And the best go to the best room, making it even better... and on and on we go.

Penn State has easily the best roster of collegiate talent at the moment -- six wrestlers made it to the NCAA finals among the 10 weight classes last weekend, tying the record, and four won titles. And Penn State has even better talent among the international wrestlers, who bring with them scores of NCAA titles and medals from world championships and the Olympics. In the finals of the 190-pound weight class at the U.S. trials for the 2024 Olympics, two wrestlers from that room went up against each other and had an epic battle -- which qualified as just another day in the life of Penn State wrestling.

The insurance industry should, I think, draw a lesson because AI can create a flywheel effect similar to what's happening at Penn State and what happened under Dan Gable at Iowa in the '80s and '90s. 

Adopting AI won't happen overnight. Using it is an unnatural act for many people, especially older ones, so you need to find ways to help people start getting comfortable with it. You need to produce successes that you can use to evangelize about AI. You need to create rock stars who, while not at the level of a Sanderson or Gable, can attract talented people who want to take on more ambitious projects. You need to keep testing and feeling your way toward more aspirational business models, going beyond efficiencies to, perhaps, embedding insurance in other companies' sales processes or developing services that predict and prevent losses before they can occur.

In fact, early successes with AI can generate savings that you can pump into future projects, so you just keep accelerating.

(I realize I made more or less this point about a flywheel in last week's commentary on Lemonade, but I think it's so important that it's worth reinforcing, and college wrestling turns out to be an even better example than Lemonade.)

No competitive advantage lasts forever. Gable retired at age 48 -- coaches often mix it up with their wrestlers, and even an all-time great eventually wears down. The Iowa program, while still strong, has drifted in the decades since. Sanderson is now 46, and maybe he'll tire out one of these days, too. Meanwhile, David Taylor, a just-retired big name, has set up camp at Oklahoma State, which had four wrestlers make the NCAA finals. Three won. All four are freshmen. So another cauldron of a wrestling room may be taking shape.

But I'll bet any insurer would be happy with an advantage on AI of the sort that Sanderson has produced at Penn State and that Gable developed at Iowa before him.

Cheers,

Paul

Healthcare Requires a New System Design

Making healthcare affordable requires rethinking system design through financial protection, cost discipline and shared digital infrastructure, not just pricing fixes.


Healthcare affordability is often treated as a pricing problem. Let us instead reexamine affordable healthcare as a system design problem - with clear metrics, shared infrastructure and practical adoption pathways.

I am borrowing a "grounded futurism" mindset similar to Dario Amodei's Machines of Loving Grace to make the vision concrete, identify leverage points, acknowledge adoption frictions and build pathways that can learn and adapt to societal needs.

In healthcare, the leverage points are clear and practical: a) protect households from financial shocks, b) control system costs through purchasing and delivery design, and c) build shared digital and data infrastructure so improvements can scale beyond pilots and be extensible.

What is affordable healthcare?

"Affordable" doesn't mean cheap. It means access to needed care without financial hardship. The most useful global yardstick is SDG indicator 3.8.2, revised in 2025 to better capture hardship among poorer households. It tracks the proportion of the population whose out-of-pocket (OOP) health spending exceeds 40% of the household's discretionary budget (measured relative to a societal poverty line).

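As a rough illustration (not the official methodology), here is a minimal sketch of how such an indicator could be computed from a household survey, assuming hypothetical column names and a per-person poverty-line allowance; the real indicator is population-weighted and follows the official SDG 3.8.2 definition.

```python
import pandas as pd

# Hypothetical household survey: total consumption and out-of-pocket
# (OOP) health spending per household, in the same currency units.
survey = pd.DataFrame({
    "household_id": [1, 2, 3, 4],
    "consumption": [1200.0, 800.0, 2500.0, 600.0],
    "oop_health": [100.0, 350.0, 200.0, 50.0],
    "members": [4, 3, 2, 5],
})

POVERTY_LINE_PER_PERSON = 150.0  # assumed societal poverty line
THRESHOLD = 0.40                 # 40% of discretionary budget

# Discretionary budget: consumption left after a poverty-line allowance
# for basic needs (floored to avoid division issues for households at
# or below the line).
allowance = POVERTY_LINE_PER_PERSON * survey["members"]
discretionary = (survey["consumption"] - allowance).clip(lower=1e-9)

# A household faces catastrophic health spending when its positive OOP
# spending exceeds 40% of its discretionary budget.
catastrophic = (survey["oop_health"] > 0) & (
    survey["oop_health"] > THRESHOLD * discretionary
)

# Headline share (population weighting by household size omitted).
print(f"Share with catastrophic OOP spending: {catastrophic.mean():.1%}")
```
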
Why does affordability look different across countries?

The challenges vary by fiscal capacity, health system maturity, and implementation capability — i.e., the ability to coordinate providers, payers, and supply chains. This is why WHO's global digital health strategy emphasizes institutionalizing digital health through an integrated approach of financial, organizational, human and technological resources. This is where affordability can be operationalized via shared infrastructure (identity, registries, exchange standards, claims rails, supply chain visibility, etc.).

What works (transferable design patterns), and why is data the common denominator?

Countries that sustain affordability tend to combine financial protection, cost discipline and organized delivery. Thailand's Universal Coverage Scheme (UCS) pairs coverage with explicit cost controls, including capitation for outpatient care and diagnosis-related groups (DRGs) under a global budget for inpatient care, and positions its purchaser (the NHSO) as an "active" manager of budgets and payments. NHSO's responsibilities include registration of beneficiaries and providers, establishing a claims and reimbursement process and using a standard dataset and APIs for claims flows — i.e., affordability reinforced through systems, not only policy.

India's ABDM (National Health Stack) reflects the same principle via a modern digital public infrastructure (DPI). It is built from Health IDs (ABHA), provider and facility registries (HPR/HFR), and a consent manager enabling consented exchange in a federated architecture, designed to support continuity of care and interoperability across a diverse ecosystem.

These examples imply that you cannot scale affordability without building country/state/region-specific datasets as public utilities, as targeting, purchasing, and delivery of health services (including AI) all depend on them.

The Affordable Healthcare Replication Stack: Systems View (three pillars)

The learnings from those transferable design patterns lend themselves to the systems view of affordability below.

1. Financial protection (prepayment + pooling + subsidies + safety nets)

Goal: Reduce household hardship, measured using the revised SDG 3.8.2 (2025) and complementary impoverishment measures.

Required datasets: A household financial protection dataset (OOP spending and consumption/income), captured via household surveys; and a beneficiary and entitlement dataset (eligibility, enrollment and benefit rules), captured as part of beneficiary registration and entitlement management, as Thailand's NHSO does.

AI acceleration: AI can improve eligibility verification, detect anomalous enrollment patterns, and optimize outreach (renewals, maternal/NCD reminders), but only once entitlement datasets are reliable and governance is in place.

2. Cost Discipline + Access (strategic purchasing + primary care-first delivery)

Goal: Keep care affordable for the system and accessible for patients by shaping incentives and shifting care upstream. Thailand illustrates how provider payment design (capitation + DRG/global budget) can contain costs while scaling coverage.

Required datasets: A provider and facility registry (who is licensed, where they operate and what services they offer; ABDM's HPR/HFR are direct analogs of this "registry layer"); a utilization and case-mix dataset (outpatient visits, inpatient episodes, DRG groupers); and a referral pathway and primary care dataset (catchment areas, referral rules, appointment and follow-up flows).

AI acceleration: AI copilots can reduce clinical burden and expand capacity, especially in documentation and decision support.

3. Digital Rails for Scale (health DPI + claims rails)

Goal: Make affordability scalable and auditable by reducing fragmentation, duplication and payment friction. ABDM is a working reference for a federated, consent-based exchange with registries and a gateway model for interoperable services.

Required datasets: Longitudinal health record pointers and metadata (discoverable, consented references to clinical history); and a claims and payment status dataset (standardized, machine-readable claims for adjudication and auditing, enabled by the National Health Claims Exchange, or NHCX).

AI acceleration: AI reduces leakage and delay when claims and registries are machine-readable.
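
To make "standardized, machine-readable claims" concrete, here is a minimal sketch of what a single claim record on such a rail might look like; the field names and identifiers are hypothetical, and a real exchange such as NHCX defines its own schema and APIs.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical, simplified claim record: every field is structured and
# auditable, so adjudication, deduplication and fraud checks can be
# automated instead of being parsed out of documents.

@dataclass
class Claim:
    claim_id: str
    beneficiary_id: str   # links to the beneficiary/entitlement registry
    facility_id: str      # links to the facility registry (HFR analog)
    diagnosis_code: str   # e.g. an ICD-10 code
    procedure_code: str   # local procedure coding
    billed_amount: float
    status: str           # submitted | adjudicated | paid | rejected

claim = Claim("CLM-001", "BEN-12345", "FAC-67890",
              "K35.80", "PROC-APPX", 18500.0, "submitted")

# Serialized payload for a claims-rail API call (illustrative only).
print(json.dumps(asdict(claim), indent=2))
```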

An example/'living lab' archetype for creating datasets - A powerful way to build datasets from the ground up is to start in a region with real operational constraints and build end-to-end connectivity. This is demonstrated in Kuppam, Andhra Pradesh (India), via Tata's Digital Nerve Centre (DiNC), which digitizes personal medical records, connects an area hospital with 13 primary health centers (PHCs) and 92 village health centers, and enables continuous monitoring, timely diagnosis and virtual consultations. DiNC integrates public health facilities through digital tools and protocols to improve coordination and patient convenience.

The effect of supply chain resiliency on affordability - Affordability is not only about financing and care delivery but also about the reliability and cost of diagnostics and supply chains, especially during shocks. C-CAMP's Indigenisation of Diagnostics (InDx) program, launched during COVID to build molecular diagnostics capacity and supply chain networks, connects indigenous manufacturers, suppliers, service providers and health agencies to improve supply chain visibility and accountability. It can be leveraged as a "diagnostics and supply chain data rail" when connected to public procurement and primary care diagnostic needs.

A pragmatic roadmap to affordable healthcare for developing economies

Here's a practical sequence that acknowledges adoption frictions and delivers services:

  1. Adopt the revised SDG 3.8.2 (2025) metric and publish baselines/targets for financial protection.
  2. Establish or strengthen an active purchaser function and implement payment discipline.
  3. Build health DPI early - India's ABDM provides a working reference architecture.
  4. Digitize claims via claims rails (similar to the National Health Claims Exchange) to reduce friction.
  5. Use district "living labs" with connected PHCs to build social datasets, harden workflows and enable scaling and outreach.
  6. Strengthen diagnostics and supply resiliency with InDx-like marketplaces.
  7. Deploy AI where it delivers value in the safest and most responsible way - tele-triage, imaging, clinician co-pilots, claims, etc.

Affordable healthcare is not achieved by one reform or one model but by a continuous journey in which financial protection, cost discipline and digital rails evolve together - and in which AI is used to reduce burden and extend scarce expertise, reinforced by responsible policies, controls and effective governance for social good.

The time for action is NOW

If you had to start tomorrow, what would you build first in your state/country and why?

  1. Entitlement + benefit registry
  2. Provider/facility registry + service directory
  3. Digital public infrastructure
  4. Claims rails
  5. Diagnostics supply chain visibility

Prathap Gokul

Prathap Gokul is head of insurance data and analytics with the data and analytics group in TCS’s banking, financial services and insurance (BFSI) business unit.

He has over 25 years of industry experience in commercial and personal insurance, life and retirement, and corporate functions.

Insurers Must Fix Enterprise Design to Use AI Right

Insurers remain trapped in AI pilot purgatory by layering technology over fractured legacy systems instead of solving core enterprise design problems.


Insurance's value is many things. Insurers' problems are, too.

We can't move without insurance, yet we don't trust it and often don't value it, either. It's a cost, a necessary evil, essentially a direct debit on the balance sheet of our lives and businesses we would rather not have. 

Here we are at the tipping point where math and neurons can think for us, and at levels of "intelligence" we are often told we can't even comprehend. Despite this, most of what we are artificially trying to make more intelligent is simply what we do today. And to many of us, this doesn't seem right at all.

The issue for strategic thinkers remains "value chain" thinking, where we focus on minimizing costs and maximizing distribution (channels, coverage, capacity). This puts us at a permanent disadvantage, where new value, through new working models in new technology, is pushed aside for cost savings and efficiency. Worse, when we try to do this with prediction token engines, we are constantly backpedaling because we live in an industry that needs us to be highly deterministic. This is one of the key reasons we remain in pilot purgatory with AI far too often.

We need to solve the meaningful problems we face and start to evolve our business and technology architectures into ecosystems capable of maximizing the knowledge of a customer (and their risks) and acting on this as near to real time as needed.

To do this, we have to address major issues or misperceptions:

  • Many insurers are building houses on sand by layering AI over a "messy middle" of fragmented data and customer-blind legacy processes. AI isn't a repair kit for insurers' broken business models.
  • If we apply AI to a fractured, policy-centric design, we just get fractured, policy-centric mistakes - at scale and at speed. We are simply automating the friction, industrializing the silos, and alienating the customer faster than ever before.
  • The insurance industry is obsessed with plugging in AI, but it's still in pilot purgatory. And that's because layering GenAI over outdated data structures and silos means we aren't innovating; we're building a house on quicksand.
Framing the answer to this paradoxical state

This is, therefore, an enterprise design problem, where policy-centric architectures have to give way to customer-centric enterprises.

Building AI into this new model is vital, but so is building in risk, regulations, compliance, auditing and legal. If things move in real time and intelligently, so will all these things as well.

We need to move from a "data & AI" strategic frame where these things become almost self-serving toward an "intelligent" business model, where data is seen as a perishable asset, constantly mined for insight and acted on as close to real time as is needed, but in a controlled, deterministic and responsible way.

To make this possible, we need to deal with the messy middle. That's because operations in insurance are the big unlock - where the magic (or the misery) happens. If the middle is a black box of manual hand-offs and disconnected spreadsheets, AI will choke on it anyway.

Insurance is a process-heavy industry, one where simply making a claim should also mean the insurer understands the wider context we are in, focuses communications on the best resolution path, and sympathetically manages other communications or needs in that context - bringing in a repairer, say - and so on. It's multi-faceted, and the operations, customer experience, and data that weave it together need to be symbiotic. We are at the point now where operational efficiencies and better customer experiences are mutually beneficial, and not the opposing forces they are all too often seen as.

To get to the end state where AI actually works and starts to create new value, we need an evolutionary model to aim for. And we need to clean up this messy middle and orchestrate the flow of outcomes more intelligently - I tend to call this intelligent orchestration. Systems of intelligence are hyped and relevant, but systems of outcome are needed to make them count.

In conclusion

Foundationally, insurers need a robust data orchestration layer (not more data storage) and a unified data model built around the customer. Data should be fluid, so events are available and usable when they need to be.
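
As a loose illustration of the shift from policy-centric to customer-centric, here is a minimal sketch of a customer-rooted data model; all type and field names are hypothetical. In a policy-centric design the policy record is the root and the customer is a field on it; here the customer is the aggregate that policies, claims and events hang off, so any channel can see the whole relationship in one lookup.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    occurred_at: datetime
    kind: str      # e.g. "fnol", "address_change", "renewal_quote"
    payload: dict

@dataclass
class Policy:
    policy_id: str
    line: str      # e.g. "auto", "renters", "umbrella"

@dataclass
class Claim:
    claim_id: str
    policy_id: str
    status: str

@dataclass
class Customer:
    customer_id: str
    policies: list[Policy] = field(default_factory=list)
    claims: list[Claim] = field(default_factory=list)
    events: list[Event] = field(default_factory=list)

    def open_claims(self) -> list[Claim]:
        # One customer-level view across every policy, instead of
        # re-querying each policy system separately.
        return [c for c in self.claims if c.status != "closed"]
```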

Insurers need to be able to interoperate agents, with telemetry across their estates, all the way into employee and customer use. And they need a deterministic framework that harnesses agentic solutions, ensures human intervention, and is deliberately designed to maximize human interaction when it's needed.

AI is an outcome, not the goal, and once insurers solve the enterprise design problem and move from policy-centric to customer-centric via intelligent orchestration, AI likely becomes the hero. A hero they can control, whose risk they can manage, and that they can interoperate and adapt at will.


Rory Yates

Rory Yates is strategic adviser for insurance at Synechron, a digital transformation consulting firm.

He previously was the SVP of corporate strategy at EIS, a core technology platform provider for the insurance sector.

Lemonade Throws Down the Gauntlet

The 10-year-old insurtech carrier claims it has an insurmountable lead in AI — an overly bold assertion, but one that deserves a hard look. 


For a 10-year-old carrier that still has a combined ratio far above 100, Lemonade has never been reluctant about dissing its established competitors or about patting itself on the back. In that vein, CEO Daniel Schreiber recently published a manifesto titled, "Why Incumbents Won't Catch Up." 

The cheeky claim is that Lemonade was founded as an AI-native and thus has a 10-year head start on State Farm, Allstate, Progressive, GEICO, et al. Schreiber says the incumbents are "optimized for yesterday," while Lemonade is "designed for the world as it’s becoming." He argues that Lemonade's advantage will keep growing. 

Schreiber's argument doesn't make me want to rush out and buy stock in Lemonade, which, after some years in the wilderness, has recently surged and now carries a hefty $5.1 billion market valuation. But I don't dismiss his argument, either. He's certainly right that early movers like Lemonade have an advantage that incumbents need to reckon with. He also poses three measures for AI adoption that all insurance companies should test themselves on.

Let's have a look. 

Schreiber writes that "companies who slap technology on top of their legacy businesses are not changing their DNA: their incentives, capital allocation logic, talent mix, data architecture, distribution dependencies, brand promise, investor expectations, and legacy stacks. Those systems and processes co-evolved over many decades. They cannot be reengineered piecemeal; and untangling them is laborious and risky."

He says Lemonade began as an AI-native: 

"The result is a different cost structure. A faster clock speed. A compounding feedback loop that continuously improves underwriting, customer experience, and efficiency.

"The question, then, is not whether incumbents can “use AI.” Of course they can. And they should. The question is whether they can re-architect themselves to close the gap to Lemonade. 

"That seems unlikely."

To buttress his argument, he suggests three tests for whether an insurer is adopting AI at its core. All three, of course, show Lemonade outpacing incumbents. 

The first is what Schreiber calls The Scaling Quotient. You look at how fast you're growing, by whatever measure you use. You then divide that growth rate by the rate at which your headcount is increasing. If you're growing, say, your policies in force far faster than you're adding people, you're winning. If not, not. 

Second is the Loss Adjustment Expense Ratio. You take your loss adjustment expenses and divide by your gross earned premium. If you're spending a lower percentage than the industry average, and the percentage is declining, you're winning. If not, not.

Third is what Schreiber calls Structural Precision. This involves two calculations of gross profit. First is gross profit divided by your exposure — you want as high a profit as you can get based on the risk you're taking on. Second is gross profit divided by your sales and marketing expenses — you want to acquire customers as efficiently as possible. You add the two calculations, then compare yourself to the industry over time. 
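
To make the arithmetic concrete, here is a minimal sketch of the three tests as described above, with purely illustrative numbers; the variable names and figures are hypothetical, and a real comparison would use your own book and industry benchmarks over time.

```python
# Illustrative numbers only -- substitute your own figures.

# 1. Scaling Quotient: growth rate divided by headcount growth rate.
policy_growth_rate = 0.30     # e.g. policies in force up 30% year over year
headcount_growth_rate = 0.05  # headcount up 5%
scaling_quotient = policy_growth_rate / headcount_growth_rate  # well above 1 is winning

# 2. Loss Adjustment Expense Ratio: LAE over gross earned premium.
loss_adjustment_expense = 8_000_000
gross_earned_premium = 120_000_000
lae_ratio = loss_adjustment_expense / gross_earned_premium  # compare vs. industry, watch the trend

# 3. Structural Precision: gross profit per unit of exposure, plus
#    gross profit per dollar of sales and marketing spend.
gross_profit = 25_000_000
exposure = 200_000_000        # however you quantify the risk taken on
sales_and_marketing = 10_000_000
structural_precision = gross_profit / exposure + gross_profit / sales_and_marketing

print(f"Scaling quotient:     {scaling_quotient:.1f}")
print(f"LAE ratio:            {lae_ratio:.1%}")
print(f"Structural precision: {structural_precision:.2f}")
```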

Those all strike me as fair enough measures of efficiency for any carrier, and AI is certainly the main driver these days. I think his approach can be extended to other players in the insurance industry, not just carriers. Agencies, for instance, can measure whether AI is making them more efficient in winning clients, in processing renewals and so on. 

If you take Schreiber's piece as a wake-up call for incumbents, I can get behind that, too. They can't just be tacking on bits of AI to become slightly more efficient, and they can't just wait and see. The carriers developed their cultures over decades, and changing them will take many years. People don't change overnight even if the technology does. Incumbents have to be thinking big — NOW — and experimenting with ways to allow for radical change. That may even mean new service-based business models, such as Predict & Prevent, or very different distribution channels, such as through embedded insurance. 

Schreiber can certainly point to lots of industries where upstarts with a head start and momentum overcame incumbent behemoths — look at Kodak, Blockbuster, Nokia and BlackBerry, city taxi monopolies and Sears (as well as every other company in Amazon's path).

Now to quibble.

For one thing, Schreiber is focusing almost entirely on overhead, which accounts for maybe 20% of every premium dollar, while claims in P&C account for north of 60%. You can be as efficient as you want in processing claims, but if you're taking on bad risks you're still going to lose — and even after years in the business, Lemonade's combined ratio in the fourth quarter was 139.

In addition, as Simon Torrance writes in this thorough analysis, the sort of AI that will really matter in the long run is AI agents, and the competition is just beginning in that phase. He says:

"The genuine compounding asset — the one that cannot be replicated by purchasing the same technology at a later date — is not automated claims processing. It is what happens [when] deliberative agentic teams capture structured reasoning with every decision, build institutional memory that compounds across thousands of cases, and encode expert judgment that persists independently of the individuals who generated it. This is Intelligence Capital. The question Lemonade's investors should be asking is whether their architecture has built this — or whether it has built a more efficient version of what every insurer will have by 2027."

Lemonade might also want to be careful about lecturing incumbents just yet, given that it is still small and has so many ways it could slip up as it expands into new lines of business and new geographies. (Here is a good analysis of its opportunities and challenges.)

But I suppose being cheeky is in the company's DNA at least as much as AI is. 

I hope the rest of us take the Lemonade manifesto for what it's worth — and devise real metrics that accurately measure our progress with AI (or lack thereof), think boldly about where AI agents can change everything about our businesses and start reshaping our cultures for, as Schreiber put it, "the world as it's becoming."

Cheers,

Paul

 

Colorectal Cancer Challenges Life Insurers

A 30% rise in colorectal cancer among adults under 50 is forcing life insurers to rethink age-based underwriting models.


Colorectal cancer has long been viewed as a condition primarily affecting older adults, but that assumption is rapidly becoming outdated. Over the past two decades, a marked increase in colorectal cancer diagnoses among people under 50 years old has emerged as one of the most concerning epidemiologic shifts confronting both the medical community and the insurance industry. For life insurers, this rise in early-onset colorectal cancer (EOCRC) brings far-reaching implications, from underwriting and pricing to product development and wellness strategy.

A rising trend with industry-level consequences

Early-onset colorectal cancer, defined as diagnosis before age 50, has grown steadily, with incidence climbing by roughly 30% in the last two decades. Although overall case counts remain lower than in older populations, the rate of increase underscores an unsettling trajectory.

Studies now show an approximate 2% annual rise in diagnoses for adults aged 20-50.

For insurers, this change disrupts longstanding mortality expectations built on age-driven risk curves. Younger applicants have traditionally been priced favorably due to low expected cancer incidence. But the rapid emergence of EOCRC means traditional age-based risk assumptions no longer fully capture early-life cancer risk. Compounding this challenge, younger patients often present with more advanced disease. Symptoms — such as abdominal discomfort, rectal bleeding, or shifting digestive patterns — frequently mimic benign conditions, delaying diagnosis and worsening outcomes.

As a result, underwriting models built around the idea that cancer risk accelerates mainly after age 50 must be reassessed.

Understanding the drivers: Lifestyle, genetics, and environmental factors

The rise in EOCRC stems from a complex interplay of behavioral, genetic, and environmental forces. Lifestyle shifts, including diets high in processed meats and low in fiber, reduced consumption of fruits and vegetables, and increased sedentary behavior, appear to play substantial roles. The parallel rise in obesity adds another layer of risk, amplifying inflammatory and hormonal pathways associated with colorectal tumor development.

Genetic risk, while present in a smaller segment of the population, carries significant consequences. Inherited conditions, such as Lynch syndrome or familial adenomatous polyposis, sharply elevate lifetime risk. Mutations in genes including NTHL1, POLE, POLD1, and RNF43 also contribute to susceptibility, and a family history of colorectal or endometrial cancer is a consistent red flag.

Environmental and medical exposures may also be contributors. Frequent antibiotic use can disrupt the gut microbiome, potentially altering protective bacterial profiles. Long-term inflammatory disorders, such as inflammatory bowel disease, create chronic tissue stress that elevates cancer likelihood.

For insurers, recognizing how these variables interact is essential. Incorporating lifestyle, familial, and clinical risk indicators into modern underwriting frameworks helps ensure high-risk younger applicants are identified earlier and more accurately than age-based approaches alone allow.

Screening guidelines shift — and insurers must follow

One of the clearest responses to rising EOCRC has come in the form of revised screening guidelines. The U.S. Preventive Services Task Force and the American Cancer Society now both advise routine colorectal cancer screening beginning at age 45 for average-risk adults — a notable reduction from the longstanding threshold of age 50. In certain high-risk populations, earlier screening may be warranted. Some European health networks are already exploring screening initiation at age 40.

As screening recommendations evolve, early detection will likely improve, which is particularly crucial for younger adults who tend to present later in the disease process. This shift presents an opportunity for insurers to align underwriting expectations with modern preventive care standards and encourage applicants to stay current with screenings.

Advances in screening and diagnostic technology

Beyond guideline changes, screening technologies are rapidly advancing. While colonoscopy remains the most definitive method, emerging modalities are increasingly accessible and appealing to younger adults who may be reluctant to undergo invasive procedures.

Noninvasive stool-based tests, such as fecal immunochemical tests (FIT) and multitarget stool DNA tests (mt-sDNA), offer convenient at-home screening with promising detection capabilities. Frequent use of these tests tends to boost adherence — an important advantage for younger populations.

CT colonography, or virtual colonoscopy, offers a radiologic alternative, while capsule endoscopy provides a swallowable camera platform with future potential for broader colorectal screening use.

Perhaps most transformative is the rise of blood-based biomarker testing, including liquid biopsies that detect circulating tumor DNA or methylated DNA fragments. Machine-learning-enhanced platforms now combine methylation signatures with DNA fragment analysis to pick up cancer indicators at minimal concentrations. Meanwhile, germline multigene panel testing is uncovering meaningful hereditary risks in approximately 14% of colorectal cancer patients, prompting universal recommendations for genetic testing in EOCRC cases.

For insurers, keeping pace with the strengths, limitations, and cost profiles of each screening approach can inform more accurate underwriting guidelines and create opportunities to promote early detection among policyholders.

Underwriting implications: Rethinking risk in younger applicants

The shifts in incidence and screening warrant a reevaluation of underwriting practices. Traditional risk assessments centered heavily on age must now incorporate:

  • More sophisticated risk stratification, combining family history, lifestyle indicators, and screening adherence (see the sketch after this list).
  • Adjusted premium models that account for elevated risk in younger demographics while rewarding proactive health behaviors.
  • Integration of new data sources, such as medical records, wearables, and — in jurisdictions that allow it — genetic testing results to capture emerging risk more precisely.
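
As a toy illustration of that first bullet (not an actuarial model), here is a minimal sketch that combines family history, lifestyle indicators, and screening adherence into a single flag for manual review; every factor, weight and threshold is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    family_history_crc: bool       # colorectal/endometrial cancer in family
    hereditary_syndrome: bool      # e.g. Lynch syndrome, FAP
    processed_meat_heavy_diet: bool
    sedentary: bool
    obese: bool
    screening_current: bool        # up to date per the age-45 guidance

def eocrc_risk_points(a: Applicant) -> int:
    """Toy additive score; weights are illustrative, not actuarial."""
    points = 0
    points += 4 if a.hereditary_syndrome else 0
    points += 2 if a.family_history_crc else 0
    points += 1 if a.processed_meat_heavy_diet else 0
    points += 1 if a.sedentary else 0
    points += 1 if a.obese else 0
    points -= 1 if a.screening_current else 0
    return points

def needs_medical_review(a: Applicant) -> bool:
    # Flag younger applicants whose combined factors outweigh the
    # favorable age band, rather than relying on age alone.
    return a.age < 50 and eocrc_risk_points(a) >= 3
```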

However, insurers must also guard against anti-selection, as applicants aware of personal risk may seek coverage before formal diagnosis or symptoms emerge. Balancing comprehensive risk assessment with regulatory and ethical constraints will be crucial.

Product innovation: A strategic opportunity

While EOCRC presents clear challenges, it also invites innovation. Insurers can differentiate themselves by designing products that integrate early detection, lifestyle engagement, and preventive health participation. Potential avenues include:

  • Policy discounts or riders tied to completion of recommended screenings
  • Wellness incentives for maintaining healthy diet and exercise habits
  • Educational programs that inform younger customers about cancer warning signs and the value of screenings

Such initiatives not only enhance customer loyalty but also reduce long-term claims exposure by facilitating earlier diagnosis and intervention.

Challenges ahead

Implementing EOCRC-aligned underwriting and product strategies is not without obstacles. Privacy concerns must be properly managed as the use of genetic or personal health data increases. Evolving screening technology may outpace underwriting updates, creating a lag between best medical practice and insurance assessment. Operationally, insurers must invest in training, systems modernization, and compliance oversight to ensure new processes are implemented safely and efficiently.

Conclusion

Early-onset colorectal cancer represents a fast-emerging risk that the life insurance industry can no longer overlook. By aligning underwriting models with modern epidemiology, embracing new screening technologies, and developing products that encourage proactive health behaviors, insurers can both mitigate risk and empower policyholders. Those who adapt early will not only strengthen market competitiveness but also play a meaningful role in improving health outcomes for a generation facing rising cancer risk far sooner than expected.


Russell Hide

Dr. Russell Hide is a medical advisor with RGA.

He specializes in underwriting and claims assessment support for South Africa and the EMEA region. He has more than 25 years of experience in the insurance and reinsurance sectors, as well as a clinical background in general practice. 

He holds an MBBCh degree from the University of the Witwatersrand.

Coder Cannibalism

Developers who automated other industries now face AI displacement themselves, as technical certifications prove less valuable than human judgment and accountability.


Most of my friends are coders—and, disclosure, I used to be one. Smart people. Good people. People who spent years mastering arcane syntax, memorizing AWS service catalogs, stacking certifications like frequent flyer miles, and genuinely believing—with some justification—that they were the high priests of the modern economy.

They automated the travel agents. The paralegals. The loan officers, the radiologists, the customer service reps, even the truckers—at least in theory. And they did all of it with a clear conscience because, hey, that's capitalism, baby. Creative destruction. If we can do it better, faster, cheaper, then by the immutable laws of the market, we should.

They were not wrong. And they were not unkind people. They just never believed, not really, not in their gut, that the logic had a return address.

It does.

Amazon just laid off a cohort of developers whose primary offense was building something that worked. The system they constructed—on AI, with AI, as a monument to AI—became, upon completion, the argument for their own termination. The product was the pink slip. You couldn't script a better parable. These weren't junior button-pushers. Some of them held AWS Solutions Architect certifications. Professional level. The kind of credential that used to mean something in a job interview, that used to justify a salary band, that used to make a hiring manager feel confident they were buying proven expertise.

What they were actually buying, it turns out, was structured knowledge retrieval. Which is a very polite way of saying: a human being who had memorized a lot of things and learned to pattern-match against them quickly. And if there is one thing—one single thing—that large language models do better than humans, it is exactly that. The machine doesn't need a certification. It doesn't need a salary. It doesn't get defensive when you change the requirements at 11 p.m.

So here we are. The hue and cry from the coding community is structurally identical to every argument that was dismissed when the travel agents and the paralegals and the loan officers were in the crosshairs. This is different. This requires real skill. You don't understand the complexity. 

Brother, Sister, those whose jobs you automated said the same thing. You just didn't listen because you were the one holding the compiler.

The real question—the one worth asking these days—is, what skills actually don't have a shelf life problem? Some of them seem obvious in retrospect, and most of them aren't technical.

Regulatory judgment under uncertainty is one. Not knowing what a rule says—AI can read the Federal Register faster than any human—but knowing what it means when a specific auditor in a specific regional office has been interpreting it a certain way for three years. That's pattern recognition built from exposure and consequence, not training data. A friend of mine who works in healthcare private equity says the top three risks related to any deal are regulatory in nature—gray area, subjective.

Organizational power mapping is another. Every failed technology implementation in history failed for the same reason: someone built the right thing for the wrong power structure. The CMO thinks she controls the data. The CFO controls the budget. The VP of operations controls the workflow. The IT director controls the timeline through "security review." No AI maps this. No certification covers it. This is human intelligence in the original meaning of the phrase.

Cross-domain translation may be the rarest and most durable skill of all. The ability to stand in a room and make a CMS actuary, an Epic build team, and a 55-year-old case manager all feel heard, and then synthesize what they need into something that actually ships—that's not a technical skill. It never was. We just told ourselves it was adjacent to technical skill so the coders could claim it.

And finally, accountability. The willingness to put your name on a recommendation and mean it. AI is a brilliant, tireless, unaccountable collaborator. In regulated industries—healthcare, insurance, finance, law—where the downside of being wrong is measured in dollars with a lot of zeroes or people with actual problems, someone has to own the outcome. That someone is still a human being with a name and a reputation and something to lose.

The coders who survive this aren't the ones who fight the AI. They're the ones who understand that the job was never really about the code. It's about the judgment surrounding the code. Which explains why Stanford CS grads can't find jobs—while McKinsey is hiring liberal arts majors again. Coders just got away with charging for the code because nobody had built the machine yet.

Now somebody has.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.   

5 Operational Shifts for Scaling Insurance AI

Insurance AI is shifting from the wow factor of innovation to the how factor of sustaining automation at scale.


AI is moving well beyond experimentation and into everyday insurance operations. As this happens, the wow factor of introducing new forms of automation to insurance use cases is giving way to the how factor of sustaining these innovations at scale. Once AI influences underwriting decisions and claims outcomes in a heavily regulated environment, success depends far less on the sophistication of models and far more on the operational systems that support them.

Earlier phases of AI adoption proved that insurers can deploy advanced models. The priority now is to embed those models into the deeply regulated, process-driven realities of underwriting, claims, and distribution without creating new friction or risk. All this must happen while taking into account what may be an outdated back-office tech stack, and with a level of integration that doesn't create the next issue on the horizon: agent sprawl. Here are five operational trends that are emerging as the differentiators between AI programs that compound value over time and those that stall under complexity:

Treat document intelligence as foundational infrastructure, not a point solution

Document intelligence is a prime focus for AI modernization, yet many organizations still approach it as a tactical automation limited to intake. At scale, this narrow view leaves significant value unrealized. Documents and work items remain central to underwriting, claims adjudication, and compliance. Manual handling introduces delay, inconsistency, and risk at every handoff. As AI adoption matures, document intelligence and rigorous contextualization functions should exist as shared operational infrastructure embedded directly into workflows, rather than bolted on at the edges. This shift reduces cycle times, improves data quality, and strengthens auditability; and it further informs future agentic capabilities stemming from those same work items. That's why insurers that move fastest stop treating document intelligence as an isolated capability and start treating it as a prerequisite for operational scale.

Make AI governance an enterprise operating model

As AI becomes embedded in decision-making, the ability to maintain explainability, accountability, and auditability of AI systems must be designed into processes from the outset, not retrofitted after systems are already in production. At scale, this allows insurers to deploy AI confidently across regions, lines of business, and regulatory regimes without fragmenting their operating model. This enterprise-wide discipline of clear ownership, transparent decision logic, and consistent oversight of machine processes helps position AI governance as a C-suite priority that strengthens risk posture, customer trust, and long-term resilience.

Keep humans in the loop strategically

When human involvement is applied too broadly, productivity gains erode and trust in automation declines. Human-in-the-loop AI is most effective when experienced underwriters or claims professionals are only pulled into cases where their judgment, oversight, and exception handling add the most value in assessing complex risks, edge cases, and decisions with material financial or regulatory impact. Emerging governance models increasingly reinforce this principle. For instance, Singapore's IMDA Model AI Governance Framework on agentic systems describes a spectrum of oversight that includes human-in-the-loop, on-the-loop, and over-the-loop to help selectively scale automation while preserving accountability and control.
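
As a rough sketch of that selective escalation, routing logic like the following keeps routine cases automated and pulls experts in only where their judgment matters; the class, thresholds and tier names are all hypothetical and would be set with underwriting and claims leadership.

```python
from dataclasses import dataclass

@dataclass
class Case:
    predicted_amount: float  # estimated financial impact
    model_confidence: float  # 0.0 - 1.0
    regulated_line: bool     # subject to heightened regulatory oversight
    novel_pattern: bool      # unlike anything seen in training data

# Hypothetical thresholds, revisited as models and trust mature.
MATERIAL_AMOUNT = 100_000.0
MIN_CONFIDENCE = 0.85

def route(case: Case) -> str:
    """Human-in-the-loop only where judgment adds the most value."""
    if case.novel_pattern or case.regulated_line:
        return "human_review"      # in-the-loop: an expert decides
    if case.predicted_amount >= MATERIAL_AMOUNT:
        return "human_review"
    if case.model_confidence < MIN_CONFIDENCE:
        return "human_spot_check"  # on-the-loop: sampled oversight
    return "straight_through"      # automated end to end

print(route(Case(12_000.0, 0.95, False, False)))  # -> straight_through
```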

Connect underwriting and claims workflows end-to-end

Siloed workflows are increasingly untenable as customer expectations rise and loss events grow more complex and costly. End-to-end visibility from first notice of loss through settlement, or from submission through bind, enables AI to coordinate decisions across the full lifecycle, rather than optimizing individual steps in isolation. This coordination reduces cycle times, improves broker/agent/customer experience, and strengthens risk selection and pricing accuracy. It also provides the transparency needed to support governance, oversight, and continuous improvement. AI delivers its greatest operational value when it serves as a connective layer across workflows, aligning data, decisions, and actions inside of a process.

Modernize legacy integrations iteratively

Best-in-class agents and tools cannot operate in a silo and must take into consideration the complex legacy systems that remain a reality for most insurers. Because large-scale replacements often span multiple years, waiting for perfect conditions before deploying AI is rarely viable; yet fragmented pilots that never scale introduce their own risks. Insurers that maximize their AI investments at scale focus on incremental modernizations that deliver early operational value while progressively addressing data and system complexity. This approach avoids the trap of pilots that prove concepts yet fail to translate into production impact with quantifiable benefit. By modernizing iteratively, insurers can improve workflows, connect disparate systems, and strengthen data foundations without discarding prior investments.

Conclusion

As AI becomes embedded in core insurance operations, the conversation is shifting from capability to durability. Most insurers now understand what AI can do. The more consequential question is whether it can be integrated into underwriting, claims, and compliance in ways that improve performance without eroding trust or operational integrity. As such, sustaining AI at scale is a matter of organization-wide discipline. It requires aligning automation with real insurance cycles, protecting scarce expert judgment, and ensuring transparency as non-deterministic, agentic-driven decisions expand. Insurers that approach AI through this lens position themselves not just to automate faster, but to operate smarter, more resiliently, and with greater confidence in the outcomes their systems produce.


Jake Sloan

Jake Sloan is vice president, global insurance, at Appian.

He has held senior operations roles with Farmers Insurance, including front-line insurance/licensed field operations, and served as CIO of Aon National Flood Services. 

Sloan volunteers as a mentor to the Global Insurance Accelerator, holds an MBA from Baker University and is a graduate of the Advanced Management Program (AMP) of Harvard Business School.

2026 Commercial Market Outlook

Prepare for Renewals and Manage Costs in a Changing Market


After years of disruption, the commercial insurance market is showing signs of moderation—but risks remain. Catastrophe losses, social inflation, and regulatory scrutiny continue to challenge organizations.

Zywave’s 2026 Outlook breaks down what insurance professionals and business leaders need to know to prepare for renewals, manage costs, and position programs for success.

Key Takeaways for 2026
  • Property Insurance: After years of a hard market, property insurance is stabilizing thanks to improved capacity and reinsurance strength. However, catastrophe losses, valuation scrutiny, and climate risks continue to challenge underwriting. Parametric solutions and resilience measures are gaining traction—organizations with accurate valuations and proactive risk controls will benefit most.
  • Casualty Insurance: Litigation trends and social inflation keep pressure on casualty lines, especially commercial auto and umbrella liability. Nuclear verdicts and expanded litigation funding drive severity, while technologies like telematics and AI safety tools are becoming key differentiators for favorable outcomes.
  • Professional & Executive Liability: Competition is improving, but emerging risks tied to AI adoption and regulatory scrutiny are reshaping underwriting. Cyber events increasingly overlap with management liability, making strong governance and compliance essential for broader coverage and stable pricing.
Access the Full Outlook Today

Get expert insights into market forces and strategies for success. Download the full 77-page report now.


Sponsored by ITL Partner: Zywave


ITL Partner: Zywave

Zywave delivers AI-powered growth engines for the insurance industry, enabling carriers, MGAs, agencies, and brokers to grow profitably, strengthen risk assessment, enhance client relationships, and streamline operations. Its intelligent, AI-driven platform acts as a performance multiplier for more than 160,000 insurance professionals worldwide, across all major segments. By combining automation, data insights, and best practices, Zywave helps organizations stay competitive and efficient in today’s fast-changing risk environment—empowering them to adapt quickly, scale effectively, and achieve sustainable growth.

For more information, visit zywave.com.

Additional Resources

Zywave recognized as a Leader in The Forrester Wave™: Insurance Agency Management Systems, Q4 2025 


Insurance Struggles With Digital Friction

Clunky insurance experiences are now a competitive liability, driving customer churn and employee turnover in equal measure.


As usability expectations have been pushed to the max and user experience has become increasingly commoditized, the clunky and confusing experiences that still dominate insurance—both internally and in customer-facing products—have become more noticeable and less acceptable.

Customers feel it, and employees do, too.

Recent research by Insurify shows that one in four younger customers has switched insurance carriers due to frustrating digital interactions. At the same time, seven in 10 young employees say they would consider changing jobs for better workplace technology, according to a study published last year by Adobe.

These trends point to something many insurance organizations are already experiencing firsthand: poor digital experiences are no longer just a usability issue. They affect retention, operational efficiency, and ultimately competitive advantage. In other words, they are a liability.

And yet, despite years of investment in digital transformation, friction still defines many insurance interactions. Why does the industry continue to struggle here?

Why Insurance Experiences Still Lag Behind

Part of the explanation lies in how insurance experiences were created in the first place.

In many cases, what organizations call "digital products" are not truly designed experiences in the way, say, apps like Uber or Slack might be. They are more like digitized manual processes.

Over the past few decades, insurers have gradually translated manual workflows into software—policy administration, quoting, underwriting, claims management—often without fundamentally rethinking how those processes should work in a digital environment. New platforms, integrations, and features have been layered onto existing infrastructure over time, producing systems that reflect the history of the business rather than the needs of modern users.

Insurance also operates within an unusually complex ecosystem. Digital tools frequently need to support multiple audiences simultaneously: customers, agents, brokers, underwriters, customer service representatives, employers, benefits administrators, and third-party partners. Each group interacts with the same underlying systems but with different goals, responsibilities, and expectations.

When digital experiences are built within this environment without a clear design strategy, complexity has a tendency to surface directly in the interface. What should feel like a coherent system instead begins to resemble a collection of disconnected workflows and tools.

In practice, this friction tends to appear in a few clear ways.

Three Patterns of Friction

Across the insurance ecosystem—from consumer apps to broker portals to internal platforms—we frequently see friction emerge in three distinct forms.

These patterns are not necessarily the result of poor decisions or weak design teams. More often, they reflect structural realities within the industry itself. Understanding them doesn't fix anything by itself, but it does explain why so many digital experiences in insurance feel more difficult to use than they should—and it can point toward design solutions that resolve much of this friction.

Role friction

Insurance systems often serve a wide range of users at once. Customers may use the same platform that agents rely on for quoting or that underwriters use to evaluate submissions. In benefits ecosystems, carriers, employers, brokers, and employees may all interact with overlapping systems.

When experiences fail to account for these differences, it becomes difficult for people to understand what they are responsible for or what actions they are permitted to take. Workflows slow down, ownership becomes ambiguous, and teams begin to rely on manual coordination outside the system—emails, calls, spreadsheets—to move work forward.

Offering friction

A second type of friction emerges when products that are conceptually connected are delivered through disconnected experiences.

Insurance offerings often span multiple policies, services, or programs. A household may purchase auto, renters, and umbrella coverage from the same carrier. A broker may assemble a coverage package across several products. Employees may navigate benefits that combine insurance coverage with wellness programs or leave management services.

Although these offerings are experienced as part of a single relationship, they are frequently delivered through separate systems and workflows. From the user's perspective, what should feel like one cohesive service instead becomes a series of disconnected touchpoints.

Mission friction

A third type of friction arises when organizations themselves are not aligned on what a digital product is meant to accomplish. This is more common than you may think.

Insurance portals and applications often accumulate features over time as different teams add capabilities to support their own goals—sales, servicing, compliance, reporting, relationship management. Without a clear shared vision and objective guiding the experience, these additions can gradually pull the product in competing directions.

For the people using these systems, the result is an experience that feels incoherent. Users may struggle to determine where to begin, which workflows are most relevant, or what the platform is ultimately designed to help them do.

Designing Through Complexity

The complexity that produces these forms of friction is not unique to any single insurer. It is a product of the industry itself. Insurance ecosystems involve multiple stakeholders, layered products, regulatory constraints, and long-standing organizational structures.

Because of this, the goal should not necessarily be to eliminate friction altogether. In many cases, some friction is necessary. Verification steps, disclosures, and safeguards often exist to protect customers and ensure that risk decisions are made responsibly.

The challenge is distinguishing between the kinds of friction that add value to users and the kinds that simply make systems harder to use.

Human-centered design plays an important role here because it shifts the starting point for digital experiences. Rather than organizing systems around internal structures or historical processes, it begins with the people who rely on those systems every day and the tasks they are trying to accomplish.

When digital products are designed with that perspective in mind, complexity does not disappear—but it can be absorbed and structured in ways that make the experience feel far more usable.

Looking More Closely at Friction in Insurance

In a recent report, we at Cake & Arrow took a deeper look at how these patterns of friction show up across B2C, B2B, and B2B2C insurance experiences. The report explores why these dynamics persist, how they shape day-to-day interactions with insurance systems, and how design teams can begin addressing them in practical ways.

The industry will likely never achieve completely frictionless experiences—and there are good reasons for this. But understanding where friction comes from is critical to designing systems that work better for everyone who relies on them.

For a deeper exploration of these ideas and practical design solutions for reducing friction in digital insurance experiences, download our full report, Tackling Friction in Insurance Through Design.


Emily Smith Cardineau

Emily Smith Cardineau is the Director of Content & Insights at Cake & Arrow, a customer experience agency providing end-to-end digital products and services that help insurance companies redefine customer experience.

An Urgent Need for Post-Quantum Cryptography

Organizations delaying the shift to post-quantum cryptography face major risks, as quantum computers may soon break classical encryption schemes.


While researching the Titanic recently, I was struck by something profound: the ship received numerous warnings that, had they been heeded, could have prevented the catastrophic disaster of 1912. More than a century later, organizations continue making the same mistake, ignoring blatant warnings of impending disaster.

Today's iceberg? The quantum computing revolution that threatens to render our current cryptography obsolete.

The Warning Signs Are Already Here

Any entity using digital networks to store sensitive data needs to move away from classical cryptography toward post-quantum cryptography (PQC) standards. Organizations that fail to course-correct risk drifting into danger, clinging to the same classical cryptography instead of implementing the new quantum-resistant algorithms that are already available.

This lack of proactive course correction, or what I call "cryptographic drift," creates what is now referred to as cryptographic debt – a burden that builds until it may be too late to avoid disaster. Bear in mind, too, that adversaries are harvesting your data throughout that drift; every delay in implementing PQC algorithms eases their burden of decrypting the data once a cryptographically relevant quantum computer (CRQC) becomes operationally available. The Titanic didn't sink simply because it drifted off course; it sank because it maintained high speed into a known ice field despite numerous warnings that never reached the captain. Everyone was too busy to act.

Sound familiar?

Understanding the Quantum Threat

Quantum computers harness quantum mechanical phenomena, including superposition and entanglement, to process information in fundamentally different ways from classical systems. While classical computers encode data as binary bits (0s and 1s), quantum computers use quantum bits (qubits) that can occupy multiple states at once, potentially delivering exponential speedups for specific problem classes.

Quantum computers using gate-based operations (analogous to classical AND/OR gates) have been built with dozens of qubits, though their quality remains inconsistent. Fully error-corrected systems, with logical qubits capable of performing substantially more operations, likely won't arrive until around 2030. Organizational management needs to understand what lies ahead in the cryptographic space of quantum computing; advance planning is essential to implement quantum-resistant algorithms before a CRQC arrives on the scene.

The primary organizational risk from quantum computing is that a CRQC could break widely used classical encryption schemes. This threat has prompted formal government action, including OMB Memorandum M-23-02 (Migrating to Post-Quantum Cryptography) and National Security Memorandum 10 (NSM-10, Promoting United States Leadership in Quantum Computing While Mitigating Risk to Vulnerable Cryptographic Systems), which direct federal agencies to take steps toward PQC migration. The Department of Defense has issued additional guidance outlining implementation requirements and constraints for PQC adoption across government systems.

Private sector organizations, particularly those working with or seeking to work with government entities, should closely monitor these directives, as compliance will likely become essential for maintaining those relationships.

Planning safeguards your organization against the threat of a CRQC rendering current public-key encryption such as RSA (Rivest, Shamir, and Adleman) and Elliptic Curve Cryptography (ECC) obsolete. It may also mitigate "harvest now, decrypt later" (HNDL) attacks – a continuing threat where adversaries intercept and store encrypted data today, intending to decrypt it once error-correcting quantum computers become capable of breaking today's cryptographic protections.

Recent academic and industry publications have pulled the expected arrival of operational CRQCs forward to 2030 or earlier, sharply increasing risk in three critical areas:

  • Business operations disruption
  • Data exposure and breaches
  • Cost of emergency transition

The most forward-thinking organizations are already shifting their encryption ahead of 2030 and can expect only moderate impacts in these areas.

Organizations experiencing cryptographic drift will continue operating normally, creating a dangerous illusion of security while adversaries quietly harvest their encrypted data for future exploitation. A crypto-agile approach, by contrast, maintains operational continuity while moving to quantum-resistant algorithms that protect data in transit.

Cryptographic debt accumulates over time and can become overwhelming, or even irreversible, as organizations scale, eventually costing them operational functionality and relevance as government mandates and guidance take hold. Wholesale replacement of IT infrastructure is neither practical nor cost-effective for achieving quantum resistance. Instead, implementing crypto-agility enables seamless migration from obsolete encryption to quantum-resistant standards, positioning organizations for future competitiveness through reduced costs, accelerated transition timelines, minimized risk of data compromise, and uninterrupted operations.
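To make crypto-agility concrete, here is a minimal sketch in Python of the underlying design idea. Everything in it is hypothetical (the registry, the stubbed factories, the configuration names); the point is the indirection: application code asks for whichever key-encapsulation mechanism the deployment configuration names, so moving from a classical algorithm to ML-KEM becomes a configuration change rather than a rewrite.

    # Hypothetical sketch of crypto-agility, not a production implementation:
    # callers never hard-code an algorithm; they ask the registry for whatever
    # the deployment configuration names. Real entries would wrap vetted
    # libraries (e.g., liboqs bindings or a FIPS-validated module).
    from typing import Callable, Dict, Tuple

    # Each factory returns (generate_keypair, encapsulate, decapsulate)
    # callables; the bodies are stubbed to keep the sketch self-contained.
    KemOps = Tuple[Callable, Callable, Callable]

    def _classical_rsa_kem() -> KemOps:
        raise NotImplementedError("would wrap an RSA-OAEP implementation")

    def _ml_kem_768() -> KemOps:
        raise NotImplementedError("would wrap FIPS 203 ML-KEM-768")

    KEM_REGISTRY: Dict[str, Callable[[], KemOps]] = {
        "rsa-oaep": _classical_rsa_kem,  # legacy, quantum-vulnerable
        "ML-KEM-768": _ml_kem_768,       # post-quantum replacement
    }

    def kem_from_config(algorithm_name: str) -> KemOps:
        """Return the KEM named in deployment config; fail fast on unknowns."""
        try:
            return KEM_REGISTRY[algorithm_name]()
        except KeyError:
            raise ValueError(f"no KEM registered for {algorithm_name!r}")

With that indirection in place, retiring an obsolete algorithm means removing one registry entry and updating one configuration value, which is precisely the agility the migration guidance calls for.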

The Time to Act Is Now

My advice is simple: start changing course now.

The quantum-resistant PQC algorithm standards have been released by the National Institute of Standards and Technology (NIST):

  • FIPS 203 (ML-KEM) - key encapsulation
  • FIPS 204 (ML-DSA) - digital signatures
  • FIPS 205 (SLH-DSA) - stateless hash-based signatures

These standards form the foundation of the post-quantum cryptography migration mandated by government directives like OMB M-23-02 and NSM-10.
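To give a feel for what the new primitives look like in practice, here is a hedged sketch of ML-KEM (FIPS 203) key encapsulation. It assumes the open-source liboqs Python bindings (the oqs package) and a library build recent enough to expose the ML-KEM identifiers; neither is prescribed by the standards themselves, and any NIST-aligned library will do.

    # Sketch of FIPS 203 (ML-KEM) key encapsulation via the open-source
    # liboqs Python bindings (pip install liboqs-python). Assumes a liboqs
    # build that exposes the "ML-KEM-768" algorithm identifier.
    import oqs

    ALG = "ML-KEM-768"

    # The receiver generates a keypair and publishes the public key.
    with oqs.KeyEncapsulation(ALG) as receiver:
        public_key = receiver.generate_keypair()

        # The sender encapsulates against that public key, producing a
        # ciphertext to transmit plus its own copy of the shared secret.
        with oqs.KeyEncapsulation(ALG) as sender:
            ciphertext, secret_sender = sender.encap_secret(public_key)

        # The receiver decapsulates the ciphertext to recover the secret.
        secret_receiver = receiver.decap_secret(ciphertext)

    assert secret_sender == secret_receiver  # both sides now share a key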

Start by inventorying your assets to understand what encryption is currently in use across the enterprise. Prioritize migrating the most heavily used, high-value, or high-impact assets to the standardized quantum-resistant algorithms, as those assets most likely carry the bulk of your sensitive data. For now, the HNDL threat applies chiefly to data in transit, not to data in use or data at rest.
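A first pass at that inventory can be as simple as recording what each endpoint actually negotiates today. Here is a minimal sketch using only Python's standard library; the hostnames are placeholders for your own asset list.

    # Minimal inventory sketch: record the TLS version and cipher suite that
    # each endpoint negotiates today, using only Python's standard library.
    import socket
    import ssl

    def probe_tls(host: str, port: int = 443) -> tuple[str, str]:
        """Return the (protocol version, cipher suite) negotiated with host."""
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                version = tls.version() or "unknown"  # e.g., "TLSv1.3"
                cipher_suite = tls.cipher()[0]        # negotiated suite name
                return version, cipher_suite

    # Placeholder hostnames; substitute the endpoints from your inventory.
    for endpoint in ("portal.example.com", "api.example.com"):
        try:
            print(endpoint, *probe_tls(endpoint))
        except OSError as err:
            print(endpoint, "unreachable:", err)

Endpoints still negotiating TLS 1.2, or cipher suites with RSA key exchange, are natural candidates for early attention.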

Additionally, migrating from TLS 1.2 to TLS 1.3 helps counter a CRQC, because PQC algorithms integrate far more naturally into the TLS 1.3 handshake. This step is available now!
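Enforcing that floor in application code is a one-line change in many stacks; a sketch with Python's standard library:

    # Require TLS 1.3 as the minimum protocol version for outbound
    # connections (Python 3.7+). Servers and load balancers need the
    # equivalent setting in their own configuration.
    import ssl

    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below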

Reactive Planning

Migrating only after it's too late – after your cryptography has been rendered void by an error-corrected, fault-tolerant quantum computer – dramatically increases the risk of your organization ending up like the Titanic.

Side Note

It took 73 years to find the wreckage, and to date, the Titanic has never been fully recovered from the ocean floor. Let's try not to have that happen to your organization.

The warnings are here. The danger is real. The timeline is shorter than you think. There are mitigations out there now that can be implemented within your organization.

Don't be too busy to change course. Pay attention to the warnings.


Garfield Jones

Dr. Garfield Jones is senior vice president of research and technology for QuSecure. 

Dr. Jones previously served as the associate chief of strategic technology for the Cybersecurity and Infrastructure Security Agency (CISA), DHS, where he led the agency's post-quantum cryptography (PQC) initiative. Before joining DHS, Dr. Jones worked as a systems engineer developing complex weapons, geographic, and information systems for agencies such as the Office of Naval Intelligence (ONI), the National Geospatial-Intelligence Agency (NGA), and the Naval Criminal Investigative Service (NCIS).

In 2018, he retired from the Army Reserve after serving 25 years (16 years on active duty and nine as a reservist) as an information systems warrant officer.