
Lemonade Throws Down the Gauntlet

The 10-year-old insurtech carrier claims it has an insurmountable lead in AI — an overly bold assertion, but one that deserves a hard look. 


For a 10-year-old carrier that still has a combined ratio far above 100, Lemonade has never been reluctant about dissing its established competitors or about patting itself on the back. In that vein, CEO Daniel Schreiber recently published a manifesto titled, "Why Incumbents Won't Catch Up." 

The cheeky claim is that Lemonade was founded as an AI-native and thus has a 10-year head start on State Farm, Allstate, Progressive, GEICO, et al. Schreiber says the incumbents are "optimized for yesterday," while Lemonade is "designed for the world as it’s becoming." He argues that Lemonade's advantage will keep growing. 

Schreiber's argument doesn't make me want to rush out and buy stock in Lemonade, which, after some years in the wilderness, has recently surged and now carries a hefty $5.1 billion market valuation. But I don't dismiss his argument, either. He's certainly right that early movers like Lemonade have an advantage that incumbents need to reckon with. He also poses three measures for AI adoption that all insurance companies should test themselves on.

Let's have a look. 

Schreiber writes that "companies who slap technology on top of their legacy businesses are not changing their DNA: their incentives, capital allocation logic, talent mix, data architecture, distribution dependencies, brand promise, investor expectations, and legacy stacks. Those systems and processes co-evolved over many decades. They cannot be reengineered piecemeal; and untangling them is laborious and risky."

He says Lemonade began as an AI-native: 

"The result is a different cost structure. A faster clock speed. A compounding feedback loop that continuously improves underwriting, customer experience, and efficiency.

"The question, then, is not whether incumbents can “use AI.” Of course they can. And they should. The question is whether they can re-architect themselves to close the gap to Lemonade. 

"That seems unlikely."

To buttress his argument, he suggests three tests for whether an insurer is adopting AI at its core. All three, of course, show Lemonade outpacing incumbents. 

The first is what Schreiber calls The Scaling Quotient. You look at how fast you're growing, by whatever measure you use. You then divide that growth rate by the rate at which your headcount is increasing. If you're growing, say, your policies in force far faster than you're adding people, you're winning. If not, not. 

Second is Loss Adjustment Expense Ratio. You take your loss adjustment expenses and divide by your gross earned premium. If you're spending a lower percentage than the industry average, and the percentage is declining, you're winning. If not, not. 

Third is what Schreiber calls Structural Precision. This involves two calculations of gross profit. First is gross profit divided by your exposure — you want as high a profit as you can get based on the risk you're taking on. Second is gross profit divided by your sales and marketing expenses — you want to acquire customers as efficiently as possible. You add the two calculations, then compare yourself to the industry over time. 

Those all strike me as fair enough measures of efficiency for any carrier, and AI is certainly the main driver of efficiency gains these days. I think his approach can be extended to other players in the insurance industry, not just carriers. Agencies, for instance, can measure whether AI is making them more efficient in winning clients, in processing renewals and so on. 

If you take Schreiber's piece as a wake-up call for incumbents, I can get behind that, too. They can't just be tacking on bits of AI to become slightly more efficient, and they can't just wait and see. The carriers developed their cultures over decades, and changing them will take many years. People don't change overnight even if the technology does. Incumbents have to be thinking big — NOW — and experimenting with ways to allow for radical change. That may even mean new service-based business models, such as Predict & Prevent, or very different distribution channels, such as through embedded insurance. 

Schreiber can certainly point to lots of industries where upstarts with a head start and momentum overcame incumbent behemoths — look at Kodak, Blockbuster, Nokia and Blackberry, city taxi monopolies and Sears (as well as every other company in Amazon's path).

Now to quibble.

For one thing, Schreiber is focusing almost entirely on overhead, which accounts for maybe 20% of every premium dollar, while claims in P&C account for north of 60%. You can be as efficient as you want in processing claims, but if you're taking on bad risks you're still going to lose — and even after years in the business, Lemonade's combined ratio in the fourth quarter was 139.

In addition, as Simon Torrance writes in this thorough analysis, the sort of AI that will really matter in the long run is AI agents, and the competition is just beginning in that phase. He says:

"The genuine compounding asset — the one that cannot be replicated by purchasing the same technology at a later date — is not automated claims processing. It is what happens [when] deliberative agentic teams capture structured reasoning with every decision, build institutional memory that compounds across thousands of cases, and encode expert judgment that persists independently of the individuals who generated it. This is Intelligence Capital. The question Lemonade's investors should be asking is whether their architecture has built this — or whether it has built a more efficient version of what every insurer will have by 2027."

Lemonade might also want to be careful about lecturing incumbents just yet, given that it is still small and has so many ways it could slip up as it expands into new lines of business and new geographies. (Here is a good analysis of its opportunities and challenges.)

But I suppose being cheeky is in the company's DNA at least as much as AI is. 

I hope the rest of us take the Lemonade manifesto for what it's worth — and devise real metrics that accurately measure our progress with AI (or lack thereof), think boldly about where AI agents can change everything about our businesses and start reshaping our cultures for, as Schreiber put it, "the world as it's becoming."

Cheers,

Paul

 

Colorectal Cancer Challenges Life Insurers

A 30% rise in colorectal cancer among adults under 50 is forcing life insurers to rethink age-based underwriting models.


Colorectal cancer has long been viewed as a condition primarily affecting older adults, but that assumption is rapidly becoming outdated. Over the past two decades, a marked increase in colorectal cancer diagnoses among people under 50 years old has emerged as one of the most concerning epidemiologic shifts confronting both the medical community and the insurance industry. For life insurers, this rise in early-onset colorectal cancer (EOCRC) brings far-reaching implications, from underwriting and pricing to product development and wellness strategy.

A rising trend with industry-level consequences

Early-onset colorectal cancer, defined as diagnosis before age 50, has grown steadily, with incidence climbing by roughly 30% in the last two decades. Although overall case counts remain lower than in older populations, the rate of increase underscores an unsettling trajectory.

Studies now show an approximate 2% annual rise in diagnoses for adults aged 20-50.

For insurers, this change disrupts longstanding mortality expectations built on age-driven risk curves. Younger applicants have traditionally been priced favorably due to low expected cancer incidence. But the rapid emergence of EOCRC means traditional age-based risk assumptions no longer fully capture early life cancer risk. Compounding this challenge, younger patients often present with more advanced disease. Symptoms — such as abdominal discomfort, rectal bleeding, or shifting digestive patterns — frequently mimic benign conditions, delaying diagnosis and worsening outcomes.

As a result, underwriting models built around the idea that cancer risk accelerates mainly after age 50 must be reassessed.

Understanding the drivers: Lifestyle, genetics, and environmental factors

The rise in EOCRC stems from a complex interplay of behavioral, genetic, and environmental forces. Lifestyle shifts — including diets high in processed meats and low in fiber, reduced consumption of fruits and vegetables, and increased sedentary behavior — appear to play substantial roles. The parallel rise in obesity adds another layer of risk, amplifying inflammatory and hormonal pathways associated with colorectal tumor development.

Genetic risk, while present in a smaller segment of the population, carries significant consequences. Inherited conditions, such as Lynch syndrome or familial adenomatous polyposis, sharply elevate lifetime risk. Mutations in genes including NTHL1, POLE, POLD1, and RNF43 also contribute to susceptibility, and a family history of colorectal or endometrial cancer is a consistent red flag.

Environmental and medical exposures may also be contributors. Frequent antibiotic use can disrupt the gut microbiome, potentially altering protective bacterial profiles. Long-term inflammatory disorders, such as inflammatory bowel disease, create chronic tissue stress that elevates cancer likelihood.

For insurers, recognizing how these variables interact is essential. Incorporating lifestyle, familial, and clinical risk indicators into modern underwriting frameworks helps ensure high-risk younger applicants are identified earlier and more accurately than age-based approaches alone allow.

Screening guidelines shift — and insurers must follow

One of the clearest responses to rising EOCRC has come in the form of revised screening guidelines. The U.S. Preventive Services Task Force and the American Cancer Society now both advise routine colorectal cancer screening beginning at age 45 for average-risk adults — a notable reduction from the longstanding threshold of age 50. In certain high-risk populations, earlier screening may be warranted. Some European health networks are already exploring screening initiation at age 40.

As screening recommendations evolve, early detection will likely improve, which is particularly crucial for younger adults who tend to present later in the disease process. This shift presents an opportunity for insurers to align underwriting expectations with modern preventive care standards and encourage applicants to stay current with screenings.

Advances in screening and diagnostic technology

Beyond guideline changes, screening technologies are rapidly advancing. While colonoscopy remains the most definitive method, emerging modalities are increasingly accessible and appealing to younger adults who may be reluctant to undergo invasive procedures.

Noninvasive stool-based tests, such as fecal immunochemical tests (FIT) and multitarget stool DNA tests (mt-sDNA), offer convenient at-home screening with promising detection capabilities. Their ease of repeat at-home use tends to boost adherence — an important advantage for younger populations.

CT colonography, or virtual colonoscopy, offers a radiologic alternative, while capsule endoscopy provides a swallowable camera platform with future potential for broader colorectal screening use.

Perhaps most transformative is the rise of blood-based biomarker testing, including liquid biopsies that detect circulating tumor DNA or methylated DNA fragments. Machine-learning-enhanced platforms now combine methylation signatures with DNA fragment analysis to pick up cancer indicators at minimal concentrations. Meanwhile, germline multigene panel testing is uncovering meaningful hereditary risks in approximately 14% of colorectal cancer patients, prompting recommendations for universal genetic testing in EOCRC cases.

For insurers, keeping pace with the strengths, limitations, and cost profiles of each screening approach can inform more accurate underwriting guidelines and create opportunities to promote early detection among policyholders.

Underwriting implications: Rethinking risk in younger applicants

The shifts in incidence and screening warrant a reevaluation of underwriting practices. Traditional risk assessments centered heavily on age must now incorporate:

  • More sophisticated risk stratification, combining family history, lifestyle indicators, and screening adherence.
  • Adjusted premium models that account for elevated risk in younger demographics while rewarding proactive health behaviors.
  • Integration of new data sources, such as medical records, wearables, and — in jurisdictions that allow it — genetic testing results to capture emerging risk more precisely.

However, insurers must also guard against anti-selection, as applicants aware of personal risk may seek coverage before formal diagnosis or symptoms emerge. Balancing comprehensive risk assessment with regulatory and ethical constraints will be crucial.

Product innovation: A strategic opportunity

While EOCRC presents clear challenges, it also invites innovation. Insurers can differentiate themselves by designing products that integrate early detection, lifestyle engagement, and preventive health participation. Potential avenues include:

  • Policy discounts or riders tied to completion of recommended screenings
  • Wellness incentives for maintaining healthy diet and exercise habits
  • Educational programs that inform younger customers about cancer warning signs and the value of screenings

Such initiatives not only enhance customer loyalty but also reduce long-term claims exposure by facilitating earlier diagnosis and intervention.

Challenges ahead

Implementing EOCRC-aligned underwriting and product strategies is not without obstacles. Privacy concerns must be properly managed as the use of genetic or personal health data increases. Evolving screening technology may outpace underwriting updates, creating a lag between best medical practice and insurance assessment. Operationally, insurers must invest in training, systems modernization, and compliance oversight to ensure new processes are implemented safely and efficiently.

Conclusion

Early-onset colorectal cancer represents a fast-emerging risk that the life insurance industry can no longer overlook. By aligning underwriting models with modern epidemiology, embracing new screening technologies, and developing products that encourage proactive health behaviors, insurers can both mitigate risk and empower policyholders. Those who adapt early will not only strengthen market competitiveness but also play a meaningful role in improving health outcomes for a generation facing rising cancer risk far sooner than expected.


Russell Hide


Dr. Russell Hide is a medical advisor with RGA.

He specializes in underwriting and claims assessment support for South Africa and the EMEA region. He has more than 25 years of experience in the insurance and reinsurance sectors, as well as a clinical background in general practice. 

He holds an MBBCh degree from the University of the Witwatersrand.

Coder Cannibalism

Developers who automated other industries now face AI displacement themselves, as technical certifications prove less valuable than human judgment and accountability.


Most of my friends are coders—and, disclosure, I used to be one. Smart people. Good people. People who spent years mastering arcane syntax, memorizing AWS service catalogs, stacking certifications like frequent flyer miles, and genuinely believing—with some justification—that they were the high priests of the modern economy.

They automated the travel agents. The paralegals. The loan officers, the radiologists, the customer service reps, even the truckers—at least in theory. And they did all of it with a clear conscience because, hey, that's capitalism, baby. Creative destruction. If we can do it better, faster, cheaper, then by the immutable laws of the market, we should.

They were not wrong. And they were not unkind people. They just never believed, not really, not in their gut, that the logic had a return address.

It does.

Amazon just laid off a cohort of developers whose primary offense was building something that worked. The system they constructed—on AI, with AI, as a monument to AI—became, upon completion, the argument for their own termination. The product was the pink slip. You couldn't script a better parable. These weren't junior button-pushers. Some of them held AWS Solutions Architect certifications. Professional level. The kind of credential that used to mean something in a job interview, that used to justify a salary band, that used to make a hiring manager feel confident they were buying proven expertise.

What they were actually buying, it turns out, was structured knowledge retrieval. Which is a very polite way of saying: a human being who had memorized a lot of things and learned to pattern-match against them quickly. And if there is one thing—one single thing—that large language models do better than humans, it is exactly that. The machine doesn't need a certification. It doesn't need a salary. It doesn't get defensive when you change the requirements at 11 p.m.

So here we are. The hue and cry from the coding community is structurally identical to every argument that was dismissed when the travel agents and the paralegals and the loan officers were in the crosshairs. This is different. This requires real skill. You don't understand the complexity. 

Brother, Sister, those whose jobs you automated said the same thing. You just didn't listen because you were the one holding the compiler.

The real question—the one worth asking these days—is, what skills actually don't have a shelf life problem? Some of them seem obvious in retrospect, and most of them aren't technical.

Regulatory judgment under uncertainty is one. Not knowing what a rule says—AI can read the Federal Register faster than any human—but knowing what it means when a specific auditor in a specific regional office has been interpreting it a certain way for three years. That's pattern recognition built from exposure and consequence, not training data. A friend of mine who works in healthcare private equity says the top three risks related to any deal are regulatory in nature—gray area, subjective.

Organizational power mapping is another. Every failed technology implementation in history failed for the same reason: someone built the right thing for the wrong power structure. The CMO thinks she controls the data. The CFO controls the budget. The VP of operations controls the workflow. The IT director controls the timeline through "security review." No AI maps this. No certification covers it. This is human intelligence in the original meaning of the phrase.

Cross-domain translation may be the rarest and most durable skill of all. The ability to stand in a room and make a CMS actuary, an Epic build team, and a 55-year-old case manager all feel heard, and then synthesize what they need into something that actually ships—that's not a technical skill. It never was. We just told ourselves it was adjacent to technical skill so the coders could claim it.

And finally, accountability. The willingness to put your name on a recommendation and mean it. AI is a brilliant, tireless, unaccountable collaborator. In regulated industries—healthcare, insurance, finance, law—where the downside of being wrong is measured in dollars with a lot of zeroes or people with actual problems, someone has to own the outcome. That someone is still a human being with a name and a reputation and something to lose.

The coders who survive this aren't the ones who fight the AI. They're the ones who understand that the job was never really about the code. It's about the judgment surrounding the code. Which explains why Stanford CS grads can't find jobs—while McKinsey is hiring liberal arts majors again. Coders just got away with charging for the code because nobody had built the machine yet.

Now somebody has.


Tom Bobrowski


Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.   

5 Operational Shifts for Scaling Insurance AI

Insurance AI is shifting from the wow factor of innovation to the how factor of sustaining automation at scale.


AI is moving well beyond experimentation and into everyday insurance operations. As this happens, the wow factor of introducing new forms of automation to insurance use cases is giving way to the how factor of sustaining these innovations at scale. Once AI influences underwriting decisions and claims outcomes in a heavily regulated environment, success depends far less on the sophistication of models and far more on the operational systems that support them.

Earlier phases of AI adoption proved that insurers can deploy advanced models. The priority now is to embed those models into the deeply regulated, process-driven realities of underwriting, claims, and distribution without creating new friction or risk. All this must happen while taking into account what may be an outdated back office tech stack, and with a level of integration that doesn't create the next issue on the horizon: agent sprawl. Here are five operational trends that are emerging as the differentiators between AI programs that compound value over time and those that stall under complexity:

Treat document intelligence as foundational infrastructure, not a point solution

Document intelligence is a prime focus for AI modernization, yet many organizations still approach it as a tactical automation limited to intake. At scale, this narrow view leaves significant value unrealized. Documents and work items remain central to underwriting, claims adjudication, and compliance. Manual handling introduces delay, inconsistency, and risk at every handoff. As AI adoption matures, document intelligence and rigorous contextualization functions should exist as shared operational infrastructure embedded directly into workflows, rather than bolted on at the edges. This shift reduces cycle times, improves data quality, and strengthens auditability; and it further informs future agentic capabilities stemming from those same work items. That's why insurers that move fastest stop treating document intelligence as an isolated capability and start treating it as a prerequisite for operational scale.

Make AI governance an enterprise operating model

As AI becomes embedded in decision-making, the ability to maintain explainability, accountability, and auditability of AI systems must be designed into processes from the outset, not retrofitted after systems are already in production. At scale, this allows insurers to deploy AI confidently across regions, lines of business, and regulatory regimes without fragmenting their operating model. This enterprise-wide discipline of clear ownership, transparent decision logic, and consistent oversight of machine processes helps position AI governance as a C-suite priority that strengthens risk posture, customer trust, and long-term resilience.

Keep humans in the loop strategically

When human involvement is applied too broadly, productivity gains erode and trust in automation declines. Human-in-the-loop AI is most effective when experienced underwriters or claims professionals are only pulled into cases where their judgment, oversight, and exception handling add the most value in assessing complex risks, edge cases, and decisions with material financial or regulatory impact. Emerging governance models increasingly reinforce this principle. For instance, Singapore's IMDA Model AI Governance Framework on agentic systems describes a spectrum of oversight that includes human-in-the-loop, on-the-loop, and over-the-loop to help selectively scale automation while preserving accountability and control.

Connect underwriting and claims workflows end-to-end

Siloed workflows are increasingly untenable as customer expectations rise and loss events grow more complex and costly. End-to-end visibility from first notice of loss through settlement, or from submission through bind, enables AI to coordinate decisions across the full lifecycle, rather than optimizing individual steps in isolation. This coordination reduces cycle times, improves broker/agent/customer experience, and strengthens risk selection and pricing accuracy. It also provides the transparency needed to support governance, oversight, and continuous improvement. AI delivers its greatest operational value when it serves as a connective layer across workflows, aligning data, decisions, and actions inside of a process.

Modernize legacy integrations iteratively

Best-in-class agents and tools cannot operate in a silo and must take into consideration the complex legacy systems that remain a reality for most insurers. Because large-scale replacements often span multiple years, waiting for perfect conditions before deploying AI is rarely viable; yet fragmented pilots that never scale introduce their own risks. Insurers that maximize their AI investments at scale focus on incremental modernizations that deliver early operational value while progressively addressing data and system complexity. This approach avoids the trap of pilots that prove concepts yet fail to translate into production impact with quantifiable benefit. By modernizing iteratively, insurers can improve workflows, connect disparate systems, and strengthen data foundations without discarding prior investments.

Conclusion

As AI becomes embedded in core insurance operations, the conversation is shifting from capability to durability. Most insurers now understand what AI can do. The more consequential question is whether it can be integrated into underwriting, claims, and compliance in ways that improve performance without eroding trust or operational integrity. As such, sustaining AI at scale is a matter of organization-wide discipline. It requires aligning automation with real insurance cycles, protecting scarce expert judgment, and ensuring transparency as non-deterministic agentic-driven decisions expand. Insurers that approach AI through this lens position themselves not just to automate faster, but to operate smarter, more resiliently, and with greater confidence in the outcomes their systems produce.


Jake Sloan


Jake Sloan is vice president, global insurance, at Appian.

He has held senior operations roles with Farmers Insurance, including front-line insurance/licensed field operations, and served as CIO of Aon National Flood Services. 

Sloan volunteers as a mentor to the Global Insurance Accelerator, holds an MBA from Baker University and is a graduate of the Advanced Management Program (AMP) of Harvard Business School.

2026 Commercial Market Outlook

Prepare for Renewals and Manage Costs in a Changing Market


After years of disruption, the commercial insurance market is showing signs of moderation—but risks remain. Catastrophe losses, social inflation, and regulatory scrutiny continue to challenge organizations.

Zywave’s 2026 Outlook breaks down what insurance professionals and business leaders need to know to prepare for renewals, manage costs, and position programs for success.

Key Takeaways for 2026
  • Property Insurance: After years of a hard market, property insurance is stabilizing thanks to improved capacity and reinsurance strength. However, catastrophe losses, valuation scrutiny, and climate risks continue to challenge underwriting. Parametric solutions and resilience measures are gaining traction—organizations with accurate valuations and proactive risk controls will benefit most.
  • Casualty Insurance: Litigation trends and social inflation keep pressure on casualty lines, especially commercial auto and umbrella liability. Nuclear verdicts and expanded litigation funding drive severity, while technologies like telematics and AI safety tools are becoming key differentiators for favorable outcomes.
  • Professional & Executive Liability: Competition is improving, but emerging risks tied to AI adoption and regulatory scrutiny are reshaping underwriting. Cyber events increasingly overlap with management liability, making strong governance and compliance essential for broader coverage and stable pricing.
Access the Full Outlook Today

Get expert insights into market forces and strategies for success. Download the full 77-page report now.

Access the Report

 

 

Sponsored by ITL Partner: Zywave



Zywave delivers AI-powered growth engines for the insurance industry, enabling carriers, MGAs, agencies, and brokers to grow profitably, strengthen risk assessment, enhance client relationships, and streamline operations. Its intelligent, AI-driven platform acts as a performance multiplier for more than 160,000 insurance professionals worldwide, across all major segments. By combining automation, data insights, and best practices, Zywave helps organizations stay competitive and efficient in today’s fast-changing risk environment—empowering them to adapt quickly, scale effectively, and achieve sustainable growth.

For more information, visit zywave.com.

Additional Resources

Zywave recognized as a Leader in The Forrester Wave™: Insurance Agency Management Systems, Q4 2025 

Access Report

An Urgent Need for Post-Quantum Cryptography

Organizations delaying the shift to post-quantum cryptography face major risks, as classical encryption schemes may break.


While researching the Titanic recently, I was struck by something profound: the ship received numerous warning signs that could have prevented the catastrophic disaster of 1912. More than a century later, organizations continue making the same mistake, ignoring blatant warnings about pending disasters.

Today's iceberg? The quantum computing revolution that threatens to render our current cryptography obsolete.

The Warning Signs Are Already Here

Any entity using digital networks to store sensitive data needs to move away from classical cryptography toward post-quantum cryptography (PQC) standards. Organizations that maintain classical cryptography instead of implementing the quantum-resistant algorithms already available risk drifting dangerously off course.

This lack of proactive course correction, or what I call "cryptographic drift," creates what is now referred to as cryptographic debt – a burden that builds until it may be too late to avoid disaster. Worse, adversaries are constantly harvesting your data throughout this drift: every delay in implementing quantum-resistant algorithms eases their burden of decrypting that data once a cryptographically relevant quantum computer (CRQC) becomes operationally available. The Titanic didn't sink simply from drifting off course; it maintained high speed into a known ice field despite numerous warnings that never reached the captain. Everyone was too busy to act.

Sound familiar?

Understanding the Quantum Threat

Quantum computers harness quantum mechanical phenomena, including superposition and entanglement, to process information in fundamentally different ways from classical systems. While classical computers encode data as binary bits (0s and 1s), quantum computers use quantum bits (qubits) that can occupy multiple states at once, potentially delivering exponential speedups for specific problem classes.
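To make the bit-versus-qubit distinction concrete, here is a minimal NumPy sketch of a single simulated qubit — my own illustration, not drawn from any quantum hardware API. A qubit's state is a 2-vector of complex amplitudes, and a Hadamard gate puts a classical "0" into an equal superposition of both states:

```python
# Minimal single-qubit simulation (illustrative only).
# Measurement probabilities are the squared magnitudes of the amplitudes.
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the classical bit "0"
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                # equal superposition of |0> and |1>
probs = np.abs(state) ** 2      # probability of measuring 0 or 1
print(probs)                    # → [0.5 0.5]
```

A classical bit is always exactly one of the two rows of that vector; the qubit occupies both until measured, which is the property that lets quantum algorithms explore many computational paths at once.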

Quantum computers using gate-based operations (analogous to classical AND/OR gates) have been built with dozens of qubits, though qubit quality remains inconsistent. Fully error-corrected systems, with logical qubits that can perform substantially more operations, likely won't arrive until around 2030. Organizational management needs to understand what lies ahead in the cryptographic space of quantum computing: Advance planning is essential to implement quantum-resistant algorithms before a CRQC arrives on the scene.

The primary organizational risk from quantum computing is that a CRQC could break widely used classical encryption schemes. This threat has prompted formal government action, including OMB Memorandum M-23-02 (Migrating to Post-Quantum Cryptography) and National Security Memorandum 10 (NSM-10, Promoting United States Leadership in Quantum Computing While Mitigating Risk to Vulnerable Cryptographic Systems), which direct federal agencies to take steps toward post-quantum cryptography (PQC) migration. The Department of Defense has issued additional guidance outlining implementation requirements and constraints for PQC adoption across government systems.

Private sector organizations, particularly those working with or seeking to work with government entities, should closely monitor these directives, as compliance will likely become essential for maintaining those relationships.

Planning safeguards your organization against the threat of a CRQC rendering current public-key encryption such as RSA (Rivest, Shamir, and Adleman) and Elliptic Curve Cryptography (ECC) obsolete. It may also mitigate "harvest now, decrypt later" (HNDL) attacks – a continuing threat where adversaries intercept and store encrypted data today, intending to decrypt it once error-correcting quantum computers become capable of breaking today's cryptographic protections.

Recent academic and industry publications have pulled the expected arrival of operational CRQCs forward to 2030 or earlier, dramatically increasing risk in three critical areas:

  • Business operations disruption
  • Data exposure and breaches
  • Cost of emergency transition

Most forward-thinking organizations are already shifting their encryption ahead of 2030, anticipating moderate impacts to these areas.

Organizations experiencing cryptographic drift will continue operating normally, creating a dangerous illusion of security while adversaries carry out HNDL attacks against their traffic. A crypto-agile approach maintains operational continuity while moving to quantum-resistant algorithms that protect data in transit.

As shown in the figure, cryptographic debt accumulates over time and can become overwhelming or irreversible as organizations scale, eventually leading to loss of operational functionality and relevance under government mandates and guidance.

Wholesale replacement of IT infrastructure is neither practical nor cost-effective for achieving quantum resistance. Instead, crypto-agility enables seamless migration from obsolete encryption to quantum-resistant standards, positioning organizations for future competitiveness through reduced costs, accelerated transition timelines, minimized data compromise risk, and uninterrupted operations.

The Time to Act Is Now

My advice is simple: start changing course now.

The National Institute of Standards and Technology (NIST) has released its quantum-resistant (PQC) algorithm standards:

  • FIPS 203 (ML-KEM) - key encapsulation
  • FIPS 204 (ML-DSA) - digital signatures
  • FIPS 205 (SLH-DSA) - stateless hash-based signatures

These standards form the foundation of the post-quantum cryptography migration mandated by government directives like OMB M-23-02 and NSM-10.
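To show what a key encapsulation mechanism (KEM) like FIPS 203's ML-KEM actually does, here is a toy sketch of the three-step KEM interface: keygen, encapsulate, decapsulate. This uses plain hashing and provides no security whatsoever; it only illustrates the API shape. A real deployment would call a vetted PQC library (for example, the Open Quantum Safe project's liboqs). All function names here are illustrative assumptions.

```python
import hashlib
import os

# TOY ONLY: illustrates the KEM interface shape, not a secure scheme.

def keygen():
    # Receiver generates a secret key and publishes a public key.
    sk = os.urandom(32)
    pk = hashlib.sha256(b"pk" + sk).digest()
    return pk, sk

def encapsulate(pk):
    # Sender derives a shared secret plus a ciphertext from the public key.
    r = os.urandom(32)
    shared = hashlib.sha256(pk + r).digest()
    ciphertext = r  # a real KEM would hide r under the public key
    return ciphertext, shared

def decapsulate(sk, ciphertext):
    # Receiver recovers the same shared secret using the secret key.
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ciphertext).digest()

pk, sk = keygen()
ct, secret_sender = encapsulate(pk)
secret_receiver = decapsulate(sk, ct)
print(secret_sender == secret_receiver)  # True: both sides share a key
```

The point of the KEM design is that the two parties never transmit the shared secret itself, only a ciphertext, which is the pattern ML-KEM standardizes with quantum-resistant lattice mathematics underneath.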

Start by inventorying your assets to understand what encryption is currently in use across the enterprise. Focus first on migrating your most heavily used assets (high value or high impact) to the standard quantum-resistant algorithms, as they most likely carry the bulk of your sensitive data. For now, the HNDL threat applies primarily to data in transit, not to data in use or data at rest.
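One way to operationalize that triage is to score each inventoried asset and sort the migration queue so quantum-vulnerable public-key cryptography protecting sensitive data in transit comes first. The asset names, fields, and scoring below are hypothetical illustrations, not a standard methodology.

```python
# Hypothetical sketch: triaging a cryptographic inventory for PQC migration.
# Quantum-vulnerable here means classical public-key schemes a CRQC could break.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-3072", "ECDSA-P256", "ECDH-P256"}

assets = [
    {"name": "customer-api-tls", "algorithm": "ECDH-P256", "data": "in-transit", "sensitivity": 3},
    {"name": "internal-wiki",    "algorithm": "RSA-2048",  "data": "in-transit", "sensitivity": 1},
    {"name": "backup-archive",   "algorithm": "AES-256",   "data": "at-rest",    "sensitivity": 3},
]

def migration_priority(asset):
    # Highest priority: quantum-vulnerable public-key crypto protecting
    # sensitive data in transit (today's HNDL exposure).
    vulnerable = asset["algorithm"] in QUANTUM_VULNERABLE
    in_transit = asset["data"] == "in-transit"
    return (vulnerable, in_transit, asset["sensitivity"])

queue = sorted(assets, key=migration_priority, reverse=True)
print([a["name"] for a in queue])
# High-sensitivity in-transit assets on vulnerable algorithms sort first.
```

A real inventory would be far larger and pull from certificate stores, TLS scans, and code scanning, but the prioritization logic is the same: vulnerability, exposure, then sensitivity.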

Additionally, migrating from TLS 1.2 to TLS 1.3 positions you to counter a CRQC, because PQC algorithms integrate far more naturally into the TLS 1.3 handshake. This is available now!
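Enforcing that floor is a one-line change in many stacks. As a minimal sketch, Python's standard `ssl` module lets a client context refuse anything below TLS 1.3:

```python
import ssl

# Minimal sketch: a client context that refuses anything below TLS 1.3,
# the version whose handshake PQC and hybrid key exchanges target.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```

Any connection made with this context will fail rather than silently negotiate down to TLS 1.2, which is exactly the behavior you want while migrating.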

Reactive Planning

Migrating only after it's too late and your cryptography has been rendered void by an error-correcting/fault-tolerant quantum computer will dramatically increase the risk of your organization ending up like the Titanic.

Side Note

It took 73 years to find the wreckage, and to date, the Titanic has never been fully recovered from the ocean floor. Let's try not to have that happen to your organization.

The warnings are here. The danger is real. The timeline is shorter than you think. There are mitigations out there now that can be implemented within your organization.

Don't be too busy to change course. Pay attention to the warnings.


Garfield Jones

Dr. Garfield Jones is senior vice president of research and technology for QuSecure. 

Dr. Jones previously served as the associate chief of strategic technology for the Cybersecurity and Infrastructure Security Agency (CISA), DHS, where he led the agency’s post-quantum cryptography (PQC) initiative. Prior to joining DHS, Dr. Jones worked as a systems engineer developing complex weapons, geographic, and information systems for agencies such as Office of Naval Intelligence (ONI), National Geospatial Intelligence Agency (NGA), and the Naval Criminal Investigative Service (NCIS). 

In 2018, he retired from the Army Reserves after serving 25 years (16 years active duty and nine years reservist) as an information systems warrant officer.

Should Brokers Trust Their Insurtech Vendors?

A study finds that two-thirds of brokers believe insurtech vendors overstate ROI promises, revealing a significant trust gap in the industry.


Insurtech offers the promise of transformation, but new data suggest brokers are skeptical. Findings from the 2026 Benevolent Insurtech Trust Index indicate that broker trust in insurtech and its vendors is low across several trust dimensions.

Consider:

  • 67% of broker respondents believe insurtech promises of time savings, efficiency and ROI (return on investment) are overstated;
  • Only 22% of respondents feel that vendors are honest about features, pricing and implementation during the sales process;
  • Only 23% of respondents feel that vendors can be counted on to do what is right;
  • Just 9% of respondents agree that vendors have made sacrifices for them in the past.

Before going further, two methodological disclosures about the inaugural Benevolent Insurtech Trust Index report. First, 67 brokers from across Canada completed the survey. This sample size means results are indicative but not generalizable. Second, the findings aggregate attitudes toward several categories of insurtech, including broker management systems (BMS), quoting/rating, email marketing, policy admin systems (PAS), and AI solutions.

Three themes emerged from the study where trust is breaking down between brokers and insurtech vendors.

The ROI Credibility Gap

When two out of three respondents believe that vendor claims of time savings, efficiency and ROI are overstated, there is a trust gap.

This isn't to say there are no efficiencies or productivity gains that come from using insurtech. Not at all. In fact, 57% of respondents agree that tech adds value to their organization. What is being captured here is the distance between initial expectation and lived experience. It is the feeling that claims or representations of ROI and increased productivity are exaggerated or embellished.

The result is that broker respondents are less likely to take such statements at face value. They want proof. As one respondent stated, "Show me real concrete examples of where our brokerage will see ROI and provide me with contacts that we could follow up with."

Of course, the challenge with relationships is the interpretation of behavior. Humans are meaning-makers, and we assign intent to behavior. As one respondent stated, "So yes tech firms all overstate their ROI and what they can do for you because that's how they get the sale."

Which leads to a second theme from the study: honesty during the sales process.

A Sales Process Brokers Don't Fully Trust

Only 22% of respondents agreed that vendors were honest with them about features, pricing, and implementation during the sales process. As one respondent remarked, "Tech vendors in the insurance space suffer from the over-promise and under-deliver syndrome."

Over-promise. Under-deliver. Overstated claims of ROI. Is it fair to paint every insurtech with this brush? No. But it doesn't really matter.

What matters is the perception that embellishment takes place, because this is the thought that sticks. It's what gets talked about on convention floors, in the "dark social" conversations that can influence buying decisions. Brands and reputations are shaped during these interactions, far from the boardroom table or the shine of new marketing campaigns.

We trust those who we believe will be honest and vulnerable with us, bringing us to the third theme: self-interest and partnering.

Are we really partners?

Consider these two findings: 23% of broker respondents feel that vendors can be counted on to do what is right, and only 9% of respondents agree that vendors have made sacrifices for them in the past.

What do "sacrifices" have to do with economic relationships? Sacrifices are an indicator of partnering behavior, of a willingness to put the interests of the other before our own. What respondents are saying is that they feel vendors are more inclined to put their own interests first, ahead of customer interests. That is, they expect vendors to behave in a self-interested way.

Building trust: What brokers are asking for

Transparency in pricing. Honest product roadmap discussions. Realistic implementation timelines and deliverables. These topped the list of ways brokers suggested vendors improve trust. As one broker offered, "trust grows with insurtech when (vendors) stop overselling roadmap features."

In addition, providing realistic, validated claims about time savings, productivity gains and ROI would also go a long way to strengthening feelings of trust. The opportunity and responsibility are shared between marketing, sales and service to set these expectations.

It may take time and intentional effort, but trust can be rebuilt, especially when shared interests are aligned. One respondent offered a clear partnering view: "Real insurtech success isn't about disruption; it's about reliability, partnership, and making brokers better at serving clients."

Here is a link to the full 2026 Benevolent Insurtech Trust Index report.


Steve Pieroway

Steve Pieroway is principal at Benevolent Marketing, a B2B insurtech marketing consultancy. 

He is a former insurtech executive, having held leadership roles with Policy Works, Applied Systems Canada, and Trufla. Prior to his insurtech career, Steve wrote a thesis titled, “An Identification-Based Relationship Marketing Model.”

The Long View on Insurance's Transformation

To understand where insurance is heading, look at the history of computing — from batch processing to today's instant-answer capabilities. 


I often tell people I've been watching the same movie for decades — it will be 40 years this fall since I started covering IBM as a young pup of a reporter at the Wall Street Journal. I've watched the disruption that hit IBM spread to the rest of the computer industry, then to commerce in general, thanks to the personal computer, internet, search engines, smartphones and now AI. 

Having watched the movie so often, I have a pretty good sense of how today's story lines will play out.

Today, I'll start even earlier than 1986 and offer a quick history of computing because I think the long view provides useful perspective on where insurance is — and where it's going. Some insurance processes are firmly stuck in the 1950s and 1960s, when batch processing was the only game in town. Others have made it to the 1980s and 1990s, with their PCs and networking. Still others are becoming fully modern, as they take advantage of mobile devices and generative AI.

On the theory that every industry is becoming a technology industry, insurers will eventually catch up on all fronts. Understanding where we lag the most and imagining a world where insurance can operate at the speed of Amazon will, I hope, provide a road map that will help us get to that future faster.

So, yes, I've set myself a rather ambitious goal this week.

To understand the starting point for computing (and insurance), think of my college roommate Mike. He was a computer science major, so he was wedded to the campus mainframe. He'd type out a program on a stack of punch cards, hand them in at the window in the computer center... and wait. When his turn finally came on the mainframe, he'd get a printout with the results. Given the complexity of what he was doing, and that even a typo would derail things, he inevitably had errors. So he'd debug the program, type out some more punch cards, turn them in at the window... and wait some more. 

Because turnaround times were shorter at night, after most students had gone back to their rooms, Mike typically stayed out into the wee hours of the morning, napping on a table while waiting for his latest printout. (The way our habits meshed led to a comical relationship, where we sometimes didn't see each other while both were awake for weeks at a time. I'd leave in the morning while he was asleep and, after working a job, not get back until he'd left for the computer center in the evening. He went home on weekends to see his girlfriend, so I'd sometimes find myself asking mutual friends, "Hey, how's Mike? I haven't talked to him in ages. Tell him I said hi.")

Mike's travails were a holdover from the era of batch processing, when a computer could do only one thing at a time. Big efforts, such as processing payroll or reconciling accounting records, were done in a single batch at a time reserved on the mainframe. Mike's programs obviously weren't on anything like accounting's scale, but he still had to run a program in a single batch of cards and wait his turn. 

Even though computing technology has improved by orders of magnitude since Mike and I were in college, a lot of business still operates at the speed of batch processing. You have a meeting on some issue, and a question comes up. Someone is assigned to do some analysis and comes back a week or two or three later with an answer. The issue is discussed again, and another question arises. More analysis over more weeks ensues. The batch processing influence is even stronger in insurance than in most industries because there is so very much data to analyze.

Computer scientists saw early how much better interactive computing would be and spent decades getting us there. In the '60s and '70s, time-sharing became possible. The setup was awkward: You had a keyboard and printer but had to type out a program on special tape that you fed into the machine, and turnaround times were painfully slow because you were queueing up behind all the programs running on a distant mainframe or minicomputer. But time-sharing spread the power of computing far beyond the walls of the data center. (Bill Gates got his career started on a time-sharing terminal at his high school. I, too, had access to a terminal in high school but somehow didn't do as much with it as he did. Alas.)

By the late 1970s and into the 1980s, Xerox PARC had worked its magic, and the Apple II and then the IBM PC were putting real power on individuals' desktops. The computers delivered big benefits to business because of the electronic spreadsheet but otherwise proved to be rather limited when used in isolation. Fortunately, Xerox took care of that issue, too, with the Ethernet networking standard that let businesses link their in-house computers. Then the internet took networking into the stratosphere thanks to the World Wide Web's invention in 1989 and the Mosaic browser in 1993. By the late 1990s, search engines were doing a good job of fulfilling Google's goal "to organize the world's information and make it universally accessible and useful." Then smartphones, led by the iPhone debut in 2007, put all the computing power and information in our hands. Generative AI is now letting us gather, process and use far more of the world's data than we humans could ever do on our own.

Big tech has taken advantage of the remarkable progression of technology to gather all sorts of signals about individuals (many of which I wish they didn't have) and target us with ads, with memes that keep us engaged, with dynamic pricing that maximizes their clients' revenue. Progress in other spheres is more uneven, but you can look at big retailers like Amazon and Walmart and see how they sense demand and respond to it in real time.

I'd say insurance has done a so-so job of taking advantage — acknowledging that our situation is complicated by heavy regulation and by the confusion of state-by-state oversight in the U.S. A lot of insurance work is still in a sort of batch mode — the analysis of loss runs, actuarial tables, and so on. While insurers have taken advantage of all the power on the desktop that PCs provide, I'm not sure we've done the best job of internal networking — why, for instance, isn't claims data always fed in real time to underwriters to inform future decisions? Insurers certainly haven't been great about taking advantage of all the information that's out there beyond their four walls; they're starting to figure out what data to trust and how to absorb it, but they've been slow. Insurers are also still figuring out what to do about smartphones. Yes, every company has an app these days, but my impression is that customers still want to be able to do a lot more self-service via phones than is possible today.

I'll withhold judgment on how insurance is doing on gen AI. We're headed in some good directions, using it to gather and do initial processing for those in claims, underwriting and agencies, but we clearly haven't figured gen AI out — then again, nobody has, so we're in good company.

The nice thing is that, whatever our inadequacies to this point, our version of the technology movie can have a happy ending for two reasons. One is that any new computer technology builds on everything that's come before in an exponential way. We're not just adding a gen AI capability alongside an information or networking capability. The new capability multiplies what was ushered in by smartphones, which in turn multiplied everything that came before them. The second reason is that we don't have to build the capability. The tech giants have done that over the past 75 years; we just have to take advantage. They're not done yet, either: The latest figure I saw is that the five biggest AI companies are investing $700 billion in infrastructure this year alone.

To me, the happy ending will come in a decade or so, when insurance can fully switch from batch processing to what I think of as conversational computing. You don't have a question in a meeting and send someone off to study the issue for weeks. You ask a question, and your AI uses all the internal and external information available to provide an answer. Loss runs and actuarial tables don't require massive studies. You converse with your computer and get the answers you need.

You can see glimmers of this sort of conversational future in some things going on today. Continuous underwriting is one great example. Why wait for an annual review of a policy when aerial imaging can tell you that a homeowner has added a pool, when an AI monitoring the internet can tell you that a restaurant has added a drinks menu or delivery options, etc.? Why not take advantage of the ability to sense what's going on among clients and prospects and respond? 

Embedded insurance is another example. Why should selling an insurance policy always be a formal project? Why not just use the ability to sense when a customer might want coverage and respond?

Technology never stops moving. Moore's law made sure of that for decades, with what became a sort of mandate for semiconductor makers to double the power of a chip every year and a half to two years at no increase in cost, and other forces, such as AI, are now amplifying those gains in capability by orders of magnitude. I figure I've gone through six tech revolutions since I debuted on the computer beat in 1986, and we could be in the middle of the next one, with agentic AI.

For insurers, I hope a look at the history of computing identifies some spots where we can and should improve. But I mostly hope the history shows us that we're headed toward a conversational future, where we ask questions and get answers in real time — and hope insurers will construct road maps toward that future so every incremental decision on IT can keep us moving in the right direction. Just imagine what insurance could look like at the speed of Amazon.

Cheers,

Paul

 

Traditional Insurers Can Still Win AI Race

Incumbents have operational context advantages AI-native startups can't replicate, but the window to leverage them is closing.


Recently, there's been talk from AI-native insurance startups telling incumbents they'll never catch up. The argument goes like this: The barrier isn't technology; it's organizational DNA. Boards resist. Agent networks resist. Incentive structures resist. Even superintelligent AI can't rewrite a captive distribution network or a CEO's risk tolerance.

We built one of those AI-native insurers. We've spent nearly a decade learning where AI actually works in insurance - and where it doesn't. So we'll say what most people in our position won't: 

The critics are only half right.

The organizational immune system is real

We've watched it operate from the inside.

AI threatens more than processes. It threatens people, hierarchies, and decades of institutional knowledge that leaders built their careers on. The more powerful the technology gets, the more threatening the disruption feels, and the harder the organization pushes back.

The execution gap is genuine, too. Deloitte surveyed 3,200 enterprise leaders this year and found that executives feel strategically ready for AI but not operationally ready. Every insurance business we talk to confirms this. The board said yes. The pilot worked. But not much actually changed. They tripped in the last mile.

If you're reading those blog posts and feeling uneasy, trust your instincts. Standing still is falling behind.

Where the thesis breaks

The "incumbents are dead" argument assumes the only way to win with AI is to have been born with it. That organizational barriers are permanent. That traditional insurance businesses are evolutionary dead ends waiting for the asteroid.

This confuses two problems.

The first is building AI technology. AI-native startups have a real advantage here. Clean architectures, ML engineers who learned to work alongside actuaries, feedback loops from day one.

The second is having the operational context that makes AI actually work in insurance. Here, traditional businesses have an advantage no startup can replicate.

A startup can build a great claims model. But it doesn't know that your Florida team handles litigation differently than your Texas team because of venue-specific judicial considerations. It doesn't know that your underwriting knowledge base says one thing but your senior underwriters do another - and the deviation is actually producing better results. It doesn't know which of your 50 state regulatory constraints are real compliance requirements and which are institutional habits nobody has revisited in a decade.

That operational context - the messy, human, state-by-state reality of how insurance actually works - is the raw material AI needs to generate value. Technology is the engine. Context is the fuel. Insurance businesses have been accumulating this fuel for decades.

The startup pitch is: "We have the engine, and we'll figure out the fuel." The honest answer is that the fuel is harder to build than the engine.

The real question is speed

Can you close the execution gap before it shows up in your results?

The gap closes by connecting AI to the operational reality of how your business actually runs - across claims, underwriting, distribution, and compliance - in ways that compound over time.

Every month of operational AI data makes the system smarter. Every feedback loop accelerates the next one. This is an exponential curve, not a linear one. The businesses that start building now aren't just catching up. They're beginning a compounding process that gets harder to replicate with every cycle.

We spent nearly a decade building these feedback loops inside our own company. That experience made one thing clear: The distance between an AI demo that works and an AI system that changes how you operate is almost entirely about understanding the insurance underneath.

What I'm telling insurance executives right now

Your data is an asset that will appreciate with use. Your operational context can be your advantage. The AI-native startups telling you it's over are talking their own book.

Some businesses already know this. The ones investing seriously in operational AI - not pilots, but production systems touching real policyholders - are proving the thesis wrong in real time.

We're seeing this from carriers, MGAs, and specialty businesses alike.

But the window is real. AI feedback loops compound. The businesses that start building them in the next 12 to 18 months will pull away from those that don't. You'll see it first in expense ratios, then in loss ratios, and then in competitive position.

The businesses that win won't become AI companies. They'll stay insurance companies that figured out how to make AI compound inside their operations before the window closed.


Kyle Nakatsuji

Kyle Nakatsuji is the founder and CEO of Clearcover, an AI-native auto insurance carrier, and Dearborn Labs, which helps P&C carriers and MGAs operationalize artificial intelligence. 

Before founding Clearcover, he was a venture investor at American Family Insurance, where he led insurtech investments. He speaks regularly on AI strategy in insurance.

Smoother Insurance Agency Succession Planning

Most agents delay succession planning. The smoothest agency transitions start with technology-enabled operations built from day one.


For independent agents, the to-do list never gets shorter. New clients to win, policies to place, and revenue to grow. But there's one conversation that doesn't always make it onto the planning agenda, and it might be the most important one of all. What happens when it's time to hand things off?

Succession planning has long carried a reputation as something to worry about later. A conversation for agents nearing the end of their career, not those in the thick of building their business. But that thinking can be costly. The agencies that make the transition most smoothly aren't the ones that started planning at the last minute. They're the ones that built transferable tech operations from day one.

Here's the good news: if you're already using an agency management system to run your daily operations, you're likely closer to succession-ready than you think. The tools that help you manage client account data, track performance metrics, and stay on top of renewals can do double duty. Used consistently, they build the kind of organized and documented operation that makes handing things off far less daunting.

Performance Metrics Tell Your Agency's Story

When it comes time to demonstrate value, data speaks louder than anything else. Potential successors and buyers will want a clear picture of your agency's performance, including which lines of business are driving the most revenue, which producers are performing, and where coverage gaps exist across the book. Those answers need to be readily accessible.

A robust agency management system gives you this performance visibility in real time. Dashboards and reporting tools surface the metrics that matter most, from total annualized premium and active policies per customer to a detailed breakdown of your book of business by transaction type, often presented in intuitive visual layouts.

You can customize these reports too, filtering and drilling down into the data points that matter most. Some systems even let you benchmark your performance against peer agencies, giving you a clearer sense of where you stand. That kind of insight doesn't just serve a future transition. It sharpens your decision-making today, helping you spot growth opportunities and course correct before small issues become bigger ones.

Over time, these reports build a compelling picture of your agency's health and trajectory, one that tells a clear story to a successor and makes you a stronger agency today.

AI Keeps Client Knowledge Transferable

Serving clients without missing a beat is one of the first challenges any incoming leader faces. That means being able to find policy details quickly, understand coverage history, and get up to speed on the relationship without having to track down the person who used to handle it. When information is scattered across inboxes, desktop folders, and spreadsheets, that handoff becomes harder and more costly than it needs to be.

AI-powered agency management tools change that, and not just when a transition is on the horizon. Picture this: a newly onboarded staff member pulls up a long-standing client account in their first week. Rather than digging through months of email threads and agent logs, they get an instant summary of the relationship, enabling more knowledgeable client interactions and a much faster path to getting up to speed.

Clients expect continuity. They don't want to repeat themselves or re-explain their history with their agency. They expect whoever picks up the phone to already know them. AI makes that possible whether you're onboarding a new hire, navigating a leadership change, or simply trying to deliver a better client experience every day.

Renewal Tracking Protects What You've Built

Retention is the metric that tells the clearest story about an agency's health. A consistent renewal process signals that clients are being taken care of and that the book of business is stable.

A good management system gives you and any future leader a single view of every upcoming renewal, what has changed between the current policy and the renewal offer, and which clients are most at risk of shopping around. Predictive analytics flag at-risk policies before they become problems. Automated remarketing workflows retrieve updated rates and surface them alongside renewal details, so whoever is managing the book can act quickly and make informed recommendations.

For a buyer or successor, a clean and consistent renewal process is one of the most compelling things they can walk into.

No Matter Where You Are in Your Career — Start Now

Whether you're just launching your agency, in the middle of a growth run, or beginning to think seriously about the future, the time to invest in technology-enabled workflows is now.

The efficiencies you gain today will compound over time, and when the moment comes to pass the pen and the policies, you'll be glad you started early.


Rob Bourne

Rob Bourne is the senior vice president and general manager of EZLynx.

He previously served as SVP at Applied Systems, overseeing inside sales, account management, business development, and alliance partnerships. Before that, he held senior roles at Athelas and Podium. 

He has an MBA from Cornell University.