
The Forrester Wave™: Insurance Agency Management Systems, Q4 2025

The 10 Providers That Matter Most And How They Stack Up


Digital insurance agency platforms are the cornerstone of the insurance ecosystem. With digital-first consumers demanding seamless experiences, agencies and carriers need technology that delivers speed, insight, and engagement. Zywave received the highest ranking of all vendors in the strategy category, and the maximum possible scores in the innovation, vision, and roadmap criteria in this report.

The report provides a comprehensive evaluation of insurance technology platforms, analyzing each vendor's current offering and strategy. See how Forrester evaluated 10 top platforms, and why Zywave earned the highest score in the strategy category and the highest possible scores in the vision and innovation criteria.

The Forrester Wave™ report noted:

  • “Zywave’s vision is to facilitate the growth of insurance distributors by becoming an integrated, open API software suite powered by agentic AI.”
  • “Its impressive roadmap and innovation aim to bolster quoting and carrier connectivity as well as use AI-powered agents to automate workflows.”
  • “Zywave’s SaaS platform offers agencies a robust marketing toolkit, quoting and proposal tools, and product analysis and comparison for personal, commercial, and benefit lines — aided by a highly intuitive UI and extensible platform architecture.” 
Graph showing Zywave vs other competitors


Sponsored by ITL Partner: Zywave


ITL Partner: Zywave


Zywave delivers AI-powered growth engines for the insurance industry, enabling carriers, MGAs, agencies, and brokers to grow profitably, strengthen risk assessment, enhance client relationships, and streamline operations. Its intelligent, AI-driven platform acts as a performance multiplier for more than 160,000 insurance professionals worldwide, across all major segments. By combining automation, data insights, and best practices, Zywave helps organizations stay competitive and efficient in today’s fast-changing risk environment—empowering them to adapt quickly, scale effectively, and achieve sustainable growth.

For more information, visit zywave.com.

Additional Resources

Zywave recognized as a Leader in The Forrester Wave™: Insurance Agency Management Systems, Q4 2025 


How Self-Insured Employers Can Combat Healthcare Waste

Using AI, self-insured employers can now prevent wasteful healthcare spending before payment rather than recovering overpayments after the fact.


The U.S. healthcare system wastes between $760 billion and $1.6 trillion every year. That range comes from a landmark 2019 JAMA study and updated 2025 expenditure data from CMS. If you work in insurance or risk management, that number should stop you cold. It is larger than the GDP of most countries. It represents roughly 25-30% of total national health expenditure. And the researchers who quantified it also confirmed something important: proven interventions could save $191 billion to $282 billion annually.

This is not a projection based on theoretical models. It is a documented opportunity sitting inside the claims data of every self-insured health plan in America.

Where the Waste Lives

JAMA identified six categories of waste: billing errors and fraud, administrative complexity, unnecessary services, pricing failures, failure of care coordination, and other inefficiencies including underuse of preventive care. Each category is quantifiable. Each has known interventions. None of them require inventing technology or waiting for policy reform.

Consider the pricing problem alone. An echocardiogram can cost $350 at one facility and $2,700 at another in the same market, with no corresponding difference in quality. The Purchaser Business Group on Health (PBGH) found that commercial negotiated rates for identical procedures vary by more than 100% between regional markets. The primary driver of excess U.S. spending is higher prices, not greater usage. Americans use many healthcare services at lower rates than peers in other developed nations but pay far more for the same services.

On the pharmacy side, brand-name drugs are routinely dispensed when therapeutically equivalent generics exist at a fraction of the cost. PBM spread pricing, where the pharmacy benefit manager charges the plan one price and pays the pharmacy a lower price, persists because most employers never examine their claims data and their vendor contracts at the line-item level. Organizations that shift to transparent, pass-through pharmacy pricing models are documenting savings of 15-30% on pharmacy spend.

These are not outlier cases. They are structural patterns that repeat across virtually every health plan I have analyzed over the past decade. These analyses and corroborating studies consistently indicate that about half of all employer-sponsored health plan spending is inefficient or wasteful.

Why the Problem Persists

The most dangerous aspect of healthcare waste is that it hides in plain sight. It does not appear as a line item labeled "unnecessary" or "inefficient." It is buried in inflated claims, redundant procedures, opaque vendor clauses, and recurring overpayments to providers.

Many employers rely on their chosen third-party administrators, insurance brokers, or pharmacy benefit managers to manage most aspects of their plans. In a system riddled with misaligned incentives, that trust is often misplaced. When PBMs, for example, profit from higher usage of certain high-cost drugs or maintain deliberately opaque rebate arrangements, waste is not just tolerated. It is the business model.

The traditional cost-control approach is entirely reactive. An employer negotiates rates during contracting season, processes claims throughout the year, and then hires an auditor to review a random sample once or twice annually. By the time anyone identifies an overpayment or a suspicious billing pattern, the money is long gone. Recovering it requires time, administrative effort, and often a fight with the provider or vendor. Most employers never recoup the full amount, if they recoup anything.

The Intervention Point Has Shifted

AI and advanced analytics have moved the intervention point forward. Instead of reviewing claims after they have been paid, AI-powered platforms can analyze claims before payment is released. Every claim is fed through thousands of logic checks based on CMS guidelines, billing codes, plan-specific terms, and more. When a claim triggers a flag for a duplicate charge, an upcoded procedure, unbundling, or a charge exceeding contracted rates, the system can pause or deny payment until the issue is reviewed by a human. That is a fundamental shift from recovering waste after the fact to preventing it from occurring in the first place.
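A minimal sketch of the kind of pre-payment logic check described above. The claims feed, field names, and the `CONTRACTED_RATES` table are illustrative assumptions, not any vendor's schema; production systems run thousands of such rules keyed to CMS guidelines and plan terms.

```python
from collections import Counter

# Hypothetical contracted-rate table keyed by procedure code;
# the code and rate below are illustrative only.
CONTRACTED_RATES = {"93306": 500.00}  # echocardiogram, sample contracted rate

def flag_claims(claims):
    """Return claim IDs to hold for human review before payment is released."""
    flagged = {}
    # Duplicate check: same member, procedure code, and date of service.
    seen = Counter((c["member"], c["code"], c["date"]) for c in claims)
    for c in claims:
        reasons = []
        if seen[(c["member"], c["code"], c["date"])] > 1:
            reasons.append("possible duplicate charge")
        rate = CONTRACTED_RATES.get(c["code"])
        if rate is not None and c["billed"] > rate:
            reasons.append(f"billed {c['billed']:.2f} exceeds contracted {rate:.2f}")
        if reasons:
            flagged[c["claim_id"]] = reasons
    return flagged

claims = [
    {"claim_id": "A1", "member": "M7", "code": "93306", "date": "2025-03-01", "billed": 2700.0},
    {"claim_id": "A2", "member": "M7", "code": "93306", "date": "2025-03-01", "billed": 2700.0},
    {"claim_id": "A3", "member": "M8", "code": "93306", "date": "2025-03-02", "billed": 350.0},
]
held = flag_claims(claims)
```

The key design point is the order of operations: flagged claims are paused before payment, so the review happens while the money is still in hand rather than during a recovery fight months later.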

Predictive modeling takes this further upstream. Predictive engines analyze historical claims data, clinical indicators, pharmacy usage, and demographic profiles to identify plan members likely to become high-cost members in the coming months. When the model flags a member as high-risk for a cardiovascular event or a deteriorating chronic condition, care managers can intervene proactively. They can coordinate outreach, adjust treatment plans, and connect the member with resources before a $200,000 hospitalization shows up in the claims data. Prevention at that scale was never achievable through manual review.

Price transparency data, now available through federally mandated machine-readable payer files, gives employers another tool to act earlier. AI transforms that raw pricing data into market-by-market comparisons. Employers can identify where a plan is overpaying for specific medical services and direct members toward higher-value providers before care occurs, rather than negotiating discounts after the bill arrives.

The Fiduciary Dimension

Self-insured employers now provide health coverage for more than 160 million Americans. Approximately 67% of insured U.S. workers are covered by self-funded arrangements, and among large employers with 5,000 or more employees, adoption rates reach 95%. These organizations collectively spend hundreds of billions of dollars annually on health plan costs.

Under ERISA and the Consolidated Appropriations Act (CAA), these employers have explicit fiduciary obligations. They must ensure health plan dollars are spent prudently, demand transparency from TPAs, PBMs, and other vendors, and act in the best interest of plan participants. The CAA reinforces this by mandating that TPAs and PBMs disclose detailed claims and pricing information. Elizabeth Mitchell, president and CEO of PBGH, has stated clearly that self-insured employers need to take on a significantly larger role in selecting and managing their health care vendors and partners.

This is a critical point for insurance professionals. Stop-loss carriers, group health underwriters, brokers, and consultants all operate within an ecosystem shaped by employer health plan performance. When waste drives up claims, it drives up stop-loss premiums, reduces margins, and creates volatility that makes risk harder to price. Conversely, employers who actively identify and eliminate waste produce a cleaner, more predictable claims experience. That benefits everyone in their value chain.

What the Numbers Look Like in Practice

When an employer deploys AI-powered claims auditing on a $50 million health plan and identifies a 14% payment inaccuracy rate, typical for most small to mid-size plans, it recovers more than $7 million annually. That money would otherwise flow to vendor margins rather than employee benefits. Organizations using this approach routinely achieve 20% to 30% cost reductions in the first year.

The waste is not distributed randomly. It accumulates in specific, repeating patterns: duplicate charges, inflated facility fees, upcoded procedures, PBM spread pricing, and avoidable usage. These patterns are identifiable, measurable, and correctable with the tools available today.

The Only Remaining Question

The convergence of regulatory requirements, data transparency mandates, and AI-powered analytics gives self-insured employers unprecedented ability to identify and eliminate waste. The tools exist. The fiduciary mandate is clear. Annual healthcare cost increases have exceeded 5% for three consecutive years, with projections of 5.8% for 2025 and 6.5% for 2026.

Every dollar recovered from waste is a dollar that can fund better benefits, lower premiums, or reinvestment in the workforce. The employers who act on this with data, transparency, accountability, and the right technology will control their costs effectively. They will set the standard for how healthcare should be managed in this country. The rest will keep paying for waste they could have prevented and harboring risk they could have eliminated.

From Documents to Decisions: Why Claims Needs a New Operating Model

While claims technology has improved for decades, too little has been done to leverage it. It's time to move beyond document storage and into effective decision-making.


The insurance claims industry sits at an inflection point. Medical records are more complex, nuclear verdicts are rising, and the workforce is changing faster than most organizations can adapt. AI promises to help — but most implementations have fallen short. We sat down with Mark Tainton, senior vice president of data solutions at Wisedocs, to talk about what's actually working, what isn't, and why the industry needs to move from document management to true decision intelligence.

Paul Carroll

The insurance claims industry has been talking about digital transformation for years. What's actually changed in the last 18 to 24 months, and what's still stuck?

Mark Tainton

Having worked in the insurance industry for over 30 years at the intersection of technology and claims operations, I've certainly seen infrastructure change. But the bigger question now is the operating model that can actually leverage that infrastructure. And the operating model is not so much around storing documents in claims management systems or document management systems—it's about how we take advantage of that data asset. We’re essentially moving from document storage into effective decision-making.

Over the last five years, there has been an acceleration in the technology, in particular with large language models. Technology is not the problem.

It's really about taking advantage of the individual pieces of information in the world of unstructured data. That's the next wave we should be focusing on: How do we operationalize the assets so they’re part of the DNA of insurance processes?

Paul Carroll

Medical record review is at the heart of so many claims decisions, yet it still appears remarkably manual at most organizations.

Mark Tainton

I’ve certainly seen large carriers that have introduced AI but haven't introduced the process changes or changed how people can take advantage of the insights as the claim goes through its lifecycle. Carriers are using ineffective decision-making approaches that continue to mirror what we saw 10, 15, 20 years ago.

There needs to be a conversation around how adjusters work, especially because of the change in their age demographic. New people coming into the claims industry consume data completely differently. We have to adjust. 

You also have to understand the psychosocial aspects of the workforce, where COVID accelerated change. You need to cut across multiple claims at any given time and look for triggers that are prevalent for a particular treatment provider, or risk indicators that suggest psychosocial issues; these are top of mind for a lot of claims teams right now.

Paul Carroll

There's always a tension between speed and defensibility in claims, especially given the high stakes. How do insurers resolve that tension?

Mark Tainton

Claims are getting more complex, and we've seen a lot of legislation that makes it very clear that if someone's making a decision solely based off AI output with no human in the loop, that's going to be a problem.

When you tie that concern into the expansion of traditional fraud and increases in nuclear verdicts, the defensibility question becomes critical. There needs to be a human in the loop.

Several states are already drawing that line legislatively. California's SB 574 and a growing number of AI governance frameworks now require that AI-assisted decisions in insurance and legal contexts be documented, auditable, and explainable. That is not a future concern; it is a present operating requirement for carriers doing business in those jurisdictions. The organizations that build defensibility infrastructure now will not be scrambling to retrofit it later.

Paul Carroll

There are a lot of solutions out there these days, but they seem to largely be point solutions—summarization tools, triage tools, document processors, and so forth. What's missing from the point solution approach?

Mark Tainton

First, they don't fit into the ecosystems of clients and large carriers. They don't work alongside platforms like Guidewire where they can function as a module and help make those decisions effective.

The point solutions also aren’t really end-to-end. They're focusing on a point in time on a particular claim. That produces what I call a silent failure. The AI processes the document and returns a summary, and the claim moves forward. But the anomaly that should have triggered a flag, the treatment pattern that does not match the diagnosis, the billing inconsistency that signals a problem: None of that surfaces, because the tool was never designed to look across the lifecycle. The claim does not fail loudly. It just quietly travels in the wrong direction for months.

Think about first notice of injury as a claim goes through the life cycle, and all of a sudden you get a demand package or a treatment package coming in. What are the decisions you want the adjuster to make?

You need intelligence that cuts across the full lifecycle of the claim in terms of other claims with certain characteristics. And I think that's where point solutions really come up short.

Paul Carroll

I assume that thinking is why you took a platform approach with WiseShare.

Mark Tainton

Very much so. We have the sorting and summarization solution that we just renamed WisePrep. It includes WiseChat, where users can save all the insights they generate from a large language model. We've also introduced WiseInsights, which looks at litigation trends, at treatment patterns and how they develop, and across claims in ways that an adjuster with a workload of 200 or 300 claims cannot manage alone. These insights reveal similar characteristics across claims. For example, we looked at one portfolio and identified that a particular treatment provider, over a 12-week program, consistently prescribed a higher, more severe medication at the four-week mark.

WiseShare is important, too.  Far too often, a summarized document gets passed from the adjuster to inside counsel, then to external counsel, and eventually to an IME [independent medical examiner]. A lot of the time, we see slip-ups—documents go missing, misinterpretations occur, different versions of the truth emerge. WiseShare brings everything together into one consolidated environment where all of those entities can actually share, review, and export the claim file. 

From a legal defensibility standpoint, that consolidation is not a convenience; it is a chain-of-custody argument. The defense bar needs to see a complete, unbroken record: the medical record chronology, the time series of decisions made, and documented consistency in how AI processed the underlying materials. When a claim ends up in litigation, the question is not just what decision was made; it is whether that decision can be reconstructed, sourced, and defended at deposition. WiseShare is built for that standard.

You have to be able to wrap intelligence around a decision, and that requires a platform. 

Decision intelligence needs to be comparative. You have to be able to see the claim you're dealing with in the context of other claims. The intelligence also needs to be sequential. Are we seeing similar patterns starting to develop on other claims in certain jurisdictions? Are we starting to see certain seasonal trends? Are we starting to see different types of treatment coming through? Finally, the intelligence must provide accountability. Is every inference sourced and every decision point documented? 

The defense bar needs to see that audit trail. They need to see the medical record chronology, the time series, and the consistency in terms of best practices for how AI actually processes documents and insights for better outcomes. From 2023 to 2024, nuclear verdicts rose 52%. Thermonuclear verdicts are up 81%, and overall verdicts are up 116%. 

You need one single environment where you store the materials, one single process that's consistent across an organization.

Bottom line: if you can't show defensibility, you're in a world of trouble.

Paul Carroll

There's discussion about AI replacing many human workers in the insurance industry. What is your perspective?

Mark Tainton 

There's this notion that AI is going to replace people at the desk. From my perspective, that's totally inaccurate. And I think that mindset sets back adoption.

But here's the inflection point: We're dealing with an aging workforce. Insurers and TPAs are struggling to attract talent. Why? Because some of the tools and technology have not evolved as quickly as in other industries. When you can walk hand in hand with AI and the person at the desk and show them all the benefits, that’s exciting. 

Paul Carroll

If you could change one thing about how the insurance industry is currently approaching AI adoption in claims, what would it be?

Mark Tainton

For me, it's what I call the evolution framework. AI is a journey, not a one-time event. Far too often, what I've seen is large organizations—mid-tier, tier two, tier three—treating this as basically an implementation. It's almost like they're going in, turning the light switch on and walking out.

I spend quite a bit of time working with clients all the way from inception to asking: Where are we actually going to implement this? What's the impact we're expecting? How does this align with strategic objectives? What are some of the key measurements we want to see in terms of adoption, change, and, ultimately, having the AI start to hit the hard dollars—reduction in litigation, average duration, and things like that.

I'll give you an example. I worked with a large carrier that wanted to implement AI across the entire organization. But they have an aging demographic in certain lines, and getting them to adopt AI would be difficult. They've also captured a lot of information very poorly in their systems—it's very much in their heads.

I said, Let's focus on the younger generation. They’ll adopt AI, and we’ll create a best practice, one that we can use when we bring in new talent. So we built a three-year program focused on them. Ultimately, the program was so successful that the older generation said, We want to be part of that, too. 

For me, the next window for anyone embarking on an AI journey is to focus on embedding it upfront—knowing, of course, that the process will evolve over time. 

Begin with what we call an EDA—exploratory data analysis—to determine what the baseline is. That way, you can prove that you’re opening and closing claims far more quickly and can see the change quarter over quarter. That data helps sell the journey. We've also done quite a bit of work around what we call data quality programs, where we assess the quality and change behavior at the desk in terms of how people are capturing data—all the way from structured to unstructured and, more importantly, in the adjuster call notes. That program embeds the solution into the fabric of the organization.

I think that's the next wave. 

Paul Carroll

Thanks, Mark.

 

Sponsored by Wisedocs

About Mark Tainton

Mark Tainton is the SVP of Data Solutions at Wisedocs, bringing over 30 years of AI, data and analytics transformation expertise in insurance and financial services. Having served as Chief Data Officer at multiple leading organizations, Mark understands the critical intersection of medical intelligence, litigation strategy, and claims outcomes. He advises Wisedocs on data and product strategy, go-to-market positioning, and the deployment of AI-powered solutions that address the most pressing challenges facing claims and legal professionals today.


Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.


Wisedocs


Wisedocs is an AI-powered claims documentation platform purpose-built for insurance and medical record processing. Trained on over 100 million claim documents, the platform delivers structured, defensible outputs, from summaries to insights, all with expert human oversight. Wisedocs empowers enterprise carriers, government agencies, legal defense teams, and medical experts to improve operational efficiency, reduce administrative burden, and enhance decision accuracy. Visit www.wisedocs.ai to learn more.

The Importance of 'Self-Organizing Maps'

Three converging trends—private machine learning systems, vendor large language models, and market expansion—push reinsurers toward integration over replacement.


Clear trends in machine learning and artificial intelligence are converging in a growing (re)insurance industry. This process needs attention and reconciliation. For three decades, specialists in insurance and finance have built machine learning systems to solve complex problems. Prominent, recognizable feats include the wide implementation of genetic algorithms for portfolio optimization. Less known is the application of self-organizing maps (SOMs). SOMs are highly capable of consuming unstructured, multi-dimensional data and classifying and ordering it by properties derived from the key attributes of these large deposits of information. In doing so, a SOM reduces dimensionality, imposes order, and learns in the process. SOMs are neural networks by definition and are capable of unsupervised learning and correction. The Finnish computer scientist and mathematician Teuvo Kohonen pioneered the algorithms in the 1980s and refined them through the 1990s. This proliferation of private machine learning systems is our first, well-established trend.

Enter large language models (LLMs), built and delivered by Big Tech vendors. These have the capabilities of neural networks and genetic algorithms. However, the advantages of proprietary machine learning systems, trained and refined over time, are manifold. Above all, firms have assessed and proven these internal, private systems over the years, and by now they require minimal supervision from practitioners. Users have ironed out errors and polished performance through countless hours of training and production. Second, and more significantly, these systems contain the topology of the firm's risk factors. This is the core business model and philosophy, which the firm protects keenly. Hence the solution for the coexistence of private machine learning systems and vendor LLMs is integration. This is our second, newly exposed trend.

Last but not least, we have the expansion of the reinsurance business into developing and growing markets and regions. This is our third and well-recognized trend. We will take a case in point with the oldest reinsurance contract, the quota share of catastrophe loss.


Figure 1: A 33% quota share treaty applied from the first dollar to the insurer's distribution of gross loss, resulting in 33% of loss ceded to the reinsurer and 67% retained net by the insurer.

The treaty is a fitting instrument to minimize earnings volatility while supporting ambitious market share targets. This has been the case since the time when Venetian and Genovese bankers reinsured Mediterranean and Black Sea trade. From then to now, volatility has been particularly important in a growing market where underwriting targets keep up with fast expansion and a healthy degree of uncertainty. Reinsurance has always been an information business. The quality of exposure-at-risk estimation, the process of quantifying how much risk a cedant carries and how much of it a reinsurer assumes, determines the accuracy of pricing, reserving, and capital allocation. A large share of consequential information that drives exposure lives in unstructured data formats: government circulars, regulatory filings, rating agency reviews, accounting standards, broker advisories, and increasingly, satellite-derived physical damage assessments.

Two pertinent cases in point are Malaysia and Indonesia, with 5% to 8% annual growth in gross underwritten premium. This makes the region a dynamic and demanding marketplace.


Figure 2: Impact of flooding in Malaysia and Indonesia during November 2025. Exhibit produced by ReliefWeb.

Reinsurance cycles move quickly. In an environment of sparse data and limited historical experience, simplicity of structure and instantaneous transparency of pricing techniques become an advantage. Quota share is the reinsurance contract with the lowest operational cost and clearest, most stable and most recognizable price. It reduces volatility across the entire book and the entire risk tail of the business. Under the linear and proportional premium-making and loss-ceding rules of the treaty, reducing uncertainty and error in underlying exposure directly and surely reduces uncertainty in loss outcomes and in earnings volatility.
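The linear, proportional mechanics described above can be sketched in a few lines; the cession rate and loss figures below are illustrative, not drawn from any actual treaty.

```python
def quota_share(gross_losses, cession=0.33):
    """Split each gross loss proportionally: `cession` goes to the reinsurer,
    the remainder is retained net by the insurer."""
    ceded = [loss * cession for loss in gross_losses]
    retained = [loss * (1.0 - cession) for loss in gross_losses]
    return ceded, retained

# Illustrative gross losses. Because retention scales every loss by (1 - 0.33),
# retained-loss variance is (1 - 0.33)^2, about 45%, of gross-loss variance:
# this proportional damping is the earnings-smoothing effect described above.
losses = [1_000_000.0, 250_000.0]
ceded, retained = quota_share(losses)
```

The proportionality is also why reducing error in the underlying exposure estimate flows straight through: any correction to a gross figure corrects the ceded and retained figures by the same fixed fractions.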

The self-organizing map is a tried and tested algorithm that can streamline validation of the exposure the insurer cedes to its partner, the reinsurer. It reduces multi-dimensional data to two-dimensional surfaces by pre-selected rules and features, while learning, training, and self-correcting. This makes it well suited to ingesting substantial volumes of exposure and premium records, historical loss and claims data, rates, and indices, alongside qualitative data and narrative from brokers, government, and accounting agencies. A SOM can connect directly to exposure databases and data lakes. Self-organizing and self-learning layers process volumes of ingested data to create an exposure map of linear variables of business interest.


Figure 3: SOM consumes unstructured, multi-dimensional data. As a neural map, it creates neurons and assigns their properties, then reduces dimensionality and structures and organizes the data.

In our case, the aim is to vet, correct, and fill in sparse data on exposure variables of key business concern, such as insurable values, deductible amounts, and spatial coordinates of risks. The output from the SOM is then overlaid on the insurer's ceded exposure, and the procedure itself effectively executes validation, correction, and self-adjustment. As a result, the integrated system reduces uncertainty and error. Through the proportional nature of the quota share contract, this has an immediate multiplier effect in containing uncertainty and volatility in earnings.
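The SOM's learning step, finding the best matching unit (BMU) for each record and pulling the BMU and its grid neighbours toward that record, can be sketched minimally as below. The toy exposure records, grid size, and decay schedules are illustrative assumptions, not any firm's proprietary system.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=2.0):
    """Fit a small SOM: per step, find the BMU for one record, then pull the
    BMU and its grid neighbourhood toward the record."""
    n_rows, n_cols = grid
    dim = data.shape[1]
    weights = rng.random((n_rows, n_cols, dim))
    # 2-D grid coordinates of each node, used for neighbourhood distances.
    coords = np.stack(
        np.meshgrid(np.arange(n_rows), np.arange(n_cols), indexing="ij"), axis=-1
    )
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighbourhood radius
        x = data[rng.integers(len(data))]
        # BMU: the node whose weight vector is closest to the record.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood around the BMU on the 2-D grid.
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2.0 * sigma**2))[..., None]
        weights += lr * h * (x - weights)
    return weights

# Illustrative exposure records: insured value, deductible, lat, lon,
# all scaled to [0, 1] for the sketch.
records = rng.random((500, 4))
som = train_som(records)
```

Each trained node's weight vector is a prototype exposure profile, and the 2-D grid preserves neighbourhood structure, which is what makes the map usable for vetting and filling sparse exposure records against similar ones.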

There are various integration concepts capable of addressing and reconciling the intersection of these three trends described so far. We conclude by describing in light, non-technical terms one such concept in the form of a three-layered system.

Table: layers, components, and functions of the three-layer concept system.

A clear understanding of the architecture allows one to partition tasks, components, and layers across this multi-dimensional system.

In Layer One, a vendor LLM consumes unstructured, multidimensional numerical and qualitative data. Its task is to find and define the modification and feature vectors D from the unstructured data. In this division of labor, the LLM works out the best matching unit (BMU) mapping to the neural nodes and layers. Alpha is the trust-level scaler assigned to every document and unstructured data piece from which the LLM defines the feature vectors D. In my view, the modeler and practitioner are best placed to assign this value, so the human user keeps control of a critical risk control and mitigation variable.

In Layer Two, the proprietary SOM developed in-house by the firm retains control of the definition and mapping of risk factors and all business variables. It is essential that the business owner keep the SOM grid proprietary: it embodies the practitioner's market-making guidelines and philosophy of risk topology. We do not want to outsource this to the LLM. This is the business model. The core equation of the proprietary SOM, which modifies our exposure variable of interest, is transformable into the context of (re)insurance practices.

[Table: Equation term / Context in (re)insurance]

In Layer Three, the process becomes machine learning through a feedback loop, governed by a feedback learning rate and a retrospective learning function. The latter distributes corrections to correlated neural nodes and layers.

[Equation]
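For readers who want a concrete picture, the textbook SOM update step, with a learning rate and a Gaussian neighborhood that spreads corrections to correlated nodes, can be sketched as follows. This is the standard algorithm, not the firm's proprietary variant; the grid size, learning rate, and neighborhood width are illustrative assumptions.

```python
import numpy as np

def find_bmu(weights, x):
    """Return the grid index of the Best Matching Unit (BMU):
    the node whose weight vector is closest to input x."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

def som_update(weights, x, bmu, alpha=0.5, sigma=1.0):
    """One standard SOM update: every node moves toward x, scaled by
    the learning rate alpha and a Gaussian neighborhood centered on
    the BMU, so correlated nodes receive smaller corrections."""
    rows, cols = np.indices(weights.shape[:2])
    d2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    h = np.exp(-d2 / (2 * sigma ** 2))            # neighborhood function
    return weights + alpha * h[..., None] * (x - weights)

# Toy run: a 5x5 grid of 3-dimensional exposure feature vectors
rng = np.random.default_rng(0)
weights = rng.random((5, 5, 3))
x = np.array([0.2, 0.9, 0.4])                     # one feature vector
bmu = find_bmu(weights, x)
weights = som_update(weights, x, bmu)
```

Note how the neighborhood function plays the role described above: the correction is largest at the BMU and decays across correlated neighboring nodes.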

This is as far as we will go into the mechanics and mathematics of SOM algorithms; there is a large and thriving literature on the topic. For our purposes, this is sufficient to distribute the main tasks and components of the concept system.

Lastly, the modified variable of interest propagates to a catastrophe modeling system, such as Verisk Synergy Studio, where it enters the (re)insurance loss module and informs estimates of the reinsurer's treaty loss and the insurer's retained loss.

With this concept of integration, we preserve the utility of the business intelligence developed and refined within the firm, in the form of a private machine learning system, and couple it with the new power and capabilities of large language models.


Ivelin M. Zvezdov


Ivelin Zvezdov is a financial economist by training with experience in quantitative analysis and risk management for (re)insurance and natural catastrophe modeling, as well as fixed income and commodities trading. Since 2013, he has led the product development effort for AIR Worldwide's next-generation modeling platform.

The Key to Operationalizing Data Security

Healthcare and insurance organizations face mounting data security risks as AI adoption outpaces their ability to govern sensitive information.


Today's healthcare and insurance organizations manage vast amounts of highly sensitive data across an increasingly complex ecosystem. Clinical providers, insurers, and partners rely on shared data from diverse sources—including electronic medical records, imaging systems, and billing platforms.

However, many lack a clear understanding of where this data resides, who owns it, and how it's accessed. Without this foundation, they cannot confidently classify or protect sensitive information—leaving them vulnerable to compliance violations, regulatory fines, and legal risks. This lack of visibility also affects critical operations. In insurance workflows such as underwriting, decisions may be based on incomplete or inaccurate data, increasing risk—especially in high-stakes scenarios like mergers and acquisitions, where evaluating the data security posture is crucial.

These challenges are intensified by outdated, fragmented environments that are difficult to integrate and modernize. Sensitive data is scattered across disconnected systems and formats, leading to duplication, inconsistency, and reduced visibility. Meanwhile, excessive permissions remain a constant risk, increasing the likelihood of misuse, insider threats, and accidental exposure.

As organizations accelerate AI adoption, including generative AI, to enhance clinical and operational efficiency, they introduce a powerful new capability alongside a significant increase in risk. When sensitive data is used without proper governance and controls, exposure can grow quickly and unpredictably.

Operationalizing data security has long been a challenge. Despite significant investments, many organizations still lack complete visibility. Traditional tools that rely on regex, trainable classifiers, and other pattern-based methods identify only a small portion of sensitive data and often overwhelm teams with false positives.
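To make the limitation concrete (a generic sketch of pattern-based detection, not any specific vendor's tool), consider a simple regex-based SSN detector. The records and pattern below are illustrative assumptions:

```python
import re

# A typical pattern-based SSN detector: three digits, two, four.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

records = [
    "Patient SSN: 219-09-9999",           # true positive
    "Part number 219-09-9999 restocked",  # false positive: a SKU, not a person
    "SSN on file: 219 09 9999",           # false negative: unhyphenated format missed
]

hits = [bool(SSN_PATTERN.search(r)) for r in records]
# hits -> [True, True, False]: the regex cannot tell a part number
# from an SSN, and misses a reformatted SSN entirely.
```

Both failure modes require context, who created the record, what it is about, and how the number is used, which is exactly what pattern matching alone cannot supply.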

The good news is that modern data security governance platforms have moved beyond these limitations. Healthcare and insurance organizations should seek solutions that leverage context-aware AI for discovery, risk monitoring, and remediation—delivering outcomes such as:

Gain better visibility into data: To effectively protect sensitive information, providers first need to understand exactly what data they possess, where it is stored, who accesses it, and how it is shared.

Context-aware AI scans each data record thoroughly and can identify not only personally identifiable information (PII), protected health information (PHI), and payment card information (PCI), but also detect other important business records that other tools might overlook. It also recognizes duplicate or near-identical data and determines the category and subcategory for each record. For instance, it differentiates between a HIPAA authorization and a workers' compensation document. This detailed level of information helps security teams make smarter decisions when assigning classification labels, choosing where data should be stored, or setting access and retention rules.

Prevent sensitive data leaks: Security teams must ensure that employees and third-party contractors do not access data they shouldn't and verify that authorized users do not share it. They need a solution that enables them to contextually discover, monitor, and protect their sensitive data—not only at rest but also in transit—to prevent it from being shared with unauthorized users, personal email addresses, file-sharing applications, social media, or GenAI tools.

Enable GenAI without expanding the attack surface: Generative artificial intelligence (GenAI) is reshaping our world in real time. Tools like Microsoft Copilot, ChatGPT, Perplexity, and Claude are changing the way we make decisions, solve problems, create content, and interact both at work and at home. While they offer greater operational efficiency, better decision-making, and lower costs, they also introduce significant data security risks for insurers.

Providers need a solution that helps them detect when employees use unsanctioned or "shadow AI," so they can maintain control and protect their data. They also need to ensure that, no matter where data is stored, it is accessed by the correct identities, at the appropriate times, and for the intended purposes. A comprehensive data security and governance solution will allow them to set guardrails on which data should be blocked or redacted by groups and for each GenAI application, and help them curate data when training their own proprietary GenAI models.

Excel in regulatory compliance audits: Regulatory frameworks help healthcare and insurance companies reduce risks, implement processes, and maintain customer trust. However, mapping security controls to these frameworks can quickly become overwhelming. An additional challenge is that different regions may have vastly different data handling and classification requirements.

Organizations need a clear overview of their compliance status, tools to resolve issues, and peace of mind that they aren't one audit away from disaster. They should seek a solution that offers a dashboard displaying their current compliance status across all relevant regulations and security controls, as well as support for custom frameworks. Additionally, they require granular visibility into all data records that violate compliance, with the ability to remediate them directly within the platform.

Improve the effectiveness of current security tools: Tools like zero trust network access (ZTNA) and cloud access security broker (CASB) don't analyze data to determine whether to allow or block access. Instead, they enforce policies based on labels, so if those labels are wrong or missing, they could either expose sensitive information to unauthorized users or prevent access necessary for productivity. Context-aware AI and autonomous classification help ensure that sensitive data is labeled correctly and remains accessible only to authorized individuals.

Experience faster ROI, smarter policies, and less stress: Context-aware AI significantly accelerates the data discovery process and saves countless hours that administrators used to spend on tuning algorithms and chasing false positives. However, since new data is constantly generated and continually changing, capturing only a snapshot at a single point in time is insufficient.

Security teams can save time and enhance data protection by implementing a solution that continuously monitors data, flags risks, and automates remediation steps. Picking a provider that offers managed services can also reduce the workload on overstretched security teams by providing data security experts to assist with tasks such as deploying the platform and training their teams on it, building a data governance roadmap, mapping classification labels, and reporting on and tracking progress toward their goals.

Process Too Often Replaces Thinking

Insurance standardization has quietly shifted workforce behavior from critical thinking to process execution, a dangerous trend that AI will accelerate.


For years, the insurance industry has built systems to reduce dependence on the human factor. Processes, rules, controls, automation — all aimed at making decisions more stable and predictable.

But something changed.

We began building systems where people increasingly stop thinking — and start executing.

How We Got Here

Insurance has always sought to reduce uncertainty.

The natural response was standardization: more rules, more control, more structured processes.

Technology accelerated this:

  • more complex workflows
  • additional validation layers
  • stronger enforcement of consistency

The logic was sound.

But each new layer reduced the space for individual judgment.

The Subtle Shift

No one decided to "stop thinking."

It happened gradually.

Decisions became checklist validation.

Analysis turned into rule compliance.

Accountability shifted to "this is how the process works."

The system looks more controlled — but becomes less adaptable.

What This Looks Like

In underwriting, decisions are technically correct but lack context.

In claims, cases are handled "by the book" even when they don't fit it.

In distribution, process often outweighs common sense.

Nothing looks broken.

Everything appears to work.

That's the problem.

Why It Matters

This is not a crisis.

It's a slow degradation of decision quality.

Complex situations are reduced to templates.

Accountability becomes blurred.

Thinking is replaced by process adherence.

And Then Comes AI

AI performs best in structured environments.

It accelerates processes, reinforces standardization, and reduces variability.

But if thinking is already weakened,

AI doesn't fix it — it amplifies it.

And scales it.

The Hardest Part: Accountability

When decisions are made "by the process," they appear correct.

But the real question becomes harder to answer: Who is responsible for the outcome?

This may be one of the most critical shifts in the industry.

What Needs to Change

The problem is not processes.

It's when they start replacing thinking.

What matters:

  • separating where standardization is needed — and where judgment is essential
  • preserving space for thinking in critical decisions
  • not hiding complexity behind procedures

And accepting that not all decisions can be reduced to an algorithm.

A Practical Observation

Even in a highly standardized environment, a different approach is possible.

Keeping key decisions with people rather than fully transferring them to processes is not always easier — or faster in the short term.

But it allows us to stay closer to clients and respond more flexibly.

Conclusion

Insurance companies don't fail because they lack processes.

They fail when processes replace thinking.

And this happens far more quietly than any technological disruption.


Mykhailo Hrabovskyi


Mykhailo Hrabovskyi is a regional director with 17 years of experience in insurance, specializing in business development, innovation, and organizational leadership across Ukraine.

Forklift Accident Costs Rise Amid Nuclear Verdicts

Inadequate documentation turns routine forklift accidents into nuclear verdicts as social inflation drives claim severity to multimillion-dollar levels.


While forklift accidents have always carried serious consequences, the financial stakes have never been higher. A single incident involving a vendor or outside visitor can quickly escalate from a routine claim into a multimillion-dollar lawsuit, especially when the defendant cannot produce training records or maintenance logs.

As social inflation and nuclear verdicts push claim severity to new levels, it's critical that companies review their training, maintenance and safety protocols. Producers can help their clients understand their exposures and put practices in place before an accident forces the conversation.

The Rising Costs of Forklift Accidents

Forklift accidents represent a growing risk in many industries. In 2024, 84 workers died and over 25,000 were injured in the United States in accidents involving forklifts, order pickers, or platform trucks, according to the National Safety Council.

While forklift-related claims have remained steady at Pennsylvania Lumbermens Mutual Insurance over the past five years, claims severity is rising, with more serious injuries, rising litigation costs, and nuclear verdicts. Workers' compensation insurance typically covers the claim when the injured party is an employee. However, when the injured party is a vendor, delivery driver, or outside visitor, the claim falls under a business's general liability policy. And in today's environment, the associated costs can quickly mount.

In one claim example, a forklift operator did not see a delivery driver working with his tarp and struck him, resulting in serious injury. In two other cases, an operator ran over a bystander's ankle, while another operator hit an independent truck driver. Not only do these incidents alter the victims' lives, but the claims can be particularly costly, ranging from a couple of hundred thousand to potentially millions of dollars.

Two broader trends are exacerbating the severity of forklift-related claims. Social inflation is driving up claims costs, and nuclear verdicts, in which courts award increasingly large judgments, are far more common than ever before. Investigation costs, litigation expenses, and large court awards all compound the cost of a claim. Such costs can grow much larger when the plaintiff's attorney gets involved and requests training records and maintenance logs that do not exist.

Paper Trails Can Save or Sink Clients

Human error remains the number one cause of forklift accidents. However, these errors rarely happen in a vacuum and are usually a symptom of something missing upstream, typically a lack of formal, documented training or thorough maintenance practices.

OSHA notes that proper training can reduce forklift accidents by 70% and is most effective when it is consistently reinforced. OSHA Standard 1910.178, Powered Industrial Trucks, identifies specific, detailed requirements for industrial truck training. It includes classroom and hands-on training, during which operators must demonstrate a pre-operational check of the unit and competency in the tasks they would perform on the job.

Adequate training must also address the specific hazards present on the premises where an operator works. This might include establishing clearly defined loading and unloading zones and setting protocols for where vendor drivers and outside personnel should stand. In the case of the delivery driver struck while rolling his tarp, a documented standard that kept the driver clear of the unloading area may have prevented the accident.

A common problem in smaller businesses is assuming a forklift operator is competent. A business owner with one or two forklifts may believe their employees know how to operate the machines because they have been doing it for years. However, experience alone doesn't hold up in court. With no training records to show, the legal exposure and costs compound.

Another major cause of forklift accidents is inadequate or inconsistent maintenance. Informal arrangements with a mechanic down the street will not hold water in the event of a claim. As with operator training, documentation is critical. If maintenance is not documented as being performed by a certified mechanic and in accordance with manufacturer standards, from a liability standpoint, it may as well not have happened.

The Conversations Producers Should Have About Coverage and Risk Gaps

One of the most important steps an insurance professional can take is to identify coverage gaps their clients may not know exist. Many business owners list forklifts under general contents and business property when they should actually be scheduled under an inland marine policy. Inland marine insurance offers better protection for mobile equipment, such as forklifts and piggybacks. Rather than lumping it into a blanket coverage amount, the policy assigns a specific schedule value to each piece of equipment. It is especially important for operations with multiple locations, as an inland marine policy will cover the forklift even if it is moved to another location.

Beyond coverage, producers should equip clients with key risk-mitigation recommendations such as:

  • Implement a formal, documented training program that meets OSHA 1910.178 Powered Industrial Truck standards, covering both classroom instruction and hands-on training.
  • Establish defined loading and unloading zones with protocols for where forklifts operate and where outside personnel are not permitted to be.
  • Adopt a formalized maintenance program where maintenance is documented and performed by a certified technician according to manufacturer standards.
  • Establish protocols on how to respond in the event of an accident. This includes caring for the injured person, documenting the incident and calling the broker as soon as possible.
  • Consider technology as a secondary layer of protection. There are now telematics and collision-avoidance systems available for forklifts that can sense people and objects and alert the operator.

Forklift accidents are often preventable, and insurance professionals can help reduce operational risk and financial exposure. Across all areas, documentation is critical because, in litigation, if you did not document it, you did not do it.

Stop Defending, Start Anchoring

It's time to stop simply reacting to plaintiffs' counsel and to become more aggressive through data-driven counter-anchoring.


Brute force has been the corporate response to the normalization of nine-figure payouts—build taller insurance towers. But by 2026, we've reached the breaking point of that strategy. Adding more capacity is no longer a hedge; it's a target. Leaders who continue adhering to a "wait-and-see" strategy will likely hand over their negotiating power to plaintiffs' counsel. It's time to stop reacting and shift to a more aggressive tactic of data-driven litigation counter-anchoring, a tactical maneuver that uses historical benchmarks and hard modeling to ground a case's valuation.

The Psychology of the First Number

Refusing to name a number isn't a denial of liability; it's a tactical surrender. When we stay silent and treat it as a problem for later, we leave a vacuum that the plaintiff is only too happy to fill. This is the psychology of anchoring: the first number heard becomes the mental hook upon which all subsequent negotiations hang. If the opening bid is a $100 million "lottery ticket," even a successful defense that cuts it in half results in a $50 million disaster.

Counter-anchoring disrupts this by providing a grounded alternative before the plaintiff's number can take root. This isn't a guess; it is a calculated figure backed by historical industry benchmarks and internal safety data. By presenting a credible, data-backed valuation early, we offer juries a "safe harbor."

Most jurors are actually overwhelmed by the emotional volatility of nuclear-risk cases; they want to be fair, but they lack a yardstick. When the defense provides that yardstick—derived from logic rather than emotion—it grants the jury the permission they need to reject an inflated demand without feeling they are dismissing the injury itself.

Deployment: When to Anchor (and When to Pivot)

Counter-anchoring is most effective in "gray area" liability cases—scenarios where the question isn't if the company is responsible, but for how much. In these high-value moments, the goal is to cap the ceiling before it vanishes. By introducing a data-backed valuation early in mediation, you effectively narrow the range between "reasonable" and "astronomical."

However, data is a double-edged sword. The greatest risk in this strategy is the "Cold Corporation" trap. If your counter-anchor looks like a sterile spreadsheet in the face of a human tragedy, you don't just lose the argument; you lose the jury.

There is a razor-thin line between being "grounded in reality" and being "callous to suffering." The math must be the foundation, but the delivery must be human. If the jury perceives your data as a tool to devalue a life rather than a method to find a fair resolution, the anchor will drag your defense to the bottom.

When executed with empathy, speed becomes your primary weapon. By removing the "valuation fog" early in the process, counter-anchoring forces both sides to deal with reality. It strips away the performative inflation of the discovery phase and gets to the heart of the settlement, often shaving months—and millions—off the litigation lifecycle.

The 2026 Toolkit: Credibility Over Calculation

In 2026, a spreadsheet is not a strategy. While internal loss runs are necessary, they are rarely sufficient to move a jury. To make an anchor stick, you must look beyond internal data. A jury will instinctively view a company's own historical figures as self-serving; to achieve true "safe harbor" status, your numbers must be validated against industry cohorts. Credibility is built on external benchmarks—proving that your valuation isn't just what you want to pay but what the broader market defines as objective reality.

The most critical hurdle, however, is the communication gap. Raw modeling is the foundation, but the courtroom narrative trumps all. If you cannot translate a complex actuarial model into a story about fairness and community standards, the data will be dismissed as "corporate math." The numbers provide the boundaries, but the narrative provides the "why."

Finally, this strategy demands a collapse of the traditional corporate silo. We are seeing the rise of the general counsel/risk manager nexus. In the past, Risk bought the insurance, and Legal fought the claims. Today, these two must merge their datasets well before a summons is served. By aligning on valuation models during the underwriting phase, the defense is armed and ready on Day 1 to set the anchor before the ink on the complaint is even dry.

The Underwriting Reality: From Defense to Differentiation

Adopting a counter-anchoring strategy does more than win cases; it fundamentally shifts the power dynamic at the renewal table. In the 2026 market, excess underwriters are no longer just looking at loss history—they are scrutinizing a firm's "litigation maturity." When you can demonstrate a repeatable, data-backed method for suppressing social inflation, you move from being a commodity risk to a "preferred risk."

The conversation with underwriters changes the moment you move beyond passive risk transfer. Instead of simply presenting a tower of limits, you are presenting a proactive defense framework. Underwriters are tired of "blank check" litigation; showing them that you have the tools to anchor damages early provides them with something they value more than anything: predictability. By proving you can cap the ceiling of a potential nuclear verdict, you provide the actuarial certainty that justifies lower attachments or more competitive pricing.

The ultimate result is a stronger strategic partnership with your carrier. You aren't just buying paper to cover a potential disaster; you are demonstrating a sophisticated operational control that protects the carrier's capital as much as your own balance sheet. In an era of escalating awards, the companies that thrive will be those that prove they aren't just insured against the storm—they have the data to ground the lightning.

A Grounded Future

The era of "buying our way out" of litigation risk is over. In a 2026 landscape where $100 million is the new baseline for a nuclear verdict, silence on damages is a luxury no risk team can afford. By embracing data-driven counter-anchoring, general counsels and risk managers can reclaim the narrative, providing juries and mediators with a logical "safe harbor" before the emotional tide takes over.

Success now requires a fusion of math and empathy—a strategy where the data is the foundation, but the story is the house. Ultimately, those who anchor early won't just lower their payouts; they will redefine what it means to be a resilient, data-forward organization in an age of outsized expectations.

What Insurers Will Learn About Trust... the Hard Way

Banks lost customers' trust one automated interaction at a time. Insurers are making the same mistakes. 


In 1979, Gallup asked Americans how much confidence they had in banks. Sixty percent said a great deal or quite a lot. Banks ranked second out of nine institutions — behind only the church.

Today that number is 26%.

The collapse didn't happen because of one crisis or one bad actor. It happened over 40-plus years, one automated interaction at a time. ATMs that replaced tellers. Interactive voice response systems that replaced those ATMs. Digital channels that replaced the IVR. And now AI-driven decisions replacing the digital channel that replaced the thing that replaced the person who used to know your name.

Each wave came with a business case. And each wave, when it touched the moments that actually matter to customers — a confusing charge, a decision that needed explanation, the thing that went wrong at the worst possible time — quietly withdrew a small deposit from an account that doesn't show up on any balance sheet.

That account is trust. And trust, it turns out, is an organizational capability problem — not a sentiment problem.

The Moment That Reveals Everything

Here's what I observed working inside a global bank during those automation waves: the technology worked. The process was faster. The costs came down. And customers were fine — until they weren't.

When something went wrong, people didn't want a faster process. They wanted a person who understood the situation, had the authority to act on it, and demonstrated that the institution they'd trusted actually cared what happened to them. What they got, too often, was a system designed for the average case, handling something that wasn't average at all.

What struck me wasn't the technology failure. It was the organizational failure underneath it. The leaders driving automation were making efficiency decisions. Nobody was accountable for the capability question: Does this organization know how to rebuild trust when the automated system fails a real person? The answer, in most cases, was no — because that capability had never been built. It had been assumed.

That pattern — confusing an efficiency decision for a capability decision and discovering the difference too late — is what eroded four decades of public confidence in banking. And it's the pattern insurers are now repeating.

This Is Now Insurers' Problem

Insurers are making the same bet banks made, in the same places banks made it.

Claims. Denials. Coverage decisions. Underwriting. These are not commodity interactions. They are, almost by definition, the moments when a policyholder is most vulnerable — a damaged home, a health crisis, a business interruption, a death. They are the moments that test whether the relationship the insurer sold is real.

The industry is automating them anyway. With AI systems that make faster decisions, with chatbots that handle first contact, with models that assess claims before a human ever sees them. The business case is real. The efficiency gains are real. The risk is also real — and it is being systematically underestimated.

Here's what gets missed in most of these conversations: The risk isn't primarily in the technology. It's in the organizational capability gaps the technology exposes. Does this organization have the judgment infrastructure to know when a claim needs a human? Does it have the change leadership — not change management, but genuine leadership capability — to ensure that the people still in the room when it matters are empowered to act? Can it tell the difference between a process that's working and a relationship that's quietly eroding?

Most organizations can't answer yes to all three. Not yet.

What Happens to the Humans Left in the Room

Here is the part the business case doesn't model: what automation does to the agents and claims professionals who remain.

When an organization systematically automates the high-stakes moments, it doesn't just remove humans from those interactions. It degrades the humans who stay. Authority gets stripped. Judgment gets overridden. The agent or adjuster who once had the latitude to assess a situation and act on it becomes an escalation path for complaints the system couldn't handle — without the context, the tools, or the organizational backing to actually resolve them.

This matters because the agent is still the face of the insurer when the policyholder calls. The claims handler is still the voice on the other end when the denial needs explaining.

The data on this dynamic in financial services is stark. An Eagle Hill Consulting survey of more than 500 U.S. financial services employees found that 62% say their organizations have prioritized improving customer experience over employee experience — yet those same employees report that their own work experience directly affects their ability to serve clients. Dissatisfied employees are more than three times as likely to report that their negative work feelings reduce their willingness to help others.

Deloitte's research adds another dimension: When AI tools are introduced without careful design and change leadership, employees perceive their organizations as nearly two times less empathetic and human. That dynamic doesn't stay inside the organization. It travels. Policyholders feel it.

For insurers that rely on independent agents — professionals whose loyalty is earned, not owned — the stakes are even higher. Think of independent agents as the community bankers of insurance: For decades, they've translated corporate rules into human terms, sitting across the table from policyholders at the moments that matter most. J.D. Power's independent agent satisfaction research consistently finds that scores are dramatically higher — by hundreds of points — when carriers make agents easier to work with: faster quotes, transparent claims status, access to a human on complex cases. When AI becomes a black box agents can't explain to a policyholder, that advantage reverses. An agent who can't get a straight answer on a claim denial, or can't reach a human on an exception, doesn't complain to the carrier. They quietly shift their next piece of business elsewhere. The trust problem isn't just with policyholders. It runs through the entire distribution chain.

The Balance Sheet Doesn't Show the Problem — Until It Does

What makes this dynamic particularly dangerous is that trust erosion is invisible on a quarterly basis.

The banking sector learned this the hard way in early 2023. When Silicon Valley Bank failed, uninsured deposits left the broader banking system at the fastest rate recorded since the FDIC began tracking data in 1984 — an 8.2% quarterly decline, industry-wide, in a single quarter. The FDIC noted that SVB's deposits were "remarkably quick to run" precisely because they were concentrated among depositors whose trust, once shaken, had no friction to slow it.

Insurers don't face bank runs. But they face their own version: policy non-renewals, lapse rates, coverage migration, claims disputes that become regulatory attention, and the slow erosion of the trusted advisor position that has historically made insurance a relationship business.

The erosion rarely announces itself. It accumulates in policyholder satisfaction scores that drift, in agent feedback that doesn't make it up the chain, in claims handling data that gets read as operational variance rather than relationship signal. By the time it's visible on the balance sheet, the capability gap that caused it has been open for years.

This Is a Capability Problem. Capability Can Be Built.

The research on AI deployment in financial services confirms what the banking experience suggests. McKinsey finds that AI high performers are more than 1.5 times as likely to have changed their standard operating procedures and talent practices — not just deployed tools. MIT CISR shows that firms stuck in the pilot stage financially underperform their industries, while those that have embedded AI into their operating models significantly outperform.

What those numbers describe, underneath the data, is an organizational capability gap. The high performers aren't distinguished by better technology. They're distinguished by having built the mindsets, the skillsets, and the operating conditions — the governance, the decision rights, the human judgment infrastructure — that allow them to absorb what the technology makes possible without losing what made them trustworthy.

That's the real lesson from banking. The institutions that automated their way into a trust deficit weren't led by people who didn't care about customers. They were led by people who treated trust as a communications challenge rather than a capability one. They managed it. They didn't build it.

Insurers now face a choice that banks didn't get to make deliberately. Insurers can design AI deployments that preserve human judgment at the moments that matter most. They can build the change leadership and workforce capability that determines whether AI enhances the relationship or quietly erodes it. They can treat trust not as a sentiment to be managed after the fact but as an organizational capability to be built before the moment of truth arrives.

Or they can assume their situation is different from banking.

Banks assumed that, too.


Amy Radin


Amy Radin is a strategic advisor, keynote speaker, and Columbia University lecturer focused on why transformation succeeds or stalls in large, complex organizations. 

Drawing on senior leadership roles at Citi, American Express, and AXA, including one of the world’s first corporate chief innovation officer roles, she helps leaders build the capabilities required to absorb, scale, and sustain change.


College Wrestling's Lessons for AI Innovation

The just-concluded NCAA Wrestling Championships showcased the sort of compounding competitive advantage that can come from early success with AI.


As the Penn State wrestling team won yet another Division 1 title over the weekend--its 13th of the past 16 awarded--and did so in overwhelming fashion, I realized there is a deeper competitive advantage at play than exists even in other sports. 

College wrestling dominance requires a layer that goes beyond the normal advantages that come from having a great coach and a roster of superb college athletes. Penn State-level dominance in wrestling requires an additional, self-reinforcing factor--of the sort I think can come from early success with AI, as it builds and builds and builds on itself.

I'll explain. 

To understand that self-reinforcing factor, you need to look at the Penn State coach and at the coach whose record of 15 NCAA wrestling titles in 21 seasons Penn State is now approaching. 

The Penn State coach is Cael Sanderson, arguably the best college wrestler ever. He was undefeated in college, winning 159 matches, and won four NCAA individual titles. He also won a gold medal at the 2004 Olympics. 

The man he's chasing, Dan Gable, who coached the University of Iowa from 1976 through 1997, ranks even higher in the wrestling pantheon. He not only won two NCAA individual titles (in an era when freshmen weren't allowed in the tournament) but took the gold medal at the 1971 world championships and at the 1972 Olympics. In those tournaments, Gable won each of his six matches without giving up a point--a preposterous achievement given how scoring works in international wrestling.

Sanderson's and Gable's credentials are so impressive that they naturally attracted top recruits -- and started to build that self-reinforcing layer. 

Wrestling differs from most college sports because the very best tend to pursue international careers after graduating but have no professional league to join, as athletes in other sports do. Post-college wrestlers need a home. They need a wrestling room. And the best go to the best room, making it even better... and on and on we go.

Penn State has easily the best roster of collegiate talent at the moment -- six wrestlers made it to the NCAA finals among the 10 weight classes last weekend, tying the record, and four won titles. And Penn State has even better talent among the international wrestlers, who bring with them scores of NCAA titles and medals from world championships and the Olympics. In the finals of the 190-pound weight class at the U.S. trials for the 2024 Olympics, two wrestlers from that room went up against each other and had an epic battle -- which qualified as just another day in the life of Penn State wrestling.

The insurance industry should, I think, draw a lesson because AI can create a flywheel effect similar to what's happening at Penn State and what happened under Dan Gable at Iowa in the '80s and '90s. 

Adopting AI won't happen overnight. Using it is an unnatural act for many people, especially older ones, so you need to find ways to get people comfortable with it. You need to produce successes that you can use to evangelize about AI. You need to create rock stars who, while not at the level of a Sanderson or Gable, can attract talented people who want to take on more ambitious projects. You need to keep testing and feeling your way toward more aspirational business models, going beyond efficiencies to, perhaps, embedding insurance in other companies' sales processes or developing services that predict and prevent losses before they can occur.

In fact, early successes with AI can generate savings that you can pump into future projects, so you just keep accelerating. 

(I realize I made more or less this point about a flywheel in last week's commentary on Lemonade, but I think it's so important that it's worth reinforcing, and college wrestling turns out to be an even better example than Lemonade.) 

No competitive advantage lasts forever. Gable retired at age 48 -- coaches often mix it up with their wrestlers, and even an all-time great eventually wears down. The Iowa program, while still strong, has drifted in the decades since. Sanderson is now 46, and maybe he'll tire out one of these days, too. Meanwhile, David Taylor, a just-retired big name, has set up camp at Oklahoma State, which had four wrestlers make the NCAA finals. Three won. All four are freshmen. So another cauldron of a wrestling room may be taking shape.

But I'll bet any insurer would be happy with an advantage on AI of the sort that Sanderson has produced at Penn State and that Gable developed at Iowa before him.

Cheers,

Paul