
The Shocking Cost of Litigation Funding

A new analysis finds that third-party litigation funding could cost commercial insurers as much as $10 billion a year. Plus, teen drivers and the end of AOL.

Mid-August is supposed to be slow in the world of business, as people get their final bits of vacation in and prep for sending the kids back to school, but quite a number of items caught my eye this week. 

I'll start with an analysis of the costs of third-party litigation funding, which projects that insurers could pay as much as $5 billion a year directly to those who are investing in lawsuits against insurers. Including all the indirect costs associated with fighting those lawsuits, insurers could be out as much as $10 billion a year, the research finds. Those numbers are scary, and much higher than I, at least, would have guessed.

Then I'll share an item on how parents are increasingly getting some control over their teen drivers and their at-times-ill-considered behavior, which could bring down claims while advancing everyone's goal: fewer accidents.

Finally, I'll reflect on the end of AOL's dial-up internet service and what it says about technology life cycles, such as the generative AI revolution that is just moving out of its Wild West phase and into what I think of as the land grab era. That reflection will also let me tell my story about how an inability to find a modular phone jack in the Berkshires in the early 1990s almost kept me from filing an exclusive on IBM that led the Wall Street Journal the next morning. 

Third-Party Litigation Funding

An article in Claims Management quotes "an actuary speaking at the Casualty Actuarial Society’s Seminar on Reinsurance [as saying] the top end of a range of estimates of direct costs that will be paid to funders by casualty insurers is $25 billion over a five-year period (2024-2028)."

The article also quotes another actuary who came up with a smaller, but still frightening, number. He ran 720 scenarios and found that "the five-year cost is most likely to fall between $13 billion and $18 billion (the 25th to 75th percentile), with a mid-range average coming in at around $15.6 billion for the five years from 2024-2028."

This actuary noted that the payments to those funding the litigation could snowball: The funds could let law firms advertise more, bring in more cases, and fight claims longer.

He also cited a study that "puts total costs, including indirect costs, at roughly double the amount of direct costs.... If this were true, the high end of the range—now $50 billion—could add 7.8 points to the commercial liability industry loss ratios for each of the next five years, with the most likely scenario (50th percentile) falling between 4.5 and 5.5 loss ratio points."
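
If you want to check the arithmetic yourself, here's a quick back-of-the-envelope sketch in Python. It uses only the figures quoted above; the implied premium base is my derivation for illustration, not a number from the study.

```python
# Back-of-the-envelope check of the loss-ratio math quoted above.
# Dollar figures come from the article; the implied premium base
# is derived from them, not a published number.

direct_cost_high_end = 25e9                      # $25B direct costs, 2024-2028 (high end)
total_cost_high_end = 2 * direct_cost_high_end   # indirect costs roughly double it: $50B
years = 5

annual_added_losses = total_cost_high_end / years   # $10B per year
loss_ratio_points = 7.8                             # quoted impact per year

# Loss-ratio points added = added losses / earned premium, so the
# figures imply an annual commercial liability premium base of roughly:
implied_premium = annual_added_losses / (loss_ratio_points / 100)

print(f"Annual added losses: ${annual_added_losses / 1e9:.0f}B")
print(f"Implied premium base: ${implied_premium / 1e9:.0f}B per year")
# -> roughly $128B a year, which appears to be the scale of the
#    commercial liability market the estimate assumes.
```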

Those are scary numbers—ones that not only hurt insurers directly but will also filter into much higher rates for everyone. 

(If you're interested in how the industry can combat third-party litigation funding, you might check out the work being done by our colleagues at the Triple-I, including this piece on the need to increase transparency about who's putting up the money and about what suits they're funding.)

Safer Teen Drivers

When my older daughter, Shannon, was 16, my wife couldn't sleep one night and went off to watch some TV. CNN ran a long piece about dangerous rain, concluding with footage of a car driving through a stream of water crossing a road. The announcer intoned, "Whatever you do, don't do this." My wife was suddenly wide awake. "That's my car!" she said to herself.

It was, too. She ran the video back and confirmed that her low-set sports car was, in fact, being driven through a flash flood. Shannon made it fine, but our new driver was caught.

She explained the next day that she'd been driving to her early-morning horseback-riding lesson and desperately didn't want to be late. She only knew the backroads route to the barn, so she decided to press on, somehow not wondering why a camera crew was set up by the side of the road. 

She made it, but her mom and I delivered a stern lecture with all kinds of threats attached. And at age 31, Shannon has not only not had an accident but hasn't even had a moving violation. The same goes for her 29-year-old sister.

I've joked for years that teaching a teen to drive is easy. You just need to have a CNN crew follow them around.

That's sort of what's happening through telematics, such as dashboard cameras, as this survey from Nationwide highlights. 

There is a long way to go: The survey finds that 96% of parents think dashcams are valuable but that only 26% of teens currently use them. Still, I see encouraging signs. A recent report by Cambridge Mobile Telematics found that the use of games and social media while driving—a problem especially acute among young drivers—has plunged. And traffic deaths on U.S. highways have now fallen for 12 consecutive quarters and were at their lowest in six years in the first quarter of 2025. The number of deaths—8,055—was still ghastly, but was down 6.3% from a year earlier.

Here's to progress.

The End of AOL Dial-up

To most people, the news about AOL was probably that it still offered dial-up, not that it was finally ending the service. But technology has a "long tail," which is why 163,000 Americans were still using dial-up in 2023, why insurers and other businesses still have to deal with programs written in COBOL (a language designed in 1959), etc.

The surprise about the long tail made me think it's worth spending a minute on the various stages of a technology megatrend like internet access or, say, generative AI, because there are some other surprises, too, that matter today.

The main one is the overinvestment that frequently happens. People wonder how companies building large language models can justify the hundreds of billions of dollars A YEAR that they're spending just on AI infrastructure. And the answer is... they can't. Not in the aggregate. 

But each of the big players can justify its spending individually because the potential win is mind-boggling. Yes, Meta may be getting carried away by offering $100 million signing bonuses to individual AI researchers and by spending some $70 billion on AI infrastructure this year to overcome what's generally seen as a lagging position, but Meta will generate trillions of dollars in market cap if the bet pays off.

In general, a tech megatrend goes like this: 

  • The Wild West
  • The land grab
  • The near-monopoly
  • The long tail

With AOL, the Wild West was the mid-1990s, when everyone wanted to get to the internet but few were quite sure how to do it. Dial-up was a known way to connect to a computer, but AOL faced loads of competitors. Meanwhile, cable modems were becoming a thing, while DSL was also claiming to be the high-speed solution. WiFi was in its infancy, and Bluetooth was being positioned as a better wireless solution. 

Dial-up was pretty quickly outpaced by cable modems as a technology but still won a mass audience, setting AOL up nicely for the land grab.

The land grab was when AOL blanketed the Earth with CDs that gave people immediate access to AOL's service (and played fast-and-loose enough with the accounting for its marketing expenses that it eventually paid a fine to the Securities and Exchange Commission). AOL won the land grab and was fortunate enough to merge with TimeWarner at a wildly inflated stock market valuation for AOL before internet access moved to its next phase, where AOL lost big time.

That next phase, the near-monopoly, actually didn't happen as quickly or decisively as it did with, say, IBM-compatible personal computers, Google's search engine, or Facebook in the early days of social media. AT&T and Verizon needed many years to emerge as the dominant players. But it did become clear quickly that AOL's "walled garden" approach was a loser. AOL wanted users to sign in through its dial-up and then never leave its site; users were to do all their banking, shopping, etc. through AOL. That approach has worked for Apple, but it became clear in the early 2000s that users were going to branch out across the internet as companies figured out how to make their sites more accessible. Whatever was going to emerge as the near-monopoly, AOL wasn't going to be part of it.

AOL had a market cap of $164 billion when it merged with TimeWarner in 2000, but fell so far that it was sold to private equity in 2021 for just $5 billion, even though stock market indices had roughly tripled in the interim, and that price included Yahoo, another former high flyer, and Verizon's ad tech business. 

That sale just left the long tail, some 35 years after AOL was founded. 

When you apply my model to AI, I'd say we're toward the end of the Wild West phase. We're not likely to again see something like Sam Altman getting fired as CEO of OpenAI, then almost immediately rehired. 

We're starting to move into the land grab, even though the technology isn't fully sorted out. Depending on how quickly a new battleground takes shape for agentic AI, we might see the technology sorting out in the next couple of years. The sorting out is when you'll see the big shakeout in the stock market valuations as companies that spent many tens of billions of dollars on AI are identified as losers. (My bet is that Tesla will be the first to lose the hundreds of billions of dollars of market cap linked to its AI aspirations, but we'll see.)

I realize this piece has run far longer than normal, but here's my story about a near-catastrophe with dial-up:

Some friends had recently bought a house in the Berkshires, and I joined them for a weekend there in the early '90s. I had reported a scoop on IBM that I knew would lead the WSJ on Monday and wrote it Sunday afternoon on the crummy little TRS-80s that we still used in those days. We then realized that their home's phones were hard-wired into the walls, so there was no way to plug in a cord.

With deadline approaching, we drove to the nearest town, Huntington, Mass., but couldn't immediately find a solution. We finally went into a store so tiny that we could see the curtain that separated the store from the room in the back where the proprietor lived. She had a modular phone. I said I'd pay her $20 if she let me make a one-minute long-distance call to New York. She looked at me like I had two heads, but she agreed.

The problem wasn't over. Her phone was set rather high in the wall, and the cord that connected the wall jack to the phone was only about four inches long. To plug the cord into my computer, I had to hold it above my head. That meant feeling my way around the keyboard as I dialed the number for the computer in New York, waited for the exchange of tones, and then typed blind as I inputted the code that gave my computer access to the WSJ system. 

It took a few tries -- as the proprietor watched anxiously, wondering what I was doing to her long-distance bill -- but it finally worked.

I hated dial-up. I'm glad it's finally on its last legs.

Cheers,

Paul

The Need to Speed Up Underwriting

Speed-driven consumer expectations are forcing life insurers to abandon legacy underwriting and adopt digital solutions.

With mobile ordering, digital banking, and payments, technology has moved life into hyperdrive, and this expectation of speed trickles down to all other areas, including more traditional services like insurance.

According to LIMRA, the U.S. individual life insurance premium set a record last year, with total new annualized premium increasing by 3%, but the sector is often criticized for its slow pace and limited digital presence. Manual and lengthy processes add to the natural complexity of underwriting. To gain a competitive edge, build customer loyalty and trust, and reach underinsured demographics, insurers must recognize one truth – underwriting is in urgent need of transformation.

Meeting modern expectations with not-so-modern technology

Too many insurance companies still rely on legacy technology to complete core functions. This technology cannot keep up with the demands of a modern industry, as it lacks the flexibility and scalability that are essential for growth. As insurtech continues to raise the bar with modern solutions, traditional insurance organizations are falling behind. This gap is monumental and will continue to grow if left unchecked. Legacy workflows make underwriting an arduous process and limit the speed of customer support and policy fulfillment. This may force customers to abandon their transactions altogether or to look elsewhere, damaging a firm's bottom line.

Technology to support insurance enters an efficient, client-focused era

According to the 2025 Insurance Barometer, 74 million Americans need life insurance, yet 22% of individuals who do not own insurance said they were not sure how much they needed or what to buy, and 41% of U.S. adults said they are "somewhat or not at all knowledgeable about life insurance."

With the current market uncertainty, consumers feel pressure to protect not only their own financial security but also that of their loved ones, shielding them from future financial struggles. Yet many may be unsure of how to move forward.

This is where agents, advisors, and distributors play a crucial role in educating prospects and customers. However, the science of underwriting is not straightforward, and finding and securing the right policy for a customer can be complex. Many factors come into play, and the lengthy back-and-forth can leave end customers more perplexed and, even worse, cause them to abandon the process. Digitized solutions can support intermediaries in navigating the complexities of underwriting by telling customers early in the cycle which policies they are likely to be approved for, taking the process from weeks to minutes. AI-driven predictive models can also help distributors and agents identify the policy recommendations best suited to each individual's needs, further streamlining the process and meeting customer demands for greater personalization. AI also enhances transparency in underwriting by evaluating decisions in real time as data is gathered. This accelerates approvals while making the process feel more personalized and intuitive, helping build trust and improve customer satisfaction.
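
To make the idea concrete, here is a minimal sketch of an early-cycle eligibility pre-check of the kind described above. The products, fields, and thresholds are invented for illustration; a real implementation would draw on a carrier's actual underwriting rules.

```python
# Hedged sketch of an early-cycle eligibility pre-check: a rules-based
# screen that tells an agent, before full underwriting, which products
# an applicant is likely to qualify for. Products and thresholds are
# hypothetical.

PRODUCTS = {
    "term_life_20": {"max_age": 60, "max_bmi": 35, "smoker_ok": True},
    "preferred_whole_life": {"max_age": 50, "max_bmi": 30, "smoker_ok": False},
}

def likely_eligible(applicant: dict) -> list[str]:
    """Return the products this applicant is likely to be approved for."""
    matches = []
    for name, rules in PRODUCTS.items():
        if (applicant["age"] <= rules["max_age"]
                and applicant["bmi"] <= rules["max_bmi"]
                and (rules["smoker_ok"] or not applicant["smoker"])):
            matches.append(name)
    return matches

print(likely_eligible({"age": 45, "bmi": 28, "smoker": True}))
# -> ['term_life_20']: the agent can set expectations in minutes,
#    not after weeks of back-and-forth.
```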

To enable AI's full potential, robust, structured, and unified data plays a critical role. While carriers and distributors host a mountain of client data, it is often not used strategically to drive business growth. For example, customers looking to adjust life insurance or annuity products to meet their evolving needs often face the frustrating task of starting from scratch and re-entering their information. This pain point can be alleviated by digitally led solutions that facilitate automation, allowing forms and documents to be auto-filled from a customer's past profile and stored data, ultimately saving customers time and sparing them repetitive tasks.
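
As a simple illustration of that autofill idea, here is a minimal sketch, assuming a stored customer profile keyed by the same field names the new form uses. The field names and profile store are hypothetical.

```python
# Minimal sketch of profile-driven form autofill. A stored profile
# fills in what the carrier already knows; only genuinely new fields
# are left for the customer.

stored_profile = {
    "name": "Jane Doe",
    "dob": "1985-04-12",
    "address": "123 Main St",
    "beneficiary": "John Doe",
}

def prefill(form_fields: list[str], profile: dict[str, str]) -> dict[str, str]:
    """Return the form with known values filled in and gaps left blank,
    instead of making the customer start from scratch."""
    return {field: profile.get(field, "") for field in form_fields}

annuity_form = ["name", "dob", "address", "beneficiary", "premium_amount"]
print(prefill(annuity_form, stored_profile))
# Only 'premium_amount' is left for the customer to enter.
```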

Mind the gap – personalized approach to reach underinsured demographics

Data can also play a key role in helping the life and annuities (L&A) sector reach new customers. According to the 2025 Barometer survey, 43% of women indicate "they need (or need more) life insurance." The gender disparity in life insurance coverage persists, and while there has been progress over the years, closing the gap should remain a priority.

With no two financial journeys the same, each consumer's financial decisions and trigger points on what they need and why they need it are going to differ. The L&A sector has the opportunity to meet the diverse needs of various demographics. By using predictive analytics, firms can leverage new insights to help create tailored products as well as personalized distribution strategies that align with the specific needs and expectations of new market segments.

Beyond customer expectations – the talent crunch

It's no secret that the insurance and annuities industry is facing a challenge from an aging workforce, and this is prompting executives to concentrate on attracting new talent to fill the gap. The task is harder than it might seem, however, as the sector struggles with a branding problem and is often perceived as outdated, primarily because of its underuse of innovative technology. It is crucial to move past this stereotype and modernize the industry to avoid falling behind, especially considering that there are approximately 70 million members of Generation Z in the U.S. alone. As they enter the workforce, this cohort has a strong demand for technology.

As digital natives, they expect modern solutions and are unlikely to compromise on these requirements. Recognizing this reality and making a concerted effort to advance digital initiatives should be a priority for executive leaders. Equipping current teams with digital tools not only enhances efficiency but also serves as a strong selling point for attracting talent that is focused on modern solutions.

Technology to accelerate underwriting needs

Overall, traditional processes, including underwriting and customer service interactions, are going to inhibit growth if they are not improved to fit the demands of today's speed-driven world. However, technology is quickly reshaping this space by accelerating workflows such as underwriting, distribution, and customer service using predictive analysis and automation.

Digital transformation is going to be the central factor affecting insurance firms' success in the coming years. And much like the expectations of today's end customers, the key word is speed, as those that embrace technology the quickest will take the lead.


Katie Kahl

Katie Kahl is chief product officer of iPipeline.

Kahl joined iPipeline from Applied Systems, where she was most recently senior vice president of product management. She began her career in product management at Ceridian Dayforce.

She earned a bachelor’s degree from the University of Minnesota.

3 Key Steps for Climate Risks

83% of insurers view predictive analytics as "very critical" for the future of underwriting, but just 27% say they have the necessary capabilities. 

Not long ago, insurers' principal interest in tracking climate-related and environmental, social, and governance (ESG) metrics was in satisfying compliance-related reporting requirements. Insurers relied on historical data from limited numbers of sources to do so.

While regulatory reporting remains a driver, that's changing fast as predictive analytics evolve from a compliance exercise to a strategic risk-management tool. Climate disasters will cost an estimated $328 billion this year, of which about 40% will be insured, and those numbers are expected to rise at about the same 6% annual clip as in the recent past.
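
Compounding those figures shows how quickly insured losses mount. A minimal sketch, using only the numbers quoted above:

```python
# Projection of insured climate losses from the figures in the text:
# $328B in total losses this year, ~40% insured, ~6% annual growth.

total_losses = 328e9
insured_share = 0.40
annual_growth = 0.06

insured = total_losses * insured_share
print(f"Year 0 insured losses: ${insured / 1e9:.0f}B")
for year in range(1, 6):
    insured *= 1 + annual_growth
    print(f"Year {year}: ${insured / 1e9:.0f}B")
# ~$131B insured today, approaching ~$176B within five years
# if the 6% trend holds.
```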

So it's no surprise that 83% of insurance executives view predictive analytics as "very critical" for the future of underwriting. And it's cause for concern that just 27% of property and casualty (P&C) carriers say they have the ability to leverage predictive analytics in their underwriting models.

Predictive-analytics models can reduce risk exposure, identify insurable risks, and sharpen pricing. That combination can help boost profitability by avoiding losses and by insuring risks that would have been turned away in a less sophisticated era.

Investment-side benefits are at least as important as underwriting gains

The benefits of advanced analytics in assessing climate and ESG-related risks extend to insurers' investment portfolios. Without analytics touching investing as well as underwriting, insurers can find themselves exposed on both the claims and investment-portfolio fronts. For instance, as we enter hurricane season, an insurer with P&C exposure as well as municipal bond holdings in coastal Florida could suffer a double hit after a storm sweeps through.

The question the roughly three-quarters of insurers who still lack climate and ESG-related analytics should be asking is not whether it makes sense to establish such capabilities but rather how to go about it. The playbook will differ depending on an insurer's scale, market distribution, and underwriting and investment portfolios. But there are three fundamental steps to consider.

First, predictive analytics is about data, and while generative AI may be able to work from the unstructured masses, predictive analytics and the emerging agentic AI that delves into the numbers need clean, high-quality data. In both cases, developing cloud-based repositories of rationalized data is essential. The data-analysis process typically leads back to applications, many of which can be trimmed down and consolidated – a bonus.

Second, predictive analytics needs tons of data, and from many sources. In the climate-risk realm, external weather and geospatial data may need to be merged with internal geographic risk factors, claims and payment data, economic data, demographic data, and so on.

Querying such combinations enables hyperlocal predictive analysis and individualized risk scores for property-tailored pricing – for example, based on the age, location, and materials of a structure that's prone to storm surge or wildfire or based on a farm's crop selection, water usage and, by extension, its resilience against drought. There's a customer-service benefit here also, because the insurer can demonstrate precisely why a policy has been priced as it is, boosting transparency and trust.
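
As an illustration of what such an individualized score might look like once the data is merged, here is a minimal sketch. The fields, weights, and scoring are assumptions for illustration, not an actuarial model.

```python
# Illustrative sketch of an individualized property risk score built
# from merged internal and external data. Weights are hypothetical.

def property_risk_score(prop: dict, hazard: dict) -> float:
    """Combine structure attributes (internal data) with hyperlocal
    hazard data (external sources) into a 0-100 score."""
    score = 0.0
    score += min(prop["age_years"], 50) * 0.4            # older structures score higher
    score += {"wood": 15, "masonry": 5, "steel": 0}[prop["materials"]]
    score += hazard["storm_surge_prob"] * 40             # hyperlocal surge signal
    score += hazard["wildfire_prob"] * 40                # hyperlocal wildfire signal
    return min(score, 100.0)

house = {"age_years": 35, "materials": "wood"}
location = {"storm_surge_prob": 0.30, "wildfire_prob": 0.05}
print(f"Risk score: {property_risk_score(house, location):.1f}")
# Because every input is explicit, the insurer can show the customer
# exactly why the policy is priced as it is.
```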

Getting there requires assimilating data into data lakes, ideally through systems integrated with enterprise resource planning (ERP) that funnel third-party data as well as the insurer's own business data into the repositories powering predictive-analytics capabilities on both the underwriting and investment sides of the house – in addition to providing for detailed sustainability tracking and reporting.

Third, predictive analytics is also about people. Given the power of predictive-analytics models, underwriters in particular may feel threatened by these models' introduction and proliferation. The maturation and increasing sophistication of AI in predictive analytics will only exacerbate that. So, involve underwriters early. Foster a rapport between analytics specialists and underwriters to make sure analytics enhances rather than hinders underwriter workflow. Show underwriters how predictive analytics can help them improve portfolio profitability, then monitor and encourage their use of these new tools.

Predictive analytics for climate and ESG risks are already out there

Some of the world's biggest insurers are leading the way with predictive analytics for climate and ESG risks. Aon incorporates chronic as well as acute risks in climate modeling to assess commercial customers' risks down to the asset level, covering freeze risk, extreme precipitation, flooding, extreme heat, drought, and more.

Allianz's Climate Adaptation and Resilience Service (CARes) platform includes a self-service tool to translate climate risks into financial and physical loss metrics at portfolio and location levels. On the investment side, its Sustainability Insights Engine (SusIE) embeds climate-relevant data into its portfolio decision-making process.

Also on the investment side, AXA IM analytics provides ESG scores across its asset classes for use by portfolio managers and analysts companywide, and AXA XL's in-house specialists bring in data from catastrophe modeling firms to understand and predict climate risks on both the underwriting and investment sides of the house.

Swiss Re's ESG risk assessment tool ranks potential transactions based on risks and even gives a direct recommendation to abstain. It draws on proprietary data organized by country and sector, along with a company and project watchlist, and brings in external data from Rystad, SBTi, and others.

These giants are among the pioneers of new approaches to bringing climate and ESG advanced analytics into the cores of their businesses. Others must now follow. Given the stakes of foggy risk assessments in a world where climate disasters are increasingly common, what was once a question of reporting is now one of survival. The first step is to gain command of your data, and there's no time to waste.

How to Fix Behavioral Health Coverage

Behavioral health lacks the operational infrastructure of other specialties, creating costly friction that threatens network sustainability.

Health plans today are under pressure to deliver on behavioral health parity, not just in theory, but in practice. Yet ask any payer executive what area causes the most administrative friction, and behavioral health will almost certainly top the list. From opaque admission justifications to inconsistent treatment documentation, psychiatric care continues to be an operational outlier.

That mismatch between need and efficiency is becoming a crisis. Behavioral health units are closing at an alarming rate, not because demand is down but because operating them has become too difficult. At the same time, health plans face escalating costs and rising complaints from members who struggle to access timely, high-quality mental health care.

It's easy to assume this friction stems from stigma or lack of will. But the truth is more structural. Behavioral health lacks the operational scaffolding that underpins other areas of medicine, namely, standardized ways to measure patient acuity and track outcomes. Without that foundation, it's nearly impossible to make the behavioral health ecosystem function smoothly for payers, providers, or patients.

Why Behavioral Health Lags Behind

In cardiology, oncology, and orthopedics, providers can point to lab results, imaging, or a consistent scale to justify their clinical decisions. A patient with a certain ejection fraction or lesion size will almost universally qualify for a given procedure or medication. This data-driven standardization enables payers to make faster, more consistent determinations about coverage and necessity.

Psychiatry, by contrast, operates in a far more subjective realm. Clinicians rely on clinical judgment, observations, and interviews to determine whether a patient meets criteria for inpatient care or continuing treatment. But without shared acuity benchmarks or universally accepted scoring tools, the same patient might receive very different assessments depending on who's evaluating them.

This subjectivity creates a perfect storm for prior authorization disputes. Payers aren't necessarily denying care out of bias. They simply don't have the tools they need to confidently approve it. A recent study from the U.S. Government Accountability Office found that commercial insurers are more likely to deny inpatient behavioral health stays than comparable medical ones, in large part due to documentation gaps and ambiguity around clinical justification.

The Cost of Operational Friction

This ambiguity ripples downstream in expensive and disruptive ways. First, it drives up administrative costs for both payers and providers, as clinical teams go back and forth submitting new notes, clarifying documentation, or appealing denials. 

Second, it damages member experience. Patients and families often don't understand why behavioral health claims take longer to process, or why care is harder to access, and end up frustrated with both the insurer and the healthcare system as a whole.

Third, the lack of standardized data undermines care quality. Without consistent acuity scoring and outcome tracking, providers can't easily benchmark performance or spot systemic issues. Payers, in turn, struggle to evaluate network adequacy or support high-performing facilities. This makes it harder to intervene early in cases of treatment-resistant conditions or to prevent readmissions, which are key drivers of both cost and patient harm.

Over time, these inefficiencies erode the financial viability of inpatient psychiatric care. Hospitals and behavioral health units, especially those operating on thin margins, face pressure to cut beds or shut down altogether. This shrinking of the network only compounds access problems for patients and headaches for payers trying to maintain parity compliance.

A Better Way Forward

The good news is that this isn't uncharted territory. Other areas of medicine have faced similar challenges and found ways to overcome them. Oncology, for example, is historically a highly variable field and has benefited greatly from the development of staging protocols, molecular diagnostics, and treatment pathways that tie directly to insurance approval criteria. Orthopedics, once plagued by inconsistent documentation, now uses tools like the Oxford Hip Score or WOMAC index to evaluate treatment needs and outcomes. These frameworks didn't emerge overnight, but they've transformed how care is delivered and reimbursed.

Behavioral health can follow suit. By adopting standardized acuity measurement tools and tracking progress using evidence-based outcome scales, psychiatric facilities can provide payers with the clarity they need to authorize care more efficiently and predictably. This doesn't mean reducing complex human conditions to a single number, but rather creating operational language that clinicians and insurers share.

I've seen firsthand how applying structured measurement and documentation practices can dramatically reduce friction in behavioral health claims. Facilities that track acuity and outcomes consistently are not only more likely to secure authorization quickly, but also more likely to see improvements in patient engagement, length of stay, and readmission rates. Payers benefit, too, with lower administrative costs, fewer appeals, and better visibility into network performance.

Toward a More Sustainable System

Fixing the operational gap in behavioral health isn't just about reducing claim denials. It's about making the system sustainable for everyone involved. Standardized measurement can help preserve inpatient units, strengthen networks, and ensure patients receive care at the right intensity, in the right setting, at the right time.

We're at an inflection point. Behavioral health is finally being recognized as central to overall health. But unless we modernize the operational infrastructure that supports it, we risk repeating the mistakes of the past, underfunding care, alienating patients, and burning out providers.

It's time to bring behavioral health up to operational parity. Not just because it's the fair thing to do, but because it's the smart thing to do, for payers, providers, and the millions of people who depend on this care.


Jim Szyperski

Jim Szyperski is co-founder and CEO of Acuity Behavioral Health.

He is focused on transforming how mental healthcare is delivered and measured. Prior to Acuity, he held executive roles at Proem Behavioral Health, Power Generation Services, and WebTone Technologies, among others. He has also served on the boards and advisory councils of several technology companies and nonprofits.

He holds a degree in business administration from the University of North Carolina at Chapel Hill.

The Future of AI-Driven Risk Mitigation 

AI is revolutionizing insurance risk management, enabling personalized assessments and loss prevention.

AI and generative AI are revolutionizing risk management, replacing standardized, one-size-fits-all models with personalized risk assessments and precisely tailored mitigation strategies. This transformation empowers businesses to make data-driven decisions, fostering a more predictive and proactive approach to risk. Insurance organizations that embrace AI-driven solutions in risk management will not only enhance their resilience but also gain a competitive edge in an increasingly dynamic market. 

This article explores key foundational strategies for risk mitigation, emphasizing the integration of AI and agentic AI to enhance workflow efficiency, mitigate hazards and enable proactive decision-making.

Aggregating Data and Seamless Integrations

To truly realize the power of "predict and prevent," insurers must first establish a strong data foundation. This involves bringing together enormous and diverse pools of data – everything from prior claims and customer data to real-time IoT sensor data and external environmental data – into a single coherent and shareable repository. 

Modern architectures like the data lakehouse provide the scalability and flexibility such an endeavor requires, while allowing structured and unstructured data to coexist. Flexibility to consolidate this data centrally and integrate it with sophisticated analytics tools and third-party data sources is also key. This can be achieved through partnerships with specialized data providers that offer pre-packaged risk intelligence or by building sophisticated homegrown integration solutions, so that actionable insights can be extracted and communicated efficiently within the organization. 

The ability of AI to aggregate and interpret information from increasingly heterogeneous sources (text, images, video, sensor inputs, geospatial data, biometric data, etc.) will produce ever more inclusive and contextually aware risk estimates.

The Dual Approach to Future-Proof Risk Management

To mitigate risks by addressing hazards in real time, organizations must implement a well-structured central repository to store rules, manage versions, track changes, and facilitate reuse across applications or processes. Logical grouping—whether by process stage or product line—enhances navigability and ensures streamlined rule deployment. 

A centralized rules engine powered by agentic AI can enable carriers to dynamically adapt straight-through processing workflows, allowing for real-time risk mitigation and proactive decision-making. With access to central data, agentic analytics can monitor live streams of IoT sensor data, telematics, and customer behavior to detect risk patterns as they develop. For instance, by analyzing claims data anomalies in combination with external signals (such as weather and market trends), agentic platforms can alert insurers to high-risk conditions before they translate into large-scale losses.
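
A minimal sketch of that real-time pattern detection: flag a sensor reading that deviates sharply from its recent history, and escalate when an external signal, such as a storm warning, is also active. The thresholds and the alerting step are illustrative assumptions.

```python
# Toy streaming anomaly detector for one IoT sensor: a simple z-score
# test over a rolling window, escalated by an external risk signal.

from collections import deque
from statistics import mean, stdev

window: deque[float] = deque(maxlen=50)   # recent readings from one sensor

def on_reading(value: float, storm_warning: bool) -> None:
    if len(window) >= 10:
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(value - mu) > 3 * sigma:   # z-score anomaly
            level = "HIGH" if storm_warning else "ELEVATED"
            print(f"{level} risk: reading {value:.1f} vs recent mean {mu:.1f}")
            # An agentic platform would trigger a workflow here:
            # notify the insured, open an inspection task, adjust exposure.
    window.append(value)

for v in [10.1, 10.3, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 10.2, 9.9, 25.0]:
    on_reading(v, storm_warning=True)
```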

While agentic AI analytics detect anomalies, simulate exposure, and trigger actions in real time, centralized rules ensure uniform decision-making across underwriting, claims, compliance, and catastrophe response. Together, they reduce fraud, improve regulatory adherence, and enhance operational resilience in an increasingly complex risk landscape.

Ethical AI in Risk Mitigation

The integration of AI comes with complex ethical considerations. Bias in data can result in unfair or discriminatory outcomes, creating significant reputational risks. Privacy and data security remain critical concerns, given the extensive use of personal information in AI-driven systems. The opaque nature of certain reasoning models, particularly their hidden layers, raises challenges around explainability and trust.

To ensure responsible adoption, organizations must establish clear accountability measures and conduct regular audits of AI systems. Concerns about AI fabricating responses further underscore the importance of verification methods, such as retrieval augmented generation (RAG), to ground outputs in factual data. 
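
A toy sketch of the grounding idea behind RAG follows: an answer is accepted only if it is supported by retrieved source text. The keyword-overlap retrieval and support test are deliberately simplified stand-ins for the embeddings and models a production system would use.

```python
# Simplified illustration of grounding a generated answer in retrieved
# documents. Real systems use vector search and an LLM; the overlap
# heuristics here are placeholders.

documents = [
    "Policy 123 covers water damage from burst pipes up to $50,000.",
    "Policy 123 excludes flood damage from external surface water.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared words with the query; keep the top k."""
    score = lambda d: len(set(query.lower().split()) & set(d.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def grounded(answer: str, sources: list[str]) -> bool:
    """Accept the answer only if most of its terms appear in a source."""
    terms = set(answer.lower().split())
    return any(len(terms & set(s.lower().split())) / len(terms) > 0.5 for s in sources)

sources = retrieve("does policy 123 cover burst pipes", documents)
print(grounded("policy 123 covers burst pipes", sources))    # True: supported
print(grounded("earthquake damage is covered", sources))     # False: flag for review
```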

Additionally, discussions around AI bias in underwriting highlight the necessity of objective criteria and human oversight to prevent discriminatory practices. By integrating actuarial oversight into the AI lifecycle, organizations can create a more transparent, accountable, and resilient risk mitigation strategy. Actuaries play a vital role in contextualizing model-generated predictions within real-world constraints, protecting against overfitting, bias, and unforeseeable consequences.

Looking Ahead

The shift to a "predict and prevent" model of risk management is an ongoing journey, propelled by both opportunity and challenge. As catastrophic loss prediction using AI becomes increasingly sophisticated, the industry has a rare opportunity to put more effective preventative solutions in place, potentially reducing the impact of major events and creating a positive effect on the property market. 

In spite of risks like data bias and job displacement, AI's role as an intelligent assistant is indisputable—augmenting human knowledge instead of replacing it. The insurance sector is at a tipping point, having the technology and strategic intent to create a more robust tomorrow. 

Effective use of AI and GenAI for pre-loss risk prevention demands an integrated approach—one that balances technological advancement with a commitment to responsible development, ethical application, and continuing learning. As these technologies develop, their impact in helping to protect assets, save lives, and strengthen organizations and communities will only increase, heralding a new era where loss prevention is no longer a dream but a reality.

Insurers Need Better Supplier Access Management

Legacy B2B identity systems create security vulnerabilities and operational bottlenecks for insurers managing digital suppliers.

The insurance industry is built on trust, scale, and history. But legacy systems and decades-old infrastructure are slowing insurers as they navigate increasingly digital supplier relationships. External administrators, legal service providers, and managed IT vendors all depend on digital access, yet many insurers still rely on identity systems built for internal employees.

These systems were not designed for today's demands. As insurers lean more on third parties to deliver services, the inability to manage supplier access efficiently becomes a source of risk, delay, and noncompliance.

The Hidden Cost of Supplier Friction

Suppliers are critical to daily insurance operations, but their user experience is often overlooked. Onboarding can take days. Most insurers still manage supplier access with a patchwork of fragmented tools: email requests, ticketing systems, and one-off provisioning scripts. These workflows are slow, inconsistent, and heavily reliant on institutional knowledge, and as external relationships grow more complex, the patchwork leads to errors, delays, and blind spots in access visibility. The bottlenecks do not just frustrate external teams; they delay policy servicing, claims handling, and tech rollouts.

Loose identity verification also opens the door to impersonation and fraud, especially when outdated processes rely on email requests and human approvals.

Inadequate Delegated Access

Insurance workflows often mean insurers must manage multiple external users or teams across various systems, be they claims adjusters, legal representatives, or IT support. If they cannot autonomously manage access rights, they are forced to rely on centralized IT intervention, creating bottlenecks and increasing the risk of human error.

Not unlike the challenge insurers face with delegated access for policyholders and their proxies, suppliers frequently operate under a hierarchy of users that need different levels of access. Without well-designed, role-based access controls, these relationships can introduce vulnerabilities and inefficiencies.

Security Vulnerabilities

The increase in third-party integrations has expanded insurers' attack surface. Poorly managed suppliers can become inadvertent conduits for cyberattacks. High-profile incidents, such as the Infosys McCamish Systems breach, highlight how external access points can be a stepping stone to compromising millions of sensitive records.

Bad actors are highly adept at exploiting fragmented identity and access management (IAM) systems, pivoting between digital portals and human-assisted channels like call centers. If a supplier's access is not continuously monitored and intelligently verified, attackers can escalate privileges or move laterally across systems unnoticed.

Regulatory Compliance Challenges

Insurance providers operate under growing regulatory mandates such as GDPR, CCPA, PIPEDA, and industry-specific compliance requirements. When suppliers interact with sensitive customer data, complexity around consent, data minimization, auditability, and breach reporting is inevitable.

When suppliers are not fully integrated into an IAM system, insurers battle to track which external users accessed what data and when, facing a lack of visibility that can endanger the business.

Operational Inefficiencies

Many insurers still rely on manual processes to create and remove supplier accounts. This increases the chance of human error and makes it harder to ensure that access is removed when a contract ends.

This mirrors a broader insurance industry challenge: outdated customer directories that aren't regularly audited or verified. Just as insurers must revisit and clean up dormant policyholder records, they must also manage the supplier identity lifecycle continuously.

How B2B IAM Addresses These Challenges

Modern B2B IAM solutions are designed to handle the scale, complexity, and operational nuance of insurance-related industries. Key capabilities include:

Federated Identity and Single Sign-On (SSO)

In the insurance sector, third-party agents, brokers, and service providers often need access to internal portals for claims processing, underwriting tools, or policy management systems. Federated identity enables these external users to authenticate using their own trusted identity providers, reducing the need for duplicated credentials and minimizing overprovisioning. Combined with single sign-on (SSO), federated identity ensures seamless and secure access while maintaining strict access controls aligned with compliance requirements.
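
At its core, the trust decision looks something like the following minimal sketch, in which the insurer keeps a registry of each partner's identity provider and accepts only tokens issued by the registered IdP. The token structure and registry here are hypothetical; real deployments validate signed OIDC or SAML assertions with a proper library.

```python
# Minimal sketch of the federated-identity trust decision: accept an
# external user only when their token's issuer matches the identity
# provider registered for the organization they claim to represent.

TRUSTED_IDPS = {
    "acme-brokerage": "https://login.acme-brokerage.example",
    "claims-partner": "https://idp.claims-partner.example",
}

def accept_federated_login(token: dict) -> bool:
    """Check that the token was issued by the partner's registered IdP."""
    expected_issuer = TRUSTED_IDPS.get(token.get("org", ""))
    return expected_issuer is not None and token.get("iss") == expected_issuer

print(accept_federated_login(
    {"org": "acme-brokerage", "iss": "https://login.acme-brokerage.example",
     "sub": "broker-42"}))                                  # True: registered IdP
print(accept_federated_login(
    {"org": "acme-brokerage", "iss": "https://evil.example"}))   # False: rejected
```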

Self-Service Onboarding and Automated Lifecycle Management

Modern B2B IAM solutions automate the entire supplier onboarding process. Self-service portals, identity proofing, and pre-configured workflows simplify access provisioning and apply consistent verification requirements to all users. Access is automatically revoked when contracts or relationships end, reducing human error and limiting risk.

Delegation

B2B IAM enables suppliers to manage their own users and access rights within strict, pre-defined boundaries through delegated user management. This model solves a key scalability problem: Insurers cannot realistically handle every external access request themselves. By allowing trusted third parties to manage their internal teams, insurers reduce operational overhead without giving up control. Governance and security policies still apply, and the process avoids the bottlenecks of central IT intervention.

Adaptive Authentication and Risk-Based Access

Advanced B2B IAM systems enforce strong, continuous authentication, including multi-factor authentication (MFA) and adaptive access based on behavioral analytics, along with real-time monitoring and detection of anomalies, such as access from high-risk geographies or at odd times.

Fine-Grained Authorization

Most insurers rely on role-based access control (RBAC) as the foundation for managing access. It assigns permissions based on a user's function and is effective for internal teams. But in supplier ecosystems, roles alone are not enough.

As external relationships become more complex, attribute-based access control (ABAC) helps refine access using context like geography, business unit, or risk level. Even then, a key dimension remains missing: who the user represents.

Relationship-based access control (ReBAC) fills that gap. It evaluates the connection between the user and the insurer. A supplier working on behalf of Insurer A should only see data tied to that relationship, even if they have the same role and attributes as a supplier representing Insurer B.

RBAC, ABAC, and ReBAC are not competing models. Together, they provide the layered control insurers need to manage external access precisely, reduce exposure, and support growing third-party networks without added risk.
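
A minimal sketch of how the three layers might combine in practice, with an illustrative data model rather than any particular product's API:

```python
# Layered access decision: RBAC grants the base permission, ABAC
# narrows it by context, and ReBAC ties the result to who the user
# represents. The data model is an assumption for illustration.

ROLE_PERMS = {"claims_adjuster": {"read_claims", "update_claims"}}

def can_access(user: dict, action: str, resource: dict) -> bool:
    # RBAC: the user's role must grant the action at all.
    if action not in ROLE_PERMS.get(user["role"], set()):
        return False
    # ABAC: contextual attributes narrow the grant (here, region).
    if resource["region"] not in user["allowed_regions"]:
        return False
    # ReBAC: the resource must belong to the relationship the user
    # operates under, i.e. who they represent, not just who they are.
    return resource["insurer"] == user["represents"]

adjuster = {"role": "claims_adjuster", "allowed_regions": {"US-East"},
            "represents": "Insurer A"}
claim_a = {"region": "US-East", "insurer": "Insurer A"}
claim_b = {"region": "US-East", "insurer": "Insurer B"}

print(can_access(adjuster, "read_claims", claim_a))   # True
print(can_access(adjuster, "read_claims", claim_b))   # False: same role and
                                                      # attributes, wrong relationship
```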

Audit Logging and Compliance Reporting

To meet regulatory standards, these solutions provide detailed audit trails, consent monitoring, and policy-enforced access controls. Every supplier activity is logged, verifiable, and auditable.

Managing Supplier Relationships Securely and Efficiently

Insurers will always depend on external partners to deliver digital services, so the challenges of supplier integration will become more complex and riskier. Whether the threat comes from dormant accounts, weak verification standards, or inefficient workflows, the consequences can be dire: data breaches, regulatory fines, and lost trust.

B2B IAM is becoming a critical capability in managing these supplier relationships securely and efficiently. It improves security and compliance while enhancing agility, UX, and operational alignment. In a digital insurance market, entities prioritizing flexible, risk-aware identity strategies will mitigate threats and set themselves apart as trusted, modern partners.

Salvage Fraud: The Overlooked Risk

Salvage fraud quietly drains value as employees misappropriate damaged goods meant for disposal.

In the world of insurance and asset-heavy industries, all eyes are typically on big-ticket items — policy claims, premium collection, and operational risk. But lurking in the shadows is a low-profile, high-impact problem that's quietly draining value from balance sheets: salvage fraud.

While not as sensational as staged collisions or inflated invoices, salvage fraud is often systemic and undetected, especially in businesses that deal with high volumes of returns, damaged goods, or warranty claims.

What Is Salvage Fraud?

Salvage fraud occurs when damaged, returned, or written-off items — which still hold residual value — are misappropriated by the very people entrusted with disposing of them. These items are supposed to be sold, auctioned, or destroyed, but instead, they may be diverted, undervalued, or sold off informally for personal gain.

This isn't limited to the insurance industry. It affects:

  • Retailers dealing with high-volume returns
  • Manufacturers handling warranty parts
  • Construction and auto industries with equipment write-offs
  • Logistics firms responsible for damaged inventory

Real Example: A Car Quietly Written Off

In one investigation, our team was engaged by the head office of a major auto dealership chain. A customer had returned a vehicle twice, citing minor defects. Instead of undergoing a proper evaluation or being repaired for resale, the car found its way into the customer service department, where it was quietly written off the books.

Despite being in almost perfect condition, it disappeared from inventory without a trace — no auction, no resale, no recovery. Because it was "damaged" and outside the core business focus, no one followed up. And no value was reclaimed.

The "Destroy to Dispose" Loophole

In many industries, parts and products that are returned under warranty or recall are required to be destroyed. This is often done to prevent damaged, outdated, or brand-sensitive items from entering the resale market at a discount — something manufacturers want to avoid to protect brand integrity and pricing power.

But here's the problem: When a company expects the item to be destroyed — not sold — and has already budgeted for disposal costs, it becomes even easier for internal actors to write off the item and quietly divert it.

If no one expects revenue from a destroyed item, there's no red flag when nothing is received. That's the loophole.

The Typical Scheme

Here's how salvage fraud often plays out:

  1. An item is returned or deemed unfit for regular sale.
  2. A manager or staff member marks it for destruction or disposal.
  3. Instead, the item is sold privately, often to a friend or contact.
  4. Because no proceeds were expected anyway, no questions are asked.

Over time, this creates a low-risk, high-reward opportunity for fraud, especially when oversight is weak or nonexistent.

Why This Matters

Each case might seem minor, but over time, the cumulative losses can be substantial. Even worse, salvage fraud creates a culture of dishonesty and entitlement — and it often spreads if left unchecked.

Organizations face:

  • Undocumented financial losses
  • Inventory and audit discrepancies
  • Reputational damage if the fraud is exposed
  • Legal risk if regulatory or tax reporting is affected

How to Prevent Salvage Fraud

  1. Track salvage value – Keep records even for items marked for destruction.
  2. Implement clear chain-of-custody protocols – No single person should control the disposal process.
  3. Audit salvage and disposal activities – Especially those showing zero revenue; a minimal audit sketch follows this list.
  4. Use third-party disposal or auction services – With transparent records and verification.
  5. Educate staff – Make clear that "damaged" or "returned" does not mean "valueless" or "free to take."
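
Here is the minimal sketch of the audit in step 3, assuming a hypothetical disposal log: flag any item written off with zero proceeds and no destruction certificate. The log fields are invented for illustration.

```python
# Audit sketch: zero proceeds are only acceptable when destruction is
# verified; anything else deserves a second look. The log format is
# hypothetical.

disposal_log = [
    {"item": "returned sedan", "status": "written_off",
     "proceeds": 0.0, "destruction_cert": None},
    {"item": "warranty ECU batch", "status": "destroyed",
     "proceeds": 0.0, "destruction_cert": "DC-1182"},
    {"item": "damaged forklift", "status": "auctioned",
     "proceeds": 4200.0, "destruction_cert": None},
]

def flag_suspicious(log: list[dict]) -> list[dict]:
    """Flag disposals with no documented proceeds and no verified
    destruction, the exact gap the 'destroy to dispose' loophole exploits."""
    return [e for e in log
            if e["proceeds"] == 0.0 and not e["destruction_cert"]]

for entry in flag_suspicious(disposal_log):
    print(f"Review: {entry['item']} ({entry['status']})")
# -> Review: returned sedan (written_off)
```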

Final Thought

Salvage fraud thrives in blind spots. Whether it's a slightly damaged vehicle, a batch of electronics returned under warranty, or parts earmarked for destruction — if it has value, it carries risk.

It's time for insurers, manufacturers, and retailers to rethink how they handle salvage. Those items written off the books may be more valuable — and more vulnerable — than anyone realizes.

Fraud doesn't always wear a mask. Sometimes, it just wears a company badge and walks out the side door with a "worthless" return.

Actuarial Transformation Hinges On Human Elements

While many insurers attempt actuarial transformation, most fail by prioritizing technical solutions over organizational communication.

It's easy to find an insurer working on actuarial transformation. It's less easy to find one that's getting it right.

That's because moving actuarial data and the key tools of the trade to the cloud is much less a technical challenge than it is a political and organizational one, and that distinction remains surprisingly opaque among insurers.

Yes, actuarial transformation is inherently technical, involving Python, SQL, Databricks, Dataiku, and the like. But success and the business benefit it can bring depend most on human communication at both the executive and implementation-team levels.

The unique context of actuarial transformation puts a premium on communication, because differing motivations and shared turf are a recipe for friction between the actuarial and IT teams, and that friction can stall projects.

Actuaries want control over their tools, because their deep understanding of those tools' strengths and weaknesses is critical to producing precise risk models and actuarial processes. The IT team wants to control all of an insurer's IT infrastructure for a host of reasons, not least to prevent security and maintenance problems that can stem from systems that actuaries are more than capable of developing on their own.

Any major IT project needs buy-in from both IT and the business it will serve. The change management involved in fostering such acceptance is exceptionally important in actuarial transformation, because actuaries are at least as comfortable with data as the IT team, albeit with different aims.

Start at the executive level

The transformation project's executive sponsor must engage in the earliest days of an actuarial-transformation effort. This can't be an arbitrary assignment based on title or convenience. The executive sponsor must have real skin in the game tied to the transformation's success or failure and the right level of formal and informal authority within the organization. Their main role involves aligning executive peers as well as stakeholders on the actuarial and IT teams and others whose support will advance the project (or whose resistance could impede it). That happens in two ways.

First, the executive sponsor builds and presents the business case for why actuarial transformation is worth doing, with messaging tailored to the top of the organization as well as to the actuarial and IT staff who will be doing the transformation work. Those messages include clarifying process changes and technology upgrades; providing estimates on the time, effort, and money involved in the project; and explaining the expected benefits. Those benefits may include revenue increases from more accurate risk assessment and pricing, higher productivity, improved actuarial creativity resulting in new products, and better positioning for future growth.

Second, the executive sponsor clearly conveys what's needed to achieve that business case. That can include the processes to be automated, the nature and timing of the implementation effort, the outside resources to be engaged, the budget and the cost centers bearing it, and the KPIs against which success will be measured. To effectively inform, persuade, and cajole peers and stakeholders, the executive sponsor must be engaged throughout preplanning, readiness assessments, and execution plans.

Next, engage with the actuarial and IT teams

The actuarial and IT teams then need special attention. Actuaries must see the value of automating and codifying processes they've been carrying in their heads and executing through custom combinations of algorithms in various data-manipulation runs. They must be convinced that the new solutions are at least as reliable as their old standbys.

IT staff involved in developing the systems enabling actuarial transformation must be comfortable with the actuarial team's processes and approaches and be able to explain the IT team's needs effectively to them. Not every developer has the combination of IT skills, business knowledge, and communication abilities required. Importantly, the chosen few should be embedded with the actuarial function, where they can interact, collaborate, and build trust.

That trust will only deepen as deliverables accrue and the actuarial team sees the value of transformation. The trust building goes both ways: Actuaries should respect the IT organization's best practices for scope definition, documentation, version control, testing, and deployment, especially as they gain skills in Python, SQL querying, and other tools that let them develop their own cloud-based solutions.

Like so many technology initiatives, successful actuarial transformation hinges on the human element. Foremost, it takes the right executive sponsor. That person must be heavily invested in the transformation and have the formal and informal authority to make sure the work gets done and delivers the right business outcomes.

How Insurtechs Must Build Ethical AI

Insurtechs embracing AI gain competitive advantages but face mounting ethical, security and compliance challenges.

While artificial intelligence brings massive opportunities, it also introduces serious ethical and compliance risks, especially when it comes to data privacy, algorithmic bias, and cybersecurity.

According to McKinsey, generative AI alone could contribute up to $4.4 trillion in annual global profits. Studies show that AI can improve productivity by as much as 66%. In insurance, chatbots and virtual assistants are now offering 24/7 support, cutting wait times and improving customer satisfaction. Behind the scenes, algorithms are analyzing vast datasets to better assess risk and detect fraud. Claims that used to take days can now be processed in hours – sometimes even minutes – with AI-powered damage assessments and automation. Internally, insurtechs are using AI to improve onboarding, tailor employee training, and streamline HR processes.

These are powerful shifts. But they also raise tough questions. How do we ensure the data being used is handled ethically? How do we guard against discrimination in pricing or hiring algorithms? What happens when a customer is denied coverage based on an opaque machine-learning model?

These aren't theoretical concerns. According to KPMG's most recent CEO Outlook Survey, more than half of executives pointed to ethics as one of their top worries around AI. That's especially true in insurance, where fairness and transparency are essential; people's financial futures often depend on it.

That means insurtechs need to think carefully about how AI decisions are made and how they're explained. Customers deserve to know when a bot or model is involved in determining their premiums or claim payouts. Employees must understand how performance data is being tracked and used. And both groups need reassurance that their data is being protected.
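
What might that transparency look like in practice? One approach, sketched below as a hypothetical Python structure, is to capture every automated decision as a record with customer-readable reason codes and a list of the data fields the model saw, so a human can explain the outcome later. The record layout and field names are illustrative assumptions, not an industry standard.

```python
# Hypothetical sketch: log every model-driven premium or claim decision
# as a structured record so it can be explained to the customer and
# audited later. Field names are illustrative, not an industry standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelDecisionRecord:
    decision_id: str
    model_name: str           # which model or bot was involved
    model_version: str        # exact version, for audit and rollback
    decision: str             # e.g., "premium_quoted", "claim_approved"
    reason_codes: list[str]   # top factors, in customer-readable terms
    inputs_used: list[str]    # data fields the model saw, for privacy review
    human_reviewed: bool = False
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a quote decision a service agent could later explain.
record = ModelDecisionRecord(
    decision_id="Q-2025-0042",
    model_name="motor_pricing",
    model_version="3.1.0",
    decision="premium_quoted",
    reason_codes=["annual_mileage_high", "vehicle_group_12"],
    inputs_used=["age", "postcode", "annual_mileage", "vehicle_group"],
)
```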

Beyond ethics, there's a growing threat from cyberattacks. With AI systems increasingly integrated into core operations, the attack surface is expanding. Insurtechs handle enormous volumes of personal and financial data – prime targets for cybercriminals. The stakes are high, especially under regulations like GDPR, which carry heavy penalties for non-compliance.

So, how can companies stay ahead of both the innovation curve and the compliance curve?

One answer lies in global standards, specifically ISO 42001 and ISO 27001. ISO 42001 is a new framework designed to help organizations govern AI responsibly. It offers guidance on managing risk, ensuring transparency, preventing bias, and embedding ethical practices across the AI lifecycle. For companies building and deploying AI systems, it's a powerful tool to operationalize responsible innovation.

Meanwhile, ISO 27001 focuses on information security. It's been around longer but is just as critical – especially for insurtechs handling sensitive customer data. This standard helps organizations identify and treat information security risks, implement appropriate safeguards, and respond to incidents quickly and effectively.

Used together, these standards provide a strong foundation for AI governance and cybersecurity. But they're not plug-and-play. Each insurtech faces a unique set of risks, serves different use cases, and operates under distinct expectations. Rather than treating compliance as a checkbox exercise, the best approach is to align it with the practical needs of the business and its customers.
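
One way to start that alignment is a lightweight internal register that maps each AI use case to its risks, the safeguard addressing each, and a named owner. The Python sketch below is a hypothetical illustration: the "framework" tags gesture at the broad themes of the two standards rather than citing specific clauses.

```python
# Hypothetical sketch of an internal AI risk register. The framework
# tags name broad themes of ISO 42001 (AI governance) and ISO 27001
# (information security); they are illustrative, not clause citations.

ai_risk_register = [
    {
        "use_case": "claims damage assessment",
        "risk": "biased outcomes for certain vehicle types or regions",
        "framework": "ISO 42001 theme: bias and fairness",
        "safeguard": "quarterly disparate-impact review of model outputs",
        "owner": "head of data science",
    },
    {
        "use_case": "customer chatbot",
        "risk": "personal data retained in conversation logs",
        "framework": "ISO 27001 theme: data protection",
        "safeguard": "redact identifiers; enforce 30-day log retention",
        "owner": "information security officer",
    },
]

def unowned_risks(register):
    """Flag entries with no accountable owner; governance fails
    quietly when nobody is named."""
    return [r["use_case"] for r in register if not r.get("owner")]

print(unowned_risks(ai_risk_register))  # [] when every risk has an owner
```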

As AI becomes more deeply woven into the fabric of the insurance industry, the expectations around its responsible use will only increase. Regulatory scrutiny will intensify. Cyber risks will grow more sophisticated. And stakeholders – from customers to regulators to employees – will demand greater transparency, fairness, and accountability.

Insurtechs that recognize this shift and take steps now will be better positioned for long-term success. Clear governance, ethical AI practices, and alignment with international standards like ISO 42001 and ISO 27001 are no longer just best practices. They're quickly becoming competitive requirements.

International Broking: Beyond the Transactional View

As multinational programs grow more complex, international broking demands system coordination over traditional workflow management.

A graphic of two hands clasping and shaking across a sky with clouds and the outline of the continents

People still talk about international broking as if it were a chain of tasks. Quotes come in, documents go out, placements get issued, premiums are tracked. The workflow looks neat on a dashboard. But that view hides the real substance of the work. 

What actually holds a multinational insurance program together is not the list of deliverables. It's the structure that lets those deliverables happen at all. International broking is not a service pipeline. It is a system under pressure, held together by coordination, interpretation, and rhythm. And when it's misunderstood, things don't break loudly. They slowly fall out of alignment.

The broker operates at the center of this system. Not just as a go-between but as the one who sees how each part interacts with the others. The client has expectations. The local broker has constraints. The insurer has timelines. The reinsurer has their own frame. None of them see the full picture, but the broker is meant to keep the whole thing coherent. That requires more than following process. It requires shaping a space where tension doesn't turn into friction and where deviation doesn't spiral into drift.

What makes this system fragile is not a lack of competence. It's the volume of silent stress it absorbs. Every actor is operating under pressure. The client's HQ wants clarity. The local office wants autonomy. The underwriter wants data formatted their way. Meanwhile, no one controls the whole structure. And yet they all expect it to work. This is why coordination in international broking isn't just about sending reminders. It's about sustaining a system of partial control through alignment, trust, and timing.

The hardest part isn't the volume of work. It's the fact that the system is always on the edge of losing coherence. A delay in one country causes a shift in the client's internal deadlines. A misunderstood term in one binder raises doubts across the whole program. The structure reacts. And unless someone reads those reactions in time, they compound. One quote late becomes two. One frustrated broker becomes disengaged. One unclear update becomes a signal that the broker isn't in control. And trust doesn't vanish in a crisis. It thins over time.

Tools can help track movement, but they don't create meaning. You can have a perfect table of milestones and still lose control of the system. That's because what actually drives program success is interpretive. Knowing when a silence matters. Knowing when to push and when to wait. Understanding why one office is late, not just that it is. These things aren't captured in KPIs. But they're what determines whether the program holds.

International coordination is often mistaken for middle management. In reality, it's the most exposed and structurally important role in the system. It's where tension accumulates, where ambiguity lands, and where recovery starts. When a system begins to drift, it's not the tools that correct it. It's the person who notices that the tone of updates has shifted. That someone who used to respond in one hour now takes three days. That there's no escalation, just ambient hesitation. And who knows how to restore rhythm before anything explodes.

Thinking of broking as infrastructure changes how performance is judged. It's no longer about just being fast or responsive. It's about whether the structure can absorb a shock. Whether it holds coherence after a failed claim. Whether actors remain aligned under pressure. You can only judge that by seeing how the system reacts when something goes wrong. Does the team coordinate to protect the narrative, or does it collapse into blame and silence? The answer tells you more than any placement stat.

As international programs become more complex, coordination becomes less about movement and more about design. Not design in a visual sense, but in how the system is built to bear weight. Do timelines match the actual capacity of the people involved? Are updates structured so different actors interpret them the same way? Is there a shared sense of how long silence is acceptable before intervention? These are structural questions. And if they're not answered in advance, they get answered in crisis.

The broker's role is not just to respond to requests. It is to hold the shape of the program while it moves through pressure. That includes building relationships that absorb short-term failure. That includes managing time not just in days but in signals. And that includes preserving a narrative that makes the program make sense to all sides. Because when that narrative is lost, even efficient delivery feels random. And when it feels random, the client starts asking questions they used to trust you to handle.

What international broking needs is not more dashboards. It needs a clearer understanding of what holds the system together. And it's not speed or volume. It's coherence. The broker is not just a deliverer of coverage. They are the person responsible for keeping a structure alive. And when that role is taken seriously, the system starts to respond differently. People work with you, not around you. Delays become manageable. Tension becomes signal, not threat.

This is not a romantic view. It's the practical reality of running complex programs across jurisdictions, languages, legal codes, and internal client politics. The work doesn't fail loudly. It frays. And the only way to keep it together is to see the job for what it really is. Strategic control through structural care. Quiet pressure management. And design in motion.


Arthur Michelino


Arthur Michelino is head of international coordination at OLEA Insurance Solutions Africa.

Michelino previously worked at Diot-Siaci as an international coordinator for key accounts. He began his career at Willis Towers Watson (formerly Gras Savoye), implementing international programs for the mid-market segment.