
How to Respond to a Post-Claim Premium Increase

Switching carriers after a claim might cost more than the premium increase you're trying to avoid.

When alleged errors or breaches of duties give rise to professional or management liability claims, renewal premium increases are likely to follow. Policyholders often push their brokers to remarket the account in pursuit of more competitive pricing. The question is: Should insurance programs be remarketed to avoid any post-loss premium increase? 

The answer is often "no," as long as the carrier is acting in reasonably good faith and the increase is reasonable. Remarketing in that situation is often penny wise, pound foolish.

Here's why:

If the carrier has accepted the tendered claim, it is demonstrating good faith (particularly if it's a claim that falls in a gray area). The fact that it is willing to offer renewal terms is further testament to that good faith. It's uncertain whether another carrier would have taken the same coverage stance or been more aggressive in disclaiming coverage. Brokers and policyholders are better off working with insurers that have demonstrated a willingness to stand by them. Additionally, if the client has built a long history with this particular insurer and coverage is replaced, the client is effectively beginning a new relationship from scratch.

Even if the carrier has shown only partial good faith, covering a portion of the claim while disputing coverage for what should be covered damages, it may still make sense to renew coverage. In such cases, brokers (and the insured's counsel) may wish to challenge the coverage decision. Policyholders are likely to encounter less friction when coverage is still with the insurer in question; those who replace coverage immediately following a claim may encounter greater resistance.

It's important to maintain a good relationship with the insurer during the claims process. Replacing coverage won't necessarily change the insurer's coverage determination, but it could make the claims process and any coverage determinations for future related claims more contentious.

Replacing coverage also opens the door to errors. Strong directors and officers (D&O) programs are often built over time and through rounds of policy-term negotiations, and any enhancements obtained will need to be carried over to a new carrier. Incorrectly applied retroactive dates, advanced prior and pending litigation dates, overly broad related-claims clauses or specific-matter exclusions, and unaccounted-for subsidiaries are just a few of the basic errors that can occur when replacing coverage, and any of them can have a crippling effect.

As a practical matter, replacement can also have unintended coverage consequences. Take the following example: An insured maintains a D&O policy whose 2024-2025 term is with carrier "A." A claim is noticed to the D&O carrier during that term, and the carrier agrees to provide coverage. Shortly afterward, the carrier offers a renewal with a 35% increase, which prompts the insured to replace coverage for the 2025-2026 term with a new policy issued by carrier "B." Months into the new term, the organization receives a new, separate demand, which is tendered to the new carrier. The new carrier, however, determines the allegations are similar enough to the prior litigation and, per the policy's terms (which will very likely include a specific-matter exclusion), disclaims coverage because the demand is "related" to the initial claim from the year before. Carrier "A," meanwhile, determines that the two claims are not related and also disclaims coverage. Such a situation sets the stage for an obvious battle.

These are just some of the considerations brokers and their insureds should weigh before making premium-based decisions that may be more harmful than beneficial. That said, there are situations in which it may be prudent to consider another carrier, namely: if the carrier is perceived as being overly contentious about what should be a covered claim, if the renewal terms being offered are more restrictive, or if the renewal premium is unreasonable.


Evan Bundschuh

Evan Bundschuh is a vice president at GB&A, a full-service commercial and personal independent insurance brokerage with a special focus on professional liability (E&O), cyber and executive/management liability (D&O).

Bet the Over on Enterprise AI 

Enterprises are adopting five distinct approaches to AI agents, reshaping how organizations build and deploy artificial intelligence.

Enterprises are engaging with agentic AI in five distinct ways:

  1. Agent-Open: Developers are building AI agents on open-source Agent Development Kits (ADKs) such as LangChain, LlamaIndex, Haystack (Deepset), and Transformers Agents (Hugging Face).
  2. Agent-Closed: Developers are building AI agents on "big-software" ADKs from the likes of Microsoft, IBM, Salesforce, and SAP.
  3. Data-Small: Data engineers are building data pipelines on which to train and run inference for proprietary AI agents, using ADK-like tools mainly from Databricks and Snowflake. I call this "small" because only a small amount of enterprise data is typically fit for consumption by AI.
  4. Data-Big: This approach makes major investments in ontologizing and unifying the full corpus of enterprise data to be consumable by AI at scale at some point. Some enterprises are attempting this work themselves; others are paying Palantir to do it with their Foundry platform. These are big, hairy engagements; think: SAP enterprise resource planning (ERP) of AI.
  5. Expert Agents: The first four approaches are for building agents to streamline work and workflows for productivity and operational efficiency. An expert agent is, for example, the clinician in a clinical workflow, i.e., the cardiologist or nephrologist. (Yes, it's coming.) These expert agents are by their nature GPU-chip-intensive, and, as NVIDIA makes the GPU chips powering 90% of AI, its CUDA, NeMo, and Clara tools are by far the most cost-effective option for building expert agents.

Enterprise leaders seem to be asking two questions, the first of which is, "Can we connect an agent--however it's built--to our core systems?"

Google has built--and open-sourced--over 600 connectors for the likes of Microsoft Office, Adobe Acrobat, Salesforce, Workday, and ServiceNow, enabling agents of any origin to understand the data models of these core enterprise systems. These connector models are trained to understand different data elements, so in a Salesforce customer relationship management (CRM) dataset, for example, the connector understands "What's an account?", "What's an opportunity?", "What's a product?" It also knows when to access data, maintaining permissions and "seeing" only what it's authorized to see.

The takeaway: You don't need to use Big Software ADKs to build agents interacting with Big Software datasets.

The second question is, "Can we have agents of different origin on the same team? Will teams built on one ADK work with teams built on another?"

Anthropic's open-source Model Context Protocol (MCP) has rapidly become the industry standard for agent-to-tool, and agent-to-data integrations. For agent-to-agent communications, the standard is Agent Protocol (AP).
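
To make the agent-to-tool side concrete, the snippet below is a simplified, illustrative sketch of the JSON-RPC shape MCP uses for a tool call, written as a Python dictionary. The tool name and arguments are hypothetical, and the exact field set is defined by the published MCP specification rather than by this example.

    # Illustrative sketch of an MCP-style tool call (JSON-RPC 2.0).
    # The tool name and arguments are hypothetical; consult the MCP spec for exact fields.
    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "lookup_policy",                  # hypothetical tool exposed by an MCP server
            "arguments": {"policy_id": "POL-12345"},
        },
    }

    print(json.dumps(request, indent=2))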

Recently, Google, in league with 50 technology and consulting partners, announced the new Agent2Agent (A2A) protocol. A2A offers significant upgrades over AP, including enterprise-grade security by default, support for long-running and asynchronous tasks, modality-agnostic communication (AP is text only; A2A adds images, audio, and video), and the biggie: vendor-neutral and framework-agnostic design. This reduces vendor lock-in and allows organizations to compose best-of-breed agent networks easily.

It looks like A2A, open-source and available by the end of the year, will become the industry standard for agent-to-agent communications, working seamlessly with MCP on agent-to-tool and agent-to-data interactions.

The takeaway: McKinsey pegs the current market for AI products and services at $85 billion, forecasting growth to a low expectation of $1.6 trillion and a high expectation of $4.7 trillion by 2040. Splitting the difference sets the over/under line at roughly $3.2 trillion: Bet the over.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for The Digital Insurer.

The Future of TPAs

Third-party administrators face intense market consolidation as private equity drives unprecedented M&A activity in insurance services.

In the last half of 2024, there were over 300 announced M&A transactions in the insurance space, valued at more than $20 billion. While the number of deals was down due to economic uncertainty, persistently high interest rates, and regulatory scrutiny, deal value was higher than normal.

What's driving a substantial amount of insurance M&A activity?

The third-party administrator (TPA) market.

TPA investment and acquisitions are nothing new. But several factors are creating a hotter market, encouraging acquisitions now and over the next few years:

  • Private Equity Demand: Private equity interest is driven by a desire to deploy capital and achieve greater returns through growth potential and operational efficiency.
  • Top Line Growth: TPAs are embracing a growth-through-acquisition model – improve top-line growth through acquisition, then implement cost takeout initiatives to improve business unit margin.
  • Market Consolidation: TPAs have to achieve critical size to effectively provide services across multiple lines of business and remain relevant in the market – consolidation is how larger TPAs achieve that scale.

The result is clear – acquire or be acquired. As a TPA, your acquisition strategy is either to add services to provide multiple lines of business at scale or become focused on a particular niche and establish market dominance through expertise.

Where Do TPAs Go From Here?

In the midst of this acquisition rush, TPAs (and other interested parties) are always seeking the next big opportunity for growth. But not all growth opportunities are the same. TPAs need to be forward-thinking. Over the next decade, the best TPAs will:

  1. Leverage automation, technology, and AI to significantly reduce both labor and non-labor costs to aggressively improve margin
  2. Develop a customer-centric selling model through upsell and cross-sell opportunities, using multiple lines of business to meet evolving client needs and gain greater client penetration
  3. Provide clients with expertise to solve increasingly complex challenges through services and claims administration
  4. Use a target operating model that encourages integration of new acquisitions to achieve synergies and improve overall enterprise performance through shared services and enterprise corporate functions

No single acquisition will fit every TPA – a variety of factors will influence which particular acquisition makes sense to a given firm.

Instead, TPAs seeking to grow across multiple lines of business should focus on markets that are primed for expansion. While there are several opportunities, four growth markets stand out, and TPAs should strategically evaluate their desire and capability to serve them:

1. Healthcare Claims and Administration:

The combination of increasing healthcare administration costs, the complexity of health claims, and shifting demographics suggests a growing, lucrative market for TPAs. But investors have more than just top-line revenue growth to focus on. AI and automation can significantly reduce medical errors, accelerate claims processes, and cut costs by reducing manual effort and standardizing key processes. TPAs have an additional reason to enter this space – the growth of the market is not driven solely by processing insurance claims. Employers seeking to self-insure and expanding healthcare networks (e.g., hospital networks) provide another customer base. A variety of services can be provided – bill review, long-term care assistance, or mental health care. Future-minded TPAs will assess opportunities to leverage data and analytics insights to reduce healthcare costs. PE firms may seek to purchase or develop a TPA to administer claims associated with wholly owned long-term care facilities.

2. International Claims

The insurance market in emerging economies is expected to experience significant growth over the next five to 10 years, far outpacing growth in established Western markets. Consider that life insurance premiums are expected to grow by approximately 6% annually in markets such as China, India, and Latin America, compared with the typical 1-3% annual growth seen in the U.S. Property and casualty (P&C) insurance is expected to follow a similar trend. Protection gaps need to be addressed, and a strengthening middle class will have disposable income to address them. Insurance providers may have an interest in entering those markets but will likely partner with claims administrators to support global operations. Growth in these markets and the opportunity to leverage global shared services models to significantly reduce cost position TPAs to be critical partners as carriers expand. Strategically, global TPAs will need to consider regional strategies to navigate geopolitical risk (e.g., supply chain/tariff challenges and international sanctions against countries).

3. Cyber-Related Claims

Increasing frequency and severity of cyberattacks, such as ransomware, data breaches, and phishing, are driving demand for cyber insurance, particularly for businesses. Small and medium-sized businesses are becoming more aware of their vulnerabilities. TPAs have at least two types of services to focus on through acquisition. One opportunity is in the B2B space, where TPAs can provide claims adjudication and processing in support of businesses facing a variety of cyber-related issues. In that space, TPAs may focus on fraud detection services, particularly combating AI-enhanced threats. The second opportunity is for TPAs to focus on cyber insurance sold through an embedded insurance model. One example that will become increasingly common: Individuals who purchase software or AI tools will have the opportunity to buy basic cyber insurance at the point of sale, with the opportunity to enhance coverage for specific AI protection gaps (e.g., protection against intellectual property (IP) infringement tied to AI-driven operations).

4. Legal Claims Administration

Over the next decade, legal claims administration will present another frontier for TPAs seeking to grow. Specifically, class action lawsuits and mass torts will provide opportunities for TPAs to administer legal claims. Class action lawsuits will proliferate, particularly as data breaches and greater data connectivity take center stage for businesses across all industries. And mass torts will continue to grow in relevance – evolving legal theories are increasing mass tort possibilities, both through an expanding pool of harmed parties and creative theories of liability. For example, public nuisance claims were used in opioid litigation and are being considered for climate change and data privacy litigation. The challenge for law firms is that they are not well equipped to handle settlement administration, track down claimants, and manage documents. TPAs that can leverage technology to simplify and support this work for law firms will position themselves well in the market. For example, smart contracts could automate the distribution of settlement funds to class members: Once eligibility criteria are verified, the smart contract could release payments directly to claimants, reducing administrative costs and delays.
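
As a purely illustrative sketch of that idea, the following simulates the gist of such a payout rule in ordinary Python rather than an actual on-chain contract; the eligibility check, amounts, and claimant records are assumptions.

    # Illustrative only: simulate a smart-contract-style payout rule in Python.
    # Eligibility flags, share amounts, and claimant data are hypothetical.
    claimants = [
        {"id": "C-001", "verified": True, "share": 1250.00},
        {"id": "C-002", "verified": False, "share": 1250.00},
    ]

    def release_payments(fund: float, claimants: list[dict]) -> float:
        for c in claimants:
            if c["verified"] and fund >= c["share"]:  # release only to verified claimants
                fund -= c["share"]
                print(f"Paid {c['share']:.2f} to {c['id']}")
            else:
                print(f"Held payment for {c['id']}")
        return fund

    remaining = release_payments(5000.00, claimants)
    print(f"Remaining fund: {remaining:.2f}")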

The Price of Admission

While each of these areas presents a significant growth opportunity for TPAs, there are barriers to entry.

  • Regulatory and Compliance Challenges: Each of these services has stringent regulatory considerations – for example, international claims administration requires understanding jurisdiction-specific laws and regulations. TPAs need to have a plan for how they will satisfy the compliance obligations associated with any new business unit or line of business.
  • Technology Integration: Most TPAs rely on proprietary systems for claims processing and data management. This forces TPAs to take one of two paths – allow newly acquired businesses to continue to run as-is, without integration, or attempt data and platform migrations. Organizations need a technology integration strategy as a part of their M&A.
  • Market Competition and Valuation: In the current economic environment, high interest rates and economic uncertainty make only the best deals viable. Increased competition drives up valuations, making deals less financially viable for many firms. Specialized TPAs, such as healthcare- or cyber-focused TPAs, face valuation inflation risk.

TPAs attempting to grow will overcome these challenges through comprehensive strategy and a commitment to providing services to carry them into an evolving market. TPAs should use strong due diligence, explore partnerships, and evaluate lessons learned from competitors' acquisitions to give themselves the best chance for success.


Chris Taylor

Chris Taylor is a director within Alvarez & Marsal’s insurance practice.

He focuses on M&A, performance improvement, and restructuring/turnaround. He brings over a decade of experience in the insurance industry, both as a consultant and in-house with carriers.

Agentic AI Will Transform Business

Agentic AI revolutionizes enterprise operations by enabling autonomous, adaptive systems that transform business processes across industries.

Agentic AI represents a paradigm shift. It perceives information; understands the context and intent; and autonomously creates, modifies, and orchestrates workflows through contextual reasoning and continual learning. These AI agents enable enterprises to be perpetually adaptable to the dynamic needs of customers and market conditions.

The true revolution emerges in human-AI collaboration and its ability to drive business transformation across industries.

What is Agentic AI?

An Agentic AI framework has the following key components:

  • Model - To reason over goals, plan, and generate responses
  • Tools - To retrieve data and perform actions by invoking an application programming interface (API) or services
  • Orchestration - To maintain memory, state, tools, and acquired/retrieved data (a minimal sketch of these components follows below)

Agentic AI Components
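
To make the three components concrete, here is a minimal, dependency-free sketch of the pattern in Python. The model step is a stub standing in for an LLM call, the tool name is hypothetical, and real agent frameworks add planning, memory management, and error handling on top of this skeleton.

    # Minimal sketch of the Model / Tools / Orchestration pattern.
    # The "model" is a stub standing in for an LLM call; the tool name is hypothetical.
    def get_claim_status(claim_id: str) -> str:
        """Tool: retrieve data (here, a stubbed API lookup)."""
        return f"Claim {claim_id} is under review."

    TOOLS = {"get_claim_status": get_claim_status}

    def model(goal: str, memory: list[str]) -> dict:
        # A real implementation would prompt an LLM to plan the next step.
        if not memory:
            return {"action": "call_tool", "tool": "get_claim_status", "arg": "CLM-001"}
        return {"action": "respond", "text": f"{goal}: " + " ".join(memory)}

    def orchestrate(goal: str) -> str:
        memory: list[str] = []  # Orchestration: maintain state and memory across steps.
        while True:
            step = model(goal, memory)
            if step["action"] == "call_tool":
                memory.append(TOOLS[step["tool"]](step["arg"]))
            else:
                return step["text"]

    print(orchestrate("Status of claim CLM-001"))
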
Why the buzz?

Agents have existed in various forms, such as robotic process automation (RPA) and workflows, but their applications were limited to non-complex, rule-based tasks that lacked adaptability to dynamic needs and often required human intervention. This is where Agentic AI shines for knowledge-intensive and complex tasks. With autonomy at its core, Agentic AI moves from an assist role to business transformation.

In today's volatile, competitive and complex business environment, enterprises and businesses are looking to continually adapt. The recent advancements in AI, IoT, robotics, etc. together with the need to drive efficiency and agility, make Agentic AI suitable for various applications across industries. They range from horizontal services such as knowledge management, quality assurance, HR, finance, etc. to industry-vertical services such as underwriting/risk assessment, loan processing, market research, claim processing, fraud detection, clinical management, cyber security, customer servicing, supply chain management, self-driving cars etc.

Agentic AI is enabling a paradigm shift, and new business models are emerging. Enterprises that were focused on the software-as-a-service model are pivoting to service-as-a-software. There is also a rise in the number of Agentic AI frameworks, such as AutoGen, LangChain, LangGraph, CrewAI, and Agentspace, to realize this vision.

Four potential applications in industry

Below is a list of a few applications in the healthcare, insurance, manufacturing and technology services industries.

1. Drug discovery – Drug discovery is a complex scientific problem that involves years of research, analysis, experimentation, and collaboration to arrive at possible solutions, as with drugs for COVID-19. The challenge is that solutions must adapt to dynamic needs as new information becomes available (such as new variants of COVID-19).

This complex biological problem calls for an approach in which it is decomposed into manageable sub-tasks handled by specialized tools for targeted problem areas (specialized agents, digital twins, research databases, etc.). The process involves brainstorming ideas (a brainstorming agent), extracting and synthesizing information from research databases (a search agent), running experimental tools such as genome sequencing, analyzing the results (an analysis agent), and reasoning over the various outcomes simulated with a digital twin via techniques such as chain-of-thought, graph-of-thought, or tree-of-thought, along with feedback loops for continuous learning.

2. Claims Management – Claims management is the core of customer servicing in healthcare and insurance. It involves complex processes and workflows to determine eligibility, process large datasets such as electronic health records (EHR), X-rays, treatment procedures, diagnoses, recoveries, and medical bills, and issue payouts. In group benefits (such as disability), for instance, this time-consuming work requires a human-in-the-loop approach to reduce the financial burden and accelerate recovery for participants.

A claim intake agent can draw on sensors, spatial data about the accident environment, and visual language models to analyze the injury details; a validation and fraud detection agent can then process the claim, using spatial and image analysis, a knowledge graph, and a digital twin to test the hypothesis space. Once a hypothesis is validated, a decision-making agent must weigh the job-specific impact, the claimant's ability and timeline to recover, and accelerated payout via blockchain. The agent can further act as a recovery and support agent, continuously monitoring progress, adjusting the payout accordingly, and optimizing recovery to improve the overall experience with explainability.

3. Manufacturing - From controlling the flow of production lines to customizing products to making suggestions for improved product design, Agentic AI is likely to have multiple applications in smart manufacturing.

Data from sensors attached to machines, components, and other physical assets in factories and transportation can be analyzed by Agentic AI systems to predict wear and tear and production outages, avoiding unscheduled downtime and the associated costs to manufacturers. German AI start-up Juna.ai deploys AI agents to run virtual factories, with the aim of maximizing productivity and quality while reducing energy consumption and carbon emissions. It even offers agents tailored to specific goals, such as production agents and quality agents.

4. Technology Services – Enterprises need to be perpetually adaptable, which hinges on speed, quality and cost. Agentic AI will play a prominent role by emulating capabilities of:

  • "Requirement analysis agent" such as creating user stories based on standards and template (LLM + RAG)
  • "Design agent" to interpret the requirements and create a blueprint based on approved technology, architecture patterns, data flows/source to target mapping (e.g.: for data migration efforts) etc.
  • "Data engineering agent" for automated data discovery, build ingestion pipelines leveraging appropriate connectors
  • "Data quality agent" for AI/ML driven anomaly detection, de-duplication, self-healing/auto-correction (e.g.: using GIS data wrt location/address anomalies) in conjunction with various tools
  • "Synthetic data generator" for test data generation
  • Digital twin to create and test hypothesis via "test and learn" simulations, thereby improving productivity and efficiency of data and tech. services/roles.
The way forward

As with any technological advancement, fundamental principles must be applied, such as guardrails for ethics, values, and empathy and to address potential bias; explainability and auditability to enable transparency; human-in-the-loop oversight for decision-making; and accountability in critical areas such as healthcare and financial services.

Human-AI collaboration must be evaluated closely as this frontier of AI expands its horizons.


Prathap Gokul

Prathap Gokul is head of insurance analytics with the analytics and insights group in TCS’s banking, financial services and insurance (BFSI) business unit.

He has over 25 years of industry experience in commercial and personal insurance, life and retirement, and corporate functions. 

Vibe Everything: From Vibe Coding to Vibe Insurance

The emerging Vibe paradigm shifts insurance from cold transactions into AI-powered, emotionally intelligent experiences.

This article explores how the emerging "Vibe" paradigm—rooted in intuition, emotion, and seamless interaction—is redefining human-machine collaboration. Extending beyond development, we propose Vibe Insurance: an AI-native model that reduces friction, builds trust, and transforms transactional processes into empathetic, user-centric experiences. 

In a world shaped by generative AI, Vibe Insurance reimagines not just what technology can do but how it should feel. Unlike conventional insurtech solutions focused on automation or efficiency, Vibe Insurance centers on emotional resonance, trust-building, and fluid interactions—bridging the gap between digital precision and human warmth.

Figure 1: Andrej Karpathy's post on X

The literal meaning of "vibe" refers to a sense of atmosphere or feeling. When combined with terms like "coding," it forms the phrase "Vibe Coding"—a concept introduced by Andrej Karpathy earlier this year (Figure 1). While the term is gaining attention, its translation and interpretation in different languages remain fluid. Instead of focusing on a literal translation, many highlight its defining features: It is intuition-driven, free-form, and centered on creativity. As a result, it is sometimes referred to more descriptively as "intuitive coding," "freeform coding," or "spontaneous coding."

Some interpret Vibe Coding as a new development paradigm: Developers focus on application functionality and architecture design, while AI coding agents assist in writing the actual code. This interpretation highlights a new "human-machine collaboration" model, involving clear module division, precise prompt design, and iterative testing and refinement.

But "Vibe" isn't limited to coding. For example, when filling out online forms, traditional static, linear, and generic tools—such as dropdown menus, radio buttons, and one-way workflows—often feel tedious and frustrating, sometimes even causing users to abandon the process. The alternative is "Vibe Survey." Other extensions include Vibe Design and Vibe Marketing.

The concept of "Vibe" represents atmosphere, freedom, intuition, flexibility, and creativity. It emphasizes breaking away from mechanical interactions, striving for more natural and fluid experiences, dynamic process adjustments, and even the ability to perceive emotions. Compared with traditional methods, Vibe is faster, more efficient, and capable of meeting personalized needs while delivering a pleasant user experience.

People don't fill out forms to meet the demands of a company or individual; they do so because they have a need—whether for a job, a service, or a connection. This process is essentially about "matching." Surveys match potential customers with products, job applications match candidates with ideal roles, and event registrations match consumers with their preferred activities.

In this sense, Vibe is a mindset, a "human-centric" methodology. Its goal isn't to pursue speed but to reduce unnecessary friction in user interface (UI) design or human-AI coding collaboration, thereby enhancing user experience (UX) and achieving seamless integration between the virtual and real worlds.

Darren Yeo, in his article "The Hype and Risks of Vibe Coding," writes about Vibe and design: "For now, I'll keep those vibes in check and continue to treasure what remains valuable to me. Because at the end of the day, design isn't just about speed—it's about humanity." Indeed, the focus of evolution is never speed but "humanity."

Large language models (LLMs) are making this vision (Vibe Everything) a reality. With the right models and prompts, we can present content in a Vibe format to users. Achieving this isn't about retrofitting static products with AI features but rethinking the experience users desire when performing simple tasks like filling out forms.

This represents a natural evolution of interaction between AI-native products—those built with AI capabilities from the ground up—and users in the age of generative AI (GenAI). It discards rigid rules in favor of algorithm- and model-driven interactions, enabling dynamic workflows, multi-role collaboration, multimodal formats, and multi-channel touchpoints.

For example, if a user says, "Your interface is great, but the price is too high," the LLM can identify: "Positive: UI design; Negative: Price sensitivity," and respond with a thank-you message from the design team and a discount coupon. Prompts must define clear objectives (e.g., role + task instructions), and contextual memory ensures interaction consistency. LLMs are ideal for realizing Vibe interactions, transforming mechanical processes into warm conversations—whether in forms or code, natural language becomes the new human-machine interface.
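
A hedged sketch of how such a prompt might be structured follows; the model client is a stub, and the JSON response schema and field names are assumptions rather than any specific vendor's API.

    # Sketch of a "Vibe"-style intent/sentiment prompt. call_llm is a placeholder for
    # whichever model client is actually used; the JSON schema is an assumption.
    import json

    def build_prompt(message: str) -> str:
        return (
            "You are the feedback triage agent for an insurance portal. "
            "Classify the user's message into positive and negative aspects and suggest a next action. "
            'Return JSON: {"positive": [...], "negative": [...], "next_action": "..."}\n'
            f"User message: {message}"
        )

    def call_llm(prompt: str) -> str:
        # Stubbed model output for illustration; a real call would go to an LLM API.
        return json.dumps({
            "positive": ["UI design"],
            "negative": ["price sensitivity"],
            "next_action": "thank the user and offer a discount coupon",
        })

    reply = json.loads(call_llm(build_prompt("Your interface is great, but the price is too high")))
    print(reply["next_action"])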

However, challenges remain. For instance, freeform outputs may deviate from expectations, contextual memory limitations can disrupt interactions, and emotional/affective cognition is still underdeveloped. Other issues include reasoning over complex problems, latency, multimodal processing, security, and privacy. Despite these limitations, technologists are gradually improving these models through human-AI collaboration and fine-tuning for vertical scenarios.

Emotional/affective cognition is essential to natural interactions and user engagement. However, current technology has significant gaps, including insufficient multimodal fusion for emotion recognition, poor contextual emotional coherence, and weak generalization across different cultures and individuals. In the Vibe interaction paradigm, user demand for anthropomorphic interactions (questioners) and the solutions provided by tech teams (solvers) form a dynamic cycle, reshaping the foundation of human-machine collaboration.

This dynamic cycle resembles a "spiral causality diagram of demand-driven innovation." When people ask, "Why can't this be simpler/smarter?" (questioners), it exposes technological shortcomings. Engineers then develop tools to address them (solvers). As people enjoy the benefits of these innovations, they naturally ask, "Can it be even better?" This creates a self-reinforcing cycle of technological advancement. From touchscreen phones to voice assistants to emotion-aware devices, each breakthrough redefines how humans interact with technology.

Figure 2: The demand-innovation spiral in mobile phone technology

As shown in Figure 2, the evolution of mobile phones vividly illustrates the dialogue between human needs and technological innovation. From Motorola's "just make calls" brick phones to Nokia's "texting and cameras" feature phones, to the iPhone's "smart and connected" touchscreen revolution, each generation meets current demands while quietly paving the way for the next breakthrough. Today, as people expect devices to "understand emotions and show warmth," affective computing is opening a new chapter. While phones can't yet interpret frowns or voice tremors, they can infer needs from usage patterns. The best innovations, from functionality to emotional resonance, always respond to humanity's deepest desires.

Given the universality of Vibe as a methodology, what is "Vibe Insurance"? Unlike conventional insurtech solutions focused on automation or efficiency, it centers on emotional resonance, trust-building, and fluid interactions—bridging the gap between digital precision and human warmth. We believe that establishing a new paradigm of human-machine interaction in insurance—Vibe Insurance—requires combining emotional intelligence with dynamic workflow design, reshaping user trust and service experiences through AI-native interactions.

  • For users, it reduces mechanical friction, enabling "seamless" experiences.
  • For businesses, it rebuilds service value chains through emotional intelligence and trust quantification.
  • For technology, it balances data-driven precision with human-centric warmth, achieving "algorithms with empathy."

Vibe isn't just a technological innovation but a mindset revolution—"intuition-driven, experience-first." When Vibe becomes the foundation of design, human-machine interactions will no longer be constrained by rigid rules but will evolve into creative, warm, and algorithm-driven exchanges. From Vibe Coding to Vibe Insurance, the core principle remains: "Reduce mechanical friction, let interactions flow naturally." Whether engineers collaborate with AI on code, users fill out dynamic forms, or policyholders engage in emotionally intelligent insurance planning, Vibe transforms cold processes into warm conversations.

The future of Vibe Everything hinges on balance. We must navigate the boundary between AI's "simulated emotions" and "avoiding overreliance." The ultimate goal of technology isn't to replace humans but to bridge the virtual and real worlds in a more humane way, using natural language as the universal interface. Vibe will redefine how we coexist with the digital world.

References and Notes:

1. Andrej Karpathy: "There's a new kind of coding I call 'vibe coding,' where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g., Cursor Composer w/Sonnet) are getting scarily good."

2. Cassius Kiani (April 1, 2025), Freeform Update: Why Vibe Surveys Beat Static Forms, https://every.to/source-code/freeform-update-why-vibe-surveys-beat-static-forms.

3. Darren Yeo (March 9, 2025), The Hype and Risks of Vibe Coding, https://medium.com/user-experience-design-1/the-hype-and-risks-of-vibe-coding-0d1e1ccd71d7.

4. ESCP Business School (Feb. 17, 2025), Artificial Intelligence and Emotional Intelligence: The New Frontier of Human-AI Synergy, https://escp.eu/de/news/artificial-intelligence-and-emotional-intelligence.

5. David E. Nye's "demand-innovation spiral," from Technology Matters: Questions to Live With. The core idea: "New technologies never emerge in a vacuum but respond to the flaws of existing ones—yet every solution becomes the incubator for new demands, creating a self-reinforcing cycle."

6. For explorations of affective computing in insurance, refer to LingXi Technology's articles:

  • Emotional Intelligence Breakthrough: How Emotional Prompts Define Next-Gen Insurance Planning https://mp.weixin.qq.com/s/VTZ5S6hOlcRfWY75iSj3OQ.
  • The Dual Faces of AI: Role-Playing and Emotion Recognition https://mp.weixin.qq.com/s/4dmkTNjcyUF3pwxsrjgxAw.
  • The Paradox of AI Companionship: The Delicate Balance Between Emotional Support and Dependence https://mp.weixin.qq.com/s/loX_Yr3ItXgD0tq81uC7Gw.
  • Experiment 23: AI and Trust—The Future of Insurance https://mp.weixin.qq.com/s/cLpa0BSKSkWeb3zlrsjqrQ.

 


David Lien

David Lien is a partner at Lingxi (Beijing) Technology. 

He wrote “Decoding New Insurance” (2020), which ranked among JD.com’s top books. Lien has held leadership roles at Sino-US MetLife, Sunshine Insurance and Prudential Taiwan, leading digital transformations and multi-channel marketing. A 2018 e27 Asia New Startup Taiwan Top 100 nominee, he holds a patent for the "Intelligent Insurance Financial Management System." 

AI Document Processing Transforms Medical Reviews

As a look at Medicare Set-Asides shows, AI can create huge efficiencies but also brings new risks.

Claims professionals habitually spend hours sifting through hundreds of pages of medical records for every single claim. Now, thanks to generative AI that sorts and flags key information up front, claims professionals can skip the document grind and focus on what matters: making smart calls and avoiding expensive slip-ups.

However, this time-saving efficiency isn't without its challenges. Despite AI's ability to rapidly process and extract meaning from vast collections of complex documents, many organizations have stumbled with AI document processing by setting unrealistic expectations, leading to widespread disillusionment when the technology fails to deliver.

Three specific challenges directly affect the success of AI document systems: workforce adoption issues, compliance risks, and cost concerns.

First, workforce adoption issues arise when employees, without proper expectation-setting, experience immediate frustration. This causes them to conclude, "This isn't working," at the first sign of error, often resulting in abandoned projects before the AI system can demonstrate its value. Second, in highly regulated processes, errors can trigger significant legal and financial consequences that create substantial risk. Third, organizations frequently underestimate the operational costs of running sophisticated AI models at scale.

These challenges are particularly evident in highly regulated insurance processes that involve complex and lengthy documentation with significant compliance requirements, but they can be avoided with an understanding of the technology's limitations, wise usage, and mindful oversight of the programmed skill set.

Take Medicare Set-Asides (MSAs) managed by Medicare secondary payer compliance companies. MSAs are complex financial arrangements primarily used in workers' compensation and liability claims to allocate funds for future medical treatment. Handling MSAs demands analysis of extensive medical records, billing statements, physician recommendations, and prescription histories.

Claims professionals invest 15 to 20 hours manually reviewing an average of 300 to 500 pages of medical documentation per claim. Complex cases can often exceed 1,000 pages. This creates a large opportunity to leverage AI to help with the understanding and processing of data. However, mistakes can come at a significant cost, potentially resulting in rejected MSA submissions, delayed settlements, additional reserve requirements, and even long-term Medicare recovery actions against insurers or claimants who failed to properly protect Medicare's interests.

These potentially costly consequences make a thoughtful AI implementation essential for MSA processing. Success with AI for document processing occurs when it is used as a tool that enhances workflows. This is where intelligent document processing (IDP) systems demonstrate their potential, as they can combine AI with document management technologies to transform how complex, unstructured documents are handled.

By presenting AI as an enhancement to the claims professional's workflow rather than a replacement, a company is able to address both workforce adoption concerns and error risks simultaneously. The key is creating a system where claims professionals maintain decision-making authority while the AI handles the time-consuming organizational tasks. This integration is what makes document processing improvements possible.

Breaking down a typical MSA review process, roughly 30% to 40% of that time is spent on manual document organization and navigation. This includes sorting pages, identifying document types, and locating relevant information across hundreds of pages. The IDP system tackles these challenges by handling the initial heavy lifting. It digitizes and organizes documents, identifying important details automatically. Claims professionals can then work with this pre-organized data, significantly reducing the time spent on manual document sorting and navigation. The result is a structured foundation that allows claims professionals to navigate efficiently through what was once an overwhelming volume of information.

The most effective implementations of these systems incorporate human verification. Claims professionals begin with the AI-organized information, make refinements and corrections where needed, and then use this enhanced foundation to perform their specialized analysis. This verification step ensures accuracy while still capturing significant time savings. Once the claims professional confirms or corrects the AI's initial processing, the system can then perform more sophisticated tasks with the validated information.

For example, the AI system can identify and extract date references across hundreds of pages of documents, creating an initial chronological sequence. Rather than manually finding each date throughout hundreds of pages, claims professionals review the pre-assembled timeline to verify its accuracy and completeness. They can spot missing events, incorrect dates, or sequence errors by reviewing the overall pattern of care rather than hunting for individual date references page by page. Once the claims professional validates this timeline, correcting any errors they find, the system uses this confirmed data to generate a comprehensive chronological view of medical events.
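
A simplified sketch of that date-extraction step might look like the following; the date format, page numbers, and record text are assumptions, and a production system would rely on far more robust parsing.

    # Sketch: pull date references out of page text and assemble a draft chronology
    # for a claims professional to verify. Date formats and sample text are illustrative.
    import re
    from datetime import datetime

    DATE_PATTERN = re.compile(r"\b(\d{1,2}/\d{1,2}/\d{4})\b")

    pages = {
        143: "Initial injury reported 01/09/2023 at the worksite.",
        12: "Patient seen on 03/14/2023 for lumbar pain; MRI ordered.",
        87: "Surgery performed 06/02/2023; post-op follow-up scheduled.",
    }

    events = []
    for page, text in pages.items():
        for raw in DATE_PATTERN.findall(text):
            events.append((datetime.strptime(raw, "%m/%d/%Y"), page, text))

    # Draft timeline, oldest first, with page references for human verification.
    for when, page, snippet in sorted(events):
        print(f"{when:%Y-%m-%d}  (p.{page})  {snippet}")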

This could also work with keyword flagging. The AI system can be programmed to identify critical terms such as "surgery" throughout the documentation, whether this is in images or PDFs. This is especially valuable because surgical procedures often represent significant costs that must be accounted for in MSA calculations. When the AI highlights these terms, claims professionals can navigate to relevant sections instead of manually sifting through them with the risk of overlooking something. When poor document quality causes the system to inadvertently miss important keywords, claims professionals can flag them, helping the system learn and improve.
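
A companion sketch for keyword flagging is below; the keyword list and page text are again assumptions, and reviewer feedback on missed terms would feed back into the configuration.

    # Sketch: flag pages containing cost-driving keywords so reviewers can jump to them.
    # The keyword list and page text are illustrative.
    KEYWORDS = {"surgery", "injection", "fusion"}

    pages = {
        87: "Surgery performed 06/02/2023; post-op follow-up scheduled.",
        102: "Epidural steroid injection recommended if symptoms persist.",
        150: "Physical therapy, twice weekly, for six weeks.",
    }

    flags: dict[str, list[int]] = {kw: [] for kw in KEYWORDS}
    for page, text in pages.items():
        lowered = text.lower()
        for kw in KEYWORDS:
            if kw in lowered:
                flags[kw].append(page)

    for kw, hit_pages in sorted(flags.items()):
        if hit_pages:
            print(f"'{kw}' flagged on pages {hit_pages}")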

This brings us to the challenge of managing operational costs. Sophisticated IDP systems address this by intelligently determining the appropriate level of AI processing needed for each document. Rather than routing everything through the most expensive large language models, these systems analyze document complexity, classification certainty, and business value. This analysis allows them to allocate computational resources optimally. Routine documents can be processed using lightweight models, while only complex or high-value documents require advanced generative AI capabilities.
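
A sketch of that routing decision follows; the complexity heuristic, thresholds, and tier names are assumptions standing in for whatever scoring a real IDP system uses.

    # Sketch of tiered routing: a lightweight model for routine pages, an advanced model
    # only when complexity or business value warrants it. Thresholds are hypothetical.
    def score_complexity(text: str) -> float:
        # Stand-in heuristic; a real system would use classifier confidence, layout, etc.
        length_signal = min(len(text) / 5000, 1.0)
        table_signal = 0.3 if ("|" in text or "\t" in text) else 0.0
        return min(length_signal + table_signal, 1.0)

    def route(text: str, business_value: float) -> str:
        complexity = score_complexity(text)
        if complexity < 0.4 and business_value < 0.5:
            return "lightweight-extractor"  # e.g., a small classification/extraction model
        return "advanced-llm"               # reserved for complex or high-value documents

    print(route("Routine pharmacy invoice ...", business_value=0.2))
    print(route("Operative report with embedded tables ...\t", business_value=0.9))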

This intelligent resource allocation creates significant cost savings without sacrificing performance. As claims professionals verify results and provide corrections for document misclassifications, missed medical events, procedure code errors, and ambiguous treatment dates, the system gradually improves its ability to assess document complexity and determine appropriate processing levels. Rather than creating additional verification work, the system focuses human attention only on elements with low confidence scores or high business impact.

By using this feedback to come up with better instructions, the system is able to learn from claims professional corrections to recognize similar documents in the future, becoming more efficient with each processed claim. This creates a positive cycle where accuracy increases while resource requirements decrease over time, addressing the operational cost challenge head-on.

This approach to implementing IDP systems provides solutions to the challenges of workforce adoption, compliance risk, and cost. It prevents employee frustration by positioning the claims professional as the decision-maker while the AI serves as a sophisticated but at times imperfect assistant. It maintains crucial quality controls to reduce legal and financial risks by keeping responsibility directly with the claims professional. And by learning what level of intelligence each document requires, it manages operational costs effectively over time.

This MSA case demonstrates how AI can enhance human judgment in document-intensive processes. Even when claims professionals must still review key documents, the value comes from making that review more structured and focused. By creating a feedback loop that continuously improves performance and managing computational resources intelligently, organizations can transform initial AI disappointment into sustainable success. This balanced approach delivers better outcomes for all stakeholders while avoiding the pitfalls that derail many AI initiatives.

To Keep the Talent, Fix the System

Insurance leaders keep leaning on the “best practices” mantra, but without real investment in AI, they won't see more than incremental change.

"Best practices" are on the docket at every leadership offsite, conference panel and board-level meeting. But as currently acted on, best practices don't amount to much more than "doing a bit better."

A vague and watered-down expression can only result in equally vague and watered-down improvement measures: maybe a new dashboard, a revised call script, new key performance indicators (KPIs), a new customer relationship management (CRM) system. Sure, a dollar might be saved here or there. But what's needed is bold, lasting, transformative change.

To truly achieve "best practices," meaning evidence-based, scalable, and continuously refined over time, insurers would need to undergo a complete operational overhaul. The problem is, that kind of overhaul is an unappealing prospect—ripping out one system and trying to replace it with another can mean a slowdown in productivity and a focus on change management, rather than the work of actually doing insurance. Moreover, it can be controversial, disruptive, and highly visible—three things insurers tend to avoid, especially when margins are thin across many lines of business. Major change risks rattling investor confidence, unsettling personnel, and triggering concerns from customers and board members alike.

But how much of this resistance to fundamentally reexamine insurance operations is truly about protecting against disruption—and how much is a reluctance to think in new ways? The industry has long been defined by its conservatism, and that mindset continues to shape its decision-making. When younger professionals think of insurance, they picture fluorescent lights, thin cubicles, outdated software—nothing that relates, say, to an intuitive digital app. There's safety in legacy processes. "That's how we've always done it," the thinking goes. "It's worked so far. Our people are used to it. Why do things differently?"

But this needs to change. Introducing AI to areas of insurance that aren't yet using it can provide the necessary overhaul, one that fundamentally reimagines how agents do their work and how insurers collect, analyze, and apply data that will help them. 

Fortunately, this transformation no longer needs to be abrupt or alienating. Today's technology allows insurers to roll out change in a piecemeal, custom-tailored way—minimizing disruption while maximizing long-term impact.

The result won't just be checking a "best practices" box on paper. It will mean real operational improvements: higher margins, greater employee satisfaction, an easier time attracting younger talent amid a talent crisis, and higher customer satisfaction—all of which can help reframe the reputation of an industry long seen as inhuman and overly bureaucratic, especially in light of recent events.

Forget the Firm Handshake—Focus on the Data

First, data collection. Over the past decade, insurers have leaned on broad metrics like total premiums written, retention rates, and revenue per agent. But these numbers are too general to offer meaningful insight. They tell us what happened—but not why. Sure, we know this agent wrote this many policies in the past year. But are we any closer to understanding what actually drove that performance?

AI is often praised for its ability to zoom out—processing and connecting thousands of data points far beyond what the human mind can track. But what's underrated is its ability to zoom in. With the right inputs, AI can deconstruct the behavior of top producers, revealing the subtle habits that set them apart from their peers. It's not about collecting the most data—that just leads to a glut. It's about collecting the right data.

Traditional thinking still dominates how many executives explain producer success. They'll chalk it up to someone's alma mater, a trusty handshake, a family legacy in insurance—or fall back on vague clichés like "work ethic" or "wanting it more." The problem is, these explanations frame success as innate and unteachable. If top producers are simply born with it, then there's no hope—or strategy—for helping average producers improve.

Top producers' inner workings can be uncovered with the help of AI. It could be the speed and timing of their follow-ups. Or the exact phrasing they use to tailor pitch strategies to different clients. Or even their ability to strike the perfect balance between persistence and discretion. How a producer structures their day for maximum efficiency can also create ripple effects that lead to higher conversion rates.

Insurers need to strip away the mystique of high performance—not just to help average producers improve, but to show that success isn't luck or legacy; it's a learnable system. When producers can see the path, they're more likely to walk it—and more likely to stay.

Define "Best Practices," Not "Somewhat Better Practices"

Yet even if AI can generate accurate observations and build a data-driven template for the ideal agent, those insights won't translate into better performance if producers are still asked to use CRM systems they're reluctant to go into. This is the biggest bottleneck to achieving best practices: The systems meant to support agents are often the very ones holding them back.

The select few from the younger generation who are genuinely excited about working in insurance often become jaded quickly—usually thanks to the daily frustrations of using clunky CRM systems. All it takes is one lunch, one venting session with a friend in finance or tech, to realize how far behind their tools really are—and to start thinking about jumping ship.

These systems often demand additional work: manual data entry using clumsy interfaces with little to no integration with calendars or phones. Worse, the systems lack AI-driven insights—so agents are forced to treat every lead the same, regardless of how cold or warm it is. It's no wonder turnover rates among agents remain so high.

Incorporating AI into these systems isn't just a promising retention strategy for policyholders—it's a powerful one for agents, too. Success breeds success, so when they can instantly see what practices work, what paths to take to close a sale, they'll want to do more. It's human nature. In this case, advanced technology doesn't strip the job of its humanity—it restores it. It gives agents space to focus on what drew them to the field in the first place: building lasting, mutually beneficial relationships with clients.

But ease and humanity aren't the only reasons agents stay motivated and loyal to their agency. There's also a financial incentive. Modern AI-powered systems identify cross-selling and upselling opportunities that might otherwise go unnoticed, letting agents maximize their commissions.

Provide More Data Points to Underwriting

Just as performance analysis has historically relied on too few data points, underwriting has long been constrained by limited inputs—typically just credit scores and claim histories. But consumer behavior is evolving too quickly, and often unpredictably, for insurers to keep relying on such narrow datasets.

AI allows a far more diverse range of data points to be taken into account. It can factor in social media activity, purchasing behavior, and real-time insights from Internet of Things (IoT) devices. For example, telematics in vehicles enables insurers to monitor driving habits continuously, allowing for dynamic premium adjustments based on real-world behavior rather than outdated, static models.
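
As a rough, purely illustrative sketch of how telematics signals could feed a dynamic premium adjustment, the weights, score scale, and caps below are assumptions, not any insurer's actual rating formula.

    # Illustrative only: adjust a base premium from telematics-style behavior scores.
    # Weights, caps, and the 0-to-1 score scale are assumptions.
    def adjusted_premium(base: float, smooth_braking: float, speeding: float, night_miles: float) -> float:
        # Weighted risk in [0, 1]; higher means riskier driving behavior.
        risk = 0.4 * (1 - smooth_braking) + 0.4 * speeding + 0.2 * night_miles
        factor = 1.0 + (risk - 0.5) * 0.4      # scale to roughly +/- 20% around the base
        factor = max(0.8, min(1.2, factor))    # cap the adjustment
        return round(base * factor, 2)

    print(adjusted_premium(1200.0, smooth_braking=0.9, speeding=0.1, night_miles=0.2))  # safer driving -> discount
    print(adjusted_premium(1200.0, smooth_braking=0.4, speeding=0.7, night_miles=0.6))  # riskier driving -> surcharge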

For too long, insurers have been playing catch-up. Some lag times have shortened, but we should be aiming to eliminate the lag entirely. Any delay just measures how much margin is leaking. For the first time in history, we're within reach of truly real-time risk pricing.

Underwriters shouldn't fear an AI-driven overhaul of their sector. Just like producers, they didn't enter this industry to be buried in repetitive administrative work—only to be blamed for every oversight. With AI handling what it's best at, like fact-checking and surface-level analysis, underwriters can return to what they're best at: making high-level statistical judgments and strategic decisions.

Expect Regulatory Pressure From the States, Not the Federal Government

Even with the prospect of federal deregulation under the current administration, insurers shouldn't assume a more relaxed compliance environment. Several states, especially California, are already enacting stricter environmental regulations in response to escalating wildfire risk—putting pressure on insurers to offer broader coverage in high-risk zones. At the same time, backlash over prior authorization delays in health insurance is gaining political traction, with new legislation on the horizon. Insurers that move too slowly could face not just financial penalties but long-term reputational fallout.

AI can help insurers stay ahead of the U.S. regulatory maze by monitoring policy changes in real time, flagging discrepancies across states, and identifying inconsistencies in claims, contracts, and internal processes. In a market where compliance expectations differ across all 50 states, these capabilities are becoming indispensable—especially for regional carriers aiming to scale nationally without stumbling into regulatory blind spots.

Complying with anti-discrimination laws is another area where AI can make a real impact. But its value goes beyond just staying compliant—it creates fairer, more consistent decision-making that can help shift public perception. The insurance industry has faced long-standing scrutiny for biased practices, and AI—if used responsibly—can be a tool to change that narrative.

Know the Risks—But Don't Overstate Them

While AI holds real promise for improving sales, underwriting, and compliance, insurers shouldn't jump in without a clear plan. Regulators are becoming more cautious—and in some cases, more aggressive—about how AI is used. Without thoughtful implementation, insurers may see the expected efficiencies undone by compliance issues.

Insurance is, by nature, a risk-averse industry—understandably so. The job, after all, is to anticipate consequences before they happen. But when it comes to AI, many insurers are overstating the risks in ways that aren't rational. The greater danger isn't adopting AI too soon—it's falling behind as AI becomes the standard across every other industry.

Understanding California Wildfire Risk

California's evolving wildfire risks mean insurers must abandon traditional, generalist models and adopt specialized underwriting approaches.

The start of 2025 brought two devastating wildfires to Southern California: the Palisades fire and the Eaton fire. These events, fueled by severe Santa Ana winds and abundant post–atmospheric river vegetation, left behind widespread destruction, including thousands of damaged and destroyed structures. They also reinforced a larger trend of increasingly volatile wildfire behavior in the region—an outcome of shifting climatic conditions, altered precipitation patterns, and extended fire seasons.

The lessons from these latest fires underscore the evolving nature of wildfire and the need for it to be treated as a specialist peril rather than a generalist one. Most people get their wildfire coverage through their homeowners insurance, and most perils covered under a homeowners policy are generalist ones that can be priced with fairly traditional actuarial methodologies. Wildfire used to fit that description: results were stable, and even the occasional bad year was not severe enough to merit the kind of specialist approach the industry now applies to cyber risk.

This changed in 2017, when devastating wildfires struck California's wine country, followed a year later by the Camp and Carr fires. It became clear that the industry's traditional approach was no longer effective. Wildfire needs to be treated as a specialty peril that requires much more targeted resources to underwrite and mitigate properly.

Why is the wildfire risk evolving in California?

Wildfires have long been a natural part of Southern California's landscape. However, their frequency, severity, and behavior have shifted dramatically in recent years due to human activity and climate change, necessitating a reassessment of risk and mitigation strategies.

Swings in the El Niño-Southern Oscillation (ENSO), a key climate driver, have become more frequent and severe with climate change. This has amplified atmospheric river events like the Pineapple Express, which bring heavy rainfall but exacerbate wildfire risk by fostering rapid vegetation growth followed by prolonged dry periods. For example, the Palisades and Eaton fires followed a strong El Niño event in late 2024 that shifted abruptly into a La Niña phase, creating abundant vegetation during the rainy period and extreme dryness in the months leading up to the fires.

Historically, Santa Ana winds were more likely to occur after the precipitation season had begun, mitigating their fire-spreading potential. However, as climate change has pushed the beginning of the precipitation season later in the year, these winds increasingly are occurring during drought conditions, and the resulting risk of large, destructive wildfires has grown significantly. Though wildfires have long been part of the region's ecological cycle, factors such as the ENSO, lengthening drought conditions, and extreme wind events have significantly altered fire behavior in recent years. As these elements converge, traditional models built on historical fire patterns are increasingly challenged, leaving both communities and insurers grappling with unpredictable risks.

Adding to this challenge is the expansion of the fire season seen across decades. Data on maximum fire sizes by month reveals a troubling trend. From 1985 to 1999, fires peaked in July and diminished after August. Between 2000 and 2009, fire sizes began to show secondary peaks later in the year. Most recently, from 2010 onward, a pronounced secondary peak has emerged in October and December, signaling an extended fire season. This shift, combined with the proliferation of invasive plant species, declining forest health, and worsening climate conditions, has exposed previously low-risk areas to significant wildfire hazards. These evolving dynamics present challenges for models relying solely on historical fire patterns, further highlighting the need for advanced predictive approaches.

Underwriting models need to keep up

The speed with which wildfire risk has evolved makes it even harder for traditional models to adapt in anything close to real time. Despite the complex and evolving nature of the wildfire risk, it is possible to develop effective wildfire risk assessment models. Naturally, the models must be more sophisticated and rely on advanced technology to make sense of the myriad data needed to create the assessment.

As an example, the Delos model first integrates high-resolution data on fuel, wind, climate, and fire behavior alongside hundreds of additional layers of supporting data, providing comprehensive insight into wildfire risks. Second, it employs advanced machine learning methodologies that model wildfire behavior independently of historical events, so there are no surprises from tail-risk events like the Palisades and Eaton fires. Finally, the model undergoes rigorous back-testing against historical fires and is reviewed by wildfire experts to ensure both accuracy and reliability. This approach has successfully predicted the full extent of all the major fires in the past five years, including the recent LA fires.

Conclusion

The Palisades and Eaton Fires serve as a stark reminder of the evolving wildfire risks in Southern California and the need for innovative solutions in wildfire risk mitigation. As climate change and environmental shifts continue to affect fire behavior, traditional models struggle to keep pace with emerging risks.

I have high hopes for progress in better analytical understanding of how to harden homes and broader communities. This should mean that some areas considered unaffordable to insure today will, in future years, once enough homes have been hardened against wildfire, be able to obtain affordable coverage. There are many efforts taking place in the aftermath of the Los Angeles fires to figure out how to make these communities safer. Additionally, the California Department of Insurance has put considerable effort into getting insurers to respond to these mitigation efforts.

Together, we can build a more resilient future in the face of evolving wildfire threats.

Delos has published a whitepaper providing more detail on the LA fires, which can be viewed here.

Managing Investment Risk Through Political Change?

Despite market volatility and regulatory changes, insurers remain optimistic and plan to increase portfolio risk in 2025.


Volatility can be problematic for insurers for two reasons. First, investment income makes up a very large proportion - typically at least two-thirds - of an insurer's profitability. Market volatility such as we are seeing in the first half of 2025 makes it harder to assess optimal investment strategies to pursue that income; will interest rates continue their recent downward trend or will they reverse, given sticky inflation?

Second, the investment decisions insurers make today can affect results for years to come due to the nature of their products and accounting rules. For example, under U.S. statutory accounting, most life insurers' portfolios are still earning income based on yields from bonds issued prior to 2022, i.e., before interest rates rose from a more than decade-long period of historic lows.

From trade to taxation to the role of government, insurers are not immune from dramatic policy swings. Given that, it's no surprise that insurers rated "Domestic Political Environment" as their top risk in Conning's latest investment risk survey. But it is not correct to assume that all the changes the industry faces are due to a change in presidential administrations. In fact, many of the uncertainties (e.g., the pending changes from the NAIC's Generator of Economic Scenarios) have been in the works for years.

Political and market volatility are not the only major uncertainties for insurers: there's also a large amount of regulatory change in the offing. For example, the NAIC is looking to adjust capital charges for a wide range of assets to ensure that assets with comparable risk have comparable charges. While we await the final adjustments, we know from the NAIC's recent increase in charges for securitization residual tranches - to 45% from 30% - that the impact may be quite large. If that isn't enough, life insurers are also preparing for the pending change in reserve and capital calculations for many of their products, a result of the transition from the Academy Interest Rate Generator (AIRG) to the new NAIC GOES scenarios.
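
To get a rough feel for the scale involved, consider a back-of-the-envelope calculation. The holding size below is hypothetical, and the sketch ignores covariance adjustments and the rest of the RBC formula; it simply applies the pre- and post-change factors to a single position.

    # Illustrative only: compare the RBC asset charge on a hypothetical
    # $10 million securitization residual holding under a 30% vs. 45% factor.
    def rbc_asset_charge(book_value: float, charge_factor: float) -> float:
        """Required capital attributable to a single holding."""
        return book_value * charge_factor

    holding = 10_000_000                              # hypothetical book value
    old_charge = rbc_asset_charge(holding, 0.30)      # $3.0 million
    new_charge = rbc_asset_charge(holding, 0.45)      # $4.5 million
    print(f"Additional required capital: ${new_charge - old_charge:,.0f}")  # $1,500,000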

So, what can we make of all this? Clearly, it's important to recognize the potential risks that insurers face in today's environment. During the 2008 financial crisis, we saw how uncertainty can lead to a rapid derisking of insurers' portfolios, a process that can have a long-lasting impact on everything from product design to profitability.

But we also need to remember that insurers are in the risk business. Whether it's asset risk or catastrophe risk, the successful insurers are the ones that find the right balance between seeking profitability and taking on variability. More importantly, many of those companies have been maintaining this balance for decades during all types of market storms: the 2008 financial crisis, 1970s stagflation, world wars, the Great Depression, and more.

Given all that, you might expect insurers to be planning a dramatic scaling back of portfolio risk. Yet the Conning survey showed the exact opposite: Most insurers were expecting to continue increasing their portfolio risk. For example, more than 40% of respondents expected to increase their allocations to both public and private equity. While those values are down from the 2024 survey, they are still well above the portion of respondents expecting to reduce their allocations to those assets. In fact, the overwhelming majority of respondents - nearly 80% - had an optimistic view of 2025.

One aid to that resiliency is a set of customized tools allowing insurers to analyze a wide range of potential futures. With a properly calibrated model, insurers can better understand the potential upside and risk associated with an asset allocation strategy. They can also use these tools to help fine-tune their expected risk/reward balance across a range of strategic questions, such as whether to seek reinsurance to offload risk or how to refine product design to help limit risk exposure. These tools may also give them a leg up in developing concrete action plans for handling the next major unexpected event.
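
As a sketch of what such scenario analysis looks like at its simplest, the toy Monte Carlo model below compares two asset allocations under made-up return and volatility assumptions. It is not Conning's proprietary software, just an illustration of the underlying idea of testing an allocation against a wide range of potential futures.

    import random

    # Toy Monte Carlo comparison of two asset allocations.
    # Return and volatility assumptions are illustrative, not calibrated.
    ASSETS = {"bonds": (0.04, 0.05), "equity": (0.07, 0.16)}  # (mean, std dev)

    def simulate(allocation: dict, years: int = 10, trials: int = 10_000) -> list[float]:
        """Return ending wealth multiples for a buy-and-hold allocation."""
        results = []
        for _ in range(trials):
            wealth = 1.0
            for _ in range(years):
                year_return = sum(
                    weight * random.gauss(*ASSETS[asset])
                    for asset, weight in allocation.items()
                )
                wealth *= 1 + year_return
            results.append(wealth)
        return results

    conservative = simulate({"bonds": 0.8, "equity": 0.2})
    aggressive = simulate({"bonds": 0.5, "equity": 0.5})
    for name, runs in [("80/20", conservative), ("50/50", aggressive)]:
        runs.sort()
        print(name, "median:", round(runs[len(runs) // 2], 2),
              "5th percentile:", round(runs[len(runs) // 20], 2))

A real tool would replace the two-asset normal-return assumptions with calibrated economic scenarios and layer on liabilities, accounting treatment, and capital constraints.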

There is no question that today's risks may appear new and daunting. And we know that past performance does not guarantee future results. However, we can take comfort in the knowledge that the insurance industry has handled many significant and unprecedented challenges over the years and has survived and thrived. We are confident the industry can and will handle whatever comes next.

References

National Association of Insurance Commissioners, Capital Adequacy (E) Task Force RBC Proposal Form, April 20, 2023. 

Conning, Inc., "Investment Risk Survey: Insurer Optimism Cools on Markets, Adding Risk; Private Assets Still an Interest but Inflation No Longer a Leading Concern," Matt Reilly, Feb. 11, 2025.


Daniel Finn

Daniel Finn, FCAS, ASA, is a managing director at Conning.

He is responsible for providing asset-liability and integrated risk management advisory services and oversees the support and development of Conning's proprietary financial software models. Prior to joining Conning in 2001, he was in an asset-liability management unit with Swiss Re Investors.

Finn earned an MA in mathematics from the University of Rochester and an MBA from Loyola College.

How to Minimize Financial Threats

Modern risk management leverages AI and machine learning to greatly improve how organizations predict and mitigate financial threats.


Organizations require a robust risk management framework for sustained growth. That means developing a structure that considers all risk aspects, from macroeconomic factors to credit and operational issues. Today's risk management framework also leverages the latest technological advances to create procedures and guidelines for managing those risks. Strategies to achieve the goal of balanced risk management include investing in technologies that can identify and assess both immediate and future risk factors, establishing thresholds based on the organization's appetite for risk, identifying and monitoring risks that could breach this framework, incorporating those risks into strategy development, and executing those strategies effectively.
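
As a minimal sketch of what "thresholds based on the organization's appetite for risk" can mean in practice, the snippet below defines a hypothetical risk-appetite register and checks current exposures against it; the categories and limits are invented for illustration.

    # Hypothetical risk-appetite register: each category gets a tolerance,
    # and current exposures are checked against it.
    RISK_APPETITE = {          # maximum tolerated exposure, in $ millions
        "credit": 120.0,
        "market": 80.0,
        "operational": 25.0,
    }

    def breaches(current_exposures: dict[str, float]) -> dict[str, float]:
        """Return the categories whose exposure exceeds appetite, and by how much."""
        return {
            category: exposure - RISK_APPETITE[category]
            for category, exposure in current_exposures.items()
            if exposure > RISK_APPETITE.get(category, float("inf"))
        }

    # Flags credit and operational as over their limits; market is within appetite.
    print(breaches({"credit": 131.5, "market": 74.0, "operational": 26.2}))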

Use of artificial intelligence in risk management

Artificial intelligence (AI) and complex machine learning models allow for a more dynamic financial prediction framework, integrating real-time and cross-domain data. AI may not have been created with risk management in mind, but the technology is tailor-made for predicting and mitigating financial threats, aiding in improved decision-making, and providing protection and safeguards for various asset classes. The global AI risk management market is predicted to more than quadruple over the decade from 2022 to 2032, from $1.7 billion to $7.4 billion, a compound annual growth rate (CAGR) of more than 16%. Increased trust in technology is the most significant driving force, with previous ethical misgivings easing and a general improvement in quality and trustworthiness in emerging models.

Predictive analytics enable companies to foresee, prepare for, and ultimately lessen the potential impact of previously unexpected scenarios, with the financial crisis of 2007 and 2008 as the most relevant example. During the crisis, a clear correlation emerged between a skyrocketing unemployment rate and an increase in defaulted mortgage payments. Now organizations can build and update models that analyze key metrics during economic unrest or uncertainty, letting those businesses implement mitigative or protective measures. More routine, everyday examples include strategic decision-making in offering loan terms, as underwriters can use a model to better understand a loan's expected performance, net present value (NPV), probability of default, and other critical measures. Traditional models remain the industry's standard, but increasingly dynamic modeling facilitates real-time updating.
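
To make the loan example concrete, here is a simplified sketch of an expected-NPV calculation driven by a model-estimated probability of default. The loan terms, default probability, recovery rate, and discount rate are all invented; a production model would estimate these from data.

    # Toy expected-NPV calculation for a one-payment loan, driven by a
    # model-estimated probability of default. All inputs are illustrative.
    def expected_loan_npv(principal: float, rate: float, pd: float,
                          recovery: float, discount: float) -> float:
        """Expected present value of lending `principal` for one period."""
        repaid = principal * (1 + rate)      # cash flow if the borrower pays
        recovered = principal * recovery     # cash flow if the borrower defaults
        expected_cash = (1 - pd) * repaid + pd * recovered
        return expected_cash / (1 + discount) - principal

    # A 9% loan to a borrower with a 3% modeled default probability
    print(round(expected_loan_npv(10_000, 0.09, 0.03, 0.40, 0.05), 2))  # 183.81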

Meanwhile, an industry as data-driven as insurance quickly adapted to the AI era, with the new technology contributing to everything from crafting individualized policies to automating underwriting procedures. Even claims processing, traditionally identified as the top source of customer frustration with the industry, has seen advancements in the form of:

  • Claim prioritization. Programs can search for key terms to help adjusters deal with claims in order of their urgency (a minimal scoring sketch follows this list).
  • Addressing incomplete or disorganized claims. AI can identify missing information, documentation, or identification from claims and request necessary details from clients via automated emails or chatbots.
  • Fraud detection. By identifying patterns of behavior and scanning enormous volumes of data in real time, AI detects and uncovers trends that can indicate an increased possibility of fraudulent activity.
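
To make the claim prioritization idea concrete, the sketch below ranks claims by invented urgency keywords and weights; a production system would rely on far richer language models rather than a hand-built term list.

    # Toy claim-triage scorer: rank claims by the urgency keywords they contain.
    # Keywords and weights are invented for illustration only.
    URGENCY_TERMS = {"hospitalized": 5, "total loss": 4, "water damage": 3,
                     "injury": 4, "leak": 2}

    def urgency_score(description: str) -> int:
        text = description.lower()
        return sum(weight for term, weight in URGENCY_TERMS.items() if term in text)

    claims = [
        "Kitchen leak, minor water damage to flooring",
        "Policyholder hospitalized after collision, vehicle likely a total loss",
    ]
    for claim in sorted(claims, key=urgency_score, reverse=True):
        print(urgency_score(claim), claim)
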
Real-world examples of dynamic risk management

The term "risk management" is typically assumed to pertain to the financial realm, and with good reason. There are predictable and unforeseen risks in every industry, and an AI-related application exists to address just about all of them. For example, a traditional risk management strategy in the industrial world involves prioritizing extensive preventive safety measures to minimize accidents and liabilities. But managers with access to a more dynamic approach can blend preventive and reactive strategies, allocating resources based on actual risk exposure rather than worst-case scenarios. These companies rely on AI-driven predictive maintenance rather than overinvesting in preventive measures. By using sensors to detect wear and tear in machinery, they can intervene only when necessary, reducing unnecessary spending while still managing safety risks effectively.
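
A stripped-down sketch of that sensor-driven logic follows; the baseline vibration level, alert multiplier, and readings are hypothetical.

    # Hypothetical predictive-maintenance check: schedule service only when
    # recent vibration readings drift above a learned baseline.
    from statistics import mean

    BASELINE_MM_S = 2.8      # normal vibration level for this machine (illustrative)
    ALERT_MULTIPLIER = 1.5   # intervene when readings exceed 1.5x baseline

    def needs_service(recent_readings: list[float]) -> bool:
        return mean(recent_readings) > BASELINE_MM_S * ALERT_MULTIPLIER

    print(needs_service([2.9, 3.0, 2.7]))   # False: within normal range
    print(needs_service([4.6, 4.8, 5.1]))   # True: wear pattern emerging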

In lending, the field most traditionally associated with financial risk management, consumer lending organizations have historically been hesitant to offer favorable terms, or even eligibility, to customers with little or no credit history. These customers ultimately struggle to secure loans in a risk-averse market. Organizations can use a combination of inter-domain data and predictive modeling to analyze the true risk presented by offering loans to these customers. Examples include the following (a simple scoring sketch appears after the list):

  • Bill payments. A history of on-time payments for utilities, rent, and other monthly expenses indicates an individual who is likely to be a good credit risk for the organization.
  • Secondary loans. While they aren't included in traditional FICO scores, any record of repaying a paycheck advance loan can also reflect positively on an individual's credit.
  • Income/spending habits. With access to a person's bank account data, machine learning can quickly identify income patterns and compare them with outgoing expenditures, determining account balances and other relevant information. Pay stubs and W2s can also be immediately scanned and evaluated.
  • Social media. Behavioral patterns and online browsing history can provide an overall sense of how a customer behaves and serve as an additional signal of creditworthiness.
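
The toy scorer below blends alternative-data signals like these into a single indicator. The feature names, weights, and approval cutoff are invented for illustration; a real model would be fit to historical outcomes and validated for fairness and compliance.

    # Toy alternative-data credit indicator. Feature names, weights, and the
    # approval cutoff are invented for illustration only.
    WEIGHTS = {
        "on_time_bill_ratio": 0.45,       # share of utility/rent bills paid on time
        "paycheck_advance_repaid": 0.15,  # 1.0 if prior advances were repaid
        "income_to_spend_ratio": 0.40,    # monthly income over outflows, capped at 1.0
    }

    def thin_file_score(features: dict[str, float]) -> float:
        return sum(WEIGHTS[name] * min(value, 1.0) for name, value in features.items())

    applicant = {"on_time_bill_ratio": 0.96,
                 "paycheck_advance_repaid": 1.0,
                 "income_to_spend_ratio": 1.2}
    score = thin_file_score(applicant)
    print(round(score, 3), "approve" if score >= 0.70 else "refer to manual review")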

Lastly, while a traditional risk manager in the insurance industry might purchase comprehensive coverage to protect against potential physical or employee-related risks, those who take a more agile risk approach use AI and real-time data to continuously assess risks and adjust coverage accordingly. In the commercial transportation sector, some companies are leveraging telematics and driver behavior analytics to customize insurance coverage. Instead of a fixed insurance policy, safer drivers receive lower premiums, and riskier ones face dynamic adjustments, optimizing costs while managing exposure effectively.
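
A simplified sketch of that pricing logic appears below; the base premium, discount and surcharge ranges, and driving scores are hypothetical.

    # Hypothetical usage-based pricing: scale a base premium by a telematics
    # driving score (0 = riskiest, 100 = safest). Figures are illustrative.
    BASE_ANNUAL_PREMIUM = 2_400.0
    MAX_DISCOUNT = 0.25   # safest drivers pay 25% less
    MAX_SURCHARGE = 0.30  # riskiest drivers pay 30% more

    def adjusted_premium(driving_score: float) -> float:
        score = max(0.0, min(100.0, driving_score)) / 100.0
        factor = 1 + MAX_SURCHARGE - score * (MAX_SURCHARGE + MAX_DISCOUNT)
        return BASE_ANNUAL_PREMIUM * factor

    print(round(adjusted_premium(92), 2))  # safe driver: premium falls
    print(round(adjusted_premium(35), 2))  # risky driver: premium rises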

Of course, the insurance industry's larger role is implementing risk management for other industries, some of which face exposures increasingly tied to AI-specific uncertainties. These include handling data, formulas, algorithms, and other machine learning features that, if improperly managed, can result in financial and reputational harm. Through smartphone apps, wearable devices, and GPS monitoring, insurance companies can base premiums on real-time customer behavior rather than a preconceived idea of how much risk that customer's demographic profile presents.

The establishment of a proper risk management framework includes considering business income, credit, operational risk, and uncontrollable factors related to the greater global economic scene, world events, and industry-specific details. By identifying the contributing factors and investing in the latest technological advancements, today's risk managers can position themselves ideally in an increasingly uncertain marketplace.


Sriharsha Thungathurthy

Sriharsha Thungathurthy is a senior manager/risk professional.

He has 15 years of experience identifying, managing, and mitigating risks and helping drive business decisions through complex data analytics and predictive models.

He is an alumnus of Georgetown University McDonough School of Business.