Breaking the 'POC Purgatory' Barrier in AI

Emerging benchmarks will let firms make informed decisions on whether to scale, streamlining the AI implementation process.


In 2024, "model benchmarking" is set to be one of the emerging trends in AI adoption, particularly within the insurance sector. Enterprises have long wrestled with the persistent challenge of "POC purgatory," where promising AI solutions stall in the proof-of-concept (POC) stage and never scale across the organization.

To combat this issue, specific benchmarking criteria will gain prominence. These benchmarks will serve as essential metrics to evaluate progress during the development and deployment phases, enabling businesses to make informed decisions on whether to scale, ultimately streamlining the AI implementation process.

Understanding Model Benchmarking

Model benchmarking involves assessing AI solutions based on their performance and impact. There are two main categories:

Technical Benchmarks:

These benchmarks employ metrics such as precision, accuracy, recall and F1-score to gauge how effectively the model performs specific tasks and makes correct predictions (each is computed in the sketch after this list):

  • Precision: The ratio of correctly predicted positive observations to the total predicted positive observations.
  • Accuracy: The ratio of correctly predicted observations to the total observations.
  • Recall: The ratio of correctly predicted positive observations to all actual positives.
  • F1-score: The harmonic mean of precision and recall.
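
To make these definitions concrete, here is a minimal sketch that computes all four metrics from a binary classifier's output. The fraud-detection framing and the toy labels are illustrative assumptions, not results from any real model.

    # Compute the four technical benchmarks for binary predictions,
    # where 1 might mean "fraudulent claim" and 0 "legitimate claim."
    def technical_benchmarks(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        accuracy = (tp + tn) / len(y_true)
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return {"precision": precision, "accuracy": accuracy,
                "recall": recall, "f1": f1}

    # Toy example: every metric comes out to 0.75 here.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    print(technical_benchmarks(y_true, y_pred))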

See also: 4 Key Questions to Ask About Generative AI

Product Value Benchmarks:

Unlike technical benchmarks, which focus on model metrics, product value benchmarks assess the real-world impact of AI solutions on end users and businesses, measuring how the AI solution affects user experiences and business outcomes. They include the following (two of which are derived in the sketch after this list):

  • Retention rates: The percentage of customers/users who continue to use a product over a specific period.
  • Churn rates: The rate at which customers stop using a product or service.
  • Engagement metrics: Various user activity indicators such as daily and monthly active users, time spent on a platform, interactions per user, etc.
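
As a minimal sketch of how the first two might be derived from usage data, assuming a simple cohort model (the user IDs and the quarterly framing below are hypothetical):

    # Derive retention and churn for one cohort over one period:
    # retention is the share of the starting cohort still active at
    # the end of the period, and churn is its complement.
    def retention_and_churn(cohort_start, still_active):
        retention = len(still_active & cohort_start) / len(cohort_start)
        return retention, 1 - retention

    # Policyholders at the start of the quarter vs. still active at its end.
    cohort_start = {"u1", "u2", "u3", "u4", "u5"}
    still_active = {"u1", "u3", "u4", "u7"}

    retention, churn = retention_and_churn(cohort_start, still_active)
    print(f"retention: {retention:.0%}, churn: {churn:.0%}")  # 60%, 40%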

Product value benchmarks are crucial, as they showcase the practical significance of the AI model's performance. A high-performing model may not always translate to valuable business outcomes if it doesn’t improve user engagement, lift retention or reduce churn.

By considering both technical benchmarks and product value benchmarks, insurance companies gain a comprehensive understanding of an AI model's effectiveness. This holistic approach ensures that AI solutions not only perform well technically but also improve the end-user experience and advance business objectives.

Challenges in Scaling AI Solutions

Despite the potential, insurers face persistent challenges in scaling AI solutions beyond the POC stage. The industry has witnessed a “POC purgatory” scenario in which only a meager 10% of tested AI models in financial organizations progress to production and scale.

Complex workflows and legacy data architectures are major hurdles. The interdependence of workflows heightens the risk of error propagation across systems. Legacy data silos obstruct efficient access to unified data, essential for machine learning (ML) model training and fine-tuning. The lack of human adoption of AI tools adds another layer of complexity. Even highly proficient and accurate AI tools fail to deliver lasting value if not embraced within an organization or by customers.

To navigate these challenges, insurers can establish stage gates and success criteria, creating specific milestones for AI projects. For example, an agile governance board, employing benchmarks as guiding tools, can aid in decision making, ensuring alignment with strategic objectives and customer needs. Involving key stakeholders early in the process fosters buy-in and enhances the viability of AI solutions.
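
As one hypothetical way to encode such stage gates, the sketch below advances a project only when it clears both technical and product value thresholds; the gate names and numbers are illustrative assumptions, not prescriptions:

    # Hypothetical stage gates: a project advances only if it clears
    # every benchmark threshold for the gate in question.
    STAGE_GATES = {
        "poc_to_pilot":     {"precision": 0.80, "recall": 0.70},
        "pilot_to_rollout": {"precision": 0.85, "recall": 0.75,
                             "retention": 0.60},
    }

    def passes_gate(gate, measured):
        """Return (passed, failures) for a project's measured benchmarks."""
        failures = {m: t for m, t in STAGE_GATES[gate].items()
                    if measured.get(m, 0.0) < t}
        return not failures, failures

    measured = {"precision": 0.87, "recall": 0.72, "retention": 0.64}
    ok, failures = passes_gate("pilot_to_rollout", measured)
    print("advance" if ok else f"hold: {failures}")  # hold: recall below 0.75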

See also: AI: Beyond Cost-Cutting, to Top-Line Growth

Deciding on AI Implementation and Future Outlook

When considering scaling AI use, insurers must evaluate whether AI is necessary or whether other approaches suffice. Compliance use cases, for instance, might better suit rules-based algorithms because of their explainability. Examining the current infrastructure and data architecture then determines the feasibility and scalability of AI implementations.
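
To illustrate why rules-based logic can be preferable for compliance, here is a hypothetical sketch in which every decision traces back to an explicit, auditable rule; the rules and field names are invented for illustration:

    # Hypothetical rules-based compliance check: each rule is explicit
    # and auditable, so no ML model is needed to explain a decision.
    COMPLIANCE_RULES = [
        ("claim exceeds policy limit",
         lambda c: c["claim_amount"] > c["policy_limit"]),
        ("filed after coverage window",
         lambda c: c["days_since_incident"] > 90),
    ]

    def review_claim(claim):
        """Return the rules a claim violates; an empty list means compliant."""
        return [name for name, rule in COMPLIANCE_RULES if rule(claim)]

    claim = {"claim_amount": 12_000, "policy_limit": 10_000,
             "days_since_incident": 30}
    print(review_claim(claim))  # ['claim exceeds policy limit']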

Looking ahead into 2024, generative AI will continue to headline discussions. However, there is already a shift toward specialized, smaller language models tailored for specific insurance use cases. Vision algorithms, like OpenAI's ChatGPT with vision, promise more accurate visual assessments, such as claims estimates. These developments are indicative of a future where AI's integration aligns seamlessly with insurance processes, paving the way for enhanced efficiency and better customer experiences.


Dustin Ping

Dustin Ping is a senior associate at Silicon Foundry, a Kearney company, and DIFC Launchpad.

He specializes in assessing and synthesizing trends in disruptive technologies such as AI. Prior to joining the Launchpad team, Ping was a senior research analyst at the Brattle Group. 

He holds a bachelor's degree in mathematics from Williams College and studied for a year at the London School of Economics.
