How 'Explainable AI' Changes the Game

AI often performs its magic with little insight into how it reached its recommendations. "Explainable AI" makes all the difference.

Artificial intelligence (AI) drives a growing share of decisions that touch every aspect of our lives, from where to take a vacation to healthcare recommendations that could affect our life expectancy. As AI’s influence grows, market research firm IDC expects spending on it to reach $98 billion in 2023, up from $38 billion in 2019. But in most applications, AI performs its magic with very little explanation for how it reached its recommendations. It’s like a student who displays an answer to a school math problem, but, when asked to show the work, simply shrugs.

This “black box” approach is one thing on fifth-grade math homework but quite another when it comes to the high-impact world of commercial insurance claims, where adjusters are often making weighty decisions affecting millions of dollars in claims each year. The stakes involved make it critical for adjusters and the carriers they work for to see AI’s reasoning both before big decisions are made and afterward so they can audit their performance and optimize business operations.

Concerns over increasingly complex AI models have fired up interest in “explainable AI” (sometimes referred to as XAI), a growing field that asks AI to show its work. There are many definitions of explainable AI, and it’s a rapidly expanding niche — and a frequent subject of conversation with our clients.

At a basic level, explainable AI describes how the algorithm arrived at its recommendation, often as a list of the factors it considered and percentages describing how much each factor contributed to the decision. The user can then evaluate the inputs that drive the output and decide how much to trust it.
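As a rough illustration, the factor-and-percentage output described above can be sketched in a few lines of Python. All factor names and weights here are hypothetical, not drawn from any actual model:

```python
def explain(contributions):
    """Return each factor's percentage share of the total contribution weight."""
    total = sum(contributions.values())
    return {factor: round(100 * weight / total)
            for factor, weight in contributions.items()}

# Hypothetical raw factor weights behind a single claim-cost prediction
shares = explain({
    "injury type (knee)": 0.35,
    "comorbidity (arthritis)": 0.30,
    "claimant age": 0.20,
    "similar past claims": 0.15,
})
# shares maps each factor to its percentage contribution, e.g. 35 for injury type
```

An adjuster reading this output sees not just a cost estimate but that, say, a comorbidity drove 30% of it — which is exactly the information needed to decide how much to trust the score.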

Transparency and Accountability

This "show your work" approach has three basic benefits. For starters, it creates accountability for those managing the model. Transparency encourages the model’s creators to consider how users will react to its recommendations, think more deeply about them and prepare for eventual feedback. The result is often a better model.

Greater Follow-Through

The second benefit is that the AI recommendation is acted on more often. Explained results tend to give the user confidence to follow through on the model’s recommendation. Greater follow-through drives higher impact, which can lead to increased investment in new models.

Encourages Human Input

The third positive outcome is that explainable AI welcomes human engagement. Operators who understand the factors leading to the recommendation can contribute their own expertise to the final decision — for example, upweighting a factor that their own experience indicates is critical in the particular case.

How Explainable AI Works in Workers' Comp Claims

Now let’s take a look at how explainable AI can dramatically change the game in workers' compensation claims.

Workers' comp injuries and the resulting medical, legal and administrative expenses cost insurers over $70 billion each year and employers well over $100 billion — and affect the lives of millions of workers who file claims. Yet a dedicated crew of fewer than 40,000 adjusters across the industry is handling upward of 3 million workers' comp claims in the U.S., often armed with surprisingly basic workflow software.

Enter AI, which can take the growing sea of data in workers' comp claims and generate increasingly accurate predictions about things such as the likely cost of the claim, the effectiveness of providers treating the injury and the likelihood of litigation.

See also: Stop Being Scared of Artificial Intelligence

Critical to the application of AI to any claim is that the adjuster managing the claim see it, believe it and act on it — and do so early enough in the claim to have an impact on its trajectory.

Adjusters can now monitor claim dashboards that show them the projected cost and medical severity of a claim, and the weighted factors that drive those predictions, based on:

  • the attributes of the claimant,
  • the injury, and
  • the path of similar claims in the past

Adjusters can also see the likelihood that the claimant will engage an attorney — an event that can increase the cost of a claim by 4x or more in catastrophic cases.
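One common way such a risk score can be built is as a weighted combination of claim factors passed through a logistic function, so that each factor's contribution remains visible. This is a hypothetical sketch, not CLARA's actual model; the factor names and weights are invented for illustration:

```python
import math

# Illustrative weights only: each factor's influence on the log-odds
# of attorney involvement. Real models learn these from past claims.
WEIGHTS = {
    "litigious venue": 1.2,
    "injury severity": 0.8,
    "claimant age over 50": 0.3,
}
BIAS = -2.0  # baseline log-odds when no risk factors are present

def attorney_risk(factors):
    """Return the probability of attorney engagement plus each factor's weight."""
    score = BIAS + sum(WEIGHTS[f] for f in factors)
    probability = 1 / (1 + math.exp(-score))
    drivers = {f: WEIGHTS[f] for f in factors}
    return probability, drivers

prob, drivers = attorney_risk(["litigious venue", "injury severity"])
```

Because the score decomposes into named factors, the adjuster can see which drivers pushed the probability up — and, as in the example below, notice when a factor the model never saw is missing.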

Let’s say a claimant injured a knee but also suffers from rheumatoid arthritis, which merits a specific regimen of medication and physical therapy.

If adjusters view an overall cost estimate that takes the arthritis into account but doesn’t call it out specifically, they may think the score is too high and simply discount it or spend time generating their own estimates.

But by looking at the score components, they can now see this complicating factor clearly, know to focus more time on this case and potentially engage a trained nurse to advise them. Adjusters can also use AI to help locate a specific healthcare provider with expertise in rheumatoid arthritis, where the claimant can get more targeted treatment for the condition.

The result is likely to be:

  • more effective care,
  • a faster recovery time, and
  • cost savings for the insurer, the claimant and the employer

Explainable AI can also show what might be missing from a prediction. One score may indicate that the risk of attorney involvement is low. Based on the listed factors, including location, age and injury type, this could be a reasonable conclusion.

But the adjuster might see something missing. The adjuster might have picked up a concern from the claimant that he may be let go at work. Knowing that fear of termination can lead to attorney engagement, the adjuster knows to invest more time with this particular claimant, allay some concerns and thus lower the risk that the claimant will engage an attorney.

Driving Outcomes Across the Company

Beyond enhancing outcomes on a specific case, these examples show how explainable AI can help the organization optimize outcomes across all claims. Risk managers, for example, can evaluate how the team generally follows up on cases where risk of attorney engagement is high and put in place new practices and training to address the risk more effectively. Care network managers can ensure they bring in new providers that help address emerging trends in care.

By monitoring follow-up actions and enabling adjusters to provide feedback on specific scores and recommendations, companies can create a cycle of improvement that leads to better models, more feedback and still more fine-tuning — creating a conversation between AI and adjusters that ultimately transforms workers' compensation.

See also: The Future Isn’t Just for Insurtech

Workers' comp, though, is just one area poised to benefit from explainable AI. Models that show their work are being adopted across the finance, healthcare and technology sectors and beyond.

Explainable AI can be the next step that increases user confidence, accelerates adoption and helps turn the vision of AI into real breakthroughs for businesses, consumers and society.

As first published in Techopedia.


Dustin Oxborrow

Dustin Oxborrow, senior vice president of global sales, brings more than 20 years of experience building and selling SaaS platforms to CLARA Analytics, the leading provider of artificial intelligence (AI) technology in the commercial insurance industry.