'Explainable AI' Builds Trust With Customers

Insurance is moving toward a world in which carriers will not be allowed to make decisions that affect customers based on black-box AI.

Artificial intelligence (AI) holds a lot of promise for the insurance industry, particularly for reducing premium leakage, accelerating claims and making underwriting more accurate. AI can identify patterns and indicators of risk that would otherwise go unnoticed by human eyes. 

Unfortunately, AI has often been a black box: Data goes in, results come out and no one, not even the creators of the AI, has any idea how it arrived at its conclusions. That's because pure machine learning (ML) builds its model by iterating over the data, and that process leaves nothing a human can readily inspect or interpret.

For example, when AlphaGo, an AI developed by Google subsidiary DeepMind, became the first artificial intelligence to beat a high-level professional Go player, it made moves that were bewildering to the professional players who observed the match. Move 37 in game two was particularly strange, though, after the fact, it certainly appeared to be strong; after all, AlphaGo went on to win. But there was no way to ask AlphaGo why it had chosen the move that it did. Professional Go players had to puzzle it out for themselves.

That's a problem. Without transparency into the processes AI uses to arrive at its conclusions, insurers leave themselves open to accusations of bias, and those concerns are not unfounded: If the training data is biased, the resulting model will reflect that bias. There are many examples; one of the most infamous is an AI recruiting system that Amazon had been developing. The goal was to have the AI screen resumes to identify the best-qualified candidates, but it became clear that the algorithm had taught itself that men were preferable to women and was rejecting candidates on the basis of their gender. Instead of eliminating biases in existing recruiting systems, Amazon's AI had automated them. The project was canceled.

Insurance is a highly regulated industry, and those regulations are clearly moving toward a world in which carriers will not be allowed to make decisions that affect their customers based on black-box AI. The EU has proposed AI regulations that, among other requirements, would mandate that AI used for high-risk applications be “sufficiently transparent to enable users to understand and control how the high-risk AI system produces its output.” What qualifies as high-risk? Anything that could damage fundamental rights guaranteed in the Charter of Fundamental Rights of the European Union, which prohibits discrimination on the basis of sex, race, ethnicity and other traits.

Simply put, insurers will need to demonstrate that the AI they use does not include racial, gender or other biases. 

But beyond the legal requirements for AI transparency, there are also strong market forces pushing insurers in that direction. Insurers need explainable AI to build trust with their customers, who are very wary of its use. For instance, after fast-growing, AI-powered insurer Lemonade tweeted that it had collected 1,600 data points on customers and used nonverbal cues in video to inform claims decisions, the public backlash was swift. The company issued an apology and explained that it does not use AI to deny claims, but the brand certainly suffered as a result.

Insurers don’t need to abandon the use of AI or even “black-box” AI. There are forms of AI that are transparent and explainable, such as symbolic AI. Unlike pure ML, symbolic AI is rule-based: explicit rules describe what the system must do and how variables are combined to reach conclusions. When the two are used together, it’s called hybrid AI, which has the advantage of leveraging the strengths of each while remaining explainable: ML can be targeted at the pieces of a problem where explainability isn’t necessary.
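
To make that division of labor concrete, here is a minimal sketch in Python, purely illustrative rather than drawn from any carrier's system: a black-box ML component contributes a score, while an explicit rule layer makes the routing decision and records which rule fired.

```python
from dataclasses import dataclass

@dataclass
class Routing:
    queue: str   # e.g. "fast-track" or "adjuster-review"
    reason: str  # the explicit rule that fired, kept for audit purposes

def ml_complexity_score(claim_text: str) -> float:
    """Stand-in for a black-box ML component (e.g. a text classifier) that
    estimates how complex a claim is on a 0-1 scale. Hypothetical output."""
    return 0.15

def route_claim(claim_text: str, claim_amount: float) -> Routing:
    """Symbolic layer: every branch is an explicit rule a regulator, auditor
    or customer can read, so the routing decision is fully explainable."""
    score = ml_complexity_score(claim_text)
    if claim_amount > 10_000:
        return Routing("adjuster-review", "amount above 10,000 threshold")
    if score > 0.8:
        return Routing("adjuster-review", f"complexity score {score:.2f} above 0.80")
    return Routing("fast-track", "low amount and low complexity score")

print(route_claim("minor rear-end collision, bumper damage", 2300.0))
# Routing(queue='fast-track', reason='low amount and low complexity score')
```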

For instance, let’s say an insurer has a large number of medical claims and wants AI to identify the body parts involved in each accident. The first step is to make sure the system is working with up-to-date terminology, because the claims may use terms that are not yet part of the lexicon the AI needs to understand. ML can automate the detection of those concepts and map the terms it finds onto the reference lexicon. That step doesn’t need to be explainable, because there is a reference point, a dictionary, against which the output can be checked for correctness.
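
As a rough sketch of that terminology step, the snippet below uses difflib's fuzzy string matching as a stand-in for the ML component (in practice this might be an embedding or spelling model), with a hypothetical lexicon fragment; because every suggested mapping is checked against the dictionary, the step itself doesn't need to be explainable.

```python
import difflib
from typing import Optional

# Hypothetical fragment of the reference lexicon of body-part terms.
LEXICON = ["shoulder", "rotator cuff", "clavicle", "knee", "femur"]

def map_to_lexicon(term: str, cutoff: float = 0.75) -> Optional[str]:
    """Map a noisy claim term to the closest lexicon entry, or return None
    so a human can decide whether the lexicon needs a new entry."""
    matches = difflib.get_close_matches(term.lower(), LEXICON, n=1, cutoff=cutoff)
    return matches[0] if matches else None

for raw in ["sholder", "rotator-cuff", "patella"]:
    print(raw, "->", map_to_lexicon(raw))
# sholder -> shoulder
# rotator-cuff -> rotator cuff
# patella -> None
```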

The system can then capture the data in claims and normalize it. If the right shoulder was injured in an accident, symbolic AI can detect all the synonyms, understand the context and return a code for the body part involved. The result is transparent because each code is tied to a snippet from the original report showing exactly where it came from. There is a massive efficiency gain, but, ultimately, humans are still making the final decision on the claim.
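
A minimal sketch of that symbolic step might look like the following, with hypothetical synonym lists and codes rather than any real coding standard; each match returns both the code and the snippet of the original report that triggered it, which is what makes the output auditable.

```python
import re

# Hypothetical rule table: each body-part code lists the synonyms that map to it.
SYNONYMS = {
    "BODY_SHOULDER_RIGHT": ["right shoulder", "right rotator cuff"],
    "BODY_KNEE_LEFT": ["left knee", "left patella"],
}

def code_body_parts(report: str, context: int = 25):
    """Return (code, snippet) pairs, where each snippet is the matched phrase
    plus surrounding text from the original report, for traceability."""
    findings = []
    for code, phrases in SYNONYMS.items():
        for phrase in phrases:
            match = re.search(re.escape(phrase), report, flags=re.IGNORECASE)
            if match:
                snippet = report[max(0, match.start() - context):match.end() + context]
                findings.append((code, snippet))
    return findings

report = "Claimant states the right rotator cuff was strained while lifting a crate."
for code, snippet in code_body_parts(report):
    print(code, "|", snippet)
# Prints BODY_SHOULDER_RIGHT alongside the passage of the report that triggered it.
```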

AI holds a lot of promise for insurers, but no insurer wants to introduce additional risk into the business with a system that produces unexplainable results. Through the appropriate use of hybrid AI, carriers can build trust with their customers and ensure they are compliant with regulations while still enjoying the massive benefits that AI can provide.
