Explainable AI Is the Holy Grail

AI doesn't help much if it just tells you a customer is likely to leave. It has to be able to explain why, so you have a chance to fix the issue.


While many promises of disruption from insurtechs have fallen flat, now is the time to lean into digital transformation. On the cusp of a potential economic downturn, organizations must continue to adopt advanced solutions that deliver quantifiable ROI and impact. And while there is rightful focus on how artificial intelligence (AI) and automation help organizations operate with fewer resources on hand, there isn't enough focus on explainable AI.

Explainable AI, or the ability to look inside the “black box” of an algorithm's decision-making and understand the reasons behind its predictions, is pivotal to better understanding customers, detecting fraud and staying ahead of potential legislation that could disrupt the industry. Insurers are not alone in their struggle to obtain useful and unbiased data. Explainable AI paves the way for improving business outcomes and keeping companies accountable to new and emerging ethical standards.

Mass customization through AI 

On the surface, using explainable AI to predict customer churn might seem less exciting than AI-driven telematics or AI-configured risk models for states affected differently by natural disasters. But picture this: Your AI model alerts you that Customer X has a significant chance of taking their business to a competitor. Without explainable AI to reveal precisely why that departure is likely, your organization would have to accept defeat and watch the customer close their account once their contract was up.

With explainable AI, it may be revealed that the customer is extremely price-sensitive, and because their rates went up after an accident last year, they're looking for options more within their price range. The AI arrives at this prediction from extensive first-party data, including past interactions with customer service representatives and subsequent surveys. In this instance, offering a lower rate could reduce the chance of Customer X leaving to 50%. One customer may not keep an organization afloat, but thousands of instances like this one can help maintain consistent growth through economic fluctuations.
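To make the mechanics concrete, here is a minimal sketch of how per-customer explanations like this can be produced, using scikit-learn and the open-source shap package. The churn model, feature names and data are hypothetical illustrations, not the specific setup described above.

```python
# Minimal sketch: explaining one customer's churn prediction with SHAP.
# All feature names, data and the model here are hypothetical illustrations.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["rate_increase_pct", "claims_last_year", "tenure_years", "service_calls"]
X = rng.normal(size=(1_000, len(features)))
# Synthetic label: churn risk grows with rate increases and service calls.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1_000) > 1).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# For one at-risk customer, list how each feature pushed the score up or down.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]
for name, value in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>20s}: {value:+.3f}")
```

If a feature like the rate increase dominates the positive contributions, a retention team has a concrete, actionable reason for the risk rather than an opaque score.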

Aside from the obvious benefit to the insurer, targeted, customer-centric personalization of policies and customer service interactions contributes to a better customer experience and thus a more loyal customer base.

Detect fraud faster and improve your data

It wasn’t long ago that Geico unveiled its use of AI to speed collision estimation: after an accident, customers submit photos of damage to their vehicles, which accelerates the claims and repair processes. Without explainable AI to outline why certain damages or costs were identified, the software could cause considerable challenges when customers are unhappy with its decisions. It’s only fair for the insurer to explain why a claim was denied or only partially approved.

In the case of fraudulent claims, insurers need a way to quickly detect when something’s amiss. In verticals like retail, where new data constantly flows into systems, models can be updated nearly instantaneously based on real-time interactions to improve AI-backed decision-making. However, this approach requires a steady cadence of fresh data to keep up with changing trends. Machine learning (ML) models predicting insurance claim fraud may only be retrained far less frequently, leaving them vulnerable to what is sometimes called model drift.

This means that enterprise data, and therefore ML models, may be inaccurate for a period, until the feedback loop closes and the model can be updated. Implementing rules systems on top of ML provides an automation stop-gap: until relevant data is fed into the system, rules act as a guardrail and reduce risk for ML models when data drifts from its training distribution.
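As an illustration of that stop-gap pattern, here is a minimal sketch in Python of a rules layer wrapping an ML fraud score. The thresholds, features and drift rule are hypothetical assumptions, not any particular vendor's implementation.

```python
# Minimal sketch: a rules guardrail layered over an ML fraud score.
# Thresholds, features and the drift rule are hypothetical assumptions.
from dataclasses import dataclass

TRAINING_MAX_AMOUNT = 50_000.0  # assumed upper bound seen in training data

@dataclass
class Claim:
    amount: float
    days_since_policy_start: int
    prior_claims: int

def ml_fraud_score(claim: Claim) -> float:
    """Stand-in for a trained model's fraud probability."""
    return min(1.0, 0.05 * claim.prior_claims + claim.amount / 200_000)

def score_with_guardrails(claim: Claim) -> tuple[float, str]:
    # Rule 1: input outside the training distribution -> don't trust the model.
    if claim.amount > TRAINING_MAX_AMOUNT:
        return 1.0, "manual review: amount outside training range"
    # Rule 2: hard business rule that holds regardless of model drift.
    if claim.days_since_policy_start < 7 and claim.amount > 10_000:
        return 1.0, "manual review: large claim on a brand-new policy"
    return ml_fraud_score(claim), "model score"

print(score_with_guardrails(Claim(75_000, 400, 0)))  # routed by Rule 1
print(score_with_guardrails(Claim(2_000, 90, 1)))    # scored by the model
```

The design point is that the rules are legible and auditable on their own, so even while the model is between retraining cycles, every routing decision can be explained.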

Further, the ability to analyze a model and its recommendations is crucial for identifying erroneous or biased data that should never have made it into training. Data science workflows that use explainable AI to drive upstream data improvements continuously raise the quality of the organization's data while building confidence in outputs and results.
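One way such a workflow can surface suspect training data is to inspect global feature importances and flag features that dominate predictions. A minimal sketch with scikit-learn's permutation importance follows; the data, feature names and review threshold are hypothetical.

```python
# Minimal sketch: flagging training features that dominate predictions,
# which can indicate proxy variables or biased data worth reviewing.
# Data, feature names and the review threshold are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["claim_amount", "repair_cost_estimate", "zip_code"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 2] > 0).astype(int)  # label leaks in through the proxy feature

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

REVIEW_THRESHOLD = 0.2  # hypothetical cutoff for "suspiciously dominant"
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda p: -p[1]):
    flag = "  <- dominates predictions; review for proxy bias" if score > REVIEW_THRESHOLD else ""
    print(f"{name:>22s}: {score:.3f}{flag}")
```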


Stay ahead of pending legislation 

In the last few months, regulators have upped the ante with a clear desire to create uniform, ethical standards for using AI and automation. For example, New York City is instituting a law that penalizes employers for bias in AI hiring tools starting in January 2023; as a result, companies are scrambling to audit their AI programs before the deadline. At the federal level, the Biden administration released a Blueprint for an AI Bill of Rights, which will likely inform more binding legislation focused on transparency and accountability.

Compliance-minded insurers have no choice but to turn to explainable AI, using software to understand, and prove, which variables came into consideration for sensitive decisions. This is underscored by a December 2022 lawsuit alleging racial disparities in how a leading insurance carrier processes claims from minority policyholders. The suit cites the company’s relationship with a claims management platform provider and that provider's partnership with a Netherlands-based AI firm that delivers a fraud detection score indicating the likelihood of fraud throughout the claims process.

This lawsuit is a bellwether: as AI and automation software penetrate the insurance industry more widely, the use cases for ethical, transparent AI will skyrocket.

Insurance needs explainable AI

Insurers can’t stop the momentum of digital disruption. With rumblings of an economic downturn, insurers can't pump the brakes while competitors ramp up processes reliant on AI and automation. With the help of explainable AI, insurers are set up to succeed in attracting and retaining customers, detecting fraudulent activities and staying compliant with pending legislative efforts ensuring AI is accessible and fair.


Rik Chomko

Rik Chomko is co-founder and CEO of InRule Technology, an intelligence automation company providing integrated decision-making, machine learning and process automation software to the enterprise.

Chomko started the company in 2002 with CTO Loren Goodman. He became chief executive officer in 2015 after serving as chief operating officer since 2012. Chomko also served as chief product officer prior to his role as COO.

Before co-founding InRule, Chomko was chief technology officer with Calypso Systems, a consulting firm. Chomko also worked for Health Care Service from 1991 to 1995.
