
The Importance of Explainable AI

“Most businesses believe that machine learning models are opaque and non-intuitive and no information is provided regarding their decision-making and predictions,” — Swathi Young, host at Women in AI.

Explainable AI is evolving to give meaning to artificial intelligence and machine learning in insurance. An XAI (explainable AI) model surfaces the key factors behind its decisions and explains them for both passed and failed cases. Features extracted from the insurance customer’s profile and from the accident image are highlighted in the model, and the rules and logic used for claim processing are presented in the output. For every passed case, the model shows the passed rules or coverage rules associated with the claim; for every failed case, it displays the rules that were violated.
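To make that concrete, here is a minimal Python sketch of this kind of rule-based explanation. The claim fields, rules and thresholds are hypothetical, not drawn from any real underwriting engine; the point is only that every decision is returned together with the rules that passed or failed.

    # Hypothetical claim features extracted from the customer profile and accident image.
    claim = {
        "policy_active": True,
        "damage_type": "collision",
        "claim_amount": 4200,
        "photo_matches_damage": True,
    }

    # Each coverage rule is a (name, predicate) pair over those features.
    rules = [
        ("Policy must be active", lambda c: c["policy_active"]),
        ("Damage type is covered", lambda c: c["damage_type"] in {"collision", "theft", "fire"}),
        ("Amount within coverage limit", lambda c: c["claim_amount"] <= 10000),
        ("Accident image consistent with claim", lambda c: c["photo_matches_damage"]),
    ]

    results = [(name, check(claim)) for name, check in rules]
    passed = [name for name, ok in results if ok]
    failed = [name for name, ok in results if not ok]

    print("Decision:", "PASSED" if not failed else "FAILED")
    print("Passed rules:", passed)
    print("Failed rules:", failed)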

In many enterprises in the insurance vertical, the underwriting engine or policy rules engine is a black box. Recommendations, quotes, insights and claim rejections or approvals come out of that black box without any explanation, and the decisions have to be taken on trust by IT team members and business team members alike. AI/ML is widely used in the insurance domain for claim processing and for generating policy quotes, and the underlying algorithms rely on different techniques, which can lead to issues of bias, cost and mistakes. Explainable AI comes to the rescue by explaining each decision and comparing and contrasting it with the alternatives. This improves customer experience, customer satisfaction, operational efficiency, financial performance and overall enterprise performance.

Many AI projects fail because insurance enterprises have long regarded AI models as untrustworthy or biased, and the models themselves never explain their output. XAI helps close the gap between the black box and trustworthy, responsible AI. It has been used in enterprise risk management, fraud prevention, customer loyalty improvement and market optimization, and it improves not just operational efficiency but also the fairness of recommendations, insights and results. Explainable AI exposes the software’s strengths and weaknesses, the features and criteria behind each decision, the details of the conclusion and any bias or error corrections.

Let us now look at the basic tenets of XAI (explainable AI): transparency, fidelity, domain sense, consistency, generalizability, parsimony, reasoning and traceability. Many insurance enterprises are planning to adopt explainable AI in their decision-making. Decisions that affect customers, such as quote generation, policy quote payment options and policy package options, are being reworked so that XAI shows the differences based on the criteria and features involved.

A recent survey found that 74% of consumers say they would be happy to get computer-generated insurance advice (Forbes).

Regulatory policies can be enforced and explained through XAI within the insurance enterprise, helping it abide by regulations. Claim processing can be improved, and the analysis presented can be enhanced with bias corrections and with the decisions that were not taken. Fraud can be prevented more easily using AI/ML with XAI: fraud rules can be verified, and violations can be displayed to identify where the fraud occurred. This improves the enterprise’s revenue and cuts down losses. Detection accuracy can be measured using true-positive and false-positive analysis, which helps cut costs as the claim process is better streamlined.
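As a rough illustration of that true-positive/false-positive analysis, the snippet below computes precision and recall for a fraud detector. The counts are purely made-up, illustrative numbers.

    # Illustrative counts only, not results from any real fraud model.
    true_positives = 80    # fraudulent claims correctly flagged
    false_positives = 20   # legitimate claims wrongly flagged
    false_negatives = 10   # fraudulent claims the model missed

    precision = true_positives / (true_positives + false_positives)  # how reliable a fraud flag is
    recall = true_positives / (true_positives + false_negatives)     # how much fraud gets caught

    print(f"Precision: {precision:.2f}, Recall: {recall:.2f}")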

See also: Stop Being Scared of Artificial Intelligence

Customer loyalty and retention can be improved by using AI/ML for customer behavior analysis; prediction algorithms can drive churn prediction and recommendation engines. Insurance pricing engines can use AI/ML for price prediction, and the predicted price can be explained based on the customer’s profile, history and expectations. This improves customer satisfaction and loyalty, and XAI makes AI model management more responsible. Business users want to know why a decision or output is better; once they do, they can act on the decisions easily and improve on them.
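A minimal sketch of the churn-prediction idea follows. It assumes scikit-learn is installed, uses made-up customer features (tenure, claims filed, premium increase), and is in no way a production model; the fitted weights give only a first, rough explanation of which profile features push the churn prediction up or down.

    from sklearn.linear_model import LogisticRegression

    # Hypothetical customer features: [tenure in years, claims filed, premium increase %].
    X = [
        [1, 0, 15],
        [8, 2, 2],
        [5, 1, 5],
        [2, 0, 20],
    ]
    y = [1, 0, 0, 1]  # 1 = churned, 0 = stayed

    model = LogisticRegression().fit(X, y)

    new_customer = [[3, 1, 12]]
    print("Churn probability:", round(model.predict_proba(new_customer)[0][1], 2))
    print("Feature weights:", model.coef_[0])  # a rough per-feature explanation of the score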

What’s Next?

Responsible AI will be the next technology that ensures decisions are taken wisely and that trust is developed in the AI model. Causal AI can help make the model more operational: the causes and effects can be described during modeling, training, testing and execution. The hidden complexity will be simplified by inference engines and causality details. The next level of AI models and engines can adapt to new scenarios and make fair decisions with implicit causality.

‘Explainable AI’ Builds Trust With Customers

Artificial intelligence (AI) holds a lot of promise for the insurance industry, particularly for reducing premium leakage, accelerating claims and making underwriting more accurate. AI can identify patterns and indicators of risk that would otherwise go unnoticed by human eyes. 

Unfortunately, AI has often been a black box: Data goes in, results come out and no one — not even the creators of the AI — has any idea how the AI came to its conclusions. That’s because pure machine learning (ML) analyzes the data in an iterative fashion to develop a model, and that process is simply not available or understandable. 

For example, when AlphaGo, an AI developed by Google subsidiary DeepMind, became the first artificial intelligence to beat a high-level professional Go player, it made moves that were bewildering to other professional players who observed the game. Move 37 in game two of the match was particularly strange, though, after the fact, it certainly appeared to be strong — after all, AlphaGo went on to win. But there was no way to ask AlphaGo why it had chosen the move that it did. Professional Go players had to puzzle it out for themselves. 

That’s a problem. Without transparency into the processes AI uses to arrive at its conclusions, insurers leave themselves open to accusations of bias. These concerns of bias are not unfounded. If the data itself is biased, then the model created will reflect it. There are many examples; one of the most infamous is an AI recruiting system that Amazon had been developing. The goal was to have the AI screen resumes to identify the best-qualified candidates, but it became clear that the algorithm had taught itself that men were preferable to women, and rejected candidates on the basis of their gender. Instead of eliminating biases in existing recruiting systems, Amazon’s AI had automated them. The project was canceled.

Insurance is a highly regulated industry, and those regulations are clearly moving toward a world in which carriers will not be allowed to make decisions that affect their customers based on black-box AI. The EU has proposed AI regulations that, among other requirements, would mandate that AI used for high-risk applications be “sufficiently transparent to enable users to understand and control how the high-risk AI system produces its output.” What qualifies as high-risk? Anything that could damage fundamental rights guaranteed in the Charter of Fundamental Rights of the European Union, which includes discrimination on the basis of sex, race, ethnicity and other traits. 

Simply put, insurers will need to demonstrate that the AI they use does not include racial, gender or other biases. 

But beyond the legal requirements for AI transparency, there are also strong market forces pushing insurers in that direction. Insurers need explainable AI to build trust with their customers, who are very wary of its use. For instance, after fast-growing, AI-powered insurer Lemonade tweeted that it had collected 1,600 data points on customers and used nonverbal clues in video to determine how to decide on claims, the public backlash was swift. The company issued an apology and explained that it does not use AI to deny claims, but the brand certainly suffered as a result.

Insurers don’t need to abandon the use of AI, or even of “black-box” AI. There are forms of AI that are transparent and explainable, such as symbolic AI. Unlike pure ML, symbolic AI is rule-based, with explicit rules describing what the system has to do and variables used to reach conclusions. When the two are used together, it’s called hybrid AI, which has the advantage of leveraging the strengths of each while remaining explainable: ML can be targeted at the pieces of a given problem where explainability isn’t necessary.

For instance, let’s say an insurer has a large number of medical claims, and it wants AI to understand the body parts involved in the accident. The first step is to make sure that the system is using up-to-date terminology, because there may be terms used in the claims that are not part of the lexicon the AI needs to understand. ML can automate the detection of concepts to create a map of the sequences used. It doesn’t need to be explainable because there’s a reference point, a dictionary, that can determine whether the output is correct. 

See also: The Intersection of IoT and Ecosystems

The system could then capture the data in claims and normalize it. If the right shoulder is injured in an accident, symbolic AI can detect all synonyms, understand the context and come back with a code of the body part involved. It’s transparent because we can see where it’s coded with a snippet from the original report. There’s a massive efficiency gain, but, ultimately, humans are still making the final decision on the claim.
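A minimal sketch of that symbolic step might look like the following. The synonym lists and body-part codes are hypothetical, and a real system would use a much richer medical lexicon, but it shows how each code can be traced back to a snippet of the original report.

    import re

    # Hypothetical body-part codes and synonym lists.
    BODY_PART_CODES = {"right shoulder": "BP-RS-01", "left knee": "BP-LK-01"}
    SYNONYMS = {
        "right shoulder": ["right shoulder", "r. shoulder", "rt shoulder"],
        "left knee": ["left knee", "l. knee", "lt knee"],
    }

    def code_body_parts(report):
        findings = []
        for part, variants in SYNONYMS.items():
            for variant in variants:
                match = re.search(re.escape(variant), report, flags=re.IGNORECASE)
                if match:
                    findings.append({
                        "code": BODY_PART_CODES[part],
                        "body_part": part,
                        # Keep the matched snippet so the coding stays transparent.
                        "evidence": report[max(0, match.start() - 20):match.end() + 20],
                    })
                    break
        return findings

    print(code_body_parts("Claimant reports pain in the rt shoulder after a fall from a ladder."))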

AI holds a lot of promise for insurers, but no insurer wants to introduce additional risk into the business with a system that produces unexplainable results. Through the appropriate use of hybrid AI, carriers can build trust with their customers and ensure they are compliant with regulations while still enjoying the massive benefits that AI can provide.

How to Put a Stop to AI Bias

Imagine you were suddenly refused insurance coverage, or your premium increased 50% just because of your skin color. Imagine you were charged more just because of your gender. It can happen, because of biased algorithms.

While technology improves our lives in so many ways, can we entirely rely on it for insurance policy decisions?

Algorithmic Bias

Algorithms will most likely have flaws. Algorithms are made by humans, after all, and they learn only from the data we feed them. So we have to work to avoid algorithmic bias — an unfair outcome based on factors such as race, gender and religious views.

It is highly unethical (and even illegal) to make decisions based on these factors in real life. So why allow algorithms to do so? 

Algorithmic Bias and Insurance Problems

In 2019, a bias problem surfaced in healthcare: an algorithm gave more attention and better treatment to white patients than to Black patients with the same illness. This was because the algorithm was using insurance data and predictions about which patients are more expensive to treat. If algorithms use biased data, we can expect the results to be biased.

It doesn’t mean we need to stop using AI — but, rather, that we must make an effort to improve it.

How Does Algorithmic Bias Affect People?

Millions of people of color have already been affected by algorithmic bias, mostly through algorithms used by healthcare facilities. Algorithmic bias has also influenced social media.

It is essential to keep working on this problem. In the U.S. alone, algorithms manage care for about 200 million people. The issue is difficult to address because health data is private and thus hard to access. But it’s simply unacceptable that Black people had to be sicker than white people to get more serious help and would be charged more for the same treatment.

How to Stop This AI Bias?

We have to find factors beyond insurance costs to use in calculating someone’s medical fees. It’s also imperative to continually test the model and to offer those affected a way of providing feedback. By regularly reviewing and acting on that feedback, we ensure that the model is working as it should.

See also: How to Evaluate AI Solutions

We have to use data that reflects a broader population and not just one group of people — if there is more data collected on white people, other races may be discriminated against.

One approach is “synthetic data,” which is artificially generated and which many data scientists believe is far less biased. There are three main types: fully generated data, partially generated data and data corrected from real data. Using synthetic data makes it much easier to analyze a given problem and come to a solution.

Here is a comparison: 

If the database isn’t big enough, synthetic data can be added to expand it and make it more diverse. And if the database already contains a large number of records, synthetic data can still balance it and make sure that no group is excluded or mistreated.

The good news is that generating synthetic data is less expensive. Real-life data requires a lot more work, such as collecting or measuring it, while synthetic data can rely on machine learning. Besides saving a lot of money, synthetic data also saves a lot of time, because collecting real data can be a long process.

For example, let’s say we are working with a facial recognition algorithm. If we show the algorithm more examples of white people than of any other race, it will work best on Caucasian samples. So we should make sure that enough data is produced that all races are equally represented.
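A toy sketch of that balancing idea is below. It simply duplicates records from under-represented groups until every group is equally common; real synthetic data would generate new, realistic samples rather than copies, and the group labels and counts here are made up.

    from collections import Counter

    # Made-up group labels for an imbalanced training set.
    samples = ["white"] * 700 + ["black"] * 200 + ["asian"] * 100
    counts = Counter(samples)
    target = max(counts.values())

    balanced = list(samples)
    for group, count in counts.items():
        # Naive oversampling by duplication, standing in for proper synthetic generation,
        # which would create new, realistic records instead of copies.
        balanced.extend([group] * (target - count))

    print("Before:", counts)
    print("After:", Counter(balanced))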

Synthetic data does have its limitations. There isn’t a mechanism to verify if the data is accurate.

AI is clearly playing a significant role in the insurance sector. By the end of 2021, hospitals will invest $6.6 billion in AI. But it’s still essential to have human involvement to make sure algorithmic bias doesn’t have the last say. People are the ones who can focus on making algorithms work better and on overcoming bias.

See also: How AI Can Vanquish Bias

Explainable AI

Because we can’t entirely rely on synthetic data, a better solution may be something called “explainable AI.” It is one of the most exciting topics in the world of machine learning right now.

Usually, when we have an algorithm doing something for us, we can’t really see how it is working with the data. So can we trust the process fully?

Wouldn’t it be better if we understood what the model is doing? This is where explainable AI comes in. Not only do we get a prediction of what the outcome will be, but we also get an explanation of that prediction. With problems such as algorithmic bias, there is a need for transparency so we can see why we’re getting a specific outcome. 

Suppose a company makes a model that decides which applications warrant an in-person interview. That model is trained to make decisions based on prior experiences. If, in the past, many women got rejected for the in-person interview, the model will most likely reject women in the future just because of that information.

Explainable AI could help. If a person could check the reasons for some of these decisions, the person might spot and fix the bias. 
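The kind of check this makes possible is sketched below: comparing the model’s interview-invitation rate across genders on historical decisions. The records are invented and a real audit would control for qualifications, but a ratio far below 1 is the sort of signal a reviewer could then investigate.

    # Invented historical decisions, for illustration only.
    decisions = [
        {"gender": "F", "invited": False},
        {"gender": "F", "invited": False},
        {"gender": "F", "invited": True},
        {"gender": "M", "invited": True},
        {"gender": "M", "invited": True},
        {"gender": "M", "invited": False},
    ]

    def invite_rate(group):
        rows = [d for d in decisions if d["gender"] == group]
        return sum(d["invited"] for d in rows) / len(rows)

    rate_f, rate_m = invite_rate("F"), invite_rate("M")
    print(f"Invite rate: women {rate_f:.0%}, men {rate_m:.0%}")
    print(f"Disparate impact ratio: {rate_f / rate_m:.2f}")  # values well below 1 suggest bias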

Final words

We need to remember that humans make these algorithms and that, unfortunately, our society is still battling issues such as racism. So, we humans must put a lot of effort into making these algorithms unbiased.

The good news is that algorithms and data are easier to change than people.

How ‘Explainable AI’ Changes the Game

Artificial intelligence (AI) drives a growing share of decisions that touch every aspect of our lives, from where to take a vacation to healthcare recommendations that could affect our life expectancy. As AI’s influence grows, market research firm IDC expects spending on it to reach $98 billion in 2023, up from $38 billion in 2019. But in most applications, AI performs its magic with very little explanation for how it reached its recommendations. It’s like a student who displays an answer to a school math problem, but, when asked to show the work, simply shrugs.

This “black box” approach is one thing on fifth-grade math homework but quite another when it comes to the high-impact world of commercial insurance claims, where adjusters are often making weighty decisions affecting millions of dollars in claims each year. The stakes involved make it critical for adjusters and the carriers they work for to see AI’s reasoning both before big decisions are made and afterward so they can audit their performance and optimize business operations.

Concerns over increasingly complex AI models have fired up interest in “explainable AI” (sometimes referred to as XAI), a growing field that asks AI to show its work. There are many definitions of explainable AI, and it’s a rapidly growing niche — and a frequent subject of conversation with our clients.

At a basic level, explainable AI describes how the algorithm arrived at its recommendation, often as a list of the factors it considered and percentages describing how much each factor contributed to the decision. The user can then evaluate the inputs that drive the output and decide how far to trust that output.
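For instance, a minimal sketch of that factor-and-percentage breakdown follows. The factor names and raw contribution values are made up; in practice they might come from a linear model’s terms or a SHAP-style attribution.

    # Made-up raw contributions per factor for a single prediction.
    contributions = {
        "injury severity": 2.4,
        "claimant age": 0.9,
        "prior claims": 0.6,
        "jurisdiction": 0.3,
    }

    total = sum(abs(v) for v in contributions.values())
    breakdown = {factor: 100 * abs(v) / total for factor, v in contributions.items()}

    # Report each factor's share of the prediction, largest first.
    for factor, pct in sorted(breakdown.items(), key=lambda kv: -kv[1]):
        print(f"{factor}: {pct:.1f}% of the prediction")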

Transparency and Accountability

This “show your work” approach has three basic benefits. For starters, it creates accountability for those managing the model. Transparency encourages the model’s creators to consider how users will react to its recommendations, think more deeply about them and prepare for eventual feedback. The result is often a better model.

Greater Follow-Through

The second benefit is that the AI recommendation is acted on more often. Explained results tend to give the user confidence to follow through on the model’s recommendation. Greater follow-through drives higher impact, which can lead to increased investment in new models.

Encourages Human Input

The third positive outcome is that explainable AI welcomes human engagement. Operators who understand the factors leading to the recommendation can contribute their own expertise to the final decision — for example, upweighting a factor that their own experience indicates is critical in the particular case.

How Explainable AI Works in Workers’ Comp Claims

Now let’s take a look at how explainable AI can dramatically change the game in workers’ compensation claims.

Workers’ comp injuries and the resulting medical, legal and administrative expenses cost insurers over $70 billion each year and employers well over $100 billion — and affect the lives of millions of workers who file claims. Yet a dedicated crew of fewer than 40,000 adjusters across the industry handles upward of 3 million workers’ comp claims in the U.S., often armed with surprisingly basic workflow software.

Enter AI, which can take the growing sea of data in workers’ comp claims and generate increasingly accurate predictions about things such as the likely cost of the claim, the effectiveness of providers treating the injury and the likelihood of litigation.

See also: Stop Being Scared of Artificial Intelligence

Critical to the application of AI to any claim is that the adjuster managing the claim see it, believe it and act on it — and do so early enough in the claim to have an impact on its trajectory.

Adjusters can now monitor claim dashboards that show them the projected cost and medical severity of a claim, and the weighted factors that drive those predictions, based on:

  • the attributes of the claimant,
  • the injury, and
  • the path of similar claims in the past

Adjusters can also see the likelihood of whether the claimant will engage an attorney — an event that can increase the cost of the claim by 4x or more in catastrophic claims.

Let’s say a claimant injured a knee but also suffers from rheumatoid arthritis, which merits a specific regimen of medication and physical therapy.

If adjusters viewed an overall cost estimate that took the arthritis into account but didn’t call it out specifically, they might think the score is too high and simply discount it, or spend time generating their own estimates.

But by looking at the score components, they can now see this complicating factor clearly, know to focus more time on this case and potentially engage a trained nurse to advise them. Adjusters can also use AI to help locate a specific healthcare provider with expertise in rheumatoid arthritis, where the claimant can get more targeted treatment for the condition.

The result is likely to be:

  • more effective care,
  • a faster recovery time, and
  • cost savings for the insurer, the claimant and the employer

Explainable AI can also show what might be missing from a prediction. One score may indicate that the risk of attorney involvement is low. Based on the listed factors, including location, age and injury type, this could be a reasonable conclusion.

But the adjuster might see something missing. The adjuster might have picked up a concern from the claimant that he may be let go at work. Knowing that fear of termination can lead to attorney engagement, the adjuster can invest more time with this particular claimant, allay some concerns and thus lower the risk that the claimant will engage an attorney.

Driving Outcomes Across the Company

Beyond enhancing outcomes on a specific case, these examples show how explainable AI can help the organization optimize outcomes across all claims. Risk managers, for example, can evaluate how the team generally follows up on cases where risk of attorney engagement is high and put in place new practices and training to address the risk more effectively. Care network managers can ensure they bring in new providers that help address emerging trends in care.

By monitoring follow-up actions and enabling adjusters to provide feedback on specific scores and recommendations, companies can create a cycle of improvement that leads to better models, more feedback and still more fine-tuning — creating a conversation between AI and adjusters that ultimately transforms workers’ compensation.

See also: The Future Isn’t Just for Insurtech

Workers’ comp, though, is just one area poised to benefit from explainable AI. Models that show their work are being adopted across the finance, health and technology sectors and beyond.

Explainable AI can be the next step that increases user confidence, accelerates adoption and helps turn the vision of AI into real breakthroughs for businesses, consumers and society.

As first published in Techopedia.