The Risks of AI and Machine Learning

If the proper guardrails and governance are not put into place early, insurers could face legal, regulatory, reputational, operational and strategic consequences down the road.


Artificial intelligence (AI) and machine learning (ML) are transforming the insurance industry. Many companies are already using them to assess underwriting risk, determine pricing and evaluate claims. But, if the proper guardrails and governance are not put into place early, insurers could face legal, regulatory, reputational, operational and strategic consequences down the road. Given the heightened scrutiny surrounding AI and ML from regulators and the public, those risks may come much sooner than many people realize.

Let's look at how AI and ML function in insurance for a better understanding of what could be on the horizon.

A Quick Review of AI and Machine Learning

We often hear the terms "artificial intelligence" and "machine learning" used interchangeably. The two are related but not synonymous, and it is important for insurers to know the difference.

Artificial intelligence refers to a broad category of technologies aimed at simulating the capabilities of human thought.

Machine learning is a subset of AI aimed at solving narrowly defined problems: it enables machines to learn from existing datasets and make predictions without explicit programming instructions. Unlike the futuristic notion of "artificial general intelligence," which aims to mimic human problem-solving broadly, a machine learning model performs only the specific functions for which it is trained. Its strength lies in its ability to consume vast amounts of data, identify correlations a human observer might never notice, and apply those findings in a predictive capacity.
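
The contrast with explicit programming can be made concrete with a small sketch. The scikit-learn model, the feature choices, and the data below are purely illustrative assumptions, not drawn from any real underwriting system; the point is only that the prediction rule is learned from examples rather than hand-coded.

```python
# A minimal illustration of the "learn from data, then predict" pattern.
# All features and values are made up for demonstration purposes.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, bmi] and whether a claim occurred.
X_train = [[35, 22.0], [62, 31.5], [48, 27.0], [29, 24.0], [70, 29.5]]
y_train = [0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # the model infers patterns from the examples

# Estimate the claim probability for a new applicant; no rule was ever
# hand-coded, the relationship between features and outcomes was learned.
print(model.predict_proba([[45, 26.0]])[0][1])
```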

Limitations and Pitfalls of AI/ML

Much of the concern about AI and machine learning applications in the insurance industry stems from predictive inference models: models optimized to make predictions based primarily or solely on correlations found in their training data. Those correlations may reflect past discrimination, so without oversight there is a real potential that AI/ML models will perpetuate that discrimination going forward. Discrimination can occur without AI/ML, of course, but it happens at a much smaller scale and is therefore less dangerous.

Consider a model that uses a history of diabetes and BMI as factors in evaluating life expectancy, which in turn drives pricing for life insurance. The model might identify a correlation between higher BMI or incidence of diabetes and mortality, which would drive the policy price higher. Unseen in these data points, however, is the fact that African-Americans have higher rates of diabetes and high BMI. A simple comparison of price distribution by race would show that these variables lead to higher pricing for African-Americans.

A predictive inference model is not concerned with causation; it is simply trained to find correlation. Even when an ML model explicitly excludes race as a factor in its decisions, it can nevertheless make decisions that have a disparate impact on applicants of different racial and ethnic backgrounds. This sort of proxy discrimination can be far more subtle and difficult to detect than the example outlined above. The underlying correlations may sometimes be acceptable, as in the BMI/diabetes example, but it is critical that companies have visibility into how they shape model outcomes.
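
Gaining that visibility typically starts with comparing model outputs across demographic groups, even when those attributes are excluded from the model's features. The sketch below is a minimal illustration of that kind of check; it assumes the insurer can join scored applications with demographic data held out for testing, and the column names (predicted_premium, group) are hypothetical.

```python
import pandas as pd

def disparate_impact_summary(df: pd.DataFrame,
                             outcome_col: str = "predicted_premium",
                             group_col: str = "group") -> pd.DataFrame:
    """Compare the distribution of a model's output across demographic
    groups, even when the protected attribute is not a model feature."""
    summary = (
        df.groupby(group_col)[outcome_col]
          .agg(["count", "mean", "median"])
          .rename(columns={"mean": "avg_outcome", "median": "median_outcome"})
    )
    # Ratio of each group's average outcome to the overall average;
    # large gaps flag candidates for proxy-discrimination review.
    summary["ratio_to_overall"] = summary["avg_outcome"] / df[outcome_col].mean()
    return summary

# Hypothetical usage with scored applications joined to test-only demographics:
# scored = pd.DataFrame({"predicted_premium": [120.0, 95.0, 150.0, 110.0],
#                        "group": ["A", "B", "A", "B"]})
# print(disparate_impact_summary(scored))
```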

Predictive inference models have a second major deficiency: they cannot adapt to new information until they are retrained on updated data that reflects the new reality. Consider the following example.

Imagine that an insurer wishes to assess the likelihood that an applicant will require long-term in-home care. It trains its ML model on historical data and begins making predictions based on that information. But a breakthrough treatment is subsequently discovered (for instance, a cure for Alzheimer's disease) that leads to a 20% decrease in required in-home care services. The existing ML model is unaware of this development; it cannot adapt to the new reality unless it is retrained on new data. For the insurer, the result is overpriced policies and diminished competitiveness.
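
Catching this kind of drift is largely a matter of routinely comparing what the model predicts with what recent outcomes actually show. The following sketch assumes the insurer periodically has both the model's predicted probabilities and newly observed outcomes available; the 5% tolerance and the field names are illustrative assumptions, not a recommended standard.

```python
from statistics import mean

def detect_outcome_drift(predicted_probs, observed_outcomes, tolerance=0.05):
    """Flag when the model's average predicted rate of needing in-home care
    diverges from the rate observed in recent data, e.g. after a medical
    breakthrough the model has never seen."""
    predicted_rate = mean(predicted_probs)   # what the model expects
    observed_rate = mean(observed_outcomes)  # what recent outcomes show
    drift = predicted_rate - observed_rate
    return {
        "predicted_rate": predicted_rate,
        "observed_rate": observed_rate,
        "drift": drift,
        "retraining_recommended": abs(drift) > tolerance,
    }

# Hypothetical example: the model still predicts roughly a 31% need for
# in-home care, but recent outcomes show about 25% after the breakthrough.
# print(detect_outcome_drift([0.32, 0.30, 0.31], [0, 0, 1, 0, 1, 0, 0, 0]))
```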

The lesson is that AI/ML requires a structured process of planning, approval, auditing, and continuous monitoring by a cross-organizational group of people to successfully overcome its limitations.

Categories of AI and Machine Learning Risk

Broadly speaking, insurers should concern themselves with five categories of risk related to AI and machine learning: reputational, legal, strategic/financial, operational, and compliance/regulatory.

Reputational risk arises from the potential negative publicity surrounding problems such as proxy discrimination. The predictive models employed by most machine learning systems are prone to introducing bias. For example, an insurer that was an early adopter of AI recently suffered a consumer backlash when its technology was criticized for its potential to treat people of color differently from white policyholders.

As insurers roll out AI/ML, they must proactively prevent bias in their algorithms and be prepared to fully explain their automated, AI-driven decisions. Proxy discrimination should be prevented whenever possible through strong governance. But when bias occurs despite a company's best efforts, business leaders must be able to explain how their systems are making decisions, which in turn requires transparency down to the transaction level and across model versions as they change.
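
In practice, transaction-level transparency means recording every automated decision together with its inputs and the exact model version that produced it, so a specific outcome can be reconstructed later. The sketch below is one illustrative way to capture such a record; the field names, file-based storage, and versioning scheme are assumptions rather than a prescribed logging standard.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_name: str, model_version: str,
                 inputs: dict, output, log_path: str = "decisions.log") -> str:
    """Append an auditable record of a single automated decision,
    keyed by a unique transaction ID and the model version used."""
    record = {
        "transaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["transaction_id"]

# Hypothetical usage after a pricing model scores an application:
# tx_id = log_decision("life_pricing", "2.3.1",
#                      {"age": 47, "bmi": 28.4}, {"annual_premium": 1240.0})
```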

Key questions:

  1.  In what unexpected ways might AI/ML model decisions impact our customers, whether directly or indirectly?
  2.  How are we determining whether model features have the potential for proxy discrimination against protected classes?
  3.  What changes have model risk teams needed to make to account for the evolving nature of AI/ML models?

Legal risk is looming for virtually any company using AI/ML to make important decisions that affect people's lives. Although there is little legal precedent with respect to discrimination resulting from AI/ML, companies should take a proactive stance toward governing their AI to eliminate bias. They should also be prepared to defend their decisions regarding data selection, data quality, and the auditing procedures that ensure bias is not present in machine-driven decisions. Class-action suits and other litigation are almost certain to arise in the coming years as AI/ML adoption increases and awareness of the risks grows.

Key questions:

  1.  How are we monitoring developing legislation and new court rulings that relate to AI/ML systems?
  2.  How would we obtain evidence about specific AI/ML transactions for our legal defense if a class-action lawsuit were filed against the company?
  3.  How would we prove accountability and responsible use of technology in a court of law?

Strategic and financial risk will increase as companies rely on AI/ML to support more of the day-to-day decisions that drive their business models. As insurers automate more of their core decision processes, including underwriting and pricing, claims assessment, and fraud detection, they risk being wrong about the fundamentals that drive their business success (or failure). More importantly, they risk being wrong at scale.

Currently, the diversity of human actors participating in core business processes serves as a buffer against bad decisions. This doesn't mean bad decisions are never made; they are. But as human judgment assumes a diminished role in these processes and AI/ML takes on a larger one, errors may be replicated at scale. This has powerful strategic and financial implications.

Key questions:

  1.  How are we preventing AI/ML models from impacting our revenue streams or financial solvency?
  2.  What is the business problem an AI/ML model was designed to solve, and what other non-AI/ML solutions were considered?
  3.  What opportunities might competitors realize by using more advanced models?

Operational risk must also be considered, as new technologies often suffer from drawbacks and limitations that were not initially seen or that may have been discounted amid the early-stage enthusiasm that often accompanies innovative programs. If AI/ML technology is not adequately secured - or if steps are not taken to make sure systems are robust and scalable - insurers could face significant roadblocks as they attempt to operationalize it. Cross-functional misalignment and decision-making silos also have the potential to derail nascent AI/ML initiatives.

Key questions:

  1.  How are we evaluating the security and reliability of our AI/ML systems?
  2.  What have we done to test the scalability of the technological infrastructure that supports our systems?
  3.  How well do the organization's technical competencies and expertise map to our AI/ML project's needs?

Compliance and regulatory risk should be a growing concern for insurers as their AI/ML initiatives move into mainstream use, driving decisions that impact people's lives in important ways. In the short term, federal and state agencies are showing an increased interest in the potential implications of AI/ML.

The Federal Trade Commission, state insurance commissioners, and overseas regulators have all expressed concerns about these technologies and are seeking to better understand what needs to be done to protect the rights of the people under their jurisdiction. Europe's General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and similar laws and regulations around the world are continuing to evolve as litigation makes its way through the courts.

In the longer term, we can expect regulations to be defined at a more granular level, with the appropriate enforcement measures to follow. The National Association of Insurance Commissioners (NAIC) and others are already signaling their intention to scrutinize AI/ML applications within their purview. In 2020, the NAIC released its guiding principles on artificial intelligence (based on principles published by the OECD), and in 2021 it created a Big Data and Artificial Intelligence Working Group. The Federal Trade Commission (FTC) has also advised companies across industries that existing laws are sufficient to cover many of the dangers posed by AI. The regulatory environment is evolving rapidly.

Key questions:

  1.  Which industry regulations and guidance from bodies like the NAIC, state departments of insurance, and the FTC, along with digital privacy laws, affect our business today?
  2.  To what degree have we mapped regulatory requirements to the mitigating controls and documentation processes we have in place?
  3.  How often do we evaluate whether our models are subject to specific regulations?

These are all areas we need to watch closely in the days to come. Clearly, there are risks associated with AI/ML; it's not all roses when you get beyond the hype of what the technology can do. But understanding these risks is half the battle.

New solutions are hitting the market to help insurers manage these risks by developing strong governance and assurance practices. With their help, or with in-house specialists on board, insurers can overcome these risks and help AI/ML reach its potential.

As first published in Dataversity.


Anthony Habayeb

Anthony Habayeb is founding CEO of Monitaur, an AI governance software company that serves highly regulated enterprises such as its flagship customer, Progressive Insurance.
