AI and Discrimination in Insurance

AI algorithms can lead to inadvertent discrimination against protected classes. Insurers must be vigilant.

This past summer, a group of African-American YouTubers filed a putative class action against YouTube and its parent, Alphabet. The suit alleges that YouTube’s AI algorithms have been applying “Restricted Mode” to videos posted by people of color, regardless of whether those videos actually featured elements YouTube restricts, such as profanity, drug use, violence, sexual assault or details about events resulting in death. According to the complaint, this labeling occurs because the algorithms target keywords like “Black Lives Matter,” “BLM,” “racial profiling,” “police shooting” or “KKK.” YouTube says its algorithms do not identify the race of the poster.

Whether the allegations are true or not, the case illustrates AI’s potential for inadvertent discrimination. It is easy to see how an algorithm could learn to rely on variables that seem unrelated to race, sex, religion or another protected class, but that correlate with one, in predicting the outcomes it was designed to target. In the YouTube example, we could imagine the algorithm noticing a link between the keywords above and videos depicting violence, and so adding those keywords to the factors it weighs when deciding whether to apply Restricted Mode to a given video. The algorithm is simply programmed to restrict violent content, yet it could end up illegally restricting videos posted by African-American activists that contain no violence or other restricted material at all.
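
To make the mechanism concrete, here is a minimal sketch, in Python with invented toy data, of how a keyword-based classifier trained only to flag violent content can pick up protest-related terms as proxy features. It illustrates the general pattern only; it is not a description of YouTube’s actual system.

```python
# A toy illustration of proxy learning. The transcripts and labels below are
# invented; the model is trained only to predict the "restricted" label, yet
# protest-related keywords that co-occur with the violent examples in this
# toy data end up carrying positive weight.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

transcripts = [
    "graphic fight footage police shooting",         # restricted (violent)
    "violent assault police shooting body camera",   # restricted (violent)
    "police shooting protest turns violent",         # restricted (violent)
    "cooking tutorial for beginners",                 # not restricted
    "travel vlog city walking tour",                  # not restricted
    "peaceful protest march after police shooting",   # not restricted
]
restricted = [1, 1, 1, 0, 0, 0]  # the only thing the model is asked to predict

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(transcripts)
model = LogisticRegression().fit(X, restricted)

# Inspect the learned weights: terms like "police" and "shooting" acquire
# positive weight purely from co-occurrence with the violent examples above.
for term, weight in sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                           key=lambda pair: -pair[1]):
    print(f"{term:12s} {weight:+.3f}")
```

In a real system the features are far more numerous, but the failure mode is the same: the model never sees race, yet it can learn a stand-in for it.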

In response to such potential pitfalls, the NAIC this past August issued a set of principles on artificial intelligence, covering transparency, accountability, compliance, fairness and ethics. The only way to ensure that compliance, fairness and ethical standards are maintained is for AI actors to be accountable for the AI they use and create, and the only way for those actors to monitor their AI tools properly is to ensure transparency.

As Novarica’s most recent joint report with the law firm Locke Lord on insurance technology and regulatory compliance notes, all states follow some version of the NAIC’s Unfair Trade Practices Act (“Model Act”), “which prohibits, generally, the unfair discrimination of ‘individuals or risks of the same class and of essentially the same hazard’ with respect to both rates and insurability.” AI and data-based technology enable many insurance use cases, such as analytics-driven targeting, pre-underwriting, rules-based offer guidance and pre-fill data. These capabilities can be delivered without AI, but the effort required to do so has historically been prohibitive, meaning that using AI will be essential in the coming years, as will ensuring that the AI does not discriminate against protected classes.

A key area for insurers to monitor is the use in underwriting of third-party data that may not be directly related to the risk being insured. A good example is credit score, whose use several states have restricted during the pandemic. NYDFS’s Insurance Circular Letter No. 1 (2019) lists other external consumer data and information sources for underwriting that have “the strong potential to mask the forms of [prohibited] discrimination… Many of these external data sources use geographical data (including community-level mortality, addiction or smoking data), homeownership data, credit information, educational attainment, licensures, civil judgments and court records, which all have the potential to reflect disguised and illegal race-based underwriting.” Insurers must therefore have transparency into which factors an algorithm considers and how it arrives at decisions, and they must be able to adjust those factors easily.
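
One concrete way to picture that transparency is to keep the model’s factor list explicit and auditable, so a field can be removed and the model refit when a regulator restricts its use. The sketch below, in Python with invented field names, values and outcomes, is an illustration under those assumptions, not a regulator-prescribed approach.

```python
# A hypothetical underwriting-scoring sketch in which the factor list is an
# explicit, reviewable object. All fields, values and outcomes are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

applicants = pd.DataFrame({
    "age":          [34, 52, 29, 61, 45, 38],
    "smoker":       [0, 1, 0, 1, 0, 0],
    "bmi":          [24.0, 31.5, 22.1, 29.8, 27.3, 25.6],
    "credit_score": [710, 580, 650, 690, 720, 600],   # third-party data source
    "declined":     [0, 1, 0, 1, 0, 1],               # historical outcome
})

def fit_scoring_model(df, factors, target="declined"):
    """Fit a scoring model on an explicit, auditable list of factors."""
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(df[factors], df[target])
    weights = dict(zip(factors, model[-1].coef_[0]))
    return model, weights

# The full factor list, including the third-party field under scrutiny.
factors = ["age", "smoker", "bmi", "credit_score"]
_, weights = fit_scoring_model(applicants, factors)
print("factor weights:", weights)

# If a regulator restricts credit-based underwriting, the factor list is the
# single place to change; the model is then refit without that input.
allowed_factors = [f for f in factors if f != "credit_score"]
_, weights_without_credit = fit_scoring_model(applicants, allowed_factors)
print("without credit score:", weights_without_credit)
```

The point of the sketch is governance rather than modeling: because the factors are named in one place, they can be reviewed against guidance like the circular letter and removed without re-engineering the pipeline.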

What will the regulatory future hold? Benjamin Sykes of Locke Lord foresees new model regulations requiring regular data calls on underwriting criteria and risk-scoring methods; certification by insurers that they have performed the analysis needed to avoid any material disparate impact; and a penalty regime focused on restitution, above and beyond the difference in premium, to those hurt by an algorithm’s decisions.
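
One way to picture the disparate-impact analysis such a certification might involve is an adverse-impact ratio screen. The ratio and the 0.8 threshold below are conventions borrowed from employment-law practice, not an insurance regulatory standard, and the decisions and group labels are invented for illustration.

```python
# A minimal disparate-impact screen: compare the favorable-outcome rate for a
# protected group with the rate for a reference group. Data are invented.
from collections import Counter

def adverse_impact_ratio(decisions, groups, protected, reference, favorable="offer"):
    """Favorable-outcome rate for `protected` divided by the rate for `reference`."""
    totals, favorables = Counter(), Counter()
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        if decision == favorable:
            favorables[group] += 1
    protected_rate = favorables[protected] / totals[protected]
    reference_rate = favorables[reference] / totals[reference]
    return protected_rate / reference_rate

decisions = ["offer", "decline", "offer", "offer", "decline", "offer", "offer", "decline"]
groups    = ["A",     "A",       "A",     "A",     "B",       "B",     "B",     "B"]

ratio = adverse_impact_ratio(decisions, groups, protected="B", reference="A")
print(f"adverse impact ratio: {ratio:.2f}")  # 0.67 here; flag for review below ~0.8
```

A real analysis would rest on far richer data and proper statistical testing; the sketch only shows the shape of the comparison.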

CIOs will need to consider how to handle these regulations as they evolve, and what they imply for how third-party data is used, how machine-learning algorithms are developed and applied, and how AI models “learn” to optimize outcomes. Both the regulations and the technology are moving targets, so CIOs and the insurers they represent must keep moving, too.


Mitch Wein


Mitch Wein is senior vice president of research and consulting at Novarica with international expertise in IT leadership and transformation as well as technology strategy for life, annuities, health, personal lines, commercial lines, wealth management and banking.
