Beware the Dark Side of AI

Within the Biden administration's first weeks, the Office of Science and Technology Policy was elevated to a cabinet-level position, and Biden appointed Alondra Nelson, a scholar of science, technology and social inequality, as its deputy director. In her acceptance speech, Nelson said, "When we provide inputs to the algorithm, when we program the device, when we design, test and research, we are making human choices." We can expect artificial intelligence (AI) bias, ethics and accountability to become more significant issues under our new president. 

The financial services industry has a long and dark history of redlining and underserving minority communities. Regardless of regulation, insurers must take steps now to address the ethical concerns surrounding AI and data. 

Insurers are investing heavily in AI and big data and increasingly adopting them to improve business operations. Juniper Research estimates the value of global insurance premiums underwritten by AI will exceed $20 billion by 2024. Allstate considers its cognitive AI agent, Amelia, which handles more than 250,000 customer conversations per month, an essential component of its customer service strategy. Swiss Re Institute analyzed patent databases and found that the number of machine-learning patents filed by insurers rose dramatically, from 12 in 2010 to 693 in 2018. 

There is no denying that AI and big data hold great promise to transform insurance. Using AI, underwriters can spot patterns and connections at a scale no human could match. AI can accelerate risk assessments, improve fraud detection, help predict customer needs, drive lead generation and automate marketing campaigns. 

However, AI can also reproduce and amplify historical human and societal biases. Some of us still remember Microsoft's disastrous unveiling of its AI chatbot, Tay, on Twitter five years ago. Described as an experiment in "conversational understanding," Tay was supposed to mimic the speaking style of a teenage girl and entertain 18- to 24-year-old Americans in a positive way. Instead of casual, playful conversation, Tay repeated back the politically incorrect, racist and sexist comments Twitter users hurled her way. In less than a day, Twitter had taught Tay to be misogynistic and racist. 

In a study evaluating 189 facial recognition algorithms from 99 developers, the U.S. National Institute of Standards and Technology found that algorithms developed in the U.S. had trouble recognizing Asian, African American and Native American faces. By comparison, algorithms developed in Asian countries recognized Asian and Caucasian faces equally well.

Apple Card's algorithm sparked an investigation by financial regulators soon after it launched, when it appeared to offer wives lower credit lines than their husbands. Goldman Sachs has said its algorithm does not use gender as an input. However, a gender-blind algorithm trained on data that is biased against women can still produce biased outcomes. 

Even when we remove gender and race from algorithmic models, many of the remaining data inputs are strongly correlated with race and gender. ZIP codes, disease predispositions, last names, criminal records, income and job titles have all been identified as proxies for race or gender. This is how biases creep in. 
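
To make the proxy problem concrete, here is a minimal sketch in Python, using synthetic data and hypothetical feature names (not any insurer's actual model), of how a model that is never shown a protected attribute can still reproduce historical bias through a correlated input such as ZIP code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never given to the model): 0 or 1.
group = rng.integers(0, 2, n)

# ZIP-code region correlates strongly with group membership
# (a stand-in for residential segregation).
zip_region = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical approvals were biased against group 1, independent of true risk.
true_risk = rng.normal(0, 1, n)
approved = (true_risk + 1.5 * (1 - group) + rng.normal(0, 1, n)) > 0.5

# Train a "blind" model: only ZIP region and an income-like feature as inputs.
income = true_risk + rng.normal(0, 0.5, n)
X = np.column_stack([zip_region, income])
model = LogisticRegression().fit(X, approved)

# The proxy carries the bias through: predicted approval rates still differ
# by group, even though the model never saw the group label.
scores = model.predict_proba(X)[:, 1]
print(f"mean predicted approval, group 0: {scores[group == 0].mean():.3f}")
print(f"mean predicted approval, group 1: {scores[group == 1].mean():.3f}")
```

In this toy setup, dropping the protected attribute changes almost nothing, because the ZIP-code feature carries nearly the same information. That is exactly the dynamic regulators worry about.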

See also: Despite COVID, Tech Investment Continues

There is another issue: the opacity of black-box predictive models. Black-box models, created by machine-learning algorithms from the data inputs we provide, can be highly accurate. However, they are also so complicated that even their own programmers cannot explain how they reach their final predictions, according to an article in the Harvard Data Science Review. Initially developed for low-stakes decisions such as online advertising and web search, these black-box machine-learning techniques increasingly make high-stakes decisions that affect people's lives. 
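
For readers who want to see the gap concretely, below is a minimal sketch using scikit-learn on synthetic data (again, not any insurer's model). A shallow decision tree can be printed and read rule by rule; a several-hundred-tree ensemble of similar accuracy cannot, and the best available explanation is an aggregate, post-hoc probe such as permutation importance.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
names = [f"f{i}" for i in range(6)]

# Interpretable model: every decision path appears in the printout.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=names))

# Black box: 300 trees vote; no single readable rule explains any one score.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Post-hoc probe: estimates which inputs matter on average, but cannot say
# why a particular applicant was scored the way they were.
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, imp in zip(names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

When a score like this drives a declination or a price, "the forest said so" is not an explanation a customer, or a regulator, can act on.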

Successful users of AI and data analytics know not to blindly follow wherever the data leads, or to fall into the trap of relying on data that is biased against minority and disadvantaged communities. Big data cannot always capture the granular insights that explain human behaviors, motivations and pain points. 

Consider Infinity Insurance, a provider focused on offering non-standard auto insurance to the Hispanic community. Relying on historical data, insurers had for years charged substantially higher prices to drivers with certain risk factors: new or young drivers, drivers with low or no credit scores and drivers with an unusual driver's license status. 

Infinity recognized that first-generation Latinos, who are not necessarily high-risk drivers, often fit these unusual circumstances. It reached out to Hispanic drivers with affordable non-standard policies and bilingual customer support and sales agents. Infinity grew to become the second-largest writer of non-standard auto insurance in the U.S., and in 2018 Kemper paid $1.6 billion to acquire it. 

Underserved communities offer great opportunities for expansion, opportunities that are easily overlooked when insurers rely solely on data sets and data inputs. 

Insurers must also actively manage AI and data inputs to avoid racial bias, and they must look beyond demographics and race to segment out the best risks and determine the right price. As an industry, we have made significant progress toward removing bias. We cannot allow these powerful tools and technologies to reintroduce harmful, unintended discrimination. We must not repeat the mistakes of the past. 


Nick Frank

Nick Frank is a partner with Simon-Kucher, where he leads the North American Insurance practice.

He has more than 20 years of experience helping insurance carriers and producers reimagine sales, product design and revenue models. Frank has worked closely with insurance leaders to implement advanced digital technologies to improve sales funnel ratios, refine customer segmentation and optimize pricing. His expertise spans property and casualty, life and annuities, reinsurance carriers and producer organizations.

Frank has a BSc in computer engineering and mathematics from the University of Florida.

Wei Ke

Wei Ke, Ph.D., is a managing partner at Simon-Kucher, where he heads the firm's financial services activities in North America. He has advised leading financial institutions on a wide range of topics.
