The Inherent Problem with AI


As much promise as artificial intelligence is showing in insurance, many forms of it present a formidable challenge related to "back-traceability." In English, that means the AI is a black box.

If it makes a mistake, you can't trace the error back to its origin because the AI doesn't think like we do. It doesn't go from A to B to C and end up at Z, 23 steps later. It takes a massive amount of data, processes it in some mysterious way—from A to X to M-prime to...?—and spits out an answer. Take it or leave it.

Generally, we'll take the answer, because the AI is either more accurate than human analysis or can be trained to be more accurate with an influx of new data. Sometimes, though, it would be great to be able to follow the logic and fix a specific misperception, such as in some of the complex situations presenting themselves with driverless cars. But sorry. That's not how so-called deep learning and much of machine learning work.

Insurers actually face a deeper problem than all but a few other industries: Even when an answer is right, it can be wrong. 

The reason: An AI could well make an accurate prediction of risk but could, without anyone intending or even knowing, be drawing on some inference about gender or race or some other attribute that, by law, can't be used in underwriting. A model might, for instance, lean on ZIP codes that happen to track closely with race, effectively using a forbidden attribute through a proxy. Woe to that AI and the company using it.

As we think about how insurers use data, we split the process into four stages—collecting, organizing, analyzing and applying—and the first two stages should be safe from any unintended bias. AI can help collect lots of new information and can organize it in much more flexible ways. Rather than having, say, all the information related to annuities held in a silo just for that line of business, the information could inform other businesses and be combined with other data in ways that allow for new insights into customers, products and services.

But there needs to be a double-check on the analysis and application stages, to make sure that right answers aren't somehow wrong from a regulatory standpoint. The exact form will vary by jurisdiction, but it basically means some sort of statistical test to confirm that bias isn't creeping into the decisions.
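To make that idea concrete, here is a minimal sketch of one such statistical test: a disparate impact ratio that compares favorable-outcome rates across groups. The column names, the sample data and the four-fifths (0.8) screening threshold are illustrative assumptions, not a regulatory standard; real tests and thresholds vary by jurisdiction and line of business.

```python
# Illustrative bias double-check: compare approval rates across groups.
# Assumes a pandas DataFrame with hypothetical columns "approved" (0/1)
# and "group" (a protected attribute retained for auditing purposes only).
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest's."""
    rates = df.groupby(group)[outcome].mean()
    return rates.min() / rates.max()

# Hypothetical decisions scored by a model.
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions, "approved", "group")
# The "four-fifths rule" (ratio >= 0.8) is one common screening threshold,
# not a universal legal standard.
print(f"disparate impact ratio: {ratio:.2f}", "flag" if ratio < 0.8 else "ok")
```

A check like this can run on a model's outputs without needing to open the black box, which is exactly why it suits deep learning systems whose internal logic can't be traced.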

In the black box age created by AI, it's not enough to be right. You also have to be right for the right reasons.

Good luck.

Paul Carroll
Editor-in-Chief

Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.
