Cautionary Tales on AI

Here are examples of problems with AI to remind us of what it can't do, at least not yet, and of what problems overreliance on it can cause.

While generative AI is a game changer for business, including insurance, the rush of excitement over any new technology always makes me a bit nervous. Remember when the metaverse was going to change everything? How about Google Glass before that? Virtual reality? And I'm not just talking about all the VR hype a decade ago, when Facebook bought a startup, Oculus, for $2 billion. I'm also talking about the first time virtual reality was going to change everything, back in the 1980s. 

I think it's worthwhile to look out for examples of problems with AI from time to time, if only to remind us of what it can't do, at least not yet, and of what problems overreliance on it can cause. Just to keep us honest. 

So that's what I'll do this week: I'll run through some recent examples of problems and the lessons I think they provide. I'll start with a story that I was initially tempted to dismiss as simply lunacy but realized showed in extreme form two mistakes that people make with AI all the time. For one, people often treat output from AI as gospel, seemingly just because it comes from a computer. For another, people often pay far too little attention to the quality of the data they feed into the AI -- and it can be poor. 

The Cold Case

In 2017, police investigating a sexual assault and murder that occurred in 1990 in Berkeley, California, learned of a company, Parabon NanoLabs, that claimed it could produce a good likeness of a person's face from their DNA. The police sent the company genetic material collected from the crime scene decades earlier and received what was purported to be a 3D rendering of the murderer's face. Police published the image, asking the public for help identifying the man, but got no leads. 

According to the article in Wired that I'm drawing from here, "Parabon says it can confidently predict the color of a person's hair, eyes and skin, along with the amount of freckles they have and the general shape of their face.... [But] Parabon’s methods have not been peer-reviewed, and scientists are skeptical about how feasible predicting face shape even is."

Even a top technical executive at Parabon is quoted as saying, "'What we are predicting is more like—given this person’s sex and ancestry, will they have wider-set eyes than average. There’s no way you can get individual identifications from that.'”

Still, a detective took the next step in 2020 and asked to have the image run through facial recognition software and matched against a police database. While this particular investigation went no further, numerous police officials from around the country are quoted in the story as defending the practice, so the odds seem high that "matches" are going to be generated based on such speculative images.

That makes no sense. Feeding semi-reliable input into a semi-reliable facial recognition system will lead to so many false positives that there is bound to be trouble. Just because you can point to AI as the source of a match doesn't make it accurate.

Which brings me to my next story.

The False Identification

In January 2022, two men waving guns robbed a Sunglass Hut in Houston. EssilorLuxottica, Sunglass Hut's parent company, used facial recognition software on security video and identified a man as a suspect. When a store employee who had witnessed the robbery picked that man from a photo lineup, police arrested him and held him in jail for 10 days. But he had an alibi, and it was ironclad. According to the Washington Post, he had been in jail in California on the day of the robbery, on unrelated charges. 

Prosecutors dropped the charges, but, in a lawsuit filed a week and a half ago, the man says he was raped while in jail and is demanding $10 million from EssilorLuxottica and from Macy's, whose facial recognition software EssilorLuxottica used on what the suit describes as low-quality surveillance footage. 

So, at least according to the lawsuit, we again have buggy technology being fed buggy data -- and the result was treated seriously, this time with all sorts of repercussions. (Yes, police double-checked by using a photo lineup, but those lineups are buggy, too, as all sorts of research has found. Eyewitness testimony was long considered the gold standard, but it's now recognized that people who are being robbed at gunpoint often can't remember the events clearly.)

The Biden Deep Fake

Just last week, I read a note from Andreessen Horowitz bragging about an investment in a company that can mimic people's voices, at a valuation that made it a "unicorn." Now, I read in Wired that technology from the company, ElevenLabs, was probably used to make the deepfake of President Biden that was used in robocalls before the New Hampshire primary to tell people not to vote. 

My takeaway is pretty simple: AI giveth, and AI taketh away. It'll always reflect a contest between the good guys and the bad guys, with both using great new technology for their very different purposes.

The Taylor Swift Deep Fakes

I hesitate to even mention the situation, because the deep fakes of Taylor Swift are reprehensible, but they underscore the lesson from the Biden deep fake. It seems that a Microsoft tool was used to generate deep fakes of all sorts of celebrities. Safeguards were built into the tool that were supposed to prevent such uses, but some sick people circumvented them. Microsoft has closed the loopholes, but nobody is terribly sanguine that they'll stay closed -- or that hackers won't find vulnerabilities in someone else's tool. 

There are calls for new laws to punish this sort of sleazy behavior, so perpetrators can't just crawl back under their rocks when caught, and maybe those could work. In the meantime, I'd say the takeaway is still that there will be a constant tussle between good and bad uses of AI.

The George Carlin Not-So-Deep-Fake

This is a weird one. A comedy podcast sold an hour-long comedy special as being generated by AI based on the work of the late, great comedian George Carlin. To no one's surprise, his estate sued for copyright infringement. One of the podcast hosts now acknowledges that the material was actually written by a human.

My takeaway: The term "AI" will get sprinkled onto all sorts of products and services like fairy dust. Some of the claims (many?) will be as fake as the Carlin AI podcast.

***

While none of these stories has a direct tie to insurance, I think it's worth keeping them in mind as we incorporate AI into all sorts of processes -- and to keep a weather eye out for more cautionary tales. There will be temptations to trust AI too much, based on inputs that we don't vet thoroughly enough. While we focus on the good that AI can do, we may overlook the problems that can come with it, including those caused by bad actors. We'll certainly be peppered with claims about AI in everything. And I'm sure we'll encounter problems that I, at least, haven't envisioned yet. 

I still think the end result will be a breakthrough for the world of insurance and all of business, but a lot will happen between here and there. We should learn from others' mistakes, so we don't have to make them all ourselves. 

Cheers,

Paul