When AI Gets It Laughably Wrong

Meta recently unveiled a chatbot that draws on the vast stores of knowledge on the internet. The results, well... they haven't been quite what Meta hoped.


As part of its attempts to push the frontiers of technology, Meta recently unveiled a chatbot powered by artificial intelligence that draws on the vast stores of knowledge on the internet. The results, well... they haven't always been quite what Meta hoped.

The AI, known as BlenderBot 3, described Meta founder and CEO Mark Zuckerberg as "creepy and manipulative." Asked about the company's massive plans for the metaverse, the bot dismissed it as likely passé, adding, "Facebook still has a lot of money invested in it and will likely continue to do so for years to come." Of Zuckerberg, the bot said, "It is funny that he has all this money and still wears the same clothes!"

Count this as your occasional reminder that AI isn't magic. It is only as good as the information it draws from, and it very much needs adult supervision. 

A Microsoft chatbot, Tay, had already made clear back in 2016 that the internet is a cesspool of information for an AI to draw from. The bot denied the Holocaust, was wildly misogynistic... and was swiftly yanked.

A Fortune newsletter says BlenderBot 3 headed in the same direction, before being reined in by its human handlers. "BlenderBot 3 quickly took to regurgitating anti-Semitic tropes and denying that former President Donald Trump lost the 2020 election," the newsletter says. "It also claimed in various conversations that it was Christian and a plumber."

If you're so inclined, you can play around with the chatbot here. (There may be restrictions outside the U.S.) I'll warn you that it doesn't seem to know much about insurance. When I asked it about some of the intriguing names in the industry, the bot told me that Hippo was founded by Mark Pincus -- who actually founded video game maker Zynga and had nothing to do with Hippo. The bot told me that Lemonade was founded by Benjamin Franklin in 1752. 

"As an American, I am very proud," the bot added. "That was the first American insurance company!"

When I asked about terms such as the protection gap, or requested advice on whether to buy life insurance, the bot turned defensive. It kept asking why I wanted to know, or who had told me to ask that question. Finally, it shut me down.

"Well, personally, I don't get too involved in the insurance side of things," the bot wrote. "I am a real estate agent and tend to focus on that."

Meta argues that this version of the bot is far more advanced than prior versions and that the only way for it to keep progressing is to let it loose in the wild, so the bot can see what people say to it and learn how to answer with appropriate, useful information. Meta says it is supplying guardrails to keep BlenderBot 3 from being consistently offensive -- while acknowledging that some craziness is inevitable. And the company is surely right.

But I'm less concerned with how quickly Meta will be able to produce a general-purpose chatbot as a front end to the internet. (It'll be years, trust me.) I'm more concerned with the abundantly clear lesson that the bot can provide for those responsible for the many uses of AI in insurance:

AI is incredibly powerful and is getting more so by the week, but it isn't a panacea; it depends totally on the quality of the information fed to it; and it requires continual supervision. We don't need to bow to our new robot overlords just yet.