For years now, Siri, Alexa and Google Home have been portrayed as just the beginning of a pathway to virtual assistants that will give each of us a J.A.R.V.I.S., the voice-activated software used by Iron Man to handle, well, just about everything.
But the wave of layoffs happening at Amazon's Echo unit tells a different story.
That story will hold for the foreseeable future, and insurers should heed it as they consider how to incorporate natural language processing technologies into their operations. While many futurists have claimed that voice is such a natural means of communication that it will soon take over all interactions between customers and companies, Amazon's experience proves otherwise.
Here's the core of the issue for Amazon and for any insurer implementing natural language processing technologies: I love my Echo and use it all the time, but I've never bought anything with it and am unlikely to ever do so.
I set a lot of timers, ask every day or two about the temperature outdoors and maintain a running list for grocery shopping. Other than that, I seem to ask for people's ages -- if I want to see how long I'm going to have to watch my Tottenham Hotspur suffer at the hands (feet?) of Liverpool forward Mo Salah, I might call out for his age, for instance.
But that's about it. Amazon has tried to entice me to do all kinds of things, including buying items on my shopping list via Amazon Prime. That should be the easiest extension for Alexa, but I've never even been tempted, because of the fundamental limitations of a voice interface.
The limitations for Alexa, and all such voice interfaces, boil down to two insurmountable ones: trust and complexity.
Quite simply, I don't trust my Echo to act on my behalf. I expect Alexa to act on behalf of Amazon, its creator and manager. So, I wouldn't just tell Alexa to order cat food, even if it knew the brand I wanted. I assume Amazon would steer me toward a product or size or flavor that maximized its profit.
And the trust problem grows as the complexity of the purchase does. Who would ever say, "Alexa, get me a plane ticket to Chicago next Thursday, and a rental car, too"? And I can't imagine anyone buying, or even renewing, an insurance policy via a voice interface.
The complexity problem figures in here, too. If I'm flying to Chicago, I can call up a host of options via Expedia on my computer screen and quickly sort through them based on a combination of convenience, price and favored airline. That same selection would take forever via voice -- "Remind me, Alexa, what was the price for that United flight at 6:45 a.m.?" Yes, software agents will eventually become sophisticated enough to have a good handle on my preferences, but we're a long way from the day when a voice interface lets you sort through complexity nearly as efficiently as a visual one.
Voice interfaces can still do great things. Who gets lost these days, when you can say to your phone, "Hey, Siri, get me driving directions to XYZ"? In insurance, voice interfaces can remove a lot of drudgery and expense by having automated systems answer simple queries from customers, such as when a payment is due or when a policy renews. Natural language processing used in chatbots and conversational AI allows for even more complex interactions via text, such as initiating a claim.
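To make the "simple queries" case concrete, here is a minimal sketch of the kind of intent matching a routine-query chatbot could use. Everything here is illustrative -- the intent names, patterns and canned responses are assumptions for the sake of the example, not any insurer's actual system, and a production chatbot would use a trained NLP model rather than hand-written patterns.

```python
import re

# Hypothetical intents: each maps keyword patterns to a canned answer.
INTENTS = {
    "payment_due": {
        "patterns": [r"\bpayment\b.*\bdue\b", r"\bwhen\b.*\bpay\b"],
        "response": "Your next payment is due on the first of next month.",
    },
    "renewal_date": {
        "patterns": [r"\brenew(al|s)?\b"],
        "response": "Your policy renews on its anniversary date.",
    },
}

FALLBACK = "I can answer simple questions about payments and renewals."

def answer(query: str) -> str:
    """Return the canned response for the first intent whose pattern matches."""
    text = query.lower()
    for intent in INTENTS.values():
        if any(re.search(p, text) for p in intent["patterns"]):
            return intent["response"]
    return FALLBACK
```

A query like "When is my payment due?" would match the `payment_due` intent, while anything more complex -- comparing quotes, say -- falls straight through to the fallback, which is exactly the complexity ceiling the column describes.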
But the trust and complexity issues will limit voice interfaces for the foreseeable future.
You can't say Amazon didn't try. The Echo began in 2014 as an idea from then-CEO Jeff Bezos and got all the attention that the pet project of a billionaire founder can receive. The New York Times reported that the Echo had sustained $5 billion of losses by 2018, and spending only accelerated from there. Business Insider reports that the division that includes Echo will lose $10 billion this year alone.
You can see where the money went, too. The technology is as slick as can be.
The business case just isn't there yet, and it won't be any time soon.