October 25, 2020
Where Are All Those Benefits Promised by AI?
by Paul Carroll
Two recent studies shed light both on how to realize the gains that AI can provide — and on how the industry remains unrealistic.
Having covered developments in artificial intelligence for going on 35 years, I’ve long been struck by the confusing expectations. Based on what many were saying back in the ’80s, we should all be working for our robot overlords by now. Yet people are also often too cautious: If I could tell my 1986 self that I’m now calling out to my Amazon Echo for random factoids that I just have to know that instant or that I’m dictating text messages to Siri, my earlier self would have called for the men with the butterfly nets.
Sorting through the confusion, I’ve decided that artificial intelligence is a moving target, an aspiration for capabilities that might be possible just over the horizon. Once something becomes reality — even something as mind-boggling as speech recognition — what was AI becomes garden-variety computing.
So, I haven’t been overly surprised in my seven years with ITL to see the insurance industry light up at the prospects for AI and, at the same time, have trouble realizing them.
Two big studies released last week, one led by BCG and the other by Willis Towers Watson, shed light both on how to realize the gains that AI can provide — and on how the industry remains unrealistic.
The study by BCG Gamma, the BCG Henderson Institute and the MIT Sloan Management Review found that only 10% of companies reported significant financial benefits from implementing AI — so figuring out how to realize gains would seem to be in order.
The authors were encouraged by what they see as an increase in interest in AI: Their survey of more than 3,000 executives globally found that 60% have an AI strategy in 2020, up from 40% two years ago. But the authors argue that simply having a strategy — what they call “discovering AI” — only gives a company a 2% chance of significant financial benefits. (“Significant” means a gain of $100 million in revenue or a $100 million reduction in costs for a company with annual revenue of more than $10 billion, and proportionally lesser gains for smaller companies.)
The authors say that even moving into the “building phase” — getting the right data, technology and talent and organizing them within a corporate strategy — only boosts the odds to a 21% chance of success.
Companies can help themselves a lot, the authors say, if they figure out how to iterate with targeted users on solutions that AI might offer — reaching this “scaling stage” lifts the odds of significant financial benefits to 39%.
The study finds that the final, “organizational learning” phase — “orchestrating the macro and micro interactions between humans and machines” — makes the biggest difference. Getting to that stage gives businesses a 73% chance of reaping big benefits from AI.
The report cites two key factors for companies that hope to move into successively more mature phases:
- Use as many feedback mechanisms as possible to improve the capability of the AI. There are three possibilities: Humans can provide feedback to the AI; humans can take feedback from the AI; and the AI can teach itself. The report finds that the AI is five times as likely to produce real benefits if all three types of feedback are used as if just one type is.
- Be willing to change existing processes to incorporate the AI rather than treat it as a separate animal. In other words, don’t just assume that you’re training the AI; realize that the humans need to be retrained, too. The authors report that companies that changed business processes extensively were five times as likely to succeed as those that didn’t.
Fair enough. All that makes sense.
But the second study I saw last week, from Willis Towers Watson, suggests that we’re going to be overly optimistic about our ability to absorb AI — even if we know we’re going to be overly optimistic.
The study looked at seven ways that insurers intended to use AI and found a consistent pattern: Asked in 2017 about plans for 2019, insurers expected huge gains. Surveyed again in 2019, the insurers hadn’t come close to achieving their goals. Asked about plans for 2021, though, insurers were undaunted; they predicted even bigger improvements than the ones they failed to achieve by 2019.
For instance, asked about using AI to remove bottlenecks in claims, 3% of insurers said they were already there in 2017, but 30% expected to be there by 2019. Actual number? 7% said they hit the goal in 2019. So you’d think insurers would be chastened, and perhaps 10% — maybe 15% — would expect to achieve that goal by 2021, right? Nope. 43% say they’ll get there.
I was reminded of this sort of inability to escape a recursive logical fallacy earlier this month when a fascinating fellow I interviewed 30 years ago up and won the Nobel Prize in Physics because of work he’d done on the mathematics of black holes. On the side, Roger Penrose shared ideas with Dutch artist M.C. Escher (including about what are known as the Penrose steps), and following our talk I picked up what turned out to be a profound book, “Gödel, Escher, Bach,” by Douglas Hofstadter. Among the many insights was what is known as Hofstadter’s law, which the author posited about any task of sufficient complexity: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
While he was largely referring to programming, I embraced the idea about my various writing projects and posited what I called Carroll’s Corollary, which says: “Writing always takes longer than you expect, even when you take into account Carroll’s Corollary.”
That’s all a long way of saying that, while I’d love to be able to give you clear guidance on how to be more realistic about how quickly you’ll be able to adopt AI, I realize this is a hard problem. I mean, I have trouble just figuring out how long it’ll take me to write Six Things each week.
The only solution I know is calibration. If you’re one of those companies in the Willis Towers Watson study, and you’re making projections for a couple of years out, don’t just start with a blank sheet of paper. Go back and look at the projections you’ve made previously and see how they’ve panned out. If, like most, you’ve been overly optimistic, look into why. Then be specific about the assumptions that have to hold true for you to be right this time around, and see if you aren’t making the same mistakes you made last time.
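The calibration exercise can be reduced to back-of-the-envelope arithmetic. Here is a minimal sketch in Python: the 2017/2019 claims-bottleneck figures are the ones from the Willis Towers Watson study cited above, while the “realization rate” haircut is simply one illustrative way to adjust a new forecast by how the last one panned out, not anything the study itself prescribes.

```python
# Back-of-the-envelope forecast calibration. The 2019 figures come from
# the Willis Towers Watson study discussed above; the haircut method is
# illustrative, not the study's own methodology.

projected_2019 = 0.30   # share of insurers who, in 2017, expected to hit the goal by 2019
actual_2019 = 0.07      # share that actually hit it
projected_2021 = 0.43   # the new, even bolder projection

# How much of the last projection actually materialized?
realization_rate = actual_2019 / projected_2019

# Haircut the new projection by the historical realization rate.
calibrated_2021 = projected_2021 * realization_rate

print(f"realization rate: {realization_rate:.0%}")        # about 23%
print(f"calibrated 2021 estimate: {calibrated_2021:.0%}")  # about 10%
```

Notice that the calibrated figure lands right around the 10% that a chastened forecaster might have guessed — the point isn’t the precise number, but that the adjustment comes from your own track record rather than a blank sheet of paper.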
You’ll likely still be too optimistic, at least according to Hofstadter’s Law, but you’ll get better at predictions over time. Eventually — who knows? — maybe AI will solve the problem for you.
P.S. Here are the six articles I’d like to highlight from the past week:
AI is highly sensitive to new data and tends to react immediately, creating a dynamically updated vision of the future.
It’s common in crises to pause or cut investments, including in innovation, yet this is an incredible time to innovate. Here are five tips.
Life insurers have been flirting with a new digital paradigm in underwriting, health protection and remote claims. Perhaps now is the time.
Independent agencies haven’t fundamentally changed the way they do business in 100 years but now must greatly up their game or sell.
If you call yourself a risk manager when you are really only selling insurance, are you representing yourself truthfully?
Insurers need an operational model with adequate agility to follow market fluctuations. It’s time to outsource all non-core activities.