A much-cited claim about how behavioral science can guide insurance has been exposed as fraudulent. The claim was made most prominently by Dan Ariely, a best-selling author and pioneer in the field of behavioral economics, who was Lemonade's chief behavioral officer from 2015 through 2020. But the claim turns out to be based on fabricated data.
The claim was based on a study that Ariely and four co-authors published in 2012 in the Proceedings of the National Academy of Sciences. Ariely then cited the study at length in his 2013 book, The Honest Truth About Dishonesty, continuing his string of best-sellers that began with Predictably Irrational in 2008.
The study reported that people would be more honest if you asked them to promise to be truthful before providing information rather than having them provide the information and then certify that what they reported was accurate. In other words, you disrupt the usual process, in which people supply information and then just have to rationalize a bit of cheating afterward.
The study said it drew on nearly 13,500 customers of an auto insurer, half of whom signed a claim of truthfulness at the top of an application and half of whom signed at the bottom. The study reported that those who signed at the bottom said they drove about 10% fewer miles than those who signed at the top -- and, of course, paid lower premiums as a result.
The conclusion was so appealing that the paper was cited more than 400 times in academic publications. Many organizations, including the IRS, began having at least some people attest to their honesty at the start of the process. I certainly fell for the idea. I couldn't even tell you how many times I've cited the study.
More importantly, from the standpoint of insurance, Lemonade incorporated behavioral economics ideas into its initial business model that at least rhymed with the study's conclusion, even if they didn't specifically build on it. Lemonade took a set share of premium, to demonstrate to customers that it had no incentive to deny claims. Lemonade also said it would donate to specified causes if claims were below a set level -- encouraging clients to minimize claims.
Other insurers surely built on the study, especially given Lemonade's success (even though its use of behavioral economics seems to have mattered far less than its sleek customer experience and slick marketing).
The plot began to unravel as others tried and failed to replicate the study's results. Eventually, the authors published two retractions in 2020, in the Proceedings of the National Academy of Sciences and in Scientific American.
As part of the retractions, the authors published the original data -- which is how it became apparent that the study was based on more than an honest mistake; the data had been manufactured.
Sleuths at Data Colada spotted what, in retrospect, were obvious problems. The data didn't follow a bell curve, as you'd expect: there weren't some people who drove a little, some who drove a lot and a whole bunch who fell in the middle. Instead, every mileage band contained almost exactly the same number of people, from low mileage through high, and not a single person out of nearly 13,500 drove more than 50,000 miles in a year. In addition, the mileage that people supposedly reported was precise down to the mile, even though actual people round off such numbers. That precision was a clear sign that a random number generator had been used.
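The red flags above are simple enough to check mechanically. As a minimal sketch, assuming hypothetical mileage data (the real data set isn't reproduced here), uniformly generated numbers fail two quick tests that genuine self-reports pass: almost none end in round figures, and every mileage band holds roughly the same count:

```python
import random

random.seed(0)

# Hypothetical data for illustration only.
# "fabricated": drawn uniformly at random, as a naive fraudster might.
# "plausible": roughly bell-shaped and rounded, as real self-reports tend to be.
fabricated = [random.randint(0, 50_000) for _ in range(13_500)]
plausible = [max(0, int(round(random.gauss(12_000, 5_000), -3)))
             for _ in range(13_500)]

def round_number_share(miles):
    """Share of values ending in 00 -- real self-reports round heavily."""
    return sum(1 for m in miles if m % 100 == 0) / len(miles)

def bin_counts(miles, width=5_000, top=50_000):
    """Count values per mileage band; near-identical counts are a red flag."""
    bins = [0] * (top // width)
    for m in miles:
        bins[min(m // width, len(bins) - 1)] += 1
    return bins

print(round_number_share(fabricated))  # tiny share -- implausibly precise
print(round_number_share(plausible))   # near 1.0 -- heavy rounding
print(bin_counts(fabricated))          # roughly equal counts in every band
print(bin_counts(plausible))           # peaked in the middle, thin at the top
```

The band names and thresholds are illustrative, not taken from the study; the point is only that uniform counts and mile-level precision are easy to detect once someone looks.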
There were other red flags, too. In any case, when confronted with the Data Colada analysis, all the authors quickly agreed that the data had to have been faked.
At the moment, the focus seems to be on figuring out whom to blame for the fraud. I confess to some personal confusion. I spent time with Ariely at a small, three-day conference where we both spoke in 2008 and found him to be extremely smart and thoroughly engaging, so I'd like to think that he wasn't involved. (He has vigorously denied faking any data.) But he has said he was the only one of the five authors who dealt directly with the insurer that provided the data, and it's not at all clear to me what the insurer would gain by faking the results. (While the company wasn't initially named, it has since been identified as The Hartford.) I'm also confused because he cited the study to me, personally, at that gathering in 2008 but didn't publish the results for four years. Why wait so long with such an interesting result? (He's on the record as having cited the study in a talk at Google in 2008, so he wasn't just talking to me, either.)
But I'm more concerned with the broader point, which I think is this:
Behavioral economics is still a powerful tool for insurers despite this embarrassing fraud. We may like to think of customers as completely rational, but they aren't, and we need to understand them as they are, not as we'd like them to be. That doesn't mean accepting broad pronouncements about behavior, even from charismatic experts like Ariely. Understanding behavior means engaging with our own customers deeply, testing how they react to various actions on our part and then tailoring our interactions with them, foibles and all, to maximize benefits both for them and for us.
I realize that this is two weeks in a row where I've taken a contrary view of technologies and techniques that are huge benefits to the insurance industry -- last week's was When AI Doesn't Work. I'm sure these two Six Things commentaries aren't the start of a trend. But I don't believe that trees grow to the sky, so I don't see the point in pretending they might. When there's a problem, I'll always try to point it out.