A new book co-written by the renowned behavioral economist Daniel Kahneman points out a major problem that numerous industries, including insurance, only dimly recognize, and one that is surely worse than they realize. He calls the problem “noise.”
He says insurers are very aware of potential bias based on age, race, gender, etc., especially as they evaluate algorithms driven by artificial intelligence — insurers know to look for consistent favoritism toward, say, white men. But, he says, insurers tend to gloss over the problem of inconsistency, or “noise” — the fact that people come to very different conclusions based on the same set of facts, even when bias is removed from the equation.
Kahneman, who won the Nobel Prize in Economics in 2002 and who has driven much of the progress in behavioral economics for decades, cites a study he did in 2015 that presented a series of cases to 48 underwriters at a large insurance company. Executives predicted roughly a 10% variance between the high and low prices the underwriters would quote after assessing the risks, where “variance” here means the gap between two quotes divided by their average. The typical variance was actually 55%, and many variances were even more extreme. One underwriter might set an annual premium at $9,500 and another at $16,700.
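To see how the headline example produces that 55% figure, here is a minimal sketch of the arithmetic, assuming the noise measure is the gap between two quotes divided by their average (the specific variable names are illustrative, not from the study):

```python
# Two hypothetical quotes for the same risk, from the article's example.
low_quote = 9_500
high_quote = 16_700

# Noise between a pair of quotes: absolute gap divided by the pair's average.
noise = (high_quote - low_quote) / ((high_quote + low_quote) / 2)

print(f"Noise between the two quotes: {noise:.0%}")  # → 55%
```

The $7,200 gap against an average quote of $13,100 is what yields roughly 55%, matching the typical variance the study found.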
The tendency is to think that such decisions balance out, but Kahneman says wide variance suggests the insurer is actually making two mistakes. The $9,500 quote is likely underpricing that either leaves money on the table or wins unprofitable business. The $16,700 quote may be overpricing that costs the carrier business, because competitors will offer better rates.
“Wherever there is judgment, there is noise, and more of it than you think,” according to the book, “Noise: A Flaw in Human Judgment,” which Kahneman wrote with Olivier Sibony and Cass R. Sunstein and which is being released today.
The book focuses heavily on judges’ decisions on prison sentences, both because they are so consequential and because they clearly illustrate the difference between consistent bias (which many companies are becoming good at assessing) and noise (which companies tend to underestimate and thus gloss over).
A study found that a certain set of facts led judges, on average, to impose seven-year sentences. But there was an average variance of 3 1/2 years — a long time. Some of the variance relates to bias: Conservative judges tend to consistently impose longer sentences. But some is just noise. Perhaps the judge has a personal story that makes him identify more with the defendant. Perhaps the judge has had a series of cases that made her more fed up with the type of crime committed. Kahneman says variance even arises from the time of day, the day of the week, the mood of the person making the decision, etc.
While he doesn’t try to quantify how much noise reduces profitability for insurers, the sheer size of the numbers involved in underwriting, designing policies, assessing claims, etc. suggests that the potential gains are enormous if decisions can be made more consistently.
Kahneman and his co-authors argue that the starting point for combating the problem is to conduct a “noise audit.” Insurers could run the sort of test that the authors did to assess how wide the variance is among their underwriters, adjusters, agents and perhaps others, estimate the likely effects on profitability and determine how much effort should go into reducing the noise.
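A noise audit of this kind could be sketched in a few lines of code. The sketch below is an illustration, not the authors' actual procedure: it assumes each case's quotes are collected in a list, and it scores noise as the median relative gap across every pair of quotes (gap divided by the pair's average), consistent with the variance figure cited earlier.

```python
from itertools import combinations
from statistics import median

def noise_audit(quotes: list[float]) -> float:
    """Score the noise among quotes for one case: the median, over every
    pair of quotes, of the absolute gap divided by the pair's average."""
    pair_gaps = [
        abs(a - b) / ((a + b) / 2)
        for a, b in combinations(quotes, 2)
    ]
    return median(pair_gaps)

# Hypothetical annual premiums quoted by five underwriters on one case:
case_quotes = [9_500, 11_200, 12_800, 14_100, 16_700]
print(f"Noise for this case: {noise_audit(case_quotes):.0%}")
```

Running this across many cases and averaging the scores would give an insurer the kind of company-wide noise figure the book argues executives should be estimating, rather than assuming.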
The book argues that algorithms will be a big piece of the solution — while acknowledging the need to watch out for systemic bias, largely by being super careful about the reliability of the data being fed into the algorithms. Algorithms are nearly free of noise: An algorithm faced with the same information will almost always make the same decision. And, while algorithms can make bad decisions, they can always be learning, meaning that bad decisions can be gradually corrected and turned into good ones.
There will be pushback. Judges largely hated the mandatory guidelines that were established following a major study in the mid-1970s that found huge variance in sentences. Doctors object to being ordered to treat patients a certain way, even when the mandates are based on evidence.
But the evolving state of medicine could provide a solution for insurers: In the same way that AI can now offer suggestions to doctors on diagnoses and treatment — while leaving the final decision to the humans — insurers could use algorithms to generate a suggested range of actions for underwriters, adjusters and agents. The algorithms would provide guardrails that would at least reduce the unprofitable outliers at insurance companies and would keep learning, continually narrowing the recommended range and moving the choices toward profitability.
Although I rarely recommend books — even ones I’ve written — I think this book provides a road map for a relatively straightforward way to improve the accuracy of insurers’ decisions. And, once you’ve become acquainted with Kahneman’s work, you can go back and read his ground-breaking book, “Thinking, Fast and Slow,” published in 2011.
While economists long based their work on the assumption of rational consumers who maximize their utility, we all know that assumption is silly — people are far from completely rational. And Kahneman has led the way in helping us understand how people actually behave, as opposed to how we might imagine they behave or hope they behave.
P.S. Here are the six articles I’d like to highlight from the past week:
More are turning to “open insurance” solutions, under which insurers leverage open APIs to share data and services with third parties.
It’s time to break through the first phase of technology adoption and move into a new phase of tech-enabled innovation.
While AI is sure to benefit society when wielded properly, cyber carriers remain conscious that AI’s proliferation is a double-edged sword.
A logical data fabric has the capacity to knit together disparate data sources in insurers’ broad, hybrid universe of data platforms.
There is, rightly, enthusiasm around hydrogen solutions for a low-carbon economy, but projects involve complex industrial and energy risks.
As people return to the workforce, candidates with the potential to revolutionize our industry may present themselves.