Moving Forward

We need to look rigorously at past forecasts and decisions to calibrate how good we are (or, often, aren't), so we can continually improve.

How old do you think Martin Luther King Jr. was when he was assassinated?

That question has been used to help gauge how good people are at making predictions and to help them get better. I'm going to try to build on it here to make a point about the need to look rigorously at past forecasts and decisions to calibrate how good we are (or, often, aren't), so we can continually improve.

The question about MLK is deliberately unfair. How can we be expected to know his exact age? So, pick an age, then give yourself a range, expanding in both directions until you figure you have a 90% chance of having his actual age fall within that range. (Go ahead and write it down. I'll wait.)

The answer: He was 39 years old when he was assassinated on April 4, 1968, in Memphis, Tenn.

Don't feel bad if you didn't realize he was so young: When a group called the Good Judgment Project asked a broad array of research subjects a series of such questions, it found that people were typically right less than half the time, even though they thought they'd given themselves a 90% chance of being correct.
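If you want to keep score on yourself, the arithmetic is simple: over a batch of questions where you gave a range you were "90% sure" contained the answer, count how often the truth actually landed inside. Here is a minimal sketch of that tally -- the questions and numbers are hypothetical stand-ins, not the Good Judgment Project's actual materials:

```python
# Score a batch of "90% confidence" ranges against the true answers.
# These entries are hypothetical stand-ins; substitute your own guesses.
ranges = [
    # (low guess, high guess, true answer)
    (45, 60, 39),     # e.g., MLK was 39 at his death -- outside this range
    (20, 40, 27),
    (100, 200, 250),
]

hits = sum(1 for low, high, truth in ranges if low <= truth <= high)
hit_rate = hits / len(ranges)

print(f"Your ranges contained the truth {hit_rate:.0%} of the time.")
# A well-calibrated forecaster lands near 90%; most people land closer to 50%.
```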

A recent New York Times article described how professionals are not only consistently wrong, like the rest of us, but also consistently biased. The article looked at the last 15 years of forecasts by top economists about U.S. GDP growth, made two years in advance. Twelve of the 15 consensus views were too enthusiastic, and the eight biggest errors were all on the side of optimism.

We typically don't recognize our fallibility, either. Research for a book I collaborated on a few years ago found that only 2% of high school seniors ranked themselves as below average on leadership skills, while 25% put themselves in the top percentile. Maybe you could just have a chuckle at high school seniors' poor understanding of averages and percentiles, but 94% of college professors rated themselves above average. Among engineers, who really ought to know better, 32% at one company said they were in the top 5% of performers, and 42% at another company put themselves in that top category.

The idea of revisiting predictions has been a pet project of mine ever since I saw that many of the companies extolled in Tom Peters' book "In Search of Excellence" ran into trouble by the mid-1980s, just a few years after the book's 1982 publication. "Blue Ocean Strategy," published in 2004, caught my eye because the premise was simplistic -- look for opportunity in the blue ocean, where no one else is, rather than in the red ocean, where competition is bloody. While the book became a fad, in the many years since I have seen no example of a company using its frameworks to score a major success, beyond the case study about a Nintendo game system cited in the book itself.

As much as I respected the thoroughness of Jim Collins' research for his books, including "Good to Great," the 11 companies he cited in 2001 weren't all looking so great -- or even good -- a decade later. Circuit City had gone out of business; the Great Recession had crushed Fannie Mae and hurt Wells Fargo badly; and several others had seen their stock prices decline or rise only slightly. Shouldn't that performance raise some questions about the predictive power of the principles laid out in the book?

Readers don't seem to think so -- the book ranked #362 on Amazon's best-seller list this week, nearly 20 years after publication. But I'd bet that Collins hasn't stopped learning and would welcome some do-overs. For instance, while his "Level 5 leadership" is a laudable concept, the idea that a Level 5 leader can pick a Level 5 successor was never very helpful, because it takes too long to see how the successor does. The concept has now taken a real hit because of the collapse at General Electric. Jack Welch, GE's longtime CEO, was held up as an exemplar of Level 5 leadership, including for his selection of Jeff Immelt as his successor -- but you practically have to stand in line these days to pillory Immelt, now that GE forced him out in 2017.

Our culture certainly doesn't seem to value accountability much. Political experts confidently tell us things day after day even though they're right about as often as a coin flip. Sports experts on TV tell us some team is a sure bet, only to be wrong and then come back with some equally ironclad guarantee the next week.

But we need to make better decisions in business. There's real money on the line, not just some aspiration for our favorite sports team. And we're not all in the 99th percentile as forecasters, or even above average.

Fortunately, there are ways for you to improve, by calibrating successes and failures and learning from them.

One way might be to take up bridge or poker -- which the Good Judgment Project found correlated with better estimating. One reason seems to be that you keep score and have to see over time whether you're winning or losing. Another is that the games encourage you to be analytical about decisions you've made -- my older brother, a Life Master at bridge, could spend hours considering what he might have done better on a single hand. Even if you don't want to take up a new pastime, you can imagine the sort of mental discipline that a good bridge player or poker player applies to problems.

The Good Judgment Project also found that as little as an hour of instruction improved forecasting accuracy by an average of 14%. The training partly consists of some basic concepts -- no, having a coin come up heads five times in a row does not make it more likely to come up tails the next time -- but mostly consists of "confidence questions" like the one I posed about MLK. Once people learn to be more realistic about what they know and what they don't know, their forecasting improves.

At the C-suite level, learning from mistakes is even more important because the dollar amounts are so much higher -- but calibrating still isn't done nearly as much as it should be. Research for another recent book I helped write found that social dynamics within senior teams often prevented a thorough analysis. Those hockey-stick forecasts of massive growth from last year and the year before and the year before that just got buried in a drawer when the growth didn't materialize, and those making the forecasts were allowed to pretty much start fresh in the new budgeting season. Even when companies tried to analyze past forecasts, success tended to be credited to management, while underperformance was written off as due to unforeseeable circumstances beyond management's control.

There's no easy solution at the C-suite level, but it will help to maintain the discipline of making very specific predictions and then revisiting them at the appropriate time to see whether they panned out -- while trying to allow for the tendency to write off failure as bad luck.
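One low-tech way to build that discipline is to keep a literal forecast log: write each prediction down with a probability and a resolution date, then score it when the date arrives. The sketch below is my own illustration, not a method from any of the books mentioned here; the example forecasts are hypothetical, and the scoring rule is the standard Brier score, where 0 is perfect and 0.25 is what you'd earn by always saying 50-50:

```python
# A minimal forecast log: each entry is a specific, checkable prediction.
# The forecasts below are hypothetical examples, not real company plans.
forecasts = [
    {"claim": "New product line reaches $10M in revenue by Q4",
     "probability": 0.8, "due": "2021-12-31", "outcome": None},
    {"claim": "Key competitor exits the regional market this year",
     "probability": 0.3, "due": "2021-12-31", "outcome": None},
]

def brier_score(probability, outcome):
    """Squared gap between the stated probability and what happened (1 or 0)."""
    return (probability - (1.0 if outcome else 0.0)) ** 2

# At review time, record what actually happened, then score the resolved items.
forecasts[0]["outcome"] = False   # the growth didn't materialize
forecasts[1]["outcome"] = True

resolved = [f for f in forecasts if f["outcome"] is not None]
average = sum(brier_score(f["probability"], f["outcome"]) for f in resolved) / len(resolved)
print(f"Average Brier score across {len(resolved)} resolved forecasts: {average:.2f}")
# Lower is better: 0.0 is perfect foresight, 0.25 is coin-flip territory.
```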

When Chunka Mui and I conducted research for a book we published in 2008 that was the flip side of Jim Collins' work -- while he looked at successes, we spent two years with 20 researchers looking at 2,500 strategic failures -- we decided that our lessons learned had to have predictive power, or they were no good. So we started a blog and, for a couple of years, made predictions about major strategy announcements that we were sure would crater. We were right, too, on something like 49 of the 50 predictions we made. (We gave up on the site years ago, so I can no longer do an exact count.)

So, good for us, right? Well, we were also lucky. We were all set to make our boldest prediction around the time of the book launch but got some really smart pushback. We dropped an attempt to get a national newspaper to publish our thoughts and never even posted them on our blog. Good thing, too -- the deal has been a raging success.

I've still taken the near-fiasco to heart and think of it from time to time as I try to help myself understand where the weaknesses are in how I think. (I've started playing bridge again, too.)

Stay safe.

Paul

11 Keys to Predictive Analytics in 2021

Here are 11 ways that predictive analytics, using the plethora of data now available, will change the game in P&C insurance in 2021.

New Tool: Cognitive Process Automation

With low interest rates putting pressure on expenses, CPA goes beyond robotic process automation, cutting costs while maintaining service.

Beware the Dark Side of AI

Apple Card's algorithm sparked an investigation soon after it launched when it appeared to offer wives lower credit lines than their husbands.

Trusted Adviser? No, Be a Go-To Adviser

Is earning trust brag-worthy? Isn't trust the minimum for an adviser-client relationship? The real goal should be achieving "go-to" status.

Does Remote Work Halt Innovation?

We must make up for the gap in organic connection through a tried-and-true method of driving innovation – Networked Improvement Communities.

Making Inroads With Open APIs

Insurers must allow third parties to access their data and products and be present – and relevant – in customers’ digital ecosystems.


Paul Carroll


Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.
