What the NFL Draft Can Teach Us

The instant grades, based on completely inadequate data, illustrate the dangers of the false precision that shows up in lots of projections about insurance.

As soon as the NFL draft ended Saturday night, we all waited anxiously to see how the various pundits would grade our teams' selections. But if you step back even a little bit, you see how ridiculous those grades are — and that they are symptomatic of an issue that can skew the judgment of lots of executives, including in insurance. The issue is false precision.

It's certainly fair to judge the players and the teams drafting them. How big, fast, strong, etc. is a player? What does the tape show about how they fared against competition in college? How well do they fit a team's needs? And so on. But that sort of general analysis isn't enough for the analysts — or for us as fans. All the attributes of a player get boiled down to a grade. Some player is an A- pick, while another is a B or a C+. The same sorts of ultra-precise ratings are rendered for teams. 

Yet the data doesn't come close to supporting such precision. As the saying goes, those being drafted have, to this point, faced a lot of college players who are now headed off to become accountants, and not a one of them has yet faced a pro team. Who knows how a quarterback will react when he realizes that TJ Watt is going to hit him all game?

And there's so much uncertainty about how players will last physically playing a brutal game. My Steelers used a third-round pick on a linebacker who won awards last season as the best in college football and has all the physical attributes to be a great pro, but he has a history of injuries and is reportedly lacking an ACL in one knee after tearing it twice. The data tells me he warrants a grade somewhere between F and A+. Ask me in a year or five, and I'll have a better idea.

The way the data is turned into grades raises questions, too, about how precise the analysis actually is. Just about anybody can hang out a shingle as a draft analyst, and even the well-funded operation at ESPN doesn't have the resources that NFL teams, each spending a quarter of a billion dollars a year on player salaries, train on the pool of talent.

Besides, think about who's doing the grading vs. who's doing the drafting. Any of the pundits doing the grading would kill to be one of the general managers making the actual selections. The reason they're on TV and not in the draft room? Their opinions aren't as respected by the people who control those $250 million annual payrolls.

If those of us watching the draft just take the grading for its entertainment value, then we have the right perspective. We can still get excited as fans (and trust me, my Steelers had a GREAT draft... I think) while understanding that all those mock drafts and lists that rank players out through the end of the sixth round are really only accurate for the first five or six picks, are a rough guide for the rest of the first round and then amount to just about nothing for the remaining 200-some players chosen. 

But the human tendency is to latch on to the grade and forget how poor the underlying data is. And that sort of tendency can be dangerous in business.

Let's look at a few examples of false precision in insurance.

Here are the sorts of studies I see quoted all the time in articles that are sent to me for publication:

  • "The global cyber insurance market is projected to be worth $90.6 billion (about $280 per person in the U.S.) by 2033, at a growth rate of 22.3% CAGR from 2023." 
  • "Insurance fraud totals $308.6 billion in the U.S. each year."
  • "The global blockchain in insurance market is expected to reach $32.9 billion by 2031, growing at a CAGR of 52.4% from 2022 to 2031." 
  • "The global AI in insurance market is poised for substantial growth, with its value projected to increase from $5 billion in 2023 to approximately $91 billion by 2033. This remarkable expansion, [translates] to a compound annual growth rate (CAGR) of 32.7% over the forecast period."

My first problem with these sorts of claims is that the terminology is so vague. At least with the projection on cyber, we can be pretty sure the measuring stick for the size of the market is premiums. And we all have a pretty good handle on what fraud looks like. But what is the "blockchain in insurance market"? Is that strictly revenue generated by those selling blockchain services? Does it include the value of, say, claims that are coordinated on a blockchain? Likewise, what does the "AI in insurance market" entail? Revenue generated from AI services? Savings from AI? Or what?

My bigger problem is the false precision. The global cyber insurance market is going to grow 22.3% a year? You're sure about that? Not 22% a year? Not 20% to 25%? Not "really fast, with our current best estimate being 20% to 25% a year"?

Blockchain in insurance will grow 52.4% a year? That ".4" kills me, just like the pluses and minuses do on the made-up letter grades for those selected in the NFL draft. So do the ".7" in the 32.7% CAGR projected for AI and the ".6" in the $308.6 billion that the U.S. supposedly loses to insurance fraud every year.
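To see how little that decimal place means, run the arithmetic yourself. Here's a quick back-of-the-envelope sketch in Python; the 2023 base figure is my own rough inference from the quoted cyber numbers, since these studies rarely publish their inputs:

```python
# Back-of-the-envelope check on a CAGR claim.
# The 2023 base below is inferred (90.6 / 1.223**10), not a published
# figure; everything here is illustrative.

def cagr(start, end, years):
    """Compound annual growth rate implied by a start value, an end value,
    and the number of years between them."""
    return (end / start) ** (1 / years) - 1

start_2023 = 12.1  # $B -- the base the cyber projection implies

# Nudge the 2033 endpoint by roughly +/-20% and watch the "precise" rate move:
for end_2033 in (75, 90.6, 110):
    print(f"2033 = ${end_2033}B -> implied CAGR = {cagr(start_2023, end_2033, 10):.1%}")

# 2033 = $75B   -> implied CAGR = 20.0%
# 2033 = $90.6B -> implied CAGR = 22.3%
# 2033 = $110B  -> implied CAGR = 24.7%
```

A modest wobble in a 10-year guess swings the rate from 20% to nearly 25%, which is exactly why "20% to 25%" is the honest way to state the conclusion.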

Some of the false precision feels accidental. Give someone a calculator, and they're tempted to report a precise percentage as though it's meaningful to three digits, forgetting that the inputs are really just an educated guess. Give someone a spreadsheet that automatically adds up columns, and they're tempted to report that the guesses on fraud total precisely $308.6 billion.

Some of the false precision feels deliberate, though. The people producing these studies want you to think they can be far more precise than they can, and $308.6 billion sounds a lot more definitive than, say, $250 billion to $350 billion, which is probably a more accurate expression of the conclusion even if you accept the analysts' methodology and definitions. 
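The spreadsheet version of the problem is easy to reproduce. Here's a minimal sketch, with line items I've invented purely for illustration (the real studies don't break out their guesses this way):

```python
# How a column of rough guesses becomes a "precise" total.
# These line items are invented for illustration only.

fraud_guesses = {  # $B per year, each one really a wide range
    "health care fraud": 105.0,
    "life insurance fraud": 74.7,
    "auto fraud": 48.8,
    "P&C claims fraud": 45.0,
    "premium fraud": 35.1,
}

total = sum(fraud_guesses.values())
print(f"Spreadsheet says: ${total:.1f} billion")  # $308.6 billion, spuriously exact

# If each line item is good only to within, say, +/-20%, the honest
# answer is a range, not a decimal point:
print(f"More honest: ${total * 0.8:,.0f} billion to ${total * 1.2:,.0f} billion")
# More honest: $247 billion to $370 billion
```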

As with the draft grades, these studies are fine if you treat them as merely general guideposts. As the saying often attributed to John Maynard Keynes goes, "It is better to be roughly right than precisely wrong."

But executives sometimes get trapped by precise forecasts. 

In the mid-1980s, AT&T famously asked McKinsey to forecast how many cellphones would be in use in the U.S. in 2000 and was told the number would be only 900,000. On that basis, AT&T dropped out of the market for years and ceded territory to others. The actual number in use by 2000 was roughly 109 million, more than 100 times McKinsey's estimate.

Similarly, IBM's market researchers decided back in 1980 that the entire lifetime demand for the PC it was to introduce in 1981 would be 200,000 units. As a result, IBM rushed a product to market, even though that meant relying on Intel for the processor and Microsoft for the operating system. In the 1990s, more than 200,000 IBM-compatible computers were being sold EVERY DAY, and Intel and Microsoft got rich while IBM languished because it had underestimated the power of the PC.

Both the AT&T and IBM blunders are complicated. The companies didn't just fall for some random forecast. They had internal issues that inclined them to think in terms of landlines, not cellphones, and mainframes and minicomputers (so-called Big Iron), not PCs. But they still illustrate the need to be on guard about the kind of false precision that the NFL draft demonstrated in spades and that shows up in projections about insurance lines and technologies all the time.

Go, Steelers!

Paul