Did you ever hear the joke where the boss says floggings will continue until morale improves? In healthcare, flogging the data until results improve or the data confesses is not uncommon. Too bad.
Over the course of my career, I've worked with companies with more than a hundred thousand covered lives whose claim costs could swing widely from year to year, all because of a few extra transplants, big neonatal ICU cases, ventricular assist cases and the like.
Here are just a few of the single case claims I’ve observed in recent years:
- $3.5 million for one cancer case
- $6 million for one neonatal intensive care case
- $8 million for one hemophilia case
- $1.4 million for one organ transplant
- $1 million for one ventricular assist device
These big numbers aren’t a complaint. After all, health insurance should be about huge, unbudgetable health events. But they raise an important point about the lumpiness of costs and about claims that are made about reducing health expenditures.
A health plan can expect to cover roughly one organ transplant per 10,000 life years, at a cost of about $1 million over six years. So a plan with 1,000 covered lives will incur such an expense every 10 years, on average. Of course, the plan may see none for 15 years and then two in the 16th year. The same lumpy timing applies to $500,000-plus ventricular assist device surgeries.
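The lumpiness of that "once every 10 years on average" can be made concrete with a quick simulation. This is just a sketch, assuming each covered life independently triggers a transplant-level claim with probability 1-in-10,000 per year; the plan size, rate and cost figures come from the paragraph above:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

LIVES = 1_000     # covered lives in the plan
RATE = 1 / 10_000 # expected transplant-level claims per life year
YEARS = 30

# Each year, each covered life independently triggers a shock claim
# with probability RATE (a simplifying independence assumption).
events = [sum(random.random() < RATE for _ in range(LIVES))
          for _ in range(YEARS)]

print("shock claims per year:", events)
print(f"total over {YEARS} years: {sum(events)} "
      f"(expected: {LIVES * RATE * YEARS:.0f})")
print("years with none:", events.count(0),
      "| years with 2+:", sum(1 for e in events if e >= 2))
```

Run it a few times with different seeds and the pattern described above appears: long dry stretches, then a year with two claims, even though the long-run average works out to about one per decade.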
Looking at claims data for small groups is perilous—and sometimes for large groups, too. Because of the high cost and relative infrequency of so-called "shock" claims (those of more than $250,000), you need about 100,000 life years for the claims data to be even approximately 75% credible. When a group with 5,000 lives says it did something that cut claims costs, you can't really know whether the change made a significant difference without a couple of decades of data.
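A rough way to see why so many life years are needed is the classical limited-fluctuation ("square-root") credibility rule, under which partial credibility grows only with the square root of exposure. The sketch below is illustrative, not the author's calculation: the full-credibility standard of 178,000 life years is an assumed figure, chosen so that 100,000 life years comes out to roughly 75% credible, matching the paragraph above:

```python
import math

# Assumed full-credibility standard (life years); picked so that
# 100,000 life years yields ~75% credibility, as stated in the text.
N_FULL = 178_000

def credibility(life_years: int) -> float:
    """Classical partial credibility: Z = sqrt(n / n_full), capped at 1."""
    return min(1.0, math.sqrt(life_years / N_FULL))

for n in (5_000, 25_000, 100_000, 200_000):
    print(f"{n:>7,} life years -> Z = {credibility(n):.0%}")
```

Note what this implies for the 5,000-life group mentioned above: one year of its experience (5,000 life years) is under 20% credible, and reaching 100,000 life years takes 20 years, which is exactly the "couple of decades" in the text.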
Here’s an example. A smallish group with about 3,000 covered lives asked me to help calculate how much its wellness plan was saving. It had sorted its employees into three tiers: active wellness participants, moderate participants and non-participants. I warned the company it didn’t have enough data to be credible, but it proceeded anyway. It expected active participants to have the lowest claim costs, moderate participants the next lowest, and so on. When the data was reviewed, the ranking ran exactly backward: active wellness users had the highest claim costs, moderate users the next highest and non-participants the lowest. In the final report—which I had nothing to do with preparing and from which I had recused myself—the company subtracted big claims from the active and moderate tiers to get the results it wanted. In short, the company flogged the data until it confessed. Alas.
One large company claimed huge reductions in plan costs after adding a wellness program. It turned out that, during the period in question, the company had also implemented an “early out” incentive. Upon examination, the early-out program produced a big reduction in the number of older employees, which more than accounted for the reduction in claims costs.
Here is yet another example. At a conference a few years ago, a presenter from a small company (about 1,000 covered lives) claimed to have kept its health costs flat for five years through wellness initiatives. While the presenter got a big ovation, his numbers just didn’t add up. I asked him a few questions after his speech about the other changes made during that period. He said the company had lowered its stop-loss limit from $100,000 to $50,000 a few years earlier. Then he admitted to excluding his stop-loss premium costs, which were skyrocketing, from his presentation. With a bit of mental arithmetic, I added those costs back in, which revealed that his company’s total health costs were rising at the same rate as everyone else’s, perhaps even a little faster. Hmmm. I don’t think he deliberately misled the audience; he just didn’t know better.
When you hear boasts of big short-term impacts of wellness programs, beware of confirmation bias.
When a company claims it implemented something that caused its health plan costs to drop 15% or so, ask a few questions:
- Did the company adjust for plan design changes—such as raising deductibles and co-pays—that merely shifted costs to employees?
- Did the changes really save claim dollars?
- Did the company factor in stop-loss premiums?
- How many life years of data did the company observe?
- Did the company exclude large or “shock” claims? (This isn’t uncommon, especially among wellness vendors.)
- Did the company experience any big demographic changes, such as an early retirement program or layoffs that fell disproportionately on older workers?
When I’ve asked those kinds of questions of a small company, I’ve almost never seen a big claim of cost reductions hold up under scrutiny. And that goes for some big companies, too.
Today, flogging the data to get the desired results is all too common. That’s no surprise; academics and Big Pharma have been caught doing the same thing again and again.
Skepticism is a good thing.