Are Your Vendor’s Claims Valid? (Part 2)

This article, the second in a series, looks at how participation bias is exploited to generate false claims about the success of employee health programs.

The first installment covered regression to the mean. This installment covers the fallacy of using non-participants as a “control” for participants, a fallacy that led the Food and Drug Administration to reject the methodology more than half a century ago.

And, as we’ll see through examples below, correctly so.

The news of the fallacy and its rejection never reached the employee health services industry. Or maybe it reached the industry altogether too well. Either way, vendors of wellness, diabetes, disease management and orthopedic programs routinely compare participants with non-participants, or measure outcomes on participants alone. Buyers don’t insist on controlling for participation bias, largely because they don’t understand it. Vendors not validated by the Validation Institute (VI) rarely offer to control for participation bias, because such a control would undercut their own performance claims: participants always outperform non-participants.

Indeed, one of the most dramatic savings figures was achieved simply by separating employees into participants and non-participants, without even giving "participants" a program to participate in.

Participation bias is even more invalidating in employee health services than in drug trials. The latter usually require only taking a pill and tracking results. The former require very active participation. Further, those who initially volunteer and then drop out are never counted as participants. Often, the dropout rate is never even reported. The result is what’s known in the industry as “last man standing” programs, because the only people whose outcomes are counted are the initial voluntary participants who stuck with the program the entire time. 

This study design is a recipe for massive invalidity. Not surprisingly, it has been proven four times that 100% of the alleged outcome of a program using this study design is attributable to the design, rather than to the intervention itself. 
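To make the mechanism concrete, here is a toy simulation (not one of the proofs referenced above, and not any study’s actual data; every number in it is invented for illustration). No one in the simulated population receives any intervention, yet the self-selected “participants” still come out well ahead of the non-participants, simply because the kind of employee who volunteers and sticks with a program tends to be healthier and cheaper to begin with.

```python
from statistics import mean
import random

# Toy simulation, not any study's data: every number below (population size,
# cost distributions, volunteer and dropout rates) is hypothetical.
# No one receives an intervention, yet "participants" still look cheaper,
# purely because of who chooses to participate and stick with it.
random.seed(0)

employees = []
for _ in range(10_000):
    health_conscious = random.random() < 0.3                      # hypothetical trait
    mu, sigma = (4_000, 1_500) if health_conscious else (5_500, 2_000)
    annual_cost = max(random.gauss(mu, sigma), 0)                 # annual claims cost, dollars
    volunteers = random.random() < (0.35 if health_conscious else 0.08)
    completes = volunteers and random.random() < 0.75             # dropouts never counted as participants
    employees.append((annual_cost, completes))

participants = [cost for cost, completed in employees if completed]
non_participants = [cost for cost, completed in employees if not completed]

print(f"'Participants' average cost:     ${mean(participants):,.0f}")
print(f"'Non-participants' average cost: ${mean(non_participants):,.0f}")
print(f"Phantom 'savings' per participant: ${mean(non_participants) - mean(participants):,.0f}")
```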

This explains why VI-validated programs – programs that self-select to apply for VI validation because they actually accomplish something – make such modest claims compared with invalid vendors: modest claims are what they actually achieve, and modest valid claims trump massive invalid claims.

“Accidental” proofs of study design invalidity

The beauty of the first two proofs below is that they constitute what a litigator would call “declarations against interest,” meaning that the perpetrators’ own statements invalidate their own arguments. The wellness promoters who conducted these studies accidentally proved the opposite of what they intended to prove, without acknowledging it in the first case, or realizing it in the second.

 These two cases, discussed at length here, are summarized below:

  1. Using the same employee subjects, a program measured outcomes both ways: through a high-quality randomization and also through participants-vs-non-participants; 
  2. As mentioned, participants were separated from non-participants but not offered a program to participate in.

In the first case, a large group of employees with no diagnosis of, or history of hospitalization for, diabetes or heart disease was divided into:

  1. Group A, to whom invitations to participate would be offered;
  2. Group B, employees “matched” to the invited group using demographics and claims history, for whom nothing special was done.


The population was separated before any invitations were issued to Group A, making this a valid – and extremely well-designed – comparison. The “invited” Group A then included both participants (about 14% were willing to submit to the program, of whom almost a quarter dropped out, leaving 11%) and non-participants.

The intervention was to use people’s DNA to tell them they were at risk for diabetes or heart disease, and then coach them. Because the study design excluded anyone with prior hospitalizations or ER visits for those conditions, the relevant baseline hospitalization rate was 0, and a rate of 0 is arithmetically impossible to reduce. And yet “savings” of $1,464 per participant was claimed for the first year for the “last man standing” group, the 11% of Group A invitees who actually completed the program, vs. those Group A invitees who declined the invitation.

A cynic might say this massive savings figure was chosen because the program itself cost $500…and a program needs to show an ROI well north of 2-to-1 to be salable.
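The arithmetic behind that cynicism is simple enough to spell out, using the figures cited above:

```python
# Claimed first-year savings per participant vs. reported program cost,
# using the figures cited in the text.
claimed_savings = 1_464   # dollars per participant, as claimed
program_cost = 500        # dollars per participant, as reported
roi = claimed_savings / program_cost
print(f"Claimed ROI: {roi:.1f}-to-1")   # about 2.9-to-1, comfortably "north of 2-to-1"
```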

Using the valid randomized control methodology, the participants, dropouts and non-participants were then recombined into the full “invited” Group A…and compared with the control Group B. Though no cost comparisons were offered, there was essentially no difference-of-differences between these two groups in any relevant clinical indicator. While all changes in both groups were fairly trivial, three of the indicators trended in the “wrong” direction for Group A vs. the Group B control.
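For readers unfamiliar with the term, a difference-of-differences simply compares the change in the invited group with the change in the matched control group. The sketch below uses hypothetical HbA1c values purely to show the calculation; the study reported only that the changes in both groups were trivial.

```python
# Difference-of-differences: change in the invited group minus change in the
# matched control group. All values below are hypothetical, for illustration only.
def diff_of_diffs(invited_before: float, invited_after: float,
                  control_before: float, control_after: float) -> float:
    """Return (invited change) minus (control change)."""
    return (invited_after - invited_before) - (control_after - control_before)

# Hypothetical HbA1c values (%) for invited Group A vs. matched control Group B.
effect = diff_of_diffs(invited_before=5.90, invited_after=5.88,
                       control_before=5.90, control_after=5.86)
print(f"Difference-of-differences: {effect:+.2f} points")
# A positive value means Group A improved less than Group B, i.e., it trended
# in the "wrong" direction relative to the control for an indicator where lower is better.
```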

Combined with the fact that there were no relevant hospitalizations to reduce in the first place, the near-total absence of change in clinical indicators makes it impossible for any savings to have been achieved, let alone $1,464 per participant, perhaps the highest first-year savings ever claimed.

This excerpt courtesy of the Validation Institute. For the smashing (and needless to say, hilarious, this being the wellness industry) conclusion, click through here.


Al Lewis

Al Lewis, widely credited with having invented disease management, is co-founder and CEO of Quizzify, the leading employee health literacy vendor. He was founding president of the Care Continuum Alliance and is president of the Disease Management Purchasing Consortium.
