Tag Archives: HFC

11 Questions for Ron Goetzel on Wellness

We thank Ron Goetzel, representing Truven Health and Johns Hopkins, for posting on Insurance Thought Leadership a rebuttal to our viral November posting, “Workplace Wellness Shows No Savings.” Paradoxically, although he conceived and produced that rebuttal, we are happy to publicize it for him. If you’ve heard that song before, think of Mike Dukakis’s tank ride during his disastrous 1988 presidential campaign.

Goetzel’s rebuttal, “The Value of Workplace Wellness Programs,” raises at least 11 questions that he has been declining to answer. We hope he will respond here on ITL. And, of course, we are happy to answer any specific questions he would ask us, as we think we are already doing in the case of the point he raises about wellness-sensitive medical events. (We offer, for the third time, to have a straight-up debate and hope that he reconsiders his previous refusals.)

Ron:

(1)    How can you say you are not familiar with measuring wellness-sensitive medical events (WSMEs), like heart attacks? Your exact words are: “What are these events? Where have they been published? Who has peer-reviewed them?” Didn’t you yourself just review an article on that very topic, a study that we ourselves had hyperlinked as an example of peer-reviewed WSMEs in the exact article of ours that you are rebutting now? WSMEs are the events that should decline because of a wellness program. Example: If you institute a wellness program aimed at avoiding heart attacks, you’d measure the change in the number of heart attacks across your population as a “plausibility test” to see if the program worked, just as you’d measure the impact of a campaign to avoid teenage pregnancies by observing the change in the rate of teenage pregnancies. We’re not sure why you think that simple concept of testing plausibility using WSMEs needs peer review. Indeed, we don’t know how else one would measure the impact of either program, which is why the esteemed Validation Institute recognizes only that methodology. (In any event, you did already review WSMEs in your own article.) We certainly concur with your related view that randomized controlled trials are impractical in workplace settings (and can’t blame you for avoiding them, given that your colleague Michael O’Donnell’s journal published a meta-analysis showing RCTs have negative ROIs).

(2)    How do you reconcile your role as Highmark’s consultant for the notoriously humiliating, unpopular and counterproductive Penn State wellness program with your current position that employees need to be treated with “respect and dignity”? Exactly what about Penn State’s required monthly testicle check and $1,200 fine on female employees for not disclosing their pregnancy plans respected the dignity of employees?

(3)    Which of your programs adhere to U.S. Preventive Services Task Force (USPSTF) screening guidelines and intervals that you now claim to embrace? Once again, we cite the Penn State example, because it is in the public domain — almost nothing about that program was USPSTF-compliant, starting with the aforementioned testicle checks.

(4)    Your posting mentions “peer review” nine times. If peer review is so important to wellness true believers, how come none of your colleagues editing the three wellness promotional journals (JOEM, AJPM and AJHP) has ever asked either of us to peer-review a single article? We have amply demonstrated our prowess at peer review by exposing two dozen fraudulent claims on They Said What?, including exposés of four companies represented on your Koop Award committee (Staywell, Mercer, Milliman and Wellsteps), along with three fraudulent claims in Koop Award-winning programs.

(5)    Perhaps the most popular slide used in support of wellness-industry ROI actually shows the reverse — that motivation, rather than the wellness programs themselves, drives the health spending differential between participants and non-participants. How do we know that? Because on that Eastman Chemical-Health Fitness Corp. slide (reproduced below), significant savings accrued and were counted for 2005 – the year before the wellness program was implemented. Now you say 2005 was “unfortunately mislabeled” on that slide. Unless this mislabeling was an act of God, please use the active voice: Who mislabeled this slide for five years; where is the person’s apology; and why didn’t any of the analytical luminaries on your committee disclose this mislabeling even after they knew it was mislabeled? The problem was noted in both Surviving Workplace Wellness and the trade-bestselling, award-winning Why Nobody Believes the Numbers, which we know you’ve read because you copied pages from it before Wiley & Sons demanded you stop. Was it because HFC sponsors your committee, or was it because Koop Committee members lack the basic error-identification skills taught in courses on outcomes analysis that no committee member has ever passed?

[Figure: Eastman Chemical-Health Fitness Corp. slide]

(6)    Why does no one on the Koop Committee notice any of these “unfortunate mislabelings” until years later, when we point out that they are in plain view?

(7)    Why is it that every time HFC admits lying, the penalty that you assess — as president of the Koop Award Committee — is to anoint their programs as “best practices” in health promotion? (See Eastman Chemical and Nebraska in the list below.) Doesn’t that send a signal that Dr. Koop might have objected to?

(8)    Whenever HFC publishes lengthy press releases announcing that its customers received the “prestigious” Koop Award, it always forgets to mention that it sponsors the awards. With your post’s emphasis on “the spirit of full disclosure” and “transparency,” why haven’t you insisted HFC disclose that it finances the award (sort of like when Nero used to win the Olympics because he ran them)?

(9)    Speaking of “best practices” and Koop Award winners, HFC’s admitted lies about saving the lives of 514 cancer victims in its award-winning Nebraska program are technically a violation of the state’s anti-fraud statute, because HFC accepted state money and then misrepresented outcomes. Which is it: Is HFC a best practice, or should it be prosecuted for fraud?

(10)    RAND Corp.’s wellness guru Soeren Mattke, who also disputes wellness ROIs, has observed that every time one of the wellness industry’s unsupportable claims gets disproven, wellness defenders say they didn’t really mean it, and they really meant something else altogether. Isn’t this exactly what you are doing here, with the “mislabeled” slide, with your sudden epiphany about following USPSTF guidelines and respecting employee dignity and with your new position that ROI doesn’t matter any more, now that most ROI claims have been invalidated?

(11)    Why are you still quoting Katherine Baicker’s five-year-old meta-analysis claiming 3.27-to-1 savings from wellness in (roughly) 16-year-old studies, even though you must be fully aware that she herself has repeatedly disowned it and now says: “There are very few studies that have reliable data on the costs and benefits”? We have offered to compliment wellness defenders on their truthfulness whenever they cite her study, provided they also acknowledge her backpedaling, and we look forward to being able to compliment you when you do. This offer, if you accept it, would be an improvement over our current Groundhog Day-type cycle, in which you cite her study, we point out that she has walked it back four times, and you somehow never notice her recantations and continue to cite the meta-analysis as though it were beyond reproach.

To end on a positive note, while we see many differences between your words and your deeds, let us give you the benefit of the doubt and assume you mean what you say and not what you do. In that case, we invite you to join us in writing an open letter to Penn State, the Business Roundtable, Honeywell, Highmark and every other organization (including Vik Khanna’s wife’s employer) that forces employees to choose between forfeiting large sums of money and maintaining their dignity and privacy. We could collectively advise them to do exactly what you now say: Instead of playing doctor with “pry, poke, prod and punish” programs, we would encourage employers to adhere to USPSTF screening guidelines and frequencies and otherwise stay out of employees’ personal medical affairs unless they ask for help, because overdoctoring produces neither positive ROIs nor even healthier employees. And we need to emphasize that it’s OK if there is no ROI because ROI doesn’t matter.

As a gesture to mend fences, we will offer a 50% discount to all Koop Committee members for the Critical Outcomes Report Analysis course and certification, which is also recognized by the Validation Institute. This course will help your committee members learn how to avoid the embarrassing mistakes they otherwise consistently make and (assuming you also institute conflict-of-interest rules requiring disclosure of sponsorships) ensure that worthy candidates win your awards.

Workplace Wellness Shows No Savings

During the last decade, workplace wellness programs have become commonplace in corporate America. The majority of US employers with 50 or more employees now offer the programs. A 2010 meta-analysis that was favorable to workplace wellness programs, published in Health Affairs, provided support for their uptake. This meta-analysis, plus a well-publicized “success” story from Safeway, coalesced into the so-called Safeway Amendment in the Affordable Care Act (ACA). That provision allows employers to tie a substantial and increasing share of employee insurance premiums to health status/behaviors and subsidizes implementation of such programs by smaller employers. The assumption was that improved employee health would reduce healthcare costs for employers.

Subsequently, however, Safeway’s story has been discredited. And the lead author of the 2010 meta-analysis, Harvard School of Public Health Professor Katherine Baicker, has cautioned on several occasions that more research is needed to draw any definitive conclusions. Now, more than four years into the ACA, we conclude that these programs increase, rather than decrease, employer spending on healthcare, with no net health benefit. The programs also cause overutilization of screening and check-ups in generally healthy working-age adult populations, put undue stress on employees and provide incentives for unhealthy forms of weight loss.

Through a review of the research literature and primary sources, we have found that wellness programs produce a return-on-investment (ROI) of less than 1-to-1 savings to cost. This blog post will consider the results of two compelling study designs — population-based wellness-sensitive medical event analysis and randomized controlled trials (RCTs). Then it will look at the popular, although weaker, participant vs. non-participant study design. (It is beyond the scope of this posting to question vendors’ non-peer-reviewed claims of savings that do not rely on any recognized study design, though those claims are commonplace.)

Population Based Wellness-Sensitive Medical Event Analysis

A wellness-sensitive medical event analysis tallies the entire range of primary inpatient diagnoses that would likely be affected by a wellness program implemented across an employee population. The idea is that a successful wellness program would reduce the number of wellness-sensitive medical events in a population as compared with previous years. By observing the entire population and not just voluntary, presumably motivated, participants or a “high-risk” cohort (meaning the previous period’s high utilizers), both self-selection bias and regression to the mean are avoided.
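To make the mechanics concrete, here is a minimal sketch of such a plausibility test. The diagnosis codes and admission records are hypothetical placeholders of our own; a real analysis would use the full published WSME code list and actual claims data.

```python
# Minimal sketch of a wellness-sensitive medical event (WSME) plausibility test.
# The diagnosis codes and records below are hypothetical placeholders, not the
# published WSME list or any real claims data.

WSME_CODES = {"410", "428", "493"}  # e.g., heart attack, heart failure, asthma

def wsme_rate_per_1000(admissions, covered_lives):
    """Events per 1,000 covered lives, counted across the entire population
    (not just voluntary participants), so self-selection bias is avoided."""
    events = sum(1 for a in admissions if a["primary_dx"] in WSME_CODES)
    return 1000 * events / covered_lives

# Hypothetical comparison of a pre-program baseline year with a program year.
baseline_admissions = [{"primary_dx": "410"}, {"primary_dx": "428"}, {"primary_dx": "780"}]
program_admissions = [{"primary_dx": "410"}, {"primary_dx": "780"}]

print(f"Baseline:     {wsme_rate_per_1000(baseline_admissions, 8000):.2f} per 1,000")
print(f"Program year: {wsme_rate_per_1000(program_admissions, 8000):.2f} per 1,000")
```

A falling rate relative to the baseline (and to a benchmark such as the HCUP national inpatient sample) is consistent with a program effect; a flat or rising rate fails the plausibility test.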

The field’s only outcomes validation program requires this specific analysis. One peer-reviewed study using this type of analysis — of the wellness program at BJC HealthCare in St. Louis — examined a population of hospital employees whose overall health status was poor enough that, without a wellness program, they would have averaged more than twice the Healthcare Cost and Utilization Project (HCUP) national inpatient sample (NIS) mean for wellness-sensitive medical events. Yet even in this group, the cost savings generated by a dramatic reduction in wellness-sensitive medical events from an abnormally high baseline were offset by “similar increases in non-inpatient costs.”

Randomized Controlled Trials and Meta-Analyses

Authors of a 2014 American Journal of Health Promotion (AJHP) meta-analysis stated: “We found a negative ROI in randomized controlled trials.” This was the first AJHP-published study to state that wellness in general loses money when measured validly. This 2014 meta-analysis, by Baxter et al., was also the first attempt to replicate the findings of the aforementioned meta-analysis published in February 2010 in Health Affairs, which had found a $3.27-to-1 savings from wellness programs.

Another wellness expert, Dr. Soeren Mattke, who has co-written multiple RAND reports on wellness that are generally unfavorable, such as a study of PepsiCo’s wellness program published in Health Affairs, dismissed the 2010 paper because of its reliance on outdated studies. Baicker et al.’s meta-analysis was also challenged by Lerner and colleagues, whose review of the economic literature on wellness concluded that there is too little credible data to draw any conclusions.

Other Study Designs

More often than not, wellness studies simply compare participants with “matched” non-participants or compare a subset of participants (typically high-risk individuals) with themselves over time. These studies usually show savings; however, in the most carefully analyzed case, the savings were attributable exclusively to disease management activities for a small and very ill subset, rather than to health promotion for the broader population, which reduced medical spending by only $1 for every $3 spent on that component.

The key question, therefore, is whether participant vs. non-participant savings result from the wellness programs themselves or from fundamentally different and unmatchable attitudes. For instance, smokers who self-select into a smoking cessation program may be more predisposed to quit than smokers who decline such a program. Common sense says it is not possible to “match” motivated volunteers with non-motivated non-volunteers, because of the unobservable variable of willingness to engage, even if both groups’ claims history and demographics look the same on paper.

A leading wellness vendor CEO, Henry Albrecht of Limeade, concedes this, saying: “Looking at how participants improve versus non-participants…ignores self-selection bias. Self-improvers are likely to be drawn to self-improvement programs, and self-improvers are more likely to improve.” Further, passive non-participants can be tracked all the way through the study because they cannot “drop out” from not participating, but dropouts from the participant group — whose results would presumably be unfavorable — are not counted and are considered lost to follow-up. So the study design is undermined by two major limitations, both of which would tend to overstate savings.
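The effect of that unobservable motivation is easy to demonstrate with a toy simulation of our own construction (all numbers are invented): even when participation has no effect on spending at all, participants look cheaper than non-participants, simply because motivated employees both volunteer more often and spend less.

```python
# Toy simulation (hypothetical numbers): self-selection alone manufactures "savings"
# in a participant vs. non-participant comparison, even though the program does nothing.
import random

random.seed(0)

def annual_cost(motivated):
    # Motivated employees spend somewhat less on average, program or no program.
    return max(0.0, random.gauss(6000, 1500) - (500 if motivated else 0))

employees = [{"motivated": random.random() < 0.4} for _ in range(10_000)]
for e in employees:
    # Motivated employees are far more likely to volunteer for the program.
    e["participant"] = random.random() < (0.7 if e["motivated"] else 0.2)
    e["cost"] = annual_cost(e["motivated"])  # participation itself has NO effect on cost

def mean_cost(group):
    return sum(e["cost"] for e in group) / len(group)

participants = [e for e in employees if e["participant"]]
non_participants = [e for e in employees if not e["participant"]]
print(f"Participants:     ${mean_cost(participants):,.0f}")
print(f"Non-participants: ${mean_cost(non_participants):,.0f}")
# Participants appear "cheaper" purely because motivation drives both participation
# and spending -- the same pattern visible in the pre-program 2005 Eastman data below.
```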

As an example of overstated savings, consider one study conducted by Health Fitness Corp. (HFC) about the impact of the wellness program it ran for Eastman Chemical’s more than 8,000 eligible employees. In 2011, that program won a C. Everett Koop Award, an annual honor that aims to promote health programs “with demonstrated effectiveness in influencing personal health habits and the cost-effective use of health care services” (and for which both HFC and Eastman Chemical have been listed as sponsors). The study developed for Eastman’s Koop Award application tested the participant vs. non-participant equivalency hypothesis.

Figure 1 below, taken from that application, shows that after the population was separated into participants and non-participants in 2004, would-be participants spent 8% less on medical care in 2005 than would-be non-participants, even though the wellness program was not offered until 2006. In subsequent presentations about the program, HFC nonetheless included that 8% difference from 2005 as part of the 24% cumulative savings attributed to the program through 2008, even though the program did not exist in 2005.

Figure 1


Source: http://www.thehealthproject.com/documents/2011/EastmanEval.pdf

The other common study design that shows a positive impact for wellness identifies a high-risk cohort, asks for volunteers from that cohort to participate and then tracks their results while ignoring dropouts. The only control is the cohort’s own previous high-risk scores. In studying a health promotion program among employees of a Western U.S. school district, Brigham Young University researcher Ray Merrill concluded in 2014: “The worksite wellness program effectively lowered risk measures among those [participants] identified as high-risk at baseline.”

However, using participants as their own control is not a well-accepted study design. Along with the participation bias, it ignores the possibility that some people decline in risk on their own, perhaps because (independent of any workplace program) they at least temporarily lose weight, quit smoking or ameliorate other risk factors. Research by Dr. Dee Edington, previously at the University of Michigan, documents a substantial “natural flow of risk” absent a program.
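Regression to the mean alone can produce the same illusion. Here is a toy example of ours, with invented numbers: if a “high-risk” cohort is selected on a noisy baseline screening, its average reading improves at follow-up even with no intervention whatsoever.

```python
# Toy illustration (hypothetical numbers) of regression to the mean in a
# high-risk cohort selected on a noisy baseline measurement.
import random

random.seed(1)

true_risk = [random.gauss(100, 10) for _ in range(10_000)]  # stable underlying risk
baseline = [t + random.gauss(0, 15) for t in true_risk]     # noisy baseline screen
followup = [t + random.gauss(0, 15) for t in true_risk]     # noisy follow-up screen

# Flag the worst 20% of baseline readings as the "high-risk cohort".
cutoff = sorted(baseline)[int(0.8 * len(baseline))]
cohort = [i for i, b in enumerate(baseline) if b >= cutoff]

mean_baseline = sum(baseline[i] for i in cohort) / len(cohort)
mean_followup = sum(followup[i] for i in cohort) / len(cohort)
print(f"High-risk cohort at baseline: {mean_baseline:.1f}")
print(f"Same cohort at follow-up:     {mean_followup:.1f}")
# The cohort "improves" with no program at all, because extreme baseline readings
# were partly noise -- which is why participants-as-their-own-control designs
# overstate program effects.
```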

Key Mathematical and Clinical Factors

Data compiled by the Healthcare Cost and Utilization Project (HCUP) shows that only 8% of hospitalizations are primary-coded for the wellness-sensitive medical event diagnoses used in the BJC study. To determine whether it is even possible to save money, an employer would have to tally its own wellness-sensitive events, just as HCUP and BJC did; multiplying that count by the cost per admission yields the theoretical maximum savings. The analysis would then compare that figure to the incentive cost (now averaging $594) plus the cost of the wellness program, screenings, doctor visits, follow-ups recommended by the doctor, benefits consultant fees and program management time. For example, if spending per covered person were $6,000 and hospitalizations were half of a company’s cost ($3,000), potential savings per person from eliminating 8% of hospitalizations would be $240, not enough to cover a typical incentive payment even if every relevant hospitalization were eliminated.
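A minimal sketch of that arithmetic, using only the example figures from the paragraph above (the function and its structure are ours, purely for illustration):

```python
# Best-case savings arithmetic, using the article's example figures.

def max_wellness_savings(total_spend_per_person, inpatient_share, wsme_share_of_admissions):
    """Upper bound on per-person savings if every wellness-sensitive
    admission were eliminated."""
    return total_spend_per_person * inpatient_share * wsme_share_of_admissions

theoretical_max = max_wellness_savings(
    total_spend_per_person=6000,    # example spending per covered person
    inpatient_share=0.50,           # hospitalizations assumed to be half of spend
    wsme_share_of_admissions=0.08,  # HCUP: ~8% of admissions are wellness-sensitive
)
incentive_cost = 594                # average incentive cited in the text

print(f"Theoretical maximum savings per person: ${theoretical_max:.0f}")  # $240
print(f"Average incentive cost per person:      ${incentive_cost}")       # $594
# The incentive alone exceeds the best-case savings, before adding screenings,
# vendor fees, doctor visits and program management time.
```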

There is no clinical evidence to support the conclusion that three pillars of workplace wellness — annual workplace screenings for all employees (and sometimes spouses), annual checkups and incentives for weight loss — are cost-effective. The U.S. Preventive Services Task Force (USPSTF) recommends annual screening of everyone only for blood pressure. For other biometric values, the benefits of annual screening (as all wellness programs require) may not exceed the harms of potential false positives or of over-diagnosis and overtreatment, and only a subset of high-risk people should be screened at all, as with glucose. Likewise, most of the literature finds that annual checkups confer no net health benefit for the asymptomatic, non-diagnosed population. Note that in both cases, harms are compared only with benefits, without considering the economics. Even if harms roughly equal benefits, adding screening costs to the equation creates a negative return.

Much of wellness is now about providing incentives for weight loss. In addition to the lack of evidence that weight loss saves money (Lewis A, Khanna V, Montrose S, “It’s time to disband corporate weight loss programs,” Am J Manag Care, in press, February 2015), financial incentives tied to weight loss between two weigh-ins may encourage overeating before the first weigh-in and crash-dieting before the second, both of which are unhealthy. One large health plan offers a weight-loss program that is potentially unhealthier still, encouraging employees to use the specific weight-loss drugs that Dartmouth’s Steven Woloshin and Lisa Schwartz have argued, in the Journal of the American Medical Association, never should have been approved because of the drugs’ potential harms.

In sum, with tens of millions of employees subjected to these unpopular and expensive programs, it is time to reconfigure workplace wellness. Because today’s conventional programs fail to pay for themselves and confer no proven net health benefit (and may on balance hurt health through over-diagnosis and promotion of unhealthy eating patterns), they may fail the Americans with Disabilities Act’s “business necessity” standard if the financial forfeiture for non-participants is deemed coercive, as is alleged in employee lawsuits against three companies, including Honeywell.

Especially in light of these lawsuits, a viable course of action — which is also the economically preferable solution for most companies and won’t harm employee health — is simply to pause, demand that vendors and consultants answer open questions about their programs and await more guidance from the administration. A standard that “wellness shall do no harm,” by being in compliance with the USPSTF (as well as the preponderance of the literature where the USPSTF is silent), would be a good starting point.

The Wellness Industry Pleads the Fifth

The wellness industry’s latest string of stumbles and misdeeds is on the verge of overwhelming the cloud’s capacity to keep track of them.

First, as readers of my column may recall, is the C. Everett Koop Award Committee’s refusal to rescind Health Fitness Corp.’s (HFC’s) award even after HFC admitted having lied about saving the lives of 514 cancer victims. (As luck would have it, the “victims” never had cancer in the first place.) Curiously, HFC’s customers have won an amazing number of these Koop awards, which are given for “population health promotion and improvement programs.” Why so many, you might ask? Is HFC that good? Well, HFC is not just a winner of the Koop Award. HFC is also a major sponsor. Perhaps it was an oversight that HFC omitted this detail from its announcement that both Koop Awards were won by its customers for 2012.

Second, the American Heart Association (AHA) recently announced its guidelines for workplace screenings. They call for much more screening than the U.S. Preventive Services Task Force does. As it happens, the AHA guidelines were co-written by a senior executive from Staywell, a screening vendor. Not just any vendor, but one that had already been caught making up outcomes.

Third, although the American Journal of Health Promotion published a meta-analysis that showed a degree of integrity rare for the wellness industry, it then hedged the conclusion. The analysis showed that high-quality studies of wellness outcomes demonstrated “a negative ROI in randomized controlled trials.” But the journal then added that invalid studies (generally comparing active, motivated participants with non-motivated non-participants) showed a positive return, and that if you averaged the results of the invalid and the valid studies, you got an ROI greater than break-even. However, the averaging logic leading to that conclusion is a bit like “averaging” Ptolemy and Copernicus to conclude that the earth revolves halfway around the sun.

How does the wellness industry respond to criticisms like these three? It doesn’t. The industry basically pleads the Fifth.

The industry knows better than to draw attention to itself when it doesn’t control the agenda. The players know that a response creates a news cycle, which they will lose — and that, absent a news cycle, no one other than people like you is going to read my columns and notice these misdeeds.

One co-author of the AHA guidelines wrote to my Surviving Workplace Wellness co-author, Vik Khanna, and said the AHA would respond to our “accusation” but apparently thought better of it when the lay media didn’t pick up the original story.  (As a sidebar, I replied that saying a screening vendor was writing the screening policy was an “observation,” not an “accusation,” and recommended the editors check www.dictionary.com to see the difference.)

Similarly, in the past, I have made accusations and observations about the wellness industry both in this column and on the Health Care Blog…and gotten no response. So to make things extra easy for these folks, I dispensed with statements that needed to be rebutted. Instead, I asked some simple questions. I said I would publish companies’ responses, which would create a great marketing opportunity for them…if, indeed, their responses appealed to readers.

I posted the questions on a new website called www.theysaidwhat.net.  I got only one response, from the Vitality Group. The other wellness companies allowed the questions to stand on their own, on that site.

To ferret out responses, I then did something that has probably never been done before: I offered wellness companies a bribe…to tell the truth. I said I’d pay them $1,000 simply to answer the questions I posted about their public materials, which would take about 15 minutes. (If someone made me that offer, I’d ask, “Where do I sign?” but I’m not a wellness vendor.)

Here’s how easy the questions are: Recall from a previous ITL posting that Wellsteps has an ROI model on its website that says it saves $1,358.85 per employee, adjusted for inflation, by 2019 no matter what you input into the model as assumptions for obesity, smoking and spending on healthcare. The company claims this $1,358.85 savings is based on “every ROI study ever published.” Compiling all those citations would require time, so I merely asked the company to name one little ROI study that supports this $1,358.85 figure. Silence.

I put similar questions (which you can view via the click-throughs) to Aetna, Castlight, Cigna, Healthstat, Keas (which wins style points for the most creative way to misreport survey data), Pharos, Propeller Health, ShapeUp, US Corporate Wellness and Wellnet, as well as to their enablers and validators, Mercer and Milliman. Propeller and Healthstat responded — but didn’t actually answer the questions. Healthstat seems to say that the rules of real math don’t apply to it because it prefers its own rules of math. Propeller – having released the completely mystifying interim results of a study long before it was completed – said it looks forward to the study’s completion and didn’t even acknowledge that questions were asked.

In all fairness, one medical home vendor sent a response expressing a seemingly genuine desire to understand or clarify issues with its outcomes figures and to possibly improve their validity (if, indeed, they are invalid). As a result, I am not adding the vendor to this site; the idea is not to highlight honest and well-intentioned vendors. (The company would like its name undisclosed for now, but if anyone wants to contact it, just send me an email, and I will pass it along to the company for response.)

Likewise, there are good guys – Towers Watson and Redbrick, despite their high profiles, managed to stay off the list by keeping their hands clean (or at least washing them right before inspection). Allone, owned by Blue Cross of Northeastern Pennsylvania, even had its outcomes validated and indemnified. I will announce more validated and indemnified vendors in a follow-up posting.

As for the others, well, I am not saying that their historic and continuing strategy of pleading the Fifth when asked to explain themselves means that they know their statements are wrong. Nor am I saying that they are liars, idiots or anything of the sort. Something like that would be an “accusation.” Instead, I am merely making an “observation.”

It isn’t even my observation. It is credited to Confucius:  “A man who makes a mistake and does not correct it, is committing another mistake.”