In the comedy by William Shakespeare, “The Taming of the Shrew,” the main plot depicts the courtship of Petruchio and Katherina, the headstrong, uncooperative shrew. Initially, Katherina is an unwilling participant in the relationship, but Petruchio breaks down her resistance with various psychological torments, which make up the “taming” — until she finally becomes agreeable.
An analogous challenge exists when using predictive analytics with healthcare data. Healthcare data can often seem quite stubborn, like Katherina. One of the main features of healthcare data that needs to be “tamed” is the “skew.” In this article, we describe two types of skewness: the statistical skew, which affects data analysis, and the operational skew, which affects operational processes. (Neither is a comedy.)
The Statistical Skew
Because the distribution of healthcare costs is bounded on the lower end — that is, the cost of healthcare services is never less than zero — but ranges widely on the upper end, sometimes into the millions of dollars, the frequency distribution of costs is skewed. More specifically, in a plot of frequency by cost, the distribution of healthcare costs is right-skewed: the long tail is on the right, and the coefficient of skewness is positive.
This skewness is present whether we are looking at total claim expense in the workers’ compensation sector or annual expenses in the group health sector. Why is this a problem? Simply because the most common methods for analyzing data depend on the assumption of a normal distribution, and a right-skewed distribution is clearly not normal. To produce reliable predictions and generalizable results from analyses of healthcare costs, the data need to be “tamed” (i.e., various sophisticated analytic techniques must be used to deal with the right-skewness of the data). Among these techniques are logarithmic transformation of the dependent variable, random forest regression, machine learning and topical analysis.
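To make the first of those techniques concrete, here is a minimal sketch of how a logarithmic transformation tames a right-skewed cost distribution. The simulated lognormal claim costs and their parameters are invented for illustration, not drawn from real claims data:

```python
import math
import random

def skewness(xs):
    """Sample coefficient of skewness: E[(x - mean)^3] / sd^3."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)

random.seed(0)
# Simulated claim costs: bounded below by zero, with a long right tail.
costs = [random.lognormvariate(mu=8.0, sigma=1.5) for _ in range(10_000)]

raw_skew = skewness(costs)
log_skew = skewness([math.log(c) for c in costs])

print(f"skew(raw costs) = {raw_skew:.2f}")  # strongly positive
print(f"skew(log costs) = {log_skew:.2f}")  # near zero
```

On the raw costs the skewness coefficient is strongly positive; after taking logs it is near zero, which is what makes normal-theory methods usable on the transformed variable.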
It’s essential to keep this in mind in any analytic effort with healthcare data, especially in workers’ compensation. To get the required level of accuracy, we need to think “non-normal” and get comfortable with the “skewed” behavior of the data.
There is an equally pervasive operational skew in workers’ compensation that calls out for a radical change in business models. The operational skew is exemplified by:
The 80/20 split between simple, straightforward claims that can be auto-adjudicated and more complex claims that have the potential to escalate or incur attorney involvement (i.e., 80% of the costs come from 20% of the claims).
The even more extreme 90/10 split between good providers delivering state-of-the-art care and the “bad apples” whose care is less effective, less often compliant with evidence-based guidelines or more expensive for a similar or worse result (i.e., 90% of the costs come from 10% of the providers).
How can we deal with operational skew? The first step is to be aware of it and be prepared to use different tactics depending on which end of the skew you’re dealing with. In the two examples just given, we have observed that by using the proper statistical approaches:
Claims can be categorized as early as Day 1 into low vs. high risk with respect to potential for cost escalation or attorney involvement. This enables payers to apply the appropriate amount of oversight, intervention and cost containment resources based on the risk of the claim.
Provider outcomes can be evaluated, summarized and scored, empowering network managers to fine-tune their networks and claims adjusters to recommend the best doctors to each injured worker.
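A Day-1 triage rule like the one in the first example can be sketched as a simple scoring function. Everything here (the claim fields, weights and threshold) is hypothetical; in practice the score would come from a model fitted to historical claims:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    injury_type: str
    days_to_report: int      # lag between injury and claim filing
    prior_claims: int
    attorney_involved: bool

# Illustrative weights; a real system would fit these from historical data
# (e.g., with logistic regression or random forest regression).
def risk_score(claim: Claim) -> float:
    score = 0.0
    score += 2.0 if claim.injury_type in {"back", "shoulder"} else 0.5
    score += 0.1 * min(claim.days_to_report, 30)  # late reporting raises risk
    score += 0.5 * claim.prior_claims
    score += 3.0 if claim.attorney_involved else 0.0
    return score

def triage(claim: Claim, threshold: float = 3.0) -> str:
    """Route claims: low risk gets automated oversight, high risk gets case management."""
    return "high" if risk_score(claim) >= threshold else "low"

simple = Claim("laceration", days_to_report=1, prior_claims=0, attorney_involved=False)
complex_ = Claim("back", days_to_report=21, prior_claims=2, attorney_involved=True)
print(triage(simple))    # low
print(triage(complex_))  # high
```

The point of the sketch is the routing decision, not the particular weights: low-risk claims flow to lightweight automated oversight, while high-risk claims trigger nurse or physician involvement.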
Both of these examples show that what used to be a single business process — managing every claim by the high-touch, “throw a nurse or a doctor at it” approach, as noble as that sounds — now requires the discipline to enact two entirely different business models to be operationally successful. Let me explain.
The difference between low- and high-risk claims is not subtle. Low-risk claims should receive a minimum amount of intervention, just enough oversight to ensure that they are going well and staying within expected parameters. Good technology can help provide this oversight. Added expense, such as nurse case management, is generally unnecessary. Conversely, high-risk claims might need nurse or physician involvement, weekly or even daily updates, multiple points of contact and a keen eye for opportunities to do a better job navigating this difficult journey with the recovering worker.
The same is true for managing your network. It would be nice if all providers could be treated alike, but, in fact, a small percentage of providers drives the bulk of the opioid prescribing, attorney involvement, liens and independent medical review (IMR) requests. These “bad apples” are difficult to reform and are best avoided, using a sophisticated provider scoring system that focuses on multiple aspects of provider performance and outcomes.
Once you have tamed your statistical skew with the appropriate data science techniques and your operational skew with a new business model, you will be well on your way to developing actionable insights from your predictive modeling. With assistance from the appropriate technology and operational routines, the most uncooperative skewness generally can be tamed. Are you ready to “tame the skew”?
Medicine is often considered part science and part art. There is a huge amount of content to master, but there is an equal amount of technique regarding diagnosis and delivery of service. To succeed, care providers need to master both components. The same can be said for the problem of processing healthcare data in bulk. In spite of the many standards and protocols regarding healthcare data, translating and consolidating data across many sources of information in a reliable and repeatable way is a tremendous challenge. At the heart of this challenge is recognizing when quality has been compromised. Like medicine, the successful implementation of a data-quality program within an organization combines science with art. Here, we will run through the basic framework that is essential to a data-quality initiative and then describe some of the lesser-understood processes that need to be in place in order to succeed.
The science of implementing a data quality program is relatively straightforward. There is field-level validation, which ensures that strings, dates, numbers and lists of valid values are in good form. There is cross-field validation and cross-record validation, which checks the integrity of the expected relationships to be found within the data. There is also profiling, which considers historical changes in the distribution and volume of data and determines significance. Establishing a framework to embed this level of quality checks and associated reporting is a major effort, but it is also clearly an essential part of any successful implementation involving healthcare data.
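As a minimal sketch of what field-level and cross-field validation look like in practice, consider a single claim record. The field names and rules are illustrative, not taken from any particular standard:

```python
import datetime

VALID_GENDERS = {"M", "F", "U"}  # illustrative valid-value list

def validate_claim(rec: dict) -> list[str]:
    errors = []
    # Field-level validation: strings, dates, numbers, valid-value lists.
    if rec.get("gender") not in VALID_GENDERS:
        errors.append("gender: not in valid-value list")
    try:
        svc = datetime.date.fromisoformat(rec["service_date"])
        paid = datetime.date.fromisoformat(rec["paid_date"])
    except (KeyError, ValueError):
        errors.append("dates: missing or malformed")
        return errors
    if not isinstance(rec.get("amount"), (int, float)) or rec["amount"] < 0:
        errors.append("amount: must be a non-negative number")
    # Cross-field validation: expected relationships within the record.
    if paid < svc:
        errors.append("paid_date precedes service_date")
    return errors

good = {"gender": "F", "service_date": "2016-01-05", "paid_date": "2016-02-01", "amount": 120.0}
bad = {"gender": "X", "service_date": "2016-03-01", "paid_date": "2016-01-15", "amount": -5}
print(validate_claim(good))  # []
print(validate_claim(bad))   # three errors
```

Cross-record validation and profiling follow the same pattern, but operate over batches and over history rather than over a single record.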
Data profiling and historical trending are also essential tools in the science of data-quality management. As we go further down the path of conforming and translating our healthcare data, there are inferences to be made. There is provider and member matching based on algorithms, categorizations and mappings that are logic-based, and then there are the actual analytical results and insights generated from the data for application consumption.
Whether your downstream application is analytical, workflow, audit, outreach-based or something else, you will want to profile and perform historical trending of the final result of your load processes. There are so many data dependencies between and among fields and data sets that it is nearly impossible for you to anticipate them all. A small change in the relationship between, say, the place of service and the specialty of the service provider can alter your end-state results in surprising and unexpected ways.
This is the science of data-quality management. Establishing full coverage is quite difficult – nearly impossible – and that is where “art” comes into play.
If we do a good job and implement a solid framework and reporting around data quality, we immediately find that there is too much information. We are flooded with endless sets of exceptions and variations.
The imperative of all of this activity is to answer the question, “Are our results valid?” Odd as it may seem, there is some likelihood that key teams or resident SMEs will decide not to use all that exception data because it is hard to separate the relevant exceptions from the irrelevant ones. This is a more common outcome than one might think. How do we figure out which checks are the important ones?
Simple cases are easy to understand. If the system doesn’t do outbound calls, then maybe phone number validation is not very important. If there is no e-mail generation or letter generation, maybe these data components are not so critical.
In many organizations, the final quality verification is done by inspection, reviewing reports and UI screens. Inspecting the final product is not a bad thing and is prudent in most environments, but clearly, unless there is some automated validation of the overall results, such organizations are bound to learn of their data problems from their customers. This is not quite the outcome we want. The point is that many data-quality implementations are centered primarily on the data as it comes in, and less on the outcomes produced.
Back to the science. The overall intake process can be broken down into three phases: staging, model generation and insight generation. We can think of our data-quality analysis as post-processes to these three phases. Post-staging, we look at the domain (field)-level quality; post-model generation, we look at relationships, key generation, new and orphaned entities. Post-insight generation, we check our results to see if they are correct, consistent and in line with prior historical results.
If the ingestion process takes many hours, days or weeks, we will not want to wait until the entire process has completed to find out that results don’t look good. The cost of re-running processes is a major consideration. Missing a deadline due to the need to re-run is a major setback.
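The fail-fast idea can be sketched as quality gates bolted onto each of the three phases. The phase contents and checks below are illustrative placeholders:

```python
# Quality gates after each intake phase (staging, model generation,
# insight generation) so a bad batch halts early, not after a multi-day load.
def post_staging_check(batch):
    """Field-level quality: every staged row must carry a member id."""
    return all(row.get("member_id") for row in batch)

def post_model_check(entities):
    """Relationship quality: no orphaned entities after key generation."""
    return all(e["member_id"] is not None for e in entities)

def post_insight_check(insights, history):
    """Result quality: totals should be in line with prior loads (within 50%)."""
    baseline = sum(history) / len(history)
    return 0.5 * baseline <= insights["total_cost"] <= 1.5 * baseline

def ingest(batch, history):
    if not post_staging_check(batch):
        raise ValueError("halting after staging: field-level checks failed")
    entities = [{"member_id": r["member_id"], "cost": r.get("cost", 0.0)} for r in batch]
    if not post_model_check(entities):
        raise ValueError("halting after model generation: relationship checks failed")
    insights = {"total_cost": sum(e["cost"] for e in entities)}
    if not post_insight_check(insights, history):
        raise ValueError("halting after insight generation: results out of line with history")
    return insights

result = ingest(
    [{"member_id": "A1", "cost": 1200.0}, {"member_id": "B2", "cost": 800.0}],
    history=[1900.0, 2100.0, 2050.0],
)
print(result)  # {'total_cost': 2000.0}
```

Each gate raises before the next, more expensive phase begins, which is what keeps a failed load from costing a full re-run cycle.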
The art of data-quality management is figuring out how to separate the noise from the essential information. Instead of showing all test results from all of the validations, we need to learn how to minimize the set of tests made while maximizing the chances of seeing meaningful anomalies. Just as an effective physician would not subject patients to countless tests that may or may not be relevant to a particular condition, an effective data-quality program should not present endless test results that may or may not be relevant to the critical question regarding new data introduced to the system: Is it good enough to continue, or is there a problem?
We need to construct a minimum number of views into the data that represents a critical set and is a determinant of data quality. This minimum reporting set is not static, but changes as the product changes. The key is to focus on insights, results and, generally, the outputs of your system. The critical function of your system determines the critical set.
Validation should be based on the configuration of your customer. Data that is received and processed but not actively used should not be validated along with data that is used. There is also a need for customer-specific validation in many cases. You will want controls by product and by customer. The mechanics of adding new validation checks should be easy and the framework should scale to accommodate large numbers of validations. The priority of each verification should be considered carefully. Too many critical checks and you miss clues that are buried in data. Too few and you miss clues because they don’t stand out.
Profiling your own validation data is also key. You should know, historically, how many errors of each type you typically encounter, and flag statistically significant variation just as you would when you detect variations in essential data elements and entities. Architecture is important: the ability to profile and report on anything implies a common, centralized framework rather than a different approach for each area you want to profile.
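Flagging statistically significant variation in your own validation output can be as simple as a z-score against the historical error counts; the numbers below are invented:

```python
import math

def flag_anomaly(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag a load whose error count deviates significantly from its own history."""
    n = len(history)
    mean = sum(history) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in history) / n)
    if sd == 0:
        return today != mean
    return abs(today - mean) / sd > z_threshold

history = [41, 38, 45, 40, 43, 39, 42, 44]  # daily counts of one error type
print(flag_anomaly(history, 42))    # False: within the usual range
print(flag_anomaly(history, 120))   # True: statistically significant spike
```

The same comparison can be run per error type and per source, which is where a centralized profiling framework pays off.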
Embedding critical validations as early in the ingestion process as possible is essential. It is often possible to provide validations that emulate downstream processing. The quality team should have incentives to pursue these types of checks on a continuing basis. They are not obvious and are never complete, but are part of any healthy data-quality initiative.
A continuous improvement program should be in place to monitor and tune the overall process. Unless the system is static, codes change, dependencies change and data inputs change. There will be challenges, and with every exposed gap found late in the process there is an opportunity to improve.
This post has glossed over a large amount of material, and I have oversimplified much to convey some of the not-so-obvious learnings of the craft. Quality is a big topic, and organizations should treat it as such. Getting true value is indeed an art; it is easy to invest and not get the intended return. This is not a project with a beginning and an end but a continuing process. Just as with the practice of medicine, there is a lot to learn about the science of constructing the proper machinery, but there is an art to establishing the active policies and priorities that deliver results.
There is a myth out there that healthcare providers are unwilling to adopt new technology. It’s just not true. In the last few months, I have spoken to dozens of healthcare leaders at hospitals both small and large, and I am amazed at their willingness to understand and adopt technology.
Pretty much every hospital CEO, COO, CMIO or CIO I talk to believes two things:
With growing demand, rising costs and constrained supply, healthcare is facing a crisis unless providers figure out how to “do more with less.”
Technology is a key enabler. There is technology out there to help save more lives, deliver better care, reduce costs and achieve a healthier America. If a technology solution solves a real problem and has a clearly articulated return on investment (ROI), healthcare isn’t that different from any other industry, and the healthcare industry is willing to adopt that technology.
Given my conversations, here are the five biggest IT trends I see in healthcare:
1. Consumerization of the electronic health record (EHR). Love it or hate it, the EHR sits at the center of innovation. Since the passage of the HITECH Act in 2009—a $30 billion effort to transform healthcare delivery through the widespread use of EHRs—the “next generation” EHR is becoming a reality driven by three factors:
Providers feeling the pressure to find innovative ways to cut costs and bring more efficiency to healthcare delivery
The explosion of “machine-generated” healthcare data from mobile apps, wearables and sensors
The “operating terminal” shifting from a desktop to a smartphone/tablet, forcing providers to reimagine how patient care data is produced and consumed
The “next generation” EHR will be built around physicians’ workflows and will make it easier for them to produce and consume data. It will, of course, need to have proper controls in place to make sure data can only be accessed by the right people to ensure privacy and safety. I expect more organizations will adopt the “app store” model Kaiser pioneered so that developers can innovate on their open platform.
2. Interoperability— Lack of system interoperability has made it very hard for providers to adopt new technologies such as data mining, machine learning, image recognition, the Internet of Things and mobile. This is changing fast.
3. Mobile— With more than 50% of patients using their smartphone to monitor health and more than 50% of physicians using (or wanting to use) their smartphone to monitor patient health, and with seamless data sharing on its way, the way care is delivered will truly change.
Telemedicine is showing significant gains in delivering primary care. We will continue to see more adoption of mobile-enabled services for ambulatory and specialty care in 2016 and beyond for three reasons:
Mobile provides “situational awareness” to all stakeholders so they can know what’s going on with a patient in an instant and can move the right resources quickly with the push of a button.
Mobile-enabled services radically reduce communication overhead, especially when you’re dealing with multiple urgent situations at the same time and communication is key.
The services can significantly improve the patient experience and reduce operating costs. Studies have shown that remote monitoring and mobile post-discharge care can significantly reduce readmissions and unnecessary admissions.
The key hurdle here is regulatory compliance. For example, auto-dialing 9-1-1 if a phone detects a heart attack can be dangerous if not properly done. As with the EHR, mobile services have to be designed around physician workflows and must comply with regulations.
4. Big data— Healthcare has been slower than verticals such as retail to adopt big data technologies, mainly because the ROI has not been very clear to date. With more wins on both the clinical and operational sides, that’s clearly changing. Of all the technology capabilities, big data can have the greatest near-term impact on the clinical and operational sides for providers, and it will be one of the biggest trends in 2016 and beyond. Successful companies providing big data solutions will do three things right:
Clean up data as needed: There’s lots of data, but it’s not easy to access, and it isn’t quite primed or “clean” for analysis. There’s only so much you can see, and you spend a lot of time cleansing before you can do any meaningful analysis.
Deliver meaningful results: Building predictive analytic models is not always hard, but they have to translate into results that enable evidence-based decision-making.
Deliver ROI: There are a lot of products out there that produce 1% to 2% gains; that doesn’t necessarily justify the investment.
5. Internet of Things— While hospitals have been a bit slow in adopting IoT, three key trends will shape faster adoption:
Innovation in hardware components (smaller, faster CPUs at lower cost) will create cheaper, more advanced medical devices, such as a WiFi-enabled blood pressure monitor connected to the EHR for smoother patient care coordination.
General-purpose sensors are maturing and becoming more reliable for enterprise use.
Devices are becoming smart, but making them all work together is painful. It’s good to have bed sensors that talk to the nursing station, and they will become part of a top-level “platform” within the hospital. More sensors also mean more data, and providers will create a “back-end platform” to collect, process and route it to the right place at the right time to create “holistic” value propositions.
With increased regulatory and financial support, we’re on our way to making healthcare what it should be: smarter, cheaper and more effective. Providers want to do whatever it takes to cut costs and improve patient access and experience, so there are no real barriers.
Transparency, The New Buzzword In Healthcare
Healthcare price and quality have been nearly impossible to determine. Consumers can compare the price and quality of nearly everything they purchase, except healthcare — which truly has life-and-death implications.
Today, there is a new demand for healthcare transparency driven by:
Employers’ efforts to contain escalating costs
High-performing providers distinguishing their efficiency (price) and proficiency (quality)
Consumers seeking better value
Accomplishing this requires unearthing true and independently determined value — not just “secret” negotiated insurance rates, artificial fee schedules and quality metrics of questionable relevance.
Unknowingly purchasing healthcare with large price variations is a major cause of healthcare inflation and is estimated to cost Americans with employer-sponsored insurance as much as $36 billion a year.[1] A recent study published in the Archives of Internal Medicine revealed prices ranging from a low of $1,529 to a whopping high of $182,955 for an appendectomy![2]
The mystery of healthcare pricing contributes significantly to the escalating cost of healthcare burdening consumers, employers and taxpayers. Introducing transparency to the healthcare market will shrink price and quality disparities — saving employers and employees money while they receive better quality care.
Quality is as important a factor as price, yet most consumers do not incorporate it into their healthcare decisions, largely because that information is not readily available. Online opinions of physicians and hospitals generally focus on wait times or communication skills rather than clinical qualifications and outcomes. The former makes you comfortable or uncomfortable; the latter can be costly, even deadly.
So quality does matter. In fact, more than one quarter of inpatient stays experience a medical error: 13.5 percent of Medicare/Medicaid hospital patients experienced an adverse event (a serious event, including death and disability) and another 13.5 percent experienced some other temporary harm that required intervention, according to the Department of Health and Human Services.
Transparency — The Good, The Bad And The Ugly
The Good: Consumers want full transparency and with the convergence of technology, data availability and better analytics, it’s increasingly available and affordable.
The Bad: With more companies entering the transparency market, each one defines transparency as they see it, causing confusion and making comparison difficult. Worse, some parties actively impede transparency by claiming data ownership and censoring data for their own benefit.
The Ugly: Many companies touting transparency merely slap the transparency tag on products having little or nothing to do with it. Or worse, they advertise it and then suggest a plan to develop it; in a word, vaporware. Perhaps most disturbing are companies selling their version of transparency while failing to disclose conflicts of interest.
Optimal transparency solutions should, at the least, meet criteria in four categories: unbiased, credible, meaningful and measurable. This article examines findings from a comparative summary of “transparency” companies in these four important categories.
Monocle Health Data conducted a study of seven companies claiming to provide price and/or quality transparency of some sort. We developed and applied 25 criteria in the four categories named above. We did our best to verify accuracy and graded each company against these criteria using a simple three-tiered scale:
Plus — the capability was confirmed
Unknown — capability could not be determined
Minus — the capability did not exist or there was a clear deficiency
This study includes 200 footnotes documenting the findings. If you are interested in using our proprietary transparency comparison format or want more info, you may request it through firstname.lastname@example.org. There is no charge. The following is a summary of significant findings.
Unbiased
1. Three of the seven were founded, owned or controlled by insurance companies or healthcare providers. This creates an inherent conflict of interest. What is most disturbing about these three is their lack of, well, transparency. They don’t reveal their potential conflicts. With a little research we found the conflicts, but no customer should have to work that hard — especially for a service that purports to give customers the full truth. These three companies’ conflicts were numerous and included:
Being founded by a consortium of state hospital associations;
Partially owned by a well-known hospital system;
Owned by a company marketing U.S. provider networks;
Publicly stated plans to offer its own provider network; and finally,
Owned by a global medical tourism company representing its own network.
2. Two of the seven promoted a provider network from which they receive compensation. Any time a seller claims to sell a “truth” product such as transparency, other sources of compensation from influential parties in the transaction should be divulged. In fact, for many industries it’s the law (think auto dealer rebates and real estate agencies). The conflict isn’t just the unseemly hidden compensation. In order to make networks attractive, their reps sell on access first and foremost, not quality or price. And there’s the rub. When networks include 90 percent of providers in the market, in the best case scenario, the network includes the best 50 percent and worst 40 percent of providers. And we all know about the wide disparities in healthcare price and quality. Broad network access — by definition — engenders disparities.
If a transparency company is selling access to a preferred network, it no longer has an incentive to reveal disparities (aka deficiencies) within its network. They’re paid to sell their network — not reveal provider-specific performance. And if they can get you to pay an access fee for the privilege of ignorance, well, they see that as an even more profitable sale — at your expense.
3. Three of the seven accept advertising revenues from providers as a primary source of revenue. Any transparency solution accepting ad revenues from those it’s supposed to evaluate without bias should be taken off the list of legitimate transparency solutions; they’re just one level away from “pay-to-play.”
Credible
1. Pay to play — Two companies use third-party sources that charge providers to participate in their “quality” assessment or to be more prominently displayed. And if the provider doesn’t pay the participation fee, it receives a “no score,” which translates to a failing score. You can’t buy credibility. Worse yet, much of the data used in these companies’ “transparency” tools comes from their own databases — not from independent, recognized organizations.
2. Most companies did not use independently verified, fact-based information that has been cross-referenced from nationally recognized organizations. In fact, two of them used opinion surveys as their primary transparency tool, emphasizing the patient experience while ignoring independently verified, fact-based information. Opinion surveys are nice but patients want the best care possible, not just a pleasant experience, despite the trendy (and misleading) exclamation, “It’s all about the customer experience!”
3. Healthcare price and quality transparency is not the primary business for four of these companies. Those four companies’ primary businesses range from hospital consulting to selling networks to medical tourism to selling mobile apps. If a company’s primary business isn’t transparency, you know the business has other priorities that can change quickly — unbeknownst to the customer. If you want dedicated transparency services, free of conflicts, you’re most likely to receive that from a company dedicated to it as a primary business and core competency.
4. Use of appropriate comparative data — amazingly, six of the seven transparency companies failed this test. Most incorrectly compare Medicare data to commercial populations, use generic UCR fee schedules instead of the average cash payment, use market ranges instead of provider-specific data, or use an overall quality score that isn’t disease or procedure specific. Consumers have a right to know more than just whether a hospital earned a superior overall score — they have a right to know the score for treating their specific illness, and to know where each provider ranks for treating that illness.
5. Verifiable information from multiple credible sources and not just a company’s own database. Proprietary algorithms are one thing, but referencing a company’s own database as a valid source is intellectually dishonest. If the transparency company won’t or can’t provide auditable detail to support its findings, it lacks credibility. Keep in mind that data from at least two credible organizations is needed to validate conclusions. Only one transparency company met this standard.
Meaningful
1. Only one of the seven transparency companies applied severity adjustments to appropriate data populations using at least two recognized severity-adjustment methodologies. Four of the seven didn’t demonstrate any severity-adjustment capability. Severity adjustments allow for valid comparisons on a disease-specific, provider-specific basis, so individuals can find providers who treat similar patients proficiently and efficiently.
2. Provider price rankings and quality ratings for both chronic illnesses and episodic care for hospitals and doctors on the same platform was offered by only one of the seven companies. The standard approach was to provide a price for each procedure, office visit, prescription, lab test, imaging procedure, etc. and let the user compile the total cost — if they can. With chronic illnesses comprising two-thirds of all benefit costs, it is critically important to rank and rate providers based on price and quality on a severity-adjusted basis for managing a chronic illness, including all costs for treatment, over an entire year.
3. In- and out-of-network provider comparisons were offered by only three of the seven companies (see Unbiased above). A meaningful transparency solution should provide consumers with ratings and rankings on providers who are both in- and out-of-network. Any “transparency” solution that excludes out-of-network providers isn’t transparency, it’s self-serving censorship detrimental to the consumer.
This is particularly important with high-deductible plans. I’ll give my personal experience: Pfizer sent me a Lipitor $4 copay card. I took it to CVS Pharmacy and was told that under my health plan, I would have to pay $250 for using a brand medication instead of generic — but they’d gladly reduce this by $4. I thought this surely was a mistake so I called CIGNA and was told its in-network pharmacy’s interpretation (CVS) was correct. CIGNA doesn’t tell consumers that it’s cheaper to fill prescriptions at out-of-network providers.
Excluding out-of-network providers isn’t transparency — it’s charging users for the privilege of buying high-cost services from in-network providers. Perhaps it’s time to question the value of networks — and any transparency solution that ignores out-of-network providers.
4. Robust analytic report package updated monthly. Six of the seven companies don’t offer monthly analytic reports. Another transparency requirement should be timely reports generated from robust analytics, with the ability to “drill down” into the data to see exactly why and how each provider earned its ranking and rating. You deserve to know the supporting facts — after all, this is transparency. True transparency is driven by analytics and subject matter expertise, not just a provider directory lacking supporting analytics.
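To illustrate the severity adjustment discussed in this section, a simple observed-to-expected ratio compares each provider’s costs to what its case mix would predict. The tiers, expected costs and provider numbers are all hypothetical:

```python
# Illustrative indirect severity adjustment: compare a provider's observed
# average cost to the cost expected given its case mix.
EXPECTED_COST = {"low": 1_000, "medium": 5_000, "high": 20_000}  # per severity tier

def adjusted_ratio(cases: list[tuple[str, float]]) -> float:
    """cases: (severity_tier, actual_cost) pairs. A ratio below 1.0 beats expectation."""
    observed = sum(cost for _, cost in cases)
    expected = sum(EXPECTED_COST[tier] for tier, _ in cases)
    return observed / expected

# Provider A treats sicker patients yet comes in under expectation.
provider_a = [("high", 18_000), ("high", 19_000), ("medium", 4_500)]
# Provider B treats easier cases yet exceeds expectation.
provider_b = [("low", 1_400), ("low", 1_600), ("medium", 6_500)]
print(f"A: {adjusted_ratio(provider_a):.2f}")  # below 1.0
print(f"B: {adjusted_ratio(provider_b):.2f}")  # above 1.0
```

Without the adjustment, Provider A’s higher raw costs would look worse than Provider B’s; adjusting for case mix reverses the comparison, which is exactly why unadjusted rankings mislead consumers.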
Measurable
1. Only one solution ranks by price and rates quality by quartile. Almost all of the transparency companies use a three-, four- or five-star rating system. Unfortunately, since half of the transparency companies in this study also sell networks, the rankings and ratings are largely meaningless — they only rate in-network providers, and almost all of the providers are rated as average or better. This is unrealistic. In fact, the biggest disparities between provider price and quality performance are in the bottom 50 percent. Consumers deserve to know true rankings and ratings so they can avoid the bottom 50 percent of doctors and find a doctor in the top 50 percent who best meets their needs. Ranking doctors and hospitals by quartile gives consumers a short list of the best doctors, for specific diseases, to choose from — not just an endorsement of another network.
2. Only one solution offers an on-line, interactive data cube to support users requiring sophisticated analytics. This enables a robust, flexible, user-friendly reporting package that’s population-specific to each employer and allows employers to establish dashboards and benchmarks for health plan performance and their vendors (e.g. network performance, disease/medical/case management). Five companies did not offer any reporting package.
3. Only two companies offer a savings measurement tool. One company provides an ROI worksheet using employer-specific assumptions to calculate savings. An important transparency feature is the ability to project accurate ROI and savings using employers’ own assumptions — before and after engaging the transparency company. Savings projection tools, along with the analytic reports, give the employer actionable intelligence to identify areas of improvement and measure vendor performance.
The rise of healthcare transparency is inevitable — it epitomizes the old saying, “How do you keep them down on the farm once they’ve seen the big city?” Consumers are slowly realizing that not only should they be able to see price and quality information on healthcare providers — they have the right to see accurate, meaningful information.
The healthcare industry is on the cusp of tremendous change brought about by the adoption of healthcare IT solutions. The ability to extract data which can then be shared with consumers will forever change the way healthcare quality is measured, and create new pricing metrics that extend far beyond in-network and out-of-network.
[1] Save $36 Billion in U.S. Healthcare Spending Through Price Transparency (white paper), Thomson Reuters, February 2012.
[2] Renee Y. Hsia, MD, MSc; Abbas H. Kothari, BA; Tanja Srebotnjak, PhD; Judy Maselli, MSPH. Health Care as a “Market Good”? Appendicitis as a Case Study. Arch Intern Med. 2012;172(10):818-819.