

A New Dimension in Population Health

As the healthcare landscape shifts from fee-for-service to fee-for-value models, healthcare provider systems (hospitals, clinics, independent physician associations, etc.) are under more pressure than ever to manage the health and cost outcomes of their populations. Under such models, providers not only deliver healthcare services to patients but also share in the financial risk and reward of patient costs. To become truly value-based organizations, providers today are adopting a process broadly termed “population health.”

The “population health” process usually starts with identifying key segments of a population that face elevated risk of adverse health outcomes, and therefore high cost, a step known as “risk stratification.” Once risk is stratified, appropriate intervention programs are deployed to improve access to care, target encounters with providers and continuously monitor patient risk. This leads to fewer emergency room visits, better clinical outcomes (such as properly managed blood glucose levels for diabetics) and lower financial cost.

There are many proven methods of risk stratification for assigning patients to low-, medium- or high-risk groups. For example, the Adjusted Clinical Groups (ACG) method examines patient diagnoses, and the Elder Risk Assessment (ERA) method assigns risk based on patient demographics. In today’s market, many provider systems have also developed their own proprietary stratification methods. The variables used in risk stratification can be classified into the following categories (a minimal scoring sketch follows the list):

  1. Clinical: Data from electronic medical records (EMRs), patient vitals, laboratory data, etc.
  2. Administrative: Usually patient claims that track diagnoses and procedures already conducted
  3. Socio-Economic: Patients’ social situations, family and friend support systems, language preference, community involvement, the degree of influence that out-of-pocket expenses could have on the patient’s well-being, etc.
  4. Lifestyle: Health and activity tracking devices such as Fitbit, Apple Watch, etc., which carry critical daily lifestyle data about a patient
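
To make these categories concrete, here is a minimal sketch of how inputs from each category might be combined into a simple rule-based risk tier. The field names, thresholds and weights are illustrative assumptions, not any published stratification method such as ACG or ERA.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    # Clinical: pulled from the EMR (vitals, labs)
    hba1c: float                 # %, lab value
    systolic_bp: int             # mmHg
    # Administrative: derived from claims history
    er_visits_last_year: int
    chronic_diagnoses: int
    # Socio-economic: survey or enrollment data
    lives_alone: bool
    high_out_of_pocket_burden: bool
    # Lifestyle: wearable or self-reported activity data
    avg_daily_steps: int

def risk_tier(p: PatientRecord) -> str:
    """Toy rule-based stratification; all thresholds are illustrative only."""
    score = 0
    score += 2 if p.hba1c >= 9.0 else (1 if p.hba1c >= 7.0 else 0)
    score += 1 if p.systolic_bp >= 140 else 0
    score += min(p.er_visits_last_year, 3)        # cap the claims signal at 3 points
    score += 1 if p.chronic_diagnoses >= 2 else 0
    score += 1 if p.lives_alone else 0
    score += 1 if p.high_out_of_pocket_burden else 0
    score += 1 if p.avg_daily_steps < 3000 else 0
    if score >= 6:
        return "high"
    return "medium" if score >= 3 else "low"

if __name__ == "__main__":
    patient = PatientRecord(hba1c=9.4, systolic_bp=150, er_visits_last_year=2,
                            chronic_diagnoses=3, lives_alone=True,
                            high_out_of_pocket_burden=False, avg_daily_steps=2200)
    print(risk_tier(patient))   # -> "high"
```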

While the above categories play a large role in risk stratification, a new dimension known as “spatial access” can give provider systems significant additional leverage in affecting patient outcomes. For some patients, overall risk may increase significantly because of limited geographic and transportation access to medical and wellness resources. Spatial access refers to patients’ geographic proximity and ease of mobility to resources such as hospitals, primary care physician offices, primary and specialty care clinics and nurses. The geographic arrangement of patient and provider resources can play a significant role in healthcare utilization. For example, patients living in areas with few healthcare resources (regions often termed “doctor deserts”) have been linked with higher rates of preventable ER visits, which are notorious for raising healthcare costs without necessarily improving outcomes. Supplementing existing risk stratification techniques with geographic and spatial analysis gives providers an untapped way of assessing risk and can generate better ROI in the long run.

To incorporate spatial access analysis into risk stratification, providers must:

  1. Gather patients’ geographic and social network information
    Most EMR systems already contain patient address information, but they often lack information about the patients’ social network and transportation access. The following types of data should be collected and refreshed annually:

    • Distance to closest primary care clinic, both straight-line and network-distance;
    • Distance to closest primary care provider, both straight-line and network-distance;
    • Spatial density of medical resources in a given area, especially primary care services;
    • Access to vehicle transportation, whether the patient’s own or a family member’s; and
    • Proximity to public transportation.
  2. Conduct “spatial access” risk stratification
    Using a geographic information system (GIS), assign a relative risk to each patient for each of the components listed above, then create a composite risk from all of the attributes (a minimal scoring sketch follows this list).
  3. Represent population risk stratification visually via mapping
    Examine which parts of a provider’s service area are prone to having high-risk individuals; look for clusters of high- or low-risk patients in doctor deserts. Viewing individual or aggregate risk on a map offers analysts and decision makers a comprehensive view of what types of risk are occurring in their service area.
  4. Strategize how to implement interventions based on locations of high-risk patients
    If clusters of high-risk patients exist in a certain area, begin to strategize about what kinds of interventions may alleviate the problem. Interventions may include placing new primary or specialty care clinics. Because building clinics can be challenging, increased use of mobile provider teams can be an alternative. Lastly, a combination of telemedicine and mobile medicine should be assessed to find the right mix of care for doctor deserts that lack physical clinics.
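
The sketch below illustrates, under assumed field names, weights and cutoffs, how the spatial-access components from step 1 might be scored and rolled into a composite risk per patient (step 2). A straight-line haversine distance stands in for the network-distance calculation a full GIS would provide.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Straight-line (great-circle) distance in miles between two points."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def spatial_access_risk(patient, clinics):
    """Composite spatial-access risk on a 0-1 scale; weights are illustrative."""
    # Component 1: distance to the closest primary care clinic
    nearest = min(haversine_miles(patient["lat"], patient["lon"], c["lat"], c["lon"])
                  for c in clinics)
    distance_risk = min(nearest / 20.0, 1.0)          # assume >20 miles = maximum risk
    # Component 2: density of clinics within a 10-mile radius
    nearby = sum(1 for c in clinics
                 if haversine_miles(patient["lat"], patient["lon"], c["lat"], c["lon"]) <= 10)
    density_risk = 1.0 if nearby == 0 else 1.0 / (1 + nearby)
    # Components 3-4: vehicle access and proximity to public transit
    transport_risk = 0.0 if patient["has_vehicle"] else (0.5 if patient["near_transit"] else 1.0)
    # Weighted composite (weights are assumptions, to be tuned against outcomes data)
    return 0.4 * distance_risk + 0.3 * density_risk + 0.3 * transport_risk

clinics = [{"lat": 41.88, "lon": -87.63}, {"lat": 41.95, "lon": -87.65}]
patient = {"lat": 41.70, "lon": -88.10, "has_vehicle": False, "near_transit": True}
print(round(spatial_access_risk(patient, clinics), 2))   # -> 0.85 for this example
```

The composite score can then be joined back to each patient record and mapped (step 3) to reveal clusters of poor spatial access.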

Understanding the spatial context of patient demand versus provider supply of healthcare services is an important component of accurate risk stratification for accountable care organizations. Moreover, incorporating GIS into healthcare service analyses improves decision-making capabilities for evaluating, planning and implementing strategic initiatives. By taking advantage of the analytic capabilities of GIS and spatial-access risk stratification, healthcare providers are better equipped to understand their patient populations comprehensively and to thrive in this new value-based world.


Why to Start Small on Healthcare IT

In a recent article by CIO, the volume of healthcare data at the end of 2013 was estimated at just over 150 exabytes, and it is expected to climb north of 2,300 exabytes by 2020, roughly a fifteen-fold increase in just seven years.

In response, both healthcare payers and providers are increasing their investments in technology and infrastructure, hoping to establish competitive advantages by making sense of the growing pool of data. But key actionable insights, such as how to improve the quality of patient care, increase operational efficiency or refine revenue cycle management, are difficult to find. The core tasks of data analytics (capturing, cleaning, analyzing and reporting) are complex and daunting, from both a technical and a subject matter perspective.

It’s no surprise, then, that many healthcare organizations struggle to make sense of this data. While big data technologies such as Hadoop provide the tools to collect and store it, they aren’t a magic bullet for translating heaps of information into actionable business insights. To do that, organizations must carefully plan the infrastructure, software and human capital needed to support analysis at this scale, which can quickly prove prohibitively expensive and time-consuming.

But by starting small in the new era of big data, healthcare organizations can create an agile and responsive environment for analyzing data without assuming unnecessary risk. To do so, however, they must be able to answer three questions:

  1. What narrowly tailored problem with a short-term business case can we solve?
  2. How can we reduce the complexity of the analysis without sacrificing results?
  3. Do we truly understand the data? And, if not, what can we learn from the results?

To illustrate the effectiveness of starting small, consider two examples: that of a healthcare services provider looking to prevent unnecessary hospital visits and that of a large healthcare provider looking to universally improve revenue cycle operations after a three-practice merger.

The first example concerns an organization that specializes in care coordination. This particular organization consumes a sizeable volume of claims—often more than five million a month. And to supplement core operations (e.g. patient scheduling and post-visit follow-ups), it sought to answer a question that could carry significant value to both payers and providers: How can we reduce the number of unnecessary hospital visits? By digging even further, there was a more-refined question from payer and provider clients: Can we identify patients who are at a high risk for a return visit to the ER? Last, but not least, the organization eventually asked the key question many such big data projects fail to ask: Is there a short-term business case for solving this problem?

To answer the question, the organization considered all available data. Although the entire patient population would provide a significant sample size, it could be skewed by factors such as income and payer mix. So the organization decided to narrow the search to a few geographically grouped facilities and use this sample as a proof of concept. This not only limited the volume of data analyzed but also reduced the complexity of the analysis, because it did not require more advanced concepts such as control groups and population segmentation. The approach also allowed subject matter experts from the individual facilities to weigh in and provide guidance on the analysis where necessary.

The results of the analysis were simple and actionable. The service provider found that particular discharge diagnoses had comparatively high rates of return visits to the ER, often because patients did not closely follow discharge instructions. By sharing this information with payers and providers, the service provider was able to improve the clarity of discharge instructions and drive post-discharge follow-ups, decreasing the total number of unnecessary readmissions. The cost of those unnecessary admissions was significant enough to give the small data project further momentum, allowing it to expand to other regions.
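
As a rough illustration of this kind of analysis, the sketch below computes 30-day ER return rates by discharge diagnosis from a small, made-up claims extract. The column names, the 30-day window and the pandas-based approach are assumptions about what such data might look like, not the organization’s actual method.

```python
import pandas as pd

# Hypothetical claims extract: one row per ER visit at the pilot facilities
claims = pd.DataFrame({
    "patient_id":   [1, 1, 2, 3, 3, 4, 5],
    "visit_date":   pd.to_datetime(["2015-01-05", "2015-01-20", "2015-02-01",
                                    "2015-02-10", "2015-03-25", "2015-03-02",
                                    "2015-03-15"]),
    "discharge_dx": ["CHF", "CHF", "asthma", "CHF", "CHF", "cellulitis", "asthma"],
})

claims = claims.sort_values(["patient_id", "visit_date"])
# Days until the same patient's next ER visit (NaN if there is none)
claims["days_to_next"] = (
    claims.groupby("patient_id")["visit_date"].shift(-1) - claims["visit_date"]
).dt.days
claims["returned_30d"] = claims["days_to_next"].le(30)

# Return-visit rate by discharge diagnosis, to flag diagnoses worth targeting
# with clearer discharge instructions and post-discharge follow-up calls
rates = (claims.groupby("discharge_dx")["returned_30d"]
               .agg(visits="size", return_rate="mean")
               .sort_values("return_rate", ascending=False))
print(rates)
```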

In the second example (a large, regional healthcare services provider looking to improve revenue cycle operations), a similarly tailored question was posed: How can we improve revenue cycle efficiency by reducing penalties related to patient overpayments? At first glance, this seems a relatively small target for traditional revenue cycle analysis. Broader questions (Who owes us money now? Which payer pays the best rates for procedure XYZ?) could provide a larger payoff, but they would inevitably complicate the task of standardizing and streamlining data and definitions across all three practice groups.

However, the analysis provided a jumping-off point that improved understanding of the data at a granular level. Not only was the regional provider able to create reports that identify delayed payments and prioritize accounts by the “age” of the delayed payment, it was also able to better understand the underlying causes of those delays and adjust the billing process to ensure timely payments. Timely payments, in turn, significantly improved the organization’s working capital position, proving a short-term and substantial business case. As a result, the small data project was expanded to include more complex revenue cycle management problems related to underpayments and specialty practice claims.
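
A minimal sketch of that kind of reporting appears below: aging open account balances into buckets so delayed payments can be prioritized across the merged practice groups. The schema, dates and bucket boundaries are hypothetical and only illustrate the idea.

```python
import pandas as pd

as_of = pd.Timestamp("2015-06-30")

# Hypothetical open balances across the three merged practice groups
accounts = pd.DataFrame({
    "account_id":  ["A-101", "A-102", "B-201", "B-202", "C-301"],
    "practice":    ["A", "A", "B", "B", "C"],
    "balance":     [1200.00, 350.00, 4800.00, 90.00, 2300.00],
    "billed_date": pd.to_datetime(["2015-06-15", "2015-04-02", "2015-02-20",
                                   "2015-05-30", "2015-03-10"]),
})

# Age each balance and bucket it for prioritization
accounts["age_days"] = (as_of - accounts["billed_date"]).dt.days
accounts["age_bucket"] = pd.cut(accounts["age_days"],
                                bins=[0, 30, 60, 90, 10_000],
                                labels=["0-30", "31-60", "61-90", "90+"])

# Oldest, largest balances first: these accounts get follow-up attention first
worklist = accounts.sort_values(["age_bucket", "balance"], ascending=[False, False])
print(worklist[["account_id", "practice", "balance", "age_bucket"]])

# Summary of outstanding dollars by bucket and practice group
print(accounts.groupby(["age_bucket", "practice"], observed=True)["balance"].sum())
```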

In both examples, the organizations deliberately started small—both in terms of the amount of data and the complexity of their approach. And by showing restraint and limiting the scope of their analyses, they were able to define a clear business case, derive actionable insights and gain momentum to tackle larger challenges faced by the organization.


To Go Big (Data), Try Starting Small

Just about every organization in every industry is rolling in data—and that means an abundance of opportunities to use that data to transform operations, improve performance and compete more effectively.

“Big data” has caught the attention of many—and perhaps nowhere more than in the healthcare industry, which has volumes of fragmented data ready to be converted into more efficient operations, bending of the “cost curve” and better clinical outcomes.

But, despite the big opportunities, for most healthcare organizations, big data thus far has been more of a big dilemma: What is it? And how exactly should we “do” it?

Not surprisingly, we’ve talked to many healthcare organizations that recognize a compelling opportunity, want to do something and have even budgeted accordingly. But they can’t seem to take the first step forward.

Why is it so hard to move forward?

First, most organizations lack a clear vision and direction around big data. There are several fundamental questions that healthcare firms must ask themselves, one being whether they consider data a core asset of the organization. If so, what is the expected value of that asset, and how much will the company invest annually in maintaining and refining it? Often, an organization may believe that data is one of its core assets, yet its actions and investments do not support that belief. So first and foremost, an organization must decide whether it is a “data company.”

Second is the matter of getting everyone on the same page. Big data projects are complex efforts that require involvement from various parties across an organization. Data necessary for analysis resides in various systems owned and maintained by disparate operating divisions within the organization. Moreover, the data is often not in the form required to draw insight and take action. It has to be accessed and then “cleansed”—and that requires cooperation from different people from different departments. Likely, that requires them to do something that is not part of their day jobs—without seeing any tangible benefit from contributing to the project until much later. The “what’s in it for me” factor is practically nil for most such departments.

Finally, perception can also be an issue. Big data projects are often lumped in with business intelligence and data warehouse projects, and most organizations, especially in healthcare, have seen at least one such project fail. People understand the inherent value but remain skeptical and insufficiently invested to make such a transformational initiative successful. Hence, many are reticent to commit too deeply until it’s clear the organization is actually deriving tangible benefits from the data warehouse.

A more manageable approach

In our experience, healthcare organizations make more progress in tapping their data by starting with “small data,” that is, well-defined projects of a focused scope. Starting with a small scope and tackling a specific opportunity can be an effective way to generate quick results, demonstrate the potential of an advanced analytics solution and win support for broader efforts down the road.

One area particularly ripe for opportunity is population health. In a perfect world with a perfect data warehouse, there are countless disease conditions to identify, stratify and intervene on to improve clinical outcomes. But it might take years to build and shape that perfect data warehouse and find the right predictive solution for each disease condition and comorbidity. A small-data project could demonstrate tangible results, and do so quickly.

A small-data approach focuses on one condition, for example behavioral health, an emerging area of concern and attention. Using a defined set of data, it allows you to study sources of cost and derive insights from which you can design and target a specific intervention for high-risk populations. Then, by measuring the return on the intervention program, you can demonstrate the value of the small data solution, for example, savings of several million dollars over a one-year period. That, in turn, can help build a business case for taking action on a larger scale and gaining the support of other internal departments.
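
As a back-of-the-envelope illustration of that measurement step, the sketch below compares cost per member in a hypothetical intervention cohort against a comparison group; every figure, and the simple difference-in-means approach itself, is an assumption made for illustration.

```python
# Hypothetical one-year figures for a behavioral health intervention pilot
intervention_members = 2_000
cost_per_member_intervention = 9_400.0   # USD, post-intervention year
cost_per_member_comparison = 11_050.0    # USD, matched comparison group
program_cost = 1_200_000.0               # USD, cost of running the intervention

gross_savings = (cost_per_member_comparison - cost_per_member_intervention) * intervention_members
net_savings = gross_savings - program_cost
roi = net_savings / program_cost

print(f"Gross savings: ${gross_savings:,.0f}")   # $3,300,000
print(f"Net savings:   ${net_savings:,.0f}")     # $2,100,000
print(f"ROI:           {roi:.1f}x")              # 1.8x
```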

While this approach helps build internal credibility, which addresses one of the biggest roadblocks to big data, it does have limitations. Initiating multiple independent small-data projects risks creating “siloed” efforts with little consistency and little potential for fueling the organization’s ultimate journey toward big data. Such risks can be mitigated with an intelligent and adaptive data architecture and a periodic evaluation of the portfolio of small-data solutions.

Building the “sandbox” for small-data projects

To get started, you need two things: 1) a potential opportunity to test and 2) tools and an environment that enable fast analysis and experimentation.

It is important to understand quickly whether a potential solution has a promising business case, so that you can move quickly to implement it—or move on to something else without wasting further investment.

If a business case exists, proceed to find a solution. Waiting to procure servers for analysis, or for permission to use an existing data warehouse, will cost valuable time and money. That leaves two primary alternatives for supporting data analysis: standing up cloud-hosted offerings of open-source tools such as Hadoop with in-house expertise, or partnering with an organization that provides a turnkey solution for establishing analytics capabilities within a couple of days.

You’ll then need a “sandbox” in which to “play” with those tools. The sandbox is an experimentation environment, established outside of the organization’s production systems and operations, that facilitates analysis of an opportunity and testing of potential intervention solutions. In addition to the analysis tools, it also requires people with the skills and availability to interpret the analysis, design solutions (e.g., a behavioral health intervention targeted to a specific group), implement them and measure the results.

Then building solutions

When building a small-data initiative, it is a good idea to keep a running list of potential business opportunities that may be ripe for cost reduction or other benefits. Continuing our population health example, these might range from conditions as simple as the common flu and the reduced employee productivity it causes, to preventing pre-diabetics from becoming diabetic, to behavioral health. In particular, look at areas where there is no competing intervention solution already in the marketplace and where you believe you can be a unique solution provider.

It is important to establish clear “success criteria” up front to guide quick “go” or “no-go” decisions about potential projects. These should not be specific to a particular small-data opportunity but rather generic enough to apply across topics, since they become the principles guiding small data as a journey to broader analytics initiatives. Examples of success criteria might include:

– Cost-reduction goals
– Degree to which the initiative changes clinical outcomes
– Ease of access to data
– Ease of cleansing data so that it is in a form needed for analysis

For example, you might have easy access to data, but it requires a lot of effort to “clean” it for analysis—so it isn’t actually easy to use.

Another important criterion is the presence of operational know-how for turning insight into action that will create outcomes. For example, if you don’t have behavioral health specialists who can call on high-risk patients and deliver the solution (or a partner that can provide those services), then there is little point in analyzing the issue in the first place. There must be a tight link between data, insight and application.

Finally, you will need to consider the effort required to maintain a specific small-data solution over time. Consider, for instance, a new predictive model that helps identify high-risk behavioral health patients or high-risk pregnancies: will it require substantial rework each year to adjust the risk model as more data becomes available? If so, that affects the solution’s ease of use. Small-data solutions need to be dynamic and able to adjust easily to market needs.
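
One lightweight way to apply criteria like these is a simple go/no-go screen evaluated for each candidate project. The sketch below is hypothetical; the specific criteria names, thresholds and yes/no simplifications are assumptions, not a prescribed framework.

```python
# Hypothetical go/no-go screen for one candidate small-data project
candidate = {
    "expected_annual_savings_usd": 500_000,   # cost-reduction goal
    "improves_clinical_outcomes": True,       # expected impact on outcomes (simplified)
    "data_access_effort_weeks": 2,            # ease of access to data
    "data_cleansing_effort_weeks": 6,         # ease of cleansing data for analysis
    "operational_owner_identified": True,     # know-how to turn insight into action
    "annual_model_rework_weeks": 3,           # ongoing maintenance burden
}

def go_no_go(c: dict) -> str:
    """Apply illustrative thresholds; any failed check means 'no-go'."""
    checks = [
        c["expected_annual_savings_usd"] >= 250_000,
        c["improves_clinical_outcomes"],
        c["data_access_effort_weeks"] <= 4,
        c["data_cleansing_effort_weeks"] <= 8,
        c["operational_owner_identified"],
        c["annual_model_rework_weeks"] <= 6,
    ]
    return "go" if all(checks) else "no-go"

print(go_no_go(candidate))   # -> "go"
```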

Just do it

Quick wins can accelerate progress toward realizing the benefits of big data. But realizing those quick wins requires the right focus (“small data”) and the right environment for making rapid decisions about when to move forward with a solution and when to abandon it and move on. If, in a month or two, you haven’t produced a solution that is translating into tangible benefits, it is time to get out and try something else.

A small-data approach requires some care and good governance, but it can be a much more effective way to make progress toward the end goal of leveraging big data for enterprise advantage.

This article first appeared at Becker’s Hospital Review.