
Why Start Small on Healthcare IT

In a recent article by CIO, the volume of healthcare data at the end of 2013 was estimated at just over 150 exabytes, and it is expected to climb north of 2,300 exabytes by 2020—growth of nearly 1,500% in just seven years.
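A quick back-of-the-envelope check of that growth figure, using the two endpoints cited above:

```python
# Back-of-the-envelope check of the projected healthcare data growth.
start_eb = 150     # estimated volume at end of 2013, in exabytes
end_eb = 2_300     # projected volume for 2020, in exabytes
years = 7

total_growth = (end_eb - start_eb) / start_eb    # total percentage growth
cagr = (end_eb / start_eb) ** (1 / years) - 1    # compound annual growth rate

print(f"total growth: {total_growth:.0%}")   # ~1,433%, i.e. nearly 1,500%
print(f"CAGR: {cagr:.0%}")                   # ~48% per year
```

In other words, the projection implies the volume of healthcare data nearly doubling every year and a half.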

In response, both healthcare payers and providers are increasing their investments in technology and infrastructure to establish competitive advantages by making sense of the growing pool of data. But key actionable insights—such as how to improve the quality of patient care, increase operational efficiency or refine revenue cycle management—are difficult to find. Core challenges surrounding data analytics (capturing, cleaning, analyzing and reporting) are complex and daunting tasks, both from a technical and subject matter perspective.

It’s no surprise, then, that many healthcare organizations struggle to make sense of this data. While big data technologies such as Hadoop provide the tools to collect and store it, they aren’t a magic bullet for translating these heaps of information into actionable business insights. To do so, organizations must carefully plan infrastructure, software and human capital to support analysis at this scale, which can quickly prove prohibitively expensive and time-consuming.

But by starting small in the new era of big data, healthcare organizations can create an agile and responsive environment for analyzing data—without assuming unnecessary risk. To do so, however, they must be able to answer three questions:

  1. What narrowly tailored problem has a short-term business case we can solve?
  2. How can we reduce the complexity of the analysis without sacrificing results?
  3. Do we truly understand the data? And, if not, what can we learn from the results?

To illustrate the effectiveness of starting small, consider two examples: that of a healthcare services provider looking to prevent unnecessary hospital visits and that of a large healthcare provider looking to universally improve revenue cycle operations after a three-practice merger.

The first example concerns an organization that specializes in care coordination. This particular organization consumes a sizeable volume of claims—often more than five million a month. And to supplement core operations (e.g. patient scheduling and post-visit follow-ups), it sought to answer a question that could carry significant value to both payers and providers: How can we reduce the number of unnecessary hospital visits? By digging even further, there was a more-refined question from payer and provider clients: Can we identify patients who are at a high risk for a return visit to the ER? Last, but not least, the organization eventually asked the key question many such big data projects fail to ask: Is there a short-term business case for solving this problem?

To answer the question, the organization considered all available data. Although the entire patient population would provide a significant sample size, it could potentially be skewed by various factors relating to income, payer mix, etc. So the organization decided to narrow the search to a few geographically grouped facilities and use this sample as a proof of concept. This would not only limit the volume of data analyzed but would also reduce the complexity of the analysis, because it would not require more advanced concepts such as control groups and population segmentation. The approach would also allow, if necessary, subject matter experts from the individual facilities to weigh in and provide guidance on the analysis.

The results returned from the analysis were simple and actionable. The service provider found that particular discharge diagnoses have comparatively high rates of return visits to the ER, often related to patients not closely following discharge instructions. And by providing the payers and providers this information, the service provider was able to improve the clarity of discharge instructions and drive post-discharge follow-ups to decrease the total number of unnecessary readmissions. The cost of unnecessary admissions was significant enough to grant further momentum to the small data project, allowing the project to expand to other regions.
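An analysis along these lines can be kept deliberately simple. The sketch below, using hypothetical claims fields and a made-up sample (neither is from the article), computes 30-day ER return rates by discharge diagnosis and flags diagnoses running above the population average:

```python
# Sketch: flag discharge diagnoses with unusually high ER return rates.
# Field names, the 30-day window and the sample data are illustrative.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4, 5, 5],
    "diagnosis":  ["CHF", "CHF", "COPD", "CHF", "CHF", "COPD", "asthma", "asthma"],
    "returned_within_30d": [1, 0, 0, 1, 1, 0, 0, 1],
})

# Return rate per discharge diagnosis, highest first
rates = (claims.groupby("diagnosis")["returned_within_30d"]
               .mean()
               .sort_values(ascending=False))

# Diagnoses whose return rate exceeds the overall average are the
# candidates for clearer discharge instructions and follow-up calls.
high_risk = rates[rates > claims["returned_within_30d"].mean()]
print(high_risk)
```

Nothing here requires control groups or segmentation models; a group-by and a comparison against the mean are enough to surface the diagnoses worth acting on.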

In the second example (a large, regional healthcare services provider looking to improve revenue cycle operations), a similarly tailored question was posed: How can we improve revenue cycle efficiency by reducing penalties related to patient overpayments? At first glance, this seems to be a relatively small insight for traditional revenue cycle analyses. Questions that could potentially have a larger impact (Who owes me money now? Which payer pays the best rates for procedure XYZ?) could provide a larger payoff, but they would inevitably complicate the task of standardizing and streamlining data and definitions across all three practice groups.

However, the analysis provided a jumping-off point that improved understanding of the data at a granular level. Not only was this regional provider able to create reports to identify delayed payments and prioritize accounts by the “age” of the delayed payment, it was also able to better understand the underlying causes of the delays. It was then able to adjust the billing process to ensure timely payments. Timely payments, in turn, significantly eased the organization’s working capital requirements, proving a short-term and significant business case. As a result, the small data project was expanded to include more complex revenue cycle management problems related to underpayments and claims from specialty practices.
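The aging reports described above amount to bucketing open balances by how long they have been outstanding. A minimal sketch, with made-up accounts and the conventional 30/60/90-day buckets (the article does not specify the provider's actual bucket edges):

```python
# Sketch: bucket outstanding accounts by payment age, as in an
# accounts-receivable aging report. Data and bucket edges are illustrative.
from datetime import date

accounts = [
    {"account": "A-100", "billed": date(2015, 10, 1),  "balance": 1200.00},
    {"account": "A-101", "billed": date(2015, 12, 15), "balance": 450.00},
    {"account": "A-102", "billed": date(2016, 1, 20),  "balance": 975.00},
]

def age_bucket(billed, as_of):
    """Classify a balance by days outstanding."""
    days = (as_of - billed).days
    if days <= 30:
        return "0-30"
    if days <= 60:
        return "31-60"
    if days <= 90:
        return "61-90"
    return "90+"

as_of = date(2016, 1, 25)
aging = {}
for acct in accounts:
    bucket = age_bucket(acct["billed"], as_of)
    aging[bucket] = aging.get(bucket, 0.0) + acct["balance"]

# Oldest buckets first, so collectors can prioritize them
for bucket in ("90+", "61-90", "31-60", "0-30"):
    print(bucket, aging.get(bucket, 0.0))
```

The value of the report is in the prioritization: the oldest buckets are worked first, and recurring causes of balances landing there point to the billing-process fixes the provider made.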

In both examples, the organizations deliberately started small—both in terms of the amount of data and the complexity of their approach. And by showing restraint and limiting the scope of their analyses, they were able to define a clear business case, derive actionable insights and gain momentum to tackle larger challenges faced by the organization.

ERM Blurs Lines for Nonprofits

I will never forget a frustrated Peter Drucker lamenting years ago, in his heavy German accent, the use of the term “nonprofit” when “the fact remains that profitability is vital to our sustainability.” To this point, I’ve been so impressed with the tools of technology (and equally with recent management appointments in nonprofits) that I’ve been encouraging nonprofits to raise their game relative to risk retention. This can be achieved through more sophisticated forms of reinsuring their liabilities and operations — a captive, or a risk retention group — so that nonprofits’ efforts are rewarded through an ROI on their capital (i.e., a surplus), generating a “profit center” for mission protection.

I believe the time has come for a more holistic view of how we manage, whether it’s our company, our organization or our household. (Interestingly, the word “economics” comes from two Greek words — oikos and nomia — whose earliest origins relate to taking stock of the affairs of the home.) I believe this blurring is manifest in much of what I am witnessing:

  • For-profit executives leaving a life of “success,” corporately, for a life of “significance” in a mission-based organization (Bob Buford’s theory) at a mid-point in their lives;
  • For-profit executives sitting on non-profit boards advocating for more enterprise risk management (ERM), a more sophisticated form of risk management; and
  • A tidal wave of interest among emerging generations in the nonprofit sector — for careers, volunteerism and engagement.

Another concept of this blurring relates to the need for nonprofits to see resources, talent, contribution and solutions in their nonprofit, community-based neighbors. In fact, it appears that risk management is no longer an “organization issue,” per se; you can have the best-laid plans, but if you aren’t aligned with your community, you risk vulnerability.

Additionally, so many recent security breaches point to the need for community-based solutions that are global, not just U.S.-centric.

Below is a diagram I raised with a faith-based nonprofit to demonstrate how its approach to risk might, more effectively, be to find greater impact through alignments within the local community.

Engaging your community

Perhaps now is the time for “nonprofits” to change the semantics of their sector, adopting the broader concept of the community-based organization (CBO) as an alternative label.

Perhaps — as CBOs — we will more effectively live out our missions, starting with a positive, inclusive approach rather than a negative (“non”) dynamic. And, no doubt, we’ll better manage risk through these alignments.

Ultimately, we’re better off with collaboration!

Rethinking the Claims Value Chain

As a claims advisor, I specialize in helping to optimize property casualty claims management operations, so I spend a lot of time thinking about claims business processes, activities, dependencies and the value chains that are commonly used to structure and refine them. Lately, I have been focusing on the claims management supply chain — the vendors who provide products and perform services that are critical inputs into the claims management and fulfillment process.

In a traditional manufacturing model, the supply chain and the value chain are typically separate: the supply chain provides raw materials, and the value chain connects activities that transform the raw materials into something valuable to customers. In a claims service delivery model, the value chain and the supply chain increasingly overlap, to the point where it is becoming hard to argue that any component of the claims value chain couldn’t be handled directly by the supply chain network.

This creates an intriguing possibility for an insurance company — an alternative to bricks and mortar and company cars and salaries: a virtual claims operation! Of course, there are third-party administrators (TPAs) that are large and well-developed enough to offer complete, end-to-end claims management and fulfillment services to an insurance company through an outsourced arrangement. That would be the one-stop-shopping solution: hiring a TPA to replace your claims operation. But try to envision an end-to-end process in which you invite vendors/partners/service providers to compete to handle each component in your claims value chain (including processing handoffs to each other). You select the best, negotiate attractive rates, lock in service guarantees and manage the whole process simply by monitoring a performance dashboard that displays real-time data on effectiveness, efficiency, data quality, regulatory compliance and customer satisfaction.

You would need a system to integrate the inputs from the different suppliers to feed the dashboard, and you would also need to make certain the suppliers all worked together well enough to provide the ultimate customer with a seamless, pain free experience, but you are probably already doing some of that if you use vendors. You would still want to do quality and compliance and leakage audits, of course, but you could always hire a different vendor to do that for you or keep a small team to do it yourself.
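The exception-based management described above is straightforward to model: each supplier reports a handful of metrics, each metric is checked against a service-level threshold, and only breaches demand intervention. A minimal sketch, in which the vendor names, metrics and SLA thresholds are all hypothetical:

```python
# Sketch: roll per-vendor service metrics up into a dashboard feed and
# surface only SLA exceptions. Vendors, metrics and thresholds are made up.
from dataclasses import dataclass

@dataclass
class VendorMetrics:
    vendor: str
    avg_cycle_days: float    # average days to complete the assigned step
    compliance_rate: float   # share of files passing compliance audit
    satisfaction: float      # customer satisfaction score, 0-10

# Illustrative service-level thresholds
SLA = {"avg_cycle_days": 5.0, "compliance_rate": 0.98, "satisfaction": 8.0}

def exceptions(m: VendorMetrics) -> list[str]:
    """Return SLA breaches needing intervention; empty list means hands-off."""
    flags = []
    if m.avg_cycle_days > SLA["avg_cycle_days"]:
        flags.append("cycle time")
    if m.compliance_rate < SLA["compliance_rate"]:
        flags.append("compliance")
    if m.satisfaction < SLA["satisfaction"]:
        flags.append("satisfaction")
    return flags

feed = [
    VendorMetrics("FNOL intake co.", 3.2, 0.99, 8.6),
    VendorMetrics("Appraisal network", 6.1, 0.97, 7.4),
]
for m in feed:
    flags = exceptions(m)
    print(m.vendor, "OK" if not flags else f"exceptions: {flags}")
```

The hard part in practice is not the dashboard logic but normalizing each supplier's data feed so the metrics are comparable — the integration work the paragraph above alludes to.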

Your unallocated loss adjustment expenses (ULAE) would become variable, tied directly to claim volume, and your main operating challenge would be to manage your supply/value chain to produce the most desirable cost and experience outcomes. Improved cycle time, efficiency, effectiveness, data accuracy and the quality of the customer experience would be your value propositions. You could even monitor the dashboard from your beach house or boat — no more staff meetings, performance reviews, training sessions — and intervene only when needed in response to pre-defined operational exceptions.

Sounds like a no-brainer. Insurance companies have been outsourcing portions of their value chain to vendors for years, so why haven’t they made their claims operations virtual?

If you are running an insurance company claims operation, you probably know why. Many (probably most) claims executives are proud of and comfortable with their claims operations just the way they are. They believe they are performing their value chain processes more effectively than anyone else could, or that their processes are “core” (so critical or so closely related to their value proposition they cannot be performed by anyone else) and thus sacrosanct, or that they have already achieved an optimal balance between in-house and outsourced services so they don’t need to push it any further. Others don’t like the loss of control associated with outsourcing, or they don’t want to consider disruptive change. Still others think it might be worth exploring, but they don’t believe they can make a successful business case for the investment in systems and change costs. Unfortunately, this may help explain why claims executives are often accused of being stubbornly change averse and overly comfortable with the status quo, but I think it is a bit more complicated than that — it all begins with the figurative “goggles” we use to self-evaluate claims operations.

If you are running a claims operation, you have an entire collection of evaluation goggles — the more claims experience you have, the larger your collection. When you have your “experience” goggles on, you compare your operation to others you have read about, or seen in prior jobs, or at competitors, to make sure your activities and results benchmark well and that you are staying up to date with best practices. At least once a year, someone outside of claims probably demands that you put your “budget” goggles on to look for opportunities to reduce ULAE costs, or legal costs, or fines and penalties, or whatever. You probably look through your “customer satisfaction” goggles quite a bit, particularly when complaints are up, or you are getting bad press because of your CAT response, or a satisfaction survey has come out and you don’t look good. Your “stakeholder” goggles help you assess how successful you have been at identifying those who have a vested interest in how well you perform, determining what it is they need from you to succeed, and delivering it. You use your “legal and regulatory compliance” goggles to identify problems before they turn into fines, bad publicity or litigation, much as you use your “no surprises” goggles to continually scan for operational breakdowns that might cause reputational or financial pain, finger pointing and second guessing. Then there are the goggles for “management” — litigation, disability, medical, vendor — and for “fraud mitigation” and “recovery” and “employee engagement.” Let’s not forget the “efficiency” goggles, which help you assess unit costs and productivity, and the “effectiveness” and “quality control” goggles, which permit you to see whether your processes are producing intended and expected results. And of course, your “loss cost management” goggles give you a good read on how well you are managing all three components of your loss cost triangle, i.e., whether you are deploying and incurring the most effective combination of allocated and unallocated expenses to produce the most appropriate level of loss payments.

Are all those goggles necessary? You bet. Claims management involves complex processes and inputs and a convoluted web of variables and dependencies and contingencies. Most claims executives would probably agree it makes sense to regularly evaluate a claims operation from many different angles to get a good read on what’s working well, what isn’t and where there is opportunity for improvement. The multiple perspectives provided by your goggles help you triangulate causes, understand dependencies and impacts and intelligently balance operations to produce the best outcomes. So even if you do have a strong bias that your organization design is world-class, your people are the best and all processes and outcomes are optimal, the evaluation should give you plenty of evidence-based information with which to test that bias and identify enhancement opportunities — as long as you keep an open mind.

No matter what you do, however, there will always be others in your organization who enjoy evaluating your claims operation, and they usually aren’t encumbered by such an extensive collection of goggles. They may have only one set that is tuned to budget, or customer experience, or compliance, or they may be under the influence of consultants whose expensive goggles are tuned to detect opportunities for large-scale disruptive/destructive process innovation or transformation in your operation. On the basis of that narrow view, they just might conclude that things need to change, that new operating models need to be explored. Whether you agree or disagree, your evidence-based information should be of some value in framing and joining the debate.

Will we ever see virtual claims operations? Sure. There are many specialized claims service providers operating in the marketplace right now that can perform claims value chain processes faster, cheaper and better than many insurance companies can perform them. The technology exists to integrate multiple provider data inputs and create a performance dashboard. And there are a few large insurance company claims organizations pursuing this angle vigorously right now. I fully expect the companies that rethink and retool their claims value chains to take full advantage of integration of supply chain capabilities and begin to generate improved performance metrics and claim outcomes, ultimately creating competitive advantage for themselves. Does that mean it is time for you to rethink your claims value chain? I think the best way to find out is to put on your “innovation” goggles and take a look!

$1.2 Trillion Disruption in Personal Insurance

Most of us don't think much about insurance. That's by design, of course. Insurance is supposed to be a safety net that affords us the leisure of not thinking about it. Unless, of course, we have to. That generally happens about once a year when we're reacquainted with our premium. Ouch. According to statisticians, most of us will also have to think about our insurance once every seven to eight years, when we'll encounter a loss of some sort. Another ouch.

My insurance is pretty confusing. I pay for coverage of my house – a fairly precise calculation based on its quality, size, age, materials, etc. I get a guarantee that, if I keep paying my premium, my home will be covered for its replacement cost. That's pretty reassuring. But then it gets a little weird. I get a “blanket” (insurance-speak is very comforting), which is really a formula that assumes all the stuff I own is worth, um, somewhere around 50% to 70% of the value of my home. Huh? Maybe there's a bit of science to this, but surely there's a lot of guesswork…and, according to research, about 39% of the time the formula is just wrong. (As one insurance CEO recently confessed to me, most folks are probably 50% underinsured.) The complications go on: If I own something really valuable, some bauble or collectible, well, that has to go on a list of things that are really valuable, and those things get their own coverage. Then, so my stuff continues to be well-protected, I have to re-estimate the value of those things from time to time, or employ an appraiser. What's more, if I buy something or donate something I own, or if any of my things go up or down in value for whatever reason, my insurance doesn't change — because my provider doesn't know about these changes. And, if you've ever had to file a claim, the process starts with the assumption of fraud, with the burden of proof borne by the policyholder, because most people don't have an accurate accounting of their possessions and their value. Still another ouch.
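To see how that blanket formula can go wrong, consider a quick sketch. The 50% to 70% range comes from the paragraph above; the home value and the inventory total are made-up illustrations:

```python
# Sketch: how a "blanket" contents limit compares with actual belongings.
# The 50-70% range is from the article; the dollar figures are invented.
home_replacement_cost = 400_000

# Typical blanket formula: contents covered at a fixed share of dwelling value
blanket_low  = 0.50 * home_replacement_cost   # ~$200,000
blanket_high = 0.70 * home_replacement_cost   # ~$280,000

actual_contents_value = 320_000  # what a real item-by-item inventory might total

shortfall = actual_contents_value - blanket_high
print(f"blanket range: ${blanket_low:,.0f}-${blanket_high:,.0f}")
print(f"potential underinsurance: ${shortfall:,.0f}")
```

Even at the generous end of the formula, this hypothetical homeowner is tens of thousands of dollars underinsured — and the insurer has no way of knowing, because the formula never looks at what the policyholder actually owns.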

So while I'm not supposed to be thinking about insurance, maybe I should be paying closer attention.  

Change is coming like a freight train, and its impact has the potential to shake one of the world's largest industries to its core. For a little perspective: The property and casualty insurance industry collected some $1.2 trillion (!) in premiums in 2012 (or about twice the annual GDP of Switzerland).

At the core of the P/C insurance enterprise is (and I know I am simplifying here) the insurance-to-value ratio, which estimates whether there's enough capital reserved to insure the value of items insured —  if values go up, there'd better be enough money around in case of a loss. All good, right? Except that for as long as actuaries have been actuarying, the value side of that ratio has been a guess — especially for personal property (the stuff I own other than my home). So, if I forget to tell my insurer about something I bought, or if I no longer own that painting, watch, collectible, antique; or if the precious metal in my jewelry has increased…then what? Am I paying too much, or am I underinsured for the current value of the things I own? Of course, these massive companies make calculated allowances for the opacity…but these allowances also cost us policyholders indirectly in increased premiums, and the inefficiency costs the insurer in potential returns on capital. 

The coming changes can be summarized in terms of three trends. First is the expectation of the connected generations, now entering their most acquisitive years and set to inherit $30 trillion of personal wealth. Second is the connected availability of current data about the value of things. Third is the emergence of the personal digital locker for things.

Data, data! I want my data! — the expectation of the connected generations.

If they're anything, the connected generations are data-savvy and mobile. If you’ve shopped for just about anything with a Millennial recently, you’re familiar with their reliance on real-time data about products, local deals, on-line values and even local inventories. (I was with one of Google's brains, and he showed me how retailers are now sending Google local inventory data so that it can post the availability and price of a searched-for item at a local store.) Smartphone usage is nearly 90% for Gen Xers and Millennials, and data is mother's milk to the children of the connected generations, who are being weaned on a diet rich with direct (disintermediated) access to comparisons, descriptions, opinions, crowd-sourced knowledge and even current values. The emerging generations rarely rely on the intermediation of experts (unless validated on a popular blog with a mass following) and are not likely to be satisfied with an indirect relationship with those affecting their financial health. Smartphones in hand, depending on data in the cloud, they will demand and receive visibility into the data shaping all their risk decisions.

And here's where the insurance revolution will begin: A connected generation that is apt to disintermediate and has access to real-time info on just about any thing will demand that they insure only what they own (bye bye, blanket); that their insurance should track to real values, not formulaic guesses; and that they have the ability to reprice more frequently than once a year. 

The time is coming for variable-rate insurance that reflects changes in the values of items insured and is offered on a real-time basis for any item that the owner deems valuable. 

The price is wrong — the real-time valuation of everything.

Over the past few years, several data services have sprung up whose charters are similar: something like developing the world's largest collection of data about products — their descriptions, suggested retail price, current resale value, user manuals, photos and the like. No one has yet dominated, but it's early yet, and someone (or probably a few) will conquer the objective. Similarly, there are a few excellent companies that are collecting and indexing for speedy retrieval the information about every collectible that has been sold at auction for the past 15 years. I know something of these endeavors because our core product relies on the availability and accuracy of these data providers to collect the values (and other attributes) of the items people are putting into their Trovs (our moniker for the personal cloud for things). It is only a matter of time before we will be able to accurately assign a fair market value to most every thing — in real-time and without human intervention. This real-time value transparency will transform the way that insurance is priced, and how financial institutions view total wealth.

My stuff in the clouds — the automated collection and secure storage for the information about my things.

Within 12 to 24 months, connected consumers will embrace applications that will automatically (as much as possible) collect the information about all they own and store it in a secure, personal cloud-hosted locker. These “personal data lockers” will proliferate because of their convenience, because of real financial incentives from insurers and other service providers and because data-equipped consumers will have powerful new tools with which to drive bargains based on the data about everything they own. These new tools will pour fuel on the re-invention of insurance because all the information needed to provide new types of insurance products will be in the personal cloud-hosted data locker.

Progressively (pun noted, not intended) engineered insurance products that account for the connected generations' expectation of access to data, the abundance of data about products and collectibles and the active collection and accurate valuation of the things people own may turn the 300-year-old insurance industry on its head. Doubtless, the disruption will leave some carriers grappling for handholds and wondering how they could have insured against a different outcome.

This article first appeared in JetSet magazine.