
Medical Homes Change the Game

On-site clinics have audited results showing they let employers attack both sides of the healthcare equation -- health and health costs.

Washington County and Wisconsin are right in the middle of a seismic shift in the delivery of healthcare in America – from primary care as a loss leader for the big hospital corporations to medical homes for employees right at the work site.

The latest company to install an on-site clinic is West Bend Mutual Insurance, the largest employer in the county. West Bend has reportedly contracted with QuadMed, a subsidiary of QuadGraphics, another major employer in the state and county.

This clinic will be the “silver lining” for West Bend’s 1,100 employees, who have always enjoyed great benefits and a great work environment. “Silver lining” is the tag line for West Bend’s advertising and refers to the protection offered to policyholders. But it also fits what the new benefit will do for its workforce. Employees will enjoy convenient, relationship-based, long-term-oriented, proactive and cost-effective primary care on campus. Those adjectives do not generally apply to the in-and-out, symptom-driven care of big-system medicine.

West Bend and its people can expect to see significant improvements in workforce health metrics, like the percentage of smokers, cholesterol levels, blood pressure and even body mass index. They can also expect to see health costs drop 20% to 30% over time. That’s been the audited experience of QuadGraphics, which pioneered on-site health care starting in 1990. Its QuadMed subsidiary now provides contracted medical homes for 120 major employers in 90 clinics across the country, serving 150,000 members. Clients include NML, Briggs & Stratton, Kohler, Rockwell and MillerCoors.

Quad is one of several dozen entrepreneurial providers that have jumped into the business of on-site or near-site clinics. Quad started with its own employees and full-time doctors but now offers a menu of other options, such as clinics headed by a nurse practitioner (NP). Serigraph contracts for its on-site clinic with Interra Health, a Brookfield-based provider.
We also contract with Paladina Health, which has roots in Wisconsin, for part-time primary care doctors. Five other manufacturers in the county also use Paladina’s “concierge” doctors for their people. HealthStats, of Charlotte, N.C., installed a clinic headed by a physician assistant (PA) for the West Bend School District in 2013.

Savings are already apparent. Office visits, for example, typically run $22 to $40 at on-site clinics vs. $160 to $190 at the big systems. Lab tests cost about half of what big systems charge.

HealthStats also won a trifecta with a contract for a clinic serving the county, city and school district in Waukesha. It also serves city of Kenosha employees. Other local governments and school districts are jumping on the bandwagon.

You get the picture. The nature of primary care in America is changing rapidly toward a model that keeps people well and out of the expensive, dangerous hospitals. The big healthcare corporations have realized the challenge, and some, like Froedtert and Pro Health, are overhauling their business models to offer clinics tailored for employers and their employees. They are late to the game but appear to be responding to the competition.

A few hospital-based systems, like Bellin Health of Green Bay and Theda Care of Appleton, saw the train coming early and moved fast into direct contracting with private companies. Their clinics center on patients as customers, as opposed to the specialist-centered model of the big systems that drove U.S. healthcare into unsustainable hyperinflation.

Here’s a major piece of irony: The Affordable Care Act, aka Obamacare, was supposed to address the cost issues but has worked to drive up premiums. It is employers and their entrepreneurial vendors for medical homes that are bending the curve for American health costs. Disruptive innovation – if ever an industry needed disruption, it’s U.S. healthcare – is just getting started. Some big players are joining the revolution.
DaVita, the nationwide dialysis chain, bought the predecessor to Paladina. Humana bought the Concentra clinic chain. Walgreens runs clinics. Not all are holistic medical homes, but they are headed in that direction. Just recently, QuadMed and Walmart cut a deal to run a pilot that moves Walmart’s rudimentary clinics toward a fuller range of services, headed by a PA or NP. Office visits are $40. If the pilot works, and Walmart puts its full muscle behind this new delivery model for primary care, look out.

The concept behind medical homes is sound. They allow employers to attack both sides of the healthcare equation – health and health costs. The contracted medical teams can home in on every employee with a chronic disease condition, the source of most costs. They are passionate about getting those conditions under control. Better and better predictive analytic tools help to identify high-risk employees.

On the economic side, if expensive specialist care is needed, the teams can direct patients to the highest-value providers for both quality and price. With price variations routinely of more than 300%, there are easy pickings for savings. New transparency tools highlight the best buys. In short, medical homes put employers back in charge of the medical supply chain.

The happy ending of this blog is that Washington County and some parts of Wisconsin are leaders in the medical home movement. We are early winners in terms of big savings.

Emerging Risk of 2015: Outsourcing

Outsourcing has been a boon to profits, but the cost savings carry risks if they come through low safety standards, poor quality control, etc.

Outsourcing might just be the most common business management earnings booster of the past 10 years. That means it is also a top candidate for becoming a major emerging risk in the near future.

The idea of outsourcing is an extension of the fundamental logic of capitalism: specialization. Processes are good candidates for outsourcing when other firms can perform the same service at a significantly lower cost.

Cost Advantages

When you start looking at a potential outsourcing situation, you need to understand the source of the cost advantage. There are several possible drivers:
  • Higher efficiency
  • Lower wages paid to the people performing the outsourced work
  • Lower overhead for the outsourcing partner
But there are other ways that a cost advantage might come about that are not as desirable:
  • Lower safety and health standards
  • Lower spending on quality control
  • Lower amount of slack resources that can be available when a machine breaks or a key person gets sick
  • Lower-quality source materials
How to Control Risks of Outsourcing

If an outsourced process is not only out of sight but also out of mind, this emerging risk may become a current problem. There are two basic ways of controlling the risks of outsourcing: by specifying standards at the outset of the arrangement and by inspecting the process and output on a continuing basis.

But with the explosion of outsourcing over the past 10 years, even firms that set down extensive and clear standards at the time of the original agreement and that have allocated the needed resources for inspecting processes and outputs are at risk from the complacency that comes with the passage of time without serious incident, the changing individuals on both sides of the agreement and the changing pressures on both organizations.

An outsourced process is out of sight. If it also becomes out of mind, it will likely move out of the emerging risk category and into the current problem category.

This article first appeared on WillisWire.

Dave Ingram



Dave Ingram is a member of Willis Re's analytics team based in New York. He assists clients with developing their first ORSA (own risk and solvency assessment), presenting their ERM programs to rating agencies, developing and enhancing ERM programs and developing and using economic capital models.

An Argument for Physician Dispensing

While a recent WCRI study is being used to argue against having physicians dispense drugs, the data can be interpreted more innocently.

A January 2015 Workers’ Compensation Research Institute (WCRI) study that focused on three new medication strengths has again called into question the practice of physicians dispensing medications. Some analysts argue that the new strengths are designed to skirt price controls and generate exorbitant profits for doctors and for drug manufacturers and repackagers. But another explanation is possible: that doctors and drug companies have identified new strengths that patients want. In any case, competition will, over time, drive down prices on the new medications, just as it did on ones that have been on the market for a long time.

The study, titled “Are Physician Dispensing Reforms Sustainable?”, prompted Michael Gavin, president of PRIUM, a subsidiary of Ameritox, to write an article titled “Physician Dispensing: I’ve Changed My Mind” on this website. He said: (1) “drug repackagers in California created novel dosages of certain medication to evade the constraints of the physician dispensing regulations”; (2) regulations are “allowing repackagers to create new NDC codes and charging exorbitant amounts of money for drugs that would have been substantially cheaper had they been secured through a retail pharmacy”; and (3) “Worse, utilization of these medications skyrocketed as a result of the revenue incentives for physicians (my conclusion, not WCRI’s).”

This article analyzes the Cyclobenzaprine HCL medication, with emphasis on the new generic 7.5mg strength that was reviewed in the WCRI study and cited in the article, “Loophole for Doctors on Drug Dispensing,” that Ramona Tanabe of WCRI wrote for this website. The 7.5mg Cyclobenzaprine HCL was first made available as a generic by the pharmaceutical company KLE 2 Pharmaceuticals (www.kle2.com).
The company’s mission statement reads: “It is our goal to provide new therapies via unique strengths, delivery methods and/or new formulations.” KLE 2 identified a marketing opportunity to meet the needs of those who found that the 5mg strength was not effective enough and that the 10mg was too strong. There is evidence on the Internet of people attempting to split a Cyclobenzaprine HCL tablet to reduce its strength, with limited success.

From late 2011 through early 2013, KLE 2 was the only manufacturer of the generic Cyclobenzaprine HCL 7.5mg strength, which was included in the Medi-Cal formulary and used for California workers’ compensation claims. In April 2013, the manufacturer Mylan released a generic 7.5mg strength, and it was also included in the Medi-Cal formulary. KLE 2 has a Medi-Cal price of $3.2153 per tablet; Mylan, $3.99. The brand name “Fexmid,” by Sciele Pharma, owned by Shionogi, has a Medi-Cal price of $4.4383 per tablet.

Pharmaceutical pricing in the U.S. is unregulated; generally, the more manufacturers there are, the lower the price to the consumer. In the case of the 7.5mg strength of Cyclobenzaprine HCL, there are currently only two manufacturers, so the price will remain high until more manufacturers produce this strength or demand for it falls.

The 10mg strength, in comparison, currently has around 17 manufacturers. The average Medi-Cal price for 10mg is $0.1035 per tablet. The lowest Medi-Cal price is $0.0468, from the manufacturer KVK Tech. (Refer to page 7 of "Understanding Pricing of Pharmaceuticals," available here under the Dialogue tab, for a Medi-Cal price comparison of 10mg Cyclobenzaprine HCL.) The 5mg strength is manufactured by about 11 pharmaceutical companies. The average Medi-Cal price is $0.1586 -- down from Mylan's price of $1.3616 in 2006. The current lowest Medi-Cal price for a 5mg tablet is $0.0468, again from KVK Tech.
I mentioned earlier that attempts to split a 5mg or 10mg tablet in half have not been successful. It has been well documented that the coating applied to the 5mg and 10mg Cyclobenzaprine HCL tablets does not allow them to be cut easily, regardless of the device used. Cutting a 5mg tablet in half so that a patient could take 1½ tablets and accurately administer a 7.5mg dose is therefore not possible. The release of the 7.5mg strength addresses this need.

Although the 5mg, 10mg and now 7.5mg strengths are the most commonly dispensed Cyclobenzaprine HCL medications, there are also other strengths, such as the 15mg and 30mg extended-release capsules manufactured by Mylan, which have a Medi-Cal price of $8.7899 per capsule. There are also the brand name “Amrix” extended-release 15mg and 30mg capsules manufactured by Cephalon, a subsidiary of Teva Pharmaceuticals, which have a Medi-Cal price of $25.0163 per capsule for both strengths. These 15mg and 30mg strengths further illustrate how a lack of competition for a specific medication leads to higher prices.

Medi-Cal prices apply to all dispensers of California workers’ compensation medications, including pharmacies and physicians, and the same Medi-Cal maximum price has applied since 2007, as explained in my article, “The Paradox on Drugs in Worker’s Comp.” But the average prices paid, according to the WCRI study, are significantly higher than the Medi-Cal prices. The WCRI said prices paid for the 5mg and 10mg strengths were 35 to 70 cents a tablet, yet the average Medi-Cal price was 10 cents for 10mg and 16 cents for 5mg. This discrepancy requires further clarification, because it appears that claims administrators have been paying significantly more than Medi-Cal's maximum price. The WCRI reported a range of between $2.90 and $3.45 per tablet for the 7.5mg strength.
The $2.90 price is lower than Medi-Cal's prices and indicates that a competitive price was paid by claims administrators. If, as some have suggested, new strengths such as the 7.5mg are medically inappropriate, have claims administrators moved to remove the doctors who prescribe those strengths from their medical provider networks (MPNs)? Have claims administrators reported those doctors to the California Fraud Assessment Commission?

Gavin said, in the second point I pulled from his article, that medications dispensed by physicians cost more than those from retail pharmacies, but prices of Cyclobenzaprine HCL obtained from a number of retail pharmacies on the website goodrx.com are higher than the average Medi-Cal price paid to dispensing physicians for the same medications. (Prices on the website can change at any time and are cited here for illustration purposes only. The Medi-Cal formulary can also change at any time, in both its suppliers of medications and the prices paid.)

This analysis of the Cyclobenzaprine HCL medication further reinforces the need for claims administrators to be vigilant when dealing with pharmaceuticals. Let the buyer beware, too, when interpreting studies produced by organizations such as the WCRI.
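The price comparisons above reduce to simple arithmetic. A minimal sketch, using per-tablet figures cited in this article (the 7.5mg benchmark is KLE 2's Medi-Cal price; the paid figures are the top of WCRI's reported ranges; the function name is my own):

```python
# Compare prices paid (per the WCRI study) with Medi-Cal benchmark
# prices for Cyclobenzaprine HCL, per tablet, as cited in the article.
MEDI_CAL = {"5mg": 0.1586, "7.5mg": 3.2153, "10mg": 0.1035}  # avg (5/10mg), KLE 2 (7.5mg)
PAID = {"5mg": 0.70, "7.5mg": 3.45, "10mg": 0.70}            # top of WCRI's range

def overpayment_ratio(strength: str) -> float:
    """How many times the Medi-Cal benchmark was actually paid."""
    return PAID[strength] / MEDI_CAL[strength]

for s in ("5mg", "7.5mg", "10mg"):
    print(f"{s}: paid {overpayment_ratio(s):.1f}x the Medi-Cal benchmark")
```

Run against these figures, the older 5mg and 10mg strengths show multiples of several times the benchmark, while the 7.5mg strength comes out close to 1x, which is the article's point: the discrepancy lies with what claims administrators paid, not uniquely with the new strength.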

To Bundle or Not to Bundle?

Risk managers historically bought services separately, but developments -- mostly technology -- should prompt a new look at bundles.

To purchase services on a bundled or unbundled basis is a question that risk managers have debated for many years. In the past, conventional thinking among many risk professionals was to purchase services from distinct service providers. That decision was typically based on which vendors were perceived to offer the highest-quality or lowest-priced services. In recent years, however, there appears to have been a shift in thinking, as bundled programs have become more popular. Technology advancements are helping drive this change, in large part because of the improved efficiencies and outcomes that a packaged program can provide. Examining the process will underscore the benefits that bundled services offer. However, no two programs are alike, and customization must continue to be part of the discussion for any employer.

The bundled approach

As businesses strive for increased savings and productivity, services such as clinical consultation, pharmacy management, provider selection and bill review are more commonly sought from a single services provider and integrated into the overall claims management process. Robust technology systems tie these service components together and give risk managers comprehensive access to complete, real-time information like never before. All professionals managing the injury make better, more informed decisions and ultimately improve outcomes.

Clinical consultation

When a workers’ compensation injury occurs, early response and appropriate treatment are critical. Integrating clinical consultation services ensures that an injured worker talks with a nurse by telephone shortly after an incident occurs. The two parties discuss the injury and related symptoms, along with other health conditions that might affect the injury and the recovery process. Using his or her medical knowledge, the nurse can then discuss recommended treatment options.
Depending on the severity of the injury, this can range from self-care to an occupational clinic visit to emergency room treatment. One of the key advantages of this approach is that it takes treatment recommendations out of the hands of the manager or supervisor.

Provider selection

In a well-designed program, the nurse will have access to a list of prequalified medical providers. These providers will have been selected based on a demonstrated ability to deliver desired outcomes on a consistent basis. The providers also will have shown that they understand the workers’ compensation system and employer expectations. This contributes greatly to return-to-work initiatives. Quantifiable physician rating programs are preferred over an expansive list of physicians selected solely for their willingness to negotiate price.

Pharmacy management

Management of prescription drug costs can also be part of a bundled services package. Most successful programs employ injury-specific formularies: lists of drugs approved for certain types of injuries or conditions. Given today’s increased use of opioids in treating work-related injuries, these custom formularies can be a valuable asset in preventing unnecessary or extended use of such powerful narcotics. A pharmacy management program can be structured so that a claims examiner receives an alert if a particular drug is prescribed or requested. The examiner can then call the physician or pharmacist to see whether alternative drugs are available. Often, unnecessary or inappropriate drugs can be blocked at the point of sale.

The use of network pharmacies can also add value. These pharmacies are selected based on quality, price and an understanding of program expectations. Drugs from these pharmacies are much preferred and often less expensive than prescriptions obtained from a physician’s office.
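The alert workflow described above amounts to checking each prescription against an injury-specific formulary. A minimal sketch, assuming a simple in-memory formulary (the injury types, drug lists and function name here are all hypothetical illustrations, not any vendor's actual rules):

```python
# Sketch of an injury-specific formulary check: flag prescriptions
# that are not pre-approved for the injury type, so a claims examiner
# can review them before they are filled.
APPROVED = {  # hypothetical injury-specific formulary
    "lower_back_strain": {"ibuprofen", "naproxen", "cyclobenzaprine"},
    "laceration": {"ibuprofen", "cephalexin"},
}

def needs_examiner_alert(injury_type: str, drug: str) -> bool:
    """True when the drug is outside the formulary for this injury."""
    return drug not in APPROVED.get(injury_type, set())

print(needs_examiner_alert("lower_back_strain", "oxycodone"))  # True -> alert
print(needs_examiner_alert("lower_back_strain", "ibuprofen"))  # False -> fill
```

In a real bundled program, this check would run at the pharmacy point of sale against the shared claims system, which is exactly why the article argues that a single technology platform makes the alert-and-block step practical.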
Network pharmacists also understand the value of generic drugs versus brand-name prescriptions and recommend generics when appropriate. They are available to educate injured workers about the benefits or risks associated with any given drug.

Bill review

Bill review is becoming more commonly purchased as part of a bundled program. An effective bill review program goes beyond applying fee schedules and preferred provider organization (PPO) discounts and is really driven by how information is processed. Bill review services seek all possible reductions on every bill. Accurate coding should be applied throughout the process, and it should reflect the lowest possible allowance for any code and provider. Fees for additional savings are then typically charged as a percentage of those savings; the more discounts obtained early on, the lower the service fee will be.

Technology

Technology has greatly increased the attractiveness of bundled service programs. Detailed and immediate information empowers professionals to make sound decisions and take steps to move a claim toward closure and return an injured employee to work more readily than ever before. As an example, when a clinical consultation nurse and a claims adjuster share a single technology system, appropriate notes can be exchanged seamlessly, and early details can be accessed that may later affect the case. Such a system also allows a complete, up-to-date list of prequalified medical providers and injury-specific drug formularies to be easily updated and maintained. This information is essential when an injured worker is seeking initial medical treatment or a claims adjuster is monitoring prescribed drugs. Also, when participating physicians and pharmacies are on a single system, medical bills are more easily accessed and reviews performed more readily. Additionally, technology associated with these types of services can produce valuable data used to measure performance and identify trends.
It is then possible to develop strategies, based on quantifiable information, to improve outcomes in care management and at the desk level. When services are bundled and one system ties them together, gaps in data are avoided.

Conclusion

Business trends will continue to evolve, as will debates over bundled versus unbundled services programs. However, today’s discussion is different from those of the past because of the advancement of technology and its resulting impact. Risk managers are looking to innovation to drive enhanced capabilities, seeking improved efficiency and effectiveness. Given the high stakes associated with increasing productivity and lowering costs, this debate is likely to intensify in the future, with technology adding zest to the conversation.

This article first appeared on WorkCompWire.

Christopher Mandel



Christopher E. Mandel is senior vice president of strategic solutions for Sedgwick and director of the Sedgwick Institute. He pioneered the development of integrated risk management at USAA.

Cars: What's Driving Disruption and Change

As auto makers become "mobility" companies, will insurance shift from the driver to the manufacturer? Can new services be provided?

The SMA research report The Next-Gen Insurer: Fueled by Innovation identified the major influencers within and outside the industry that are reshaping the business of insurance. It cautioned that if insurers chose to ignore, or even put off, the inevitable need to change along with the rest of the world, they would be taking a chance with the survival of their businesses. Well, as it turns out, ignoring it is no longer an option.

The new SMA research report, The Changing Auto Insurance Landscape: Influencers Driving Disruption and Change, underscores that disruption to the auto insurance industry is inescapable. Multiple influencers have converged, primarily from outside the industry, and are in the early stages of transforming the automobile industry and, subsequently, the auto insurance business. Developments like driverless/autonomous vehicles, the connected car, car apps and shared transportation are disrupting traditional business, risk, product, pricing and customer assumptions while setting off the first wave of a broader disruption that will challenge the industry. Together, they reveal a growing wave of disruption in the auto insurance segment. This was emphasized by the announcements made at the Consumer Electronics Show (CES) in Las Vegas in early January 2015.

Today, insurers reward customers with discounts for multiple auto policies, offer discounts for pay-as-you-drive (PAYD) or pay-how-you-drive (PHYD) programs and offer more discounts for additional coverage, such as homeowners, umbrella or others. The same is true for commercial insurance – business owners look for a package of insurance that includes bundled discounts. But consider what Mark Fields, Ford's CEO, told the media at the 2015 CES show. Fields sees Ford as a mobility company rather than an automotive company, delivering a wide array of services and experiences via the auto instead of the mobile phone.
This reimagined business model will have rippling effects across other industries, including insurance. So how will insurance see itself going forward? How will insurance reimagine itself? The impact will drive insurers to think bigger and reimagine their businesses as they ride this wave of change toward becoming a Next-Gen Insurer.

The transformational potential of each influencer individually is great, but combined they are game-changing. Each is beginning to disrupt insurance in varying degrees by redefining or reducing risk; redefining vehicle needs and uses; creating product and service needs; and affecting traditional revenue, pricing and operational models. Even more importantly, the influencers are reshaping customer expectations by providing new experiences that create, retain and grow customer relationships and loyalty. Here are some potential implications for insurance:
  • Will insurance models move away from the driver to the vehicle or manufacturer?
  • What new services can be provided based on connected car or smartphone applications to engage with customers differently?
  • Will auto driver usage data come from Google, Apple and auto manufacturers rather than traditional industry data providers? Will this new data redefine risk, pricing and underwriting models?
  • Will insurers need to rethink partnership strategies to deliver new services?
  • How will risk models and ultimately pricing models be affected?
  • How will these affect operational, unit cost, revenue and profitability models?
The last two questions are especially significant, based on the changes that are already happening in driverless/autonomous vehicles, the connected car, car apps and shared transportation. Using some of the statistics and projections featured in the new report, the hypothetical financial impact on auto premiums is profound. Collectively, for the top 10 personal auto insurers, which represent 70% of direct written premium (DWP), the disruption could put 60% of existing DWP revenue into play. What's more, this does not include potential lost revenue from new products and services that may be offered by other companies and industries. Even if the impact is only half of this, the operational and profitability models based on historical auto insurance assumptions are significantly disrupted. And those assumptions are starting to become irrelevant. Rather than waiting for automotive, technology and other industries to determine where this revenue will go, insurers must begin to plan today.

Another inevitable result will be felt in traditional customer relationships, which will be further challenged by the emergence of new services and providers around the shared economy, the connected car and driverless vehicles. Opportunities to strengthen customer relationships will be strained and diminished as these companies redirect customer relationships and revenue away from traditional insurers.

The impact of these influencers, the emergence of new services and their effects on customer relationships, old business models and revenue and profitability models are causing insurers to seriously consider these underlying but very strategic questions: How are insurers going to recapture the disrupted revenue stream? Will it be through new products and services that generate new revenue in new ways? Will insurers become product manufacturers/underwriters for these emerging companies?
Or will insurers adapt and become broader providers of insurance and service capabilities? How will you retain customer relationships and loyalty amid this disruption? Are you preparing scenarios and plans to respond to these changes over the next three to five years?

These changes have uncovered a challenging new business landscape. The inevitable disruption of auto insurance is taking the industry in new and surprising directions. How you respond is strategically important for your company's relevance and competitiveness. So fasten your seat belts! It is going to be a fast and interesting ride!
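The premium-at-risk arithmetic in the passage above (70% of DWP held by the top 10 carriers, 60% of that in play) can be sketched directly. A minimal illustration; the total-market DWP figure below is a hypothetical placeholder, not a number from the report:

```python
# Rough sizing of auto premium "in play," per the percentages cited:
# the top 10 personal auto insurers hold 70% of direct written premium
# (DWP), and 60% of that revenue could be put into play by disruption.
TOTAL_PERSONAL_AUTO_DWP = 180e9  # hypothetical market total, in dollars

top10_dwp = 0.70 * TOTAL_PERSONAL_AUTO_DWP
dwp_in_play = 0.60 * top10_dwp
print(f"Top-10 DWP:  ${top10_dwp / 1e9:.1f}B")
print(f"In play:     ${dwp_in_play / 1e9:.1f}B")
print(f"Half-impact: ${dwp_in_play / 2 / 1e9:.1f}B")
```

Whatever total one plugs in, 42% of the whole personal auto market (0.70 × 0.60) lands in play, which is why the article argues that even a half-sized impact still upends historical operating assumptions.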

Model Errors in Disaster Planning

This article is the fourth in a series on how the evolution of catastrophe models provides a foundation for innovation.

“All models are wrong; some are useful.” – George Box

We have spent three articles (article 1, article 2, article 3) explaining how catastrophe models provide a tool for much-needed innovation in the global insurance industry. Catastrophe models have compensated for the lack of loss experience with many perils and let insurers properly price and underwrite risks, manage portfolios, allocate capital and design risk management strategies. Yet for all the practical benefits CAT models have infused into the industry, product innovation has stalled. The halt in progress is a function of what models are and how they work. In fairness to those who do not put as much stock in the models as a useful tool, it is important to speak of the models’ limitations and where the next wave of innovation needs to come from.

Model Design

Models are sets of simplified instructions used to explain phenomena and provide relevant insight into future events (for CAT models, estimating future catastrophic losses). We humans start using models at very early ages. No one would confuse a model airplane with a real one; however, if a parent wanted to simplify the laws of physics to explain to a child how planes fly, a model airplane is a better tool than, say, a physics book or computer-aided design software. Conversely, if you are a college student studying engineering or aerodynamics, the reverse is true. In each case, we are attempting to use a tool – models of flight, in this instance – to explain how things work and to lend insight into what could happen based on historical data, so that we can merge theory and practice into something useful. It is the constant iteration between theory and practice that allows an airplane manufacturer to build a new fighter jet, for instance.
No manufacturer would foolishly build an airplane based on models alone, no matter how scientifically advanced those models are, but those models would be incredibly useful in guiding the designers to experimental prototypes. We build models, test them, update them with new knowledge, test them again and repeat the process until we achieve the desired results.

The design and use of CAT models follow this exact pattern. The first CAT models estimated loss by first calculating total industry losses and then proportionally allocating losses to insurers based on assumptions of market share. That evolved into calculating loss estimates for specific locations at specific addresses. As technology advanced into the 1990s, model developers harnessed that computing power and developed simulation programs to analyze more data, faster. The model vendors then added more models to cover more global peril regions. Today’s CAT models can even estimate construction type, height and building age if an insurer does not readily have that information.

As catastrophic events occur, modelers routinely compare actual event losses with the models’ estimates and measure how well or how poorly the models performed. Using actual incurred loss data helps calibrate the models and also enables modelers to better understand where improvements must be made to make the models more resilient. However, for all the effort and resources put into improving the models (model vendors spend millions of dollars each year on model research, development, improvement and quality assurance), there is still much work to be done to make them even more useful to the industry. In fact, virtually every model component has its limitations.

A CAT model’s hazard module is a good example. The hazard module takes into account the frequency and severity of potential disasters. Following the calamitous 2004 and 2005 U.S.
hurricane seasons, the chief model vendors felt pressure to amend their base catalogs to reflect the new high-risk era we were in, that is, taking into account higher-than-average sea surface temperatures. These model changes dramatically affected reinsurance purchase decisions and account pricing. And yet, little followed. What was assumed to be the new normal of risk actually turned into one of the quietest periods on record.

Another example was the magnitude 9.0, 2011 Great Tōhoku Earthquake in Japan. The models had no events even close to this monster earthquake in their event catalogs. Every model clearly got it wrong, and, as a result, model vendors scrambled to fix this “error” in the model. Have the errors been corrected? Perhaps in these circumstances, but what other significant model errors exist that have yet to be corrected?

CAT model peer reviewers have also taken issue with the event catalogs used in the modeling process to quantify catastrophic loss. For example, a problem for insurers is answering a question such as: What is the probability of a Category 5 hurricane making landfall in New York City? Of course, no one can provide an answer with certainty. However, while no one can doubt the level of damage an event of that intensity would bring to New York City (Superstorm Sandy was not even a hurricane at landfall in 2012 and yet caused tens of billions of dollars in insured damages), the critical question for insurers is: Is this event rare enough that it can be ignored, or do we need to prepare for an event of that magnitude? To place this into context, the Category 3, 1938 Long Island Express event would probably cause more than $50 billion in insured losses today, and that event did not even strike New York City.
If a Category 5 hurricane hitting New York City were estimated to cause $100 billion in insured losses, then knowing whether this is a 1-in-10,000-year possibility or a 1-in-100-year possibility could mean the difference between solvency and insolvency for many carriers. If that type of storm is closer to a 1-in-100-year probability, then insurers have an obligation to manage their operations around this possibility; the consequences are too grave otherwise. Taking into account the various chances of a Category 5 directly striking New York City, what does that all mean? It means that adjustments in underwriting, pricing, accumulated capacity in that region and, of course, reinsurance design all need to be considered -- or reconsidered, depending on an insurer’s present position relative to its risk appetite. Knowing the true probability is not possible at this time; we need more time and research to understand it. Unfortunately for insurers, rating agencies and regulators, we live in the present, and sole reliance on the models to provide “answers” is not enough.

Compounding this problem is that, regardless of the peril, errors exist in every model’s event catalog. These errors cannot be avoided entirely, and the problem escalates where the paucity of historical records and scientific experiments limits our industry’s ability to inch closer and closer to greater certainty. Earthquake models still lie beyond a comfortable reach of predictability. Some of the largest and most consequential earthquakes in U.S. history have occurred near New Madrid, MO, and scientists are still wrestling with the mechanics of that fault system. Thus, managing a portfolio of properties solely on the basis of CAT model output is foolhardy at best. There is too much financial consequence from phenomena that scientists still do not understand.
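The stakes in that return-period question can be made concrete with a little arithmetic. The sketch below is illustrative only: the $100 billion loss figure and the two return periods are the hypothetical numbers from the discussion above, not output from any vendor's model.

```python
# Illustrative only: how far apart a 1-in-100 and a 1-in-10,000 assumption
# leave an insurer, for the same hypothetical $100B New York City loss.

def prob_at_least_one(annual_prob: float, years: int) -> float:
    """Chance of at least one occurrence over a horizon of independent years."""
    return 1.0 - (1.0 - annual_prob) ** years

LOSS = 100e9  # assumed insured loss for a Category 5 striking New York City

for return_period in (100, 10_000):
    p = 1.0 / return_period
    expected_annual_loss = p * LOSS          # average loss cost per year
    p30 = prob_at_least_one(p, 30)           # chance of seeing it in 30 years
    print(f"1-in-{return_period:>6}: expected annual loss "
          f"${expected_annual_loss / 1e6:,.0f}M, "
          f"30-year occurrence probability {p30:.1%}")
```

Under the 1-in-100 assumption, the expected annual loss is a hundred times larger and the event is more likely than not to be a career-spanning concern, which is exactly why the underwriting, capacity and reinsurance decisions above hinge on it.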
Modelers also need to continuously reassess property vulnerability across the various building stock types and current building codes. Assessing this with imperfect data and across differing building codes and regulations is difficult. That is largely the reason that so-called “vulnerability curves” are oftentimes revised after spates of significant events. Understandably, each event yields additional data points, which must be taken into account in future model versions. Damage surveys following Hurricane Ike, for example, showed that the models underestimated contents vulnerability within large high-rises because of water damage caused by wind-driven rain.

As previously described, a model is a set of simplified instructions, which can be programmed to make various assumptions based on the input provided. Models, therefore, are subject to the garbage-in, garbage-out problem. As insurers adopt these new models, they often need to cajole their legacy IT systems into providing the data required to run the models. For many insurers, this is an expensive and resource-intensive process, often taking years.

Data Quality’s Importance

Currently, the quality of industry data used in tools such as CAT models is generally considered poor. Many insurers are inputting unchecked data into the models. For example, it is not uncommon that building construction type, occupancy, height and age, not to mention a property’s actual physical address, are unknown! For each property whose primary and secondary risk characteristics are missing, the models must make assumptions about those precious missing inputs – even about where the property is located. This increases model uncertainty, which can lead to inaccurate assessment of an insurer's risk exposure. CAT modeling results are largely ineffective without quality data collection.
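A first line of defense is simply to flag, before a model run, every exposure record that would force the model to guess. A minimal sketch of such a check follows; the field names and sample records are hypothetical, not any vendor's schema.

```python
# Hypothetical pre-run data-quality check: list the primary risk
# characteristics that are missing from each exposure record, since
# these are the fields the CAT model would otherwise have to assume.

REQUIRED_FIELDS = ["address", "construction_type", "occupancy",
                   "height", "year_built"]

def missing_fields(record: dict) -> list:
    """Return the required characteristics absent from an exposure record."""
    return [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]

portfolio = [
    {"address": "123 Main St, Miami FL", "construction_type": "masonry",
     "occupancy": "commercial", "height": 4, "year_built": 1998},
    {"address": "456 Ocean Dr, Galveston TX", "construction_type": None,
     "occupancy": "residential", "height": None, "year_built": None},
]

for rec in portfolio:
    gaps = missing_fields(rec)
    if gaps:
        print(f"{rec['address']}: model must assume {', '.join(gaps)}")
```

The point of the exercise is not the code itself but the discipline: every field listed by a check like this is a field the model will silently replace with an assumption, widening the uncertainty around the results.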
For insurers, the key risk is that poor data quality could lead to a misunderstanding of their exposure to potential catastrophic events. This, in turn, will affect portfolio management, possibly leading to unwanted exposure concentrations and unexpected losses, which will affect both insurers’ and their reinsurers’ balance sheets. If model results are skewed as a result of poor data quality, insurers can end up with incorrect assumptions, inadequate capitalization and insufficient reinsurance. Model results based on complete and accurate data ensure greater model output certainty and credibility.

The Future

Models are designed and built based on information from the past. Using them is like trying to drive a car while looking only in the rearview mirror; nonetheless, catastrophes, whether natural or man-made, are inevitable, and having a robust means to quantify them is critical to the global insurance marketplace and lifecycle. Or is it? Models, and CAT models in particular, provide a credible industry tool to simulate the future based on the past, but is it possible to simulate the future based on perceived trends and worst-case scenarios? Every CAT model has its imperfections, which must be taken into account, especially when employing modeling best practices. All key stakeholders in the global insurance market, from retail and wholesale brokers to reinsurance intermediaries, from insurers to reinsurers and to the capital markets and beyond, must understand the extent of those imperfections, how error-sensitive the models can be and how those imperfections must be accounted for to gain the most accurate insight into individual risks or entire risk portfolios. A difference of a few points in those assumptions can mean a lot. The next wave of innovation in property insurance will come from going back to insurance basics: managing risk for the customer.
Despite model limitations, creative and innovative entrepreneurs will use models to bundle complex packages of risks that are both profitable to the insurer and economical to the consumer. Consumers wanting to protect themselves from earthquake risks in California, hurricane risks in Florida and flood risks on the coast and inland will have more options. Insurers looking to deploy capital and find new avenues of growth will use CAT models to simulate millions of scenarios, custom-creating portfolios that optimize their capacity and designing innovative product features that distinguish their products from competitors'. Intermediaries will use the models to educate clients and craft effective risk management programs that maximize their clients’ profitability.

For all the benefit CAT models have provided the industry over the past 25 years, we are only driving the benefit down to the consumer in marginal ways. The successful property insurers of the future will be the ones who close the circle and use the models to create products that make the transfer of earthquake, hurricane and other catastrophic risks available and affordable. In our next article, we will examine how we can use CAT models to solve some of the critical insurance problems we face.

How HR Can Stop Insider Data Theft

Among other things, HR can limit access to data, can watch for disgruntled employees and can swiftly block terminated workers.

After Edward Snowden’s escapades, how could any company fail to take simple measures to reduce its exposure to insider data theft? Yet large enterprises remain all too vulnerable to insider threats, as evidenced by the Morgan Stanley breach. And many small and medium-sized businesses continue to view insider data theft as just another nuisance piled onto a long list of operational challenges. “I suspect too many companies are fixated on outsider threats, like malware infections and external hacking, to the extent that insider threats get overlooked,” says Stephen Cobb, senior security researcher at anti-malware vendor ESET.

A low-level Morgan Stanley financial adviser with sticky fingers allegedly tapped into account records, including passwords, for six million of the Wall Street giant’s clients. He got caught allegedly attempting to peddle the stolen records on Pastebin, a popular website for storing and sharing text files. The financial services sector has long been proactive in defending against all forms of data breaches, for obvious reasons, and Morgan Stanley was able to nip this particular caper early on. Big banks and investment houses typically have highly trained teams using a variety of detection tools and monitoring regimes designed to flush out any indication of a breach. “Often you have analysts in a security operations center hunting for abnormal activity,” says Scott Hazdra, principal security consultant at risk management firm Neohapsis. “They can often spot suspicious data movement based on quantity, destination or classification level and react in hours versus discovering data out in the wild when it’s much harder to limit exposure.” Organizations outside of the financial services industry, however, are still on the lower end of the curve in understanding this exposure, much less taking even basic steps to reduce it.
Given the nature of the exposure, security and privacy experts say human resource officials need to be on the front lines of mitigating insider data theft. In particular, HR department heads should be integrally involved in working with a company’s tech and security teams to define and deploy access rights to sensitive company data. “With this collaboration and the right tool sets, companies can apply access controls that restrict employees to just the information they need to perform their jobs,” says Deena Coffman, CEO of IDT911 Consulting, which is part of identity and data risk consultancy IDT911. (Full disclosure: IDT911 sponsors ThirdCertainty.)

It’s a balancing act, of course. Quick and flexible access to company records drives productivity gains. At the same time, it creates fresh opportunities for granting unnecessary access privileges — and for theft. “Building data and network security policies to thwart the likely approaches to steal information is a foundation for limiting possible damage,” says Steve Hultquist, chief evangelist at security analytics firm RedSeal. “Using automation to analyze and ensure compliance with a security policy is essential for protecting customer and corporate data assets.”

There should also be a structured process for communicating changes quickly to ensure that a terminated employee or departed contractor does not retain access privileges, Coffman says. “Many of the inside attacks are IT employees with elevated privileges and little oversight on how and when those privileges are used,” Coffman says. “The use of privileged accounts should be monitored and logged. Separation of duties should be required on certain functions, and an annual outside review is a good idea.”

Cutting off terminated employees and partners should be swift and sure. Better safe than sorry. “Too often, organizations don’t have a complete picture of what access each employee has, particularly if they have been there a while,” ESET’s Cobb says.
“Getting employee departures right involves a coordinated effort from HR, IT and legal.”

A disgruntled employee who’s not planning on going anywhere is another type of exposure that should be addressed. American Banker is now reporting that the alleged perpetrator of the Morgan Stanley breach was promoted to financial adviser from sales assistant about a year ago and gained access to records by manipulating the bank’s wealth management software. The lawyer representing the accused adviser insists in the American Banker report that his client did not post any of Morgan Stanley’s data on Pastebin. “All managers need to be aware of morale among reports, and there needs to be a process for taking concerns to HR in a discreet way while increasing monitoring of use of IT resources,” Cobb says.
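The "swift and sure" cutoff Cobb and Coffman describe can be backstopped by a routine reconciliation of HR's active roster against the identity system's access grants. A minimal sketch follows; the account names and system labels are hypothetical, purely for illustration.

```python
# Hypothetical offboarding reconciliation: compare HR's active roster
# against access grants and surface accounts that should be revoked.

active_employees = {"asmith", "bjones"}      # from the HR system of record
access_grants = {                            # from the IT / identity system
    "asmith": ["crm", "email"],
    "bjones": ["crm", "email", "prod_db"],
    "cdoe":   ["email", "wealth_mgmt"],      # terminated, never offboarded
}

def stale_grants(active: set, grants: dict) -> dict:
    """Accounts retaining access but no longer on the active roster."""
    return {user: systems for user, systems in grants.items()
            if user not in active}

for user, systems in stale_grants(active_employees, access_grants).items():
    print(f"REVOKE {user}: {', '.join(systems)}")
```

Run on a schedule, a reconciliation like this turns "we don't have a complete picture of what access each employee has" into a daily report that HR, IT and legal can act on together.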

Therapy Charges Are Being Inflated

Some physical therapist networks are routinely and inappropriately adding $15 to $19 of charges for each office visit.

Your physical therapy (PT) costs may be $15 to $19 per visit higher than they should be. Here’s what’s going on: It’s common for therapists to perform multiple procedures at the same time on a single body part. Under nationally accepted standards (the Centers for Medicare and Medicaid Services (CMS) National Correct Coding Initiative), the therapist is to be reimbursed for only one of these procedures. Sometimes, it is appropriate for the PT to bill for multiple procedures -- for example, if two procedures commonly done simultaneously are performed at separate times. But, unless the therapist adds a special modifier to the procedure code, only one will be reimbursed.

If multiple procedures are to be reimbursed, the "59 modifier" is added to the end of the CPT code, and the treating provider documents the reason for the variance in coding in the medical notes. The 59 modifier should be on about 11% to 15% of lines on PT bills. But some payers are seeing 59 modifiers on almost ALL BILLS.

It appears the 59 modifiers were not added by the therapist; they were added by a PT network company. There’s no explanation in the treatment notes for this billing practice; no evidence the affected procedures were actually performed at separate times; no indication the PT network company reviewed the treating provider’s notes prior to upcoding. No documentation, no record, no history. It appears that the intermediary was adding the 59 modifier as an automated system edit without reviewing the treatment notes. The systemic upcoding has resulted in higher costs for payers. You should look at bills processed between 2009 and 2014:
  • If more than 20% of lines on your PT bills have the 59 modifier, you MAY have a problem.
  • If more than 40% of the lines on your PT bills have this modifier, you DO have a problem.
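Those two thresholds are easy to check mechanically once bills are in electronic form. A minimal sketch follows; the bill-line format is hypothetical, but the 20% and 40% cutoffs are the ones above.

```python
# Hypothetical audit of PT bill lines: what share carry the 59 modifier,
# measured against the article's 20% (MAY) and 40% (DO) thresholds.

def modifier59_rate(bill_lines: list) -> float:
    """Fraction of bill lines whose CPT code carries the 59 modifier."""
    if not bill_lines:
        return 0.0
    flagged = sum(1 for line in bill_lines if "59" in line.get("modifiers", []))
    return flagged / len(bill_lines)

def audit_verdict(bill_lines: list) -> str:
    """Apply the 20%/40% thresholds from the article."""
    rate = modifier59_rate(bill_lines)
    if rate > 0.40:
        return f"{rate:.0%} of lines -- you DO have a problem"
    if rate > 0.20:
        return f"{rate:.0%} of lines -- you MAY have a problem"
    return f"{rate:.0%} of lines -- below the audit thresholds"

# Example: 3 of 5 lines carry the 59 modifier (60%).
sample = [
    {"cpt": "97140", "modifiers": ["59"]},
    {"cpt": "97110", "modifiers": []},
    {"cpt": "97530", "modifiers": ["59"]},
    {"cpt": "97112", "modifiers": ["59"]},
    {"cpt": "97035", "modifiers": []},
]
print(audit_verdict(sample))  # 60% of lines -- you DO have a problem
```

A rate flagged this way is only the starting point: the next step, per the article, is checking whether the treatment notes actually document separately performed procedures.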
For the full blog from which this is excerpted, click here.

Joseph Paduda

Joseph Paduda, the principal of Health Strategy Associates, is a nationally recognized expert in medical management in group health and workers' compensation, with deep experience in pharmacy services. Paduda also leads CompPharma, a consortium of pharmacy benefit managers active in workers' compensation.

Is Baseline Testing Worth It? (Part 3)

A review of 15,000 tests shows that they cut the number of workers' comp claims and reduce handling costs.

This is the conclusion to the series of articles on whether baseline testing is worth the effort. The first two articles dealt with baseline testing from an employer's point of view and from an injured worker's point of view. We believe that those case studies were compelling. This final article will examine the statistics and, we believe, prove that baseline testing is truly worth the effort.

The concept of baseline testing for soft-tissue injuries began for us when requirements for set-asides were established to protect Medicare from future medical expenses for workers’ compensation and general liability claims. In 2011, the Centers for Medicare and Medicaid Services (CMS) mandated that all workers’ compensation and general liability claims be reported in electronic format. This change enables CMS to look back and identify whether it has ever made any work comp-related payments on a patient. Section 111 of the Medicare, Medicaid and SCHIP Extension Act of 2007 establishes Medicare's status as a secondary payer under 42 U.S.C. § 1395y(b), and this creates a right to reimbursement for any future claims related to a past workers’ compensation settlement. Therefore, this act has the potential to impose a risk of future liability on all parties indefinitely.

Soft-tissue injuries are the leading cause of claims and costs in this challenging system. They account for at least one third of all claims and are the primary reason for lost time at work. So, we began baseline testing for soft-tissue injuries in the transportation industry in October 2011. Since that time, we have expanded our baseline testing program to other industries: manufacturing, retail, warehouse and construction. Our initial testing was in Georgia and quickly expanded to Texas. Now, our program is being conducted in California, Arizona, Utah, Florida, Oklahoma, Colorado and Indiana. Since the inception of the program, we have conducted more than 15,000 baseline tests.
Of those we tested, 27 have attempted to file a workers' compensation soft-tissue claim. Only five of those 27 were found to have a change in condition. In other words, only five had a pathology that arose out of the course and scope of employment (AOECOE). No claim was accepted for the remaining 22 cases. Of the five claims that were accepted, all resolved with the appropriate treatment. Of the cases where there was no change in condition and the claim was not accepted, three went on to litigation. These cases are summarized in the following vignettes.

Litigated Case 1: A 54-year-old truck driver underwent the post-loss electrodiagnostic functional assessment (EFA) to compare with the baseline. She alleged incapacitating pathology to her neck, shoulder and back. But the comparison between the post-loss test and the baseline actually demonstrated improvement. It was found she had 25 prior workers' compensation claims related to the same body part. Her case ultimately went to arbitration. This complicated case settled for less than $6,000. There was a full release with language to prevent future medical care from CMS, thereby protecting the employer from the unpredictable expenses of future claims to the same body part.

Litigated Case 2: A truck driver who had been employed for less than a month experienced an unwitnessed fall from a truck and alleged injuries to his back, plus cumulative trauma. When the comparison tests were done, it was revealed that he had substantial pathology on the baseline that was unchanged in the post-loss EFA test. The claim remained denied based on the EFA-STM program, but he continued to receive treatment. No payments were made for the patient's care, and he continued to pursue the issue through the legal process. The employer agreed to an independent medical exam (IME) appointment to review the status of the EFA comparisons and help establish AOECOE.
The IME doctor, based on the EFA reports, found no work-related injury, leading to an uncomplicated resolution of this case.

Litigated Case 3 was detailed in Part 1 of this series. In summary, the results of the EFA-STM program demonstrated no change in condition, and the findings were affirmed in court.

In these three case examples, no unnecessary medical care was permitted; paid time off work was shortened; and litigation was resolved earlier in the process, reducing costs. Even though people will sometimes still litigate, the baseline testing gave objective medical evidence for AOECOE conditions and supported the defense of the case. A review of the history of claims in businesses also shows that utilization of the EFA-STM program significantly reduces the frequency of workers’ compensation injury claims.

In summary, the EFA program leads to more accurate diagnoses and, ultimately, better site-specific care for the injured worker. There are far fewer litigated cases, and even these cases are less costly because the objective evidence leads to more rapid, accurate and favorable results. Is baseline testing worth the effort? Indubitably, yes!
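The headline numbers from that 15,000-test review reduce to two rates worth keeping in mind. A quick check, using only the figures stated in the article:

```python
# The article's own figures: 15,000 baseline tests, 27 attempted claims,
# 5 accepted (i.e., showing a change in condition versus the baseline).

tests_conducted = 15_000
claims_attempted = 27
claims_accepted = 5

claim_rate = claims_attempted / tests_conducted        # attempts per test
acceptance_rate = claims_accepted / claims_attempted   # accepted share

print(f"Claim attempts per baseline test: {claim_rate:.2%}")      # 0.18%
print(f"Share of attempts accepted (AOECOE): {acceptance_rate:.1%}")  # 18.5%
```

In other words, fewer than two in a thousand tested workers attempted a soft-tissue claim, and fewer than one in five of those attempts showed an objective change in condition.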

Redefining Detox in Workers' Comp

Weaning injured workers off dangerous drugs must go beyond chemical detox and include the psychosocial, the mind-body issues.

When most people in workers’ compensation hear the term “detox,” they think of chemical detox, the process of removing or reducing the prescription drugs patients are taking to deal with their pain. Indeed, injured workers on drug regimens with questionable clinical efficacy (low function, low quality of life) need to go through a process to lower the dosage and number of drugs they’re taking or eliminate them entirely. Chemical detox can be very complicated; a benzodiazepine like Valium or Xanax can take as long as 18 months to wean and should typically be the final drug weaned because of how this category of drugs complicates the medication regimen and causes side effects. Methadone or Suboxone might be added to help facilitate the weaning, but they come with their own issues -- significant clinical complications for Methadone, and the potential to become a long-term maintenance drug for Suboxone.

However, if you think of detox only as a chemical weaning process, you can miss the most important component in effecting permanent change: the psychosocial aspect. Removing dangerous drugs without any plan for addressing how claimants can physically and mentally cope with their pain can lead to relapse. Folks in the functional restoration field say that 75% of patients remain off 75% of their original drugs after 12 months if they are involved in a best-practices clinic. I’ve researched this issue over the past two years, visiting many detox and functional restoration programs. Functional restoration and detox facilities are not created equal, and not all physicians are knowledgeable or proficient in weaning. I am absolutely convinced that best practices involve an interdisciplinary treatment approach.
If you do not have a team composed of a licensed MD/DO to manage the medical and addiction issues, a licensed physical therapist to increase function, flexibility and stamina, and a licensed psychologist to address psychosocial issues, the injured worker won’t make all the behavioral and mental changes required to stay off inappropriate drugs. Work comp is deathly afraid of a psych-compensable diagnosis because it can open doors well beyond the vocational, but we cannot ignore what happens in a patient’s conscious and subconscious mind. If you ignore the psychology behind addiction and dependency and neglect to address things like low self-esteem, catastrophizing and perceived injustice, the patient isn’t likely to truly and permanently change. Two to three months after being discharged as clean, the patient is likely to resume old habits of overusing or abusing prescription drugs. Relapse may also occur if the patient fails to learn non-pharmacological pain-coping skills like yoga, Pilates, stretching and other physical exercise.

It is tempting to try to close a claim upon receipt of a clean discharge from a detox facility. After all, the drug regimen will look as good then as it ever will, and it would be naïve to think that isn’t a driver in some cases. But if the goal is to truly restore claimants to as close to pre-injury condition as possible for the long term, do your homework on those conducting the weaning and take into consideration the mind-body connection.

Mark Pew

Mark Pew is a senior vice president at Prium. He is an expert in workers' compensation medical management, with a focus on prescription drug management. Areas of expertise include: abuse and misuse of opioids and other prescription drugs; managing prescription drug utilization and cost; and best practices for weaning people off dangerous drug regimens.