
Urine Drug Testing Must Get Smarter

The current approach to urine drug testing yields too many false negatives and positives, doing no favors to either workers or employers.

Medical treatment guidelines, such as those from the American College of Occupational and Environmental Medicine and the Work Loss Data Institute’s Official Disability Guidelines, recommend urine drug testing (UDT) for monitoring injured workers who are prescribed opioids. Yet studies show that few physicians actually order the tests. There are a variety of concerns about UDT, including its potential overuse, underuse, effectiveness and cost. The guidelines are fairly nonspecific in terms of the frequency and type of testing that are most appropriate for injured workers.

The fact is, not all UDTs are created equal, and they should not be used interchangeably. Immunoassay tests, for example, are preferred when simply trying to detect the presence or absence of illegal drugs in a person’s system. More sophisticated tests, such as liquid chromatography, may be more suitable for clinical applications. They are far more accurate than immunoassay tests, can identify parent medication and metabolites and can identify specific medications, rather than just drug classes.

The differences in the types of drug testing have important ramifications for patients. For example, inappropriate or insufficient testing can put injured workers at risk for drug overdoses.

"The type of testing clinicians use should depend on the purpose," said Steve Passik, vice president of Clinical Research and Advocacy for San Diego-based Millennium Health. "The immunoassay test comes from a forensic application and vocational application. In those settings, only the most egregious offenders are meant to be caught."

Job seekers, workers involved in workplace accidents, and athletes are among those typically subject to forensic tests. For them, immunoassay testing is appropriate and is based on the Mandatory Guidelines for Federal Workplace Drug Testing Programs, developed by the U.S. Department of Health and Human Services. Because much of UDT today has its roots in forensic applications, the methods and mindsets of simple immunoassay testing are often used in clinical settings. These tests are subject to a high number of false positives; therefore, only positive results are typically sent for confirmatory testing, to avoid falsely accusing people of drug use, an accusation that might have dire consequences, such as job loss.

"This is problematic," Passik said. "An injured worker who is using drugs and has a false negative result is potentially at risk if the physician uses a forensic mindset and only confirms positive test results. If the injured worker's pain medications are mixed with whatever drugs he may be abusing, he could suffer an overdose. Or, his addiction could worsen since it is not being detected by the workers' comp claims administrator."

Immunoassay tests are generally cheap, fast and readily available. However, they are not designed for, nor are they very effective for, many clinical applications on their own.

"Take a worker who is being prescribed pain medications and is overusing them. The worker runs out of his or her medication and then borrows some from a friend or family member and even further supplements by abusing heroin when these are unavailable," Passik said. "If his result on an immunoassay test comes back positive for an opioid, this lends a false sense of security that it is, in fact, the prescribed opioid that caused the result. This result is actually a ‘clinical false negative’ for the non-prescribed opioid and heroin.
If the clinician has a forensic mindset that sets out simply to catch people but not falsely accuse them, the testing would end there.” Another example might be seen in the worker prescribed an opioid for pain but also using cocaine, who knows not to use it within two to three days of doctors’ visits to avoid testing positive on the immunoassay. The immunoassay test would likely yield a false negative, and testing would, again, end there. “This worker could be quite vulnerable and might even engage in the type of self-deception whereby he convinces himself that he has no drug problem because he can stop in time to produce a negative specimen for cocaine,” said Passik. The mixing of cocaine or heroin and prescribed and borrowed pain medications would make the worker susceptible to an overdose and to other drug interactions or to triggering his addiction. But the medical provider in this case would have no idea the person is abusing drugs.

"That's the rub," Passik said. "If I were using UDT in a workers' comp setting, I would have a more flexible policy that allows the provider to use his clinical judgment to determine whether to send either positive or negative results from immunoassay tests to a lab for confirmation testing, or simply skip the immunoassay test and go straight to the lab."

Immunoassay tests often produce false negative results because of the high cutoff levels that prevent the tests from detecting low levels of medications. They may also fail to detect opioid-like medications such as tramadol and tapentadol, as well as synthetic opioids such as fentanyl and methadone. False positive results also occur because certain immunoassay tests are subject to cross-reactivity from other medications and over-the-counter drugs. And there is limited specificity for certain medications within a class.

Liquid chromatography tests, on the other hand, enable detection of a much more expansive list of drugs. This is significant, as virtually all injured workers on opioid therapy would be expected to test positive on a drug screening. The liquid chromatography test could detect which opioid was present in the injured worker’s system and at which levels. In a 2012 study that analyzed results for point-of-care tests using immunoassay in physicians' offices or labs, Millennium Health found 27% of the test results were incorrectly identified as positive for oxycodone/oxymorphone. The low sensitivity of immunoassay tests can mistakenly identify codeine, morphine or hydrocodone as the same drugs. Similarly, the study results showed the immunoassay tests missed the identification of benzodiazepines in 39% of the results.

One example of clinical chromatography is liquid chromatography tandem mass spectrometry (LC/MS-MS). These tests are far more accurate than immunoassay tests, can identify parent medication and metabolites and identify specific medications, rather than just drug classes. "Professionals can now accurately test with both great sensitivity and specificity to understand whether patients are taking their prescribed medication, avoiding the use of non-prescribed licit controlled substances and whether or not they are using illicit drugs, which allows for better clinical decision making," Passik explained. "LC/MS-MS results are now rapidly available to clinicians, allowing for a much greater integration of these results into clinical practice."
In fact, Passik says much of the growth in the use of LC/MS-MS in recent years is because of the speed with which results can now be obtained, often within 24 hours. In terms of drug monitoring for injured workers, Passik says immunoassay testing alone does not provide the physician with an accurate basis on which to make good clinical decisions. These tests may be positive for opiates – which, if the person has been prescribed opiates, would be expected. "In this case, a positive result would need to be sent to the lab to confirm that the opioid detected in the test was solely the medication prescribed and there are no other licit — or illicit — drugs present. The immunoassay positive result by itself doesn't provide enough information," Passik said. "However, if the worker is well known to the prescriber and has a long history of UDTs showing he is taking his medications as prescribed, the provider might decide the immunoassay test result will suffice at that point. But, again, it would need to be in the context of appropriate results of UDTs and a clinical exam that do not suggest otherwise." Beyond the confusion about the types of UDT, a handful of unscrupulous clinicians are overusing the tests by performing them in their offices or labs they own, regardless of the patient's risk factors for abuse or overdose. Payers are overcharged by these providers, as they do more testing than is necessary and charge for the initial test, analysis and confirmatory test (because virtually all tests on injured workers receiving opioid therapy would be positive), resulting in three separate bills. There are also questions surrounding the frequency with which these tests should be performed on a given injured worker. Passik and other experts say the frequency of the tests should be determined by a medical provider based on the injured worker's risk factors. An injured worker who is depressed, male, a smoker and has a personal or family history of substance abuse would likely warrant more frequent testing than someone with no known risk factors who is fully cooperating with those handling his claims and is eager to do, or is already doing, light duty work. It’s a tough call, and, so far, it is not an exact science. "If the patient is older and has no history of addiction or other risk factors, you would probably test her a couple of times a year," Passik said. "But a coal miner in southeastern Kentucky who has been traumatized from an accident, has addiction history in his family, lives in an area where he can make money [by selling the drugs] — that's a high risk person who likely needs to get tested more often. Most people fall in between, so it’s best to rely on the clinician’s extensive training and individual assessments of their patients and potential risk factors to consider when developing a treatment plan." Part of the decision making on the part of medical providers involves figuring out strategies to integrate the two methods of testing, immunoassay and chromatography – "specificity when you need it and the frequency when needed so you can do it in the most cost effective fashion," Passik said. "The tests should be integrated in a smart way." The nature of workplace injuries is such that more testing up front may be required. "Unfortunately, workers' compensation is heavily loaded with high-risk patients," Passik said. "They tend to be younger, traumatized because they are injured, and suffer from depression — all of which are risk factors for addiction." 
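To make the monitoring logic concrete, here is a minimal, purely illustrative sketch of the kind of decision rule described above: confirm screens by LC/MS-MS with a clinical rather than forensic mindset, and scale testing frequency to risk factors. The thresholds, risk factors and function names are hypothetical and are not clinical guidance.

```python
# Illustrative sketch only -- hypothetical thresholds, not clinical guidance.
from dataclasses import dataclass, field

@dataclass
class WorkerProfile:
    prescribed_opioid: bool
    risk_factors: set = field(default_factory=set)  # e.g. {"depression", "smoker", "family_history"}

def tests_per_year(profile: WorkerProfile) -> int:
    """Hypothetical frequency rule: more risk factors, more frequent testing."""
    if len(profile.risk_factors) >= 3:
        return 8          # high risk
    if profile.risk_factors:
        return 4          # moderate risk: quarterly
    return 2              # low risk: a couple of times a year

def send_for_confirmation(profile: WorkerProfile, screen_positive_for_opioids: bool) -> bool:
    """Clinical (not forensic) mindset: confirm negatives as well as positives.

    A worker on prescribed opioids is expected to screen positive, so a bare
    positive tells the prescriber little, and a negative may signal diversion.
    Either way, LC/MS-MS confirmation identifies the specific drugs present.
    """
    if profile.prescribed_opioid:
        return True       # always confirm: which opioid, and is anything else present?
    return screen_positive_for_opioids or bool(profile.risk_factors)

worker = WorkerProfile(prescribed_opioid=True, risk_factors={"depression", "smoker", "family_history"})
print(tests_per_year(worker), send_for_confirmation(worker, screen_positive_for_opioids=True))
```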
The best advice for practitioners is to look for thorough documentation from providers, communicate with all parties, especially the injured worker, and become informed on the type and frequency of UDTs performed for each injured worker.

3 Key Steps for Predictive Analytics

The basis of competitive advantage in predictive analytics has changed. Merely having the capability used to be an edge; now you have to do more.

The steady drumbeat about the dire need for data and predictive analytics integration has been sounding for several years now. Slowly, many carriers have started to wake up to the fact that predictive analytics for underwriting is here to stay. According to Valen Analytics’ 2015 Summit Survey, 45% of insurers who use analytics have started within the past two years, and, of those that don’t currently implement analytics, 56% recognize the urgency and plan to do so within a year. Although it used to be a competitive advantage in the sense that few were using predictive analytics, it can now be viewed as table stakes to protect your business from competitors.

The real competitive advantage, however, now comes from how you implement predictive analytics within your underwriting team and focus its potential on strategic business issues. New competitors and disruptors like Google won’t politely wait around for insurers to innovate. The window to play catch-up with the rest of tech-driven businesses is getting narrower every day, and it’s either do or die for the traditional insurance carrier.

All of this buzz about data and predictive analytics and its importance can be deafening in many ways. The most important question continues to be where to get started. The most pertinent question is: What exactly are you trying to solve? Using analytics because everyone is doing it will get you nowhere fast. You need to solve important, tangible business problems with data-driven and analytic strategies. Which analytic approach is best, and how is it possible to evaluate its effectiveness? Many insurers grapple with these questions, and it’s high time the issue is addressed head-on with tangible steps that apply to any insurer with any business problem. There are three key steps to follow.

First Step: You need senior-level commitment. You consume data to gain insights that will solve particular problems and achieve specific objectives. Once you define the problem to solve, make sure that all the relevant stakeholders understand the business goals from the beginning and that you have secured executive commitment/sponsorship. Next, get agreement up front on the metrics to measure success. Valen’s recent survey showed that loss ratio was the No. 1 issue for underwriting analytics. Whether it’s loss ratio, pricing competitiveness, premium growth or something else, create a baseline so you can show before and after results with your analytics project. Remember to start small and build on early wins; don’t boil the ocean right out of the gate. Pick a portion of your policies or a test group of underwriters and run a limited pilot project. That’s the best way to get something started sooner rather than later, prove you have the right process in place and scale as you see success. Finally, consider your risk appetite for any particular initiative. What are the assumptions and sensitivities in your predictive model, and how will those affect projected results? Don’t forget to think through how to integrate the model within your existing workflow.

Second Step: Gain organizational buy-in. It’s important to ask yourself: If you lead, will they follow? Data analytics can only be successful if developed and deployed in the right environment. You have to retool your people so that underwriters don’t feel that data analytics are a threat to their expertise, or actuaries to their tried-and-true pricing models.
Given the choice between leading a large-scale change management initiative and getting a root canal, you may be picking up the phone to call the dentist right now. However, it doesn’t have to be that way. Following a thoughtful and straightforward process that involves all stakeholders early goes a long way. Make sure to prepare the following:
  • A solid business case
  • Plan for cultural adoption
  • Clear, straightforward processes
  • A way to be transparent and share results (both good and bad)
  • Training and tech support
  • Ways to adjust – be open to feedback, evaluate it objectively and make necessary changes.
Third Step: Assess your organization’s capabilities and resources. A predictive analytics engagement can be done in-house, handled by a consultant or built and hosted by a modeling firm. Regardless of whether the data analytics project will be internally or externally developed, your assessment should be equally rigorous.
  • Data considerations. Do you have adequate data in-house to build a robust predictive model? If not, which external data sources will help you fill in the gaps?
  • Modeling best practices. Whether internal or external, do you have a solid approach to data custody, data partitioning, model validation and choosing the right type of model for your specific application?
  • IT resources. Ensure that scope is accurately defined and know when you will be able to implement the model. If you are swamped by an IT backlog of 18-24-plus months, you will lose competitive ground.
  • Reporting. If it can be measured, it can be managed. Reporting should include success metrics easily available to all stakeholders, along with real-time insights so that your underwriters can make changes to improve risk selection and pricing decisions.
Boiling this down, what’s critical is that you align a data analytics initiative to a strategic business priority. Once you do that, it will be far easier to garner the time and attention required across the organization. Remember, incorporating predictive analytics isn’t just about technology. Success is heavily dependent on people and process. Make sure your first steps are doable and measurable; you can’t change an entire organization or even one department overnight. Define a small pilot project; test, learn and create early wins to gain momentum; involve all the relevant stakeholders along the way; and find internal champions to share your progress. Recognize that whether you are building a data analytics solution internally, hiring a solution provider or doing some of both, there are substantial costs involved. Having objective criteria to evaluate your options will help you make the right decisions and arm you with the necessary data to justify the investment down the road.
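The first step's advice to create a baseline and show before-and-after results can be made concrete with a little arithmetic. The sketch below compares loss ratios for a hypothetical pilot group against a baseline period; the figures and function names are invented for illustration and are not drawn from the Valen survey.

```python
# Hypothetical figures -- a minimal sketch of measuring a pilot against a baseline.
def loss_ratio(incurred_losses: float, earned_premium: float) -> float:
    """Loss ratio = incurred losses / earned premium."""
    return incurred_losses / earned_premium

# Baseline period (before the model) vs. pilot period (underwriters using the model).
baseline = loss_ratio(incurred_losses=6_300_000, earned_premium=9_000_000)   # 0.70
pilot    = loss_ratio(incurred_losses=5_400_000, earned_premium=8_500_000)   # ~0.64

improvement_pts = (baseline - pilot) * 100
print(f"Baseline loss ratio: {baseline:.1%}")
print(f"Pilot loss ratio:    {pilot:.1%}")
print(f"Improvement:         {improvement_pts:.1f} points")
```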

Credit Data Flash Yellow Alert on GDP

Credit data have been deteriorating for six months. It's not yet a red alert but is definitely cause for concern for the U.S. economy.

In the economic cycle of 2003-2007, one question we asked again and again was, “Is the U.S. running on a business cycle or a credit cycle?” How much of the growth was sustainable, and how much depended on an expansion of credit? That question was prompted by a credit data series we have tracked for decades, one that tells a very important story about the character of the U.S. economy. That credit data series is the relationship of total U.S. credit market debt relative to U.S. GDP. Let’s try to put this in English, because the credit data is sending a warning signal about the U.S. economy.

What is total U.S. credit market debt? It is an approximation for total debt in the U.S. economy at any point in time. It’s the sum total of U.S. government debt, corporate debt, household debt, state and local municipal debt, financial sector and non-corporate business debt outstanding. It very much captures the dollar amount of leverage in the economy. GDP is a very straightforward number: the sum total of the goods and services we produce as a nation. So what we are looking at is how financial leverage in the economy has changed over time relative to the growth of the actual economy itself.

What is clearly most important is the long-term trend. From the official inception of this series in the early 1950s until the early 1980s, this representation of systemic leverage in the U.S. grew at a moderate pace. Liftoff occurred in the early 1980s as the Baby Boom generation came of age. We believe two important demographic issues help explain this change.

First, there is an old saying on Wall Street: People do not repeat the mistakes of their parents; they repeat the mistakes of their grandparents. From the early 1950s through the early 1980s, the generation that lived through the Great Depression was largely alive and well and able to tell their stories. A generation was taught during the Depression that excessive personal debt can ruin household financial outcomes, so debt relative to GDP in the U.S. flatlined from 1964 through 1980. As our GDP grew, our leverage grew in commensurate fashion. Dare we say we lived within our means? To a point, there is truth to this comment.

Alternatively, from the early 1980s onward, we witnessed an intergenerational change in attitudes toward leverage. Grandparents who lived through the Depression were no longer around to recite personal stories. The Baby Boom generation moved to the suburbs, bought larger houses, sent the kids to private schools, financed college educations with home equity lines of credit and carried personal credit balances that would have been considered nightmarish to their grandparents. The multi-decade accelerant to this trend of ever-increasing systemic leverage relative to GDP? Continuously lower interest rates for 35 years to a level no one ever believed imaginable, grandparents or otherwise. That is where we find ourselves today.
Why have we led you on this narrative? Increasing leverage has been a key underpinning to total U.S. economic growth for decades. Debt has grown much faster than GDP since 1980. For 3 1/2 decades now, in very large part, expanding system-wide credit has driven the economy. Although U.S. total debt relative to GDP has fallen since the peak of 2008, in absolute dollar terms, U.S. total credit market debt has actually increased from $50 trillion to $60 trillion over this time. Moreover, U.S. federal debt has grown from $8 trillion to close to $18.5 trillion since Jan. 1, 2009, very much offsetting the deflationary pressures of private sector debt defaults. To suggest that credit expansion has been a key support to the real U.S. economy is an understatement.
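For readers who want the arithmetic behind the series: the ratio is simply total credit market debt divided by GDP. The debt figures in the sketch below come from the paragraph above; the GDP figures are rough approximations added only to illustrate the calculation and are not from the article's source series.

```python
# Debt figures from the article; GDP figures are approximate, for illustration only.
def debt_to_gdp(total_credit_market_debt: float, gdp: float) -> float:
    """Total U.S. credit market debt relative to GDP."""
    return total_credit_market_debt / gdp

# Trillions of dollars (GDP values approximate).
print(f"2008 peak: {debt_to_gdp(50.0, 14.7):.2f}x GDP")
print(f"Recent:    {debt_to_gdp(60.0, 18.0):.2f}x GDP")
```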
By no means are these comments on leverage in the U.S. economy new news, so why bring the issue up now? We believe it is very important to remember just how meaningful credit flows are to the U.S. economy now because a key indicator of U.S. credit conditions we monitor on a continuing basis has been deteriorating for the last six months. That indicator is the current level of the National Association of Credit Managers Index. As per the National Association of Credit Management (NACM), the Credit Managers Index is a monthly survey of responses from U.S. credit and collections professionals rating factors such as sales, credit availability, new credit applications, accounts placed on collection, etc. The NACM tells us that numeric response levels above 50 represent an economy in expansionary mode, which means readings below 50 connote economic contraction. For now, the index rests in territory connoting economic expansion, but the index is also sitting quite near a six-year low. In our April monthly discussion, we spoke of the slowing in the U.S. economy in the first quarter of 2015. We highlighted the Atlanta Fed GDPNow model, which turned out to be very correct in its assessment of Q1 U.S. GDP. While the Atlanta Fed was predicting a 0.1% Q1 GDP growth rate number, the Blue Chip Economists were expecting 1.4% growth. When the 0.2% number was reported, it turns out the Atlanta Fed GDPNow model was virtually right on the mark. As of now, the Atlanta Fed GDPNow model is predicting a 0.8% GDP number for Q2 in the U.S. (the Blue Chip Economists are expecting a 3.2% number).
Now is the time to keep a close eye on credit expansion in the U.S. We’ve been here before in the current cycle as the economy has moved in fits and starts in terms of the character of growth. Still, a slowing in the macro U.S. economy along with a slowing in credit expansion intimated by the NACM Credit Managers Index is a yellow light for overall U.S. growth. A drop below current levels in the NACM numbers would heighten our sense of caution regarding the U.S. economy.

Although no two economic cycles are ever identical in character, fingerprint similarities exist. At least for the last two to three decades, the rhythm of credit availability and credit use has been one of those key similarities. Although we know the past is never a guaranteed indicator of the future, the NACM Credit Managers Index was an extremely helpful indicator in the last cycle. This index dropped into contractionary territory (below 50) in December 2007. In the clarity of hindsight, that very month marked the onset of the Great Recession of late 2007 through early 2009. Again, for now we are looking at a yellow light for both credit expansion and the U.S. economy. A further drop through the lows of the last six years in the Credit Managers Index would not be a good sign, but we are not there yet.

As always, we believe achieving successful investment outcomes over time is not about having all of the right answers, but rather asking the correct questions and focusing on key indicators. September 2015 will mark the seven-year point of the Fed sponsoring 0% short-term interest rates in the U.S. If this unprecedented Fed experiment was not at least in part aimed at sparking U.S. credit expansion, then what was it all about? The 0% interest rate experiment is likely to end soon. The important question now becomes: just what will this mean for U.S. credit expansion ahead? We believe the NACM Credit Managers Index in forward months will reveal the answer.

'Safer' Credit Cards Already Vulnerable

Chip-and-PIN credit cards are supposed to cut way down on exposure to hacking, but early results from the rollout show potential problems.

A recent Gallup survey found that 69% of Americans worry “frequently” or “occasionally” about having a credit card compromised by computer hackers. It’s not shocking. Consumers are becoming more educated on the topic, and financial institutions are beginning to do more to combat fraud, including introducing new types of credit cards. One example of the latter is chip-and-PIN technology, which everyone from consumers to the president has hailed for its ability to help prevent fraud. But is it the panacea that it’s been made out to be? Let’s take a closer look at exactly what this technology entails. Unlike cards that use a magnetic stripe containing a user’s account information, chip cards implement an embedded microprocessor that contains the cardholder’s information in a way that renders it invisible even if hackers grab payment data while it is in transit between merchants and banks. The technology also generates unique information that is difficult to fake. There is a cryptogram that allows banks to see if the data flow has been modified and a counter that registers each sequential time the card is used (sort of like the numbers on a check), so that a would-be fraudster would have to guess the exact historical and dynamic transaction number for a charge to be approved. Already used in every other G20 country as a more secure payment method, chip-and-PIN cards can be found on the consumer side of a global payment system known as EMV (short for Europay, MasterCard and Visa). The system will be rolled out in the U.S. in 2015, and many of us in the banking and data-security industries believe that it will stanch the flow of money lost to hackers while simultaneously cutting down on credit- and debit-card fraud. MasterCard, Visa and American Express have already begun sending out chip cards to their American cardholders. The technology is expensive—the rollout of chip cards in the U.S. will cost an estimated $8 billion—and this cost may balloon exponentially if the implementation of the new technology is done incorrectly, as a recent spate of fraudulent charges using chip-and-PIN-based technology shows. This recent trend is one early sign that chip-and-PIN may not be the cure-all many consumers were hoping for, at least during the rollout phase. According to Brian Krebs, during the past week, “at least three U.S. financial institutions reported receiving tens of thousands of dollars in fraudulent credit- and debit-card transactions coming from Brazil and hitting card accounts stolen in recent retail heists, principally cards compromised as part of the breach at Home Depot.” The curious part about this spate of credit- and debit-card fraud is that fraudsters used account information pilfered from old-school magnetic stripe cards skimmed in that attack and ran them as EMV purchases in what’s called a “replay” attack. “After capturing traffic from a real EMV-based chip card transaction, the thieves could insert stolen card data into the transaction stream, while modifying the merchant and acquirer bank account on the fly,” Krebs reported. It sounds confusing, but the bottom line is money was stolen. As with many scams, this particular evolution in the world of hacking for dollars cannot succeed without human error, which is probably the biggest liability in the coming chip card rollout. 
Krebs spoke with Avivah Litan, a fraud analyst with Gartner, who said, “It appears with these attacks that the crooks aren’t breaking the EMV protocol but taking advantage of bad implementations of it.” In a similar attack on Canadian banks a few months ago, one bank suffered a large loss because it was not checking the cryptogram and counter data, essential parts of the protocol.

As with all solutions in the realm of data security, there is no such thing as a sure thing. Whether the hackers banked on a false sense of security at the institutional level, knowing that the protocols might be deemed an unnecessary expense, or the recent attacks are merely part of the chip card learning curve, this latest technology is only as good as its implementation. So, despite the best efforts of those in the financial services industry, the truth is I can’t blame anyone for worrying a bit about credit card fraud.

The good news is that in almost all cases, the consumers aren’t responsible when they’ve been hit with fraud. The banks take care of it (though it can be trickier with debit cards, because money has actually left your account). These days, though, the reality is that you are your own first line of defense against fraudulent charges. That means pulling your credit reports at least once each year at AnnualCreditReport.com, monitoring your credit scores regularly for any sudden and unexplained changes (you can do that using free online tools, including those at Credit.com), keeping a close eye on your bank and credit card accounts daily and signing up for transactional monitoring programs offered by your financial institutions.
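The failure Litan describes, skipping the cryptogram and counter checks, is easy to picture in simplified form. The sketch below is a deliberately stripped-down, hypothetical stand-in for an issuer-side check, not the actual EMV specification (real ARQC verification involves card-specific keys, session keys and issuer rules); it only shows why a replayed counter value or a forged cryptogram should be declined.

```python
# Drastically simplified illustration of issuer-side checks -- not the real EMV spec.
import hashlib
import hmac

def expected_cryptogram(card_key: bytes, atc: int, amount_cents: int, merchant_id: str) -> bytes:
    """Stand-in for the cryptogram: a MAC over transaction data using a card-specific key."""
    msg = f"{atc}|{amount_cents}|{merchant_id}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).digest()

def authorize(card_key: bytes, last_seen_atc: int, txn: dict) -> bool:
    """Decline if the counter replays/regresses or the cryptogram doesn't verify."""
    if txn["atc"] <= last_seen_atc:
        return False  # replayed or stale counter: the "replay" attack described above
    good = expected_cryptogram(card_key, txn["atc"], txn["amount_cents"], txn["merchant_id"])
    return hmac.compare_digest(good, txn["cryptogram"])

# Usage: a replayed transaction (old counter value) is declined even if its
# cryptogram was once valid -- but only if the issuer actually performs the check.
key = b"per-card-secret-key"
legit = {"atc": 42, "amount_cents": 5000, "merchant_id": "M123"}
legit["cryptogram"] = expected_cryptogram(key, 42, 5000, "M123")
print(authorize(key, last_seen_atc=41, txn=legit))   # True  -- first presentation
print(authorize(key, last_seen_atc=42, txn=legit))   # False -- replay caught by the counter
```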

Maturing Use of Mobile in Insurance

Mobile is now widely used to interact with customers on personal lines and to help with loss and risk management in commercial lines.

“Can you hear me now?” The use of mobile technology is indeed maturing in the insurance industry! Recent SMA research shows that, over the last year, insurers have increasingly invested in developing digital strategies. Most intend to migrate, over time, to a comprehensive digital insurer approach. Others pick a specific area to work on, such as mobile agent/broker support or self-servicing capabilities for policyholders. Although both approaches are perfectly justifiable, we strongly recommend tying all digital and mobile initiatives together under a “digital insurer” strategy. This approach will ensure consistency between business functions, market segments and customer experiences – and it is the approach that will help prioritize investments.

A big part of a digital strategy is a plan for implementing mobile technology. Most phones are not being used primarily to make calls anymore. (When I was overseas last week and my phone didn't work, I experienced firsthand how much we all rely on our smartphones for information and transactions, restaurant and hotel bookings, travel info, weather, banking and shopping.) Today, people expect to be able to transact on their mobile device as if it were a desktop or laptop.

So how is our industry responding to these expectations? Especially in the direct writing, personal lines space, mobile has become a mature and widely implemented technology. Direct writers support pretty much all informational and transactional interactions with their policyholders via mobile devices. In the last year or two, we have also seen carriers with agent/broker distribution channels invest heavily in mobile services. This investment tends to be triggered by one or more of three drivers: cost savings because of self-servicing; distribution channel experience (ease of doing business) and expectations; or competitive pressure. Almost all of these carriers start their mobile implementations with purely informational capabilities, followed by enabling transactions. In addition, some of the multi-channel carriers are now starting to expand their mobile capabilities beyond the distribution channels into policyholder relations, carefully balancing what to communicate directly to policyholders and how to continue to fully engage the agent/broker.

On the commercial side of the business, we have seen a slightly different approach to mobile enablement. Carriers first built mobile capabilities around loss or risk management functions, including information on replacement materials and costs, uploading pictures of damaged assets, providing tools for risk assessments or location-specific information. In most cases, these capabilities were first rolled out to distributors; now we see some carriers that also offer them to their policyholders. Especially in the commercial segment, however, insurers are very cautious about reaching out directly to policyholders, and almost all communication is a three-way process among carrier, agent/broker and policyholder.

As both our research and our interactions with specific insurers have shown, mobile strategy and implementation have matured rapidly. Our industry is definitely past the “can you hear me now” days. The next focus area will be how to integrate mobile into a true digital strategy and how to capitalize on the information we are starting to gather on our policyholders and partners. That is the point where all investments made will truly start paying off.

How to Apply 'Lean' to Insurance

"Lean" techniques are historically associated with manufacturers but can do a lot to improve the business of brokers and risk managers.

If you're like many employers, you say you run your business in this order: people first, process second and profit last. But employees and customers alike feel as if it's profit first, process second, then people last. With 60% to 70% of your employees disengaged, it's not time to change the way they think, but the way you think first. If you do, you'll make more money by putting things in the right order. How you run your business indicates how you sell. With more agents "spreadsheet selling," just based on numbers, learning how to identify and remove root causes of customer problems has gone by the wayside. One could argue that few producers even know how to sell anything other than spreadsheets. When there are other alternatives for customers, however, spreadsheets add no real value in customers' eyes. Toyota's definition of adding value, along with that of other companies that have adopted the principles of lean manufacturing, is the one to study when trying to improve your business and help customers improve theirs, too. At Toyota, it really is people first, process second and profit last.

Before we get to how to apply Toyota's thinking to insurance, let's study how its version of lean manufacturing made its way from America to Japan. Early on during World War II, America was in desperate need of quality and speedy production to build machinery to fight and win the war. Tanks, airplanes, guns and submarines were in short supply when Japan surprised America at Pearl Harbor. The U.S. government turned to the Training Within Industry program to educate American manufacturers on how to improve quality and reduce costs while increasing the rate of production. With a crisis threatening to destroy everything we knew, we developed an enlightened sense of purpose. American executives listened and changed the way they looked at people and how they built things. The principles of lean were born.

After WWII ended, Gen. Douglas MacArthur was given full responsibility to rebuild the Japanese economy. When he arrived, he found devastation, burned-out cities with no functional capacity and people existing on just 800 calories per day. He also discovered he had no way to distribute the propaganda necessary to convince Japanese citizens of what Americans wanted to achieve. With quality Japanese radios in short supply, MacArthur turned to Bell Labs, which turned to employee Dr. Walter Shewhart for help improving radio communications. Shewhart, who was unavailable, recommended that 29-year-old engineer Homer Sarasohn be sent to Japan to teach statistical quality control. Sarasohn then spent four years working closely with Japanese scientists and engineers, improving their knowledge about how to best manufacture and sell goods and services.

When Sarasohn left in 1950, the reins of teaching continuous improvement were turned over to Dr. W. Edwards Deming. Deming expanded on what Sarasohn began, and lean manufacturing took hold at hundreds of companies, including Toyota, one of the many Japanese companies Deming consulted for until he died in 1993.

Today, Toyota is known for its driving principle: respect for people is the core of the culture. All decisions for improvement are made with this principle in mind. Even when it comes to reducing labor costs, respect for people is at the forefront. For example: Toyota has never laid off a single employee. It has, instead, turned to employees to improve their processes by finding wasteful steps and activities that impede the value customers demand.
And when it comes to profitability, Toyota's profits in 2013 exceeded those of Ford, GM and Chrysler combined, even though Toyota built roughly half the number of cars. So, how can you as an insurance agent/risk manager use the same concepts to grow and improve your business? It's quite simple:
  1. Improve capacity by first engaging employees in identifying wasteful activities. Then reduce or eliminate the activities. Activities such as:
    1. (T)ransporting something.
    2. (I)nventory–keeping too much or failing to meet customer demand.
    3. (M)otion–looking, reaching or stooping to get something that isn't in its best place.
    4. (W)aiting for information. How often do you wait while someone else produces material? How much time is spent waiting for loss runs, proposals, and other data?
    5. (O)verproducing information. For example: sending out copies of emails to multiple parties unnecessarily--emails that take time to be read by each recipient.
    6. (O)veranalyzing information or taking too much time to make a decision.
    7. Creating (D)efective information that must be redone. Certificates, proposals and routinely changing human resource policies come to mind.
    8. Failing to maintain a (S)afe working culture.
These are, based on the initials, the TIMWOODS of waste, and identifying them is your starting point.
  2. As capacity improves, employees have more time on their hands. The first cost you'll reduce is overtime. That's because employees will meet production demands better. Remember, you're looking to reduce, or eliminate altogether, processes and activities that add no customer value. A secondary benefit? Employees won't feel that their valuable skills are wasted on activities they don't enjoy anyway.
  3. As capacity improves, share what you learned about your improvement efforts with customers and their supply chain. You'll be busy with ample prospective opportunities.
  4. Then offer to work with customers and their supply chain to teach them how to use what you've learned.
  5. Develop strategies using your new capacity to expand your business. Focus on creating opportunities that reduce risk and improve internal and external customer efficiencies. That's value through the eyes of your customer.
Don't believe lead times matter within the service industry? Look at what Western Union accomplished: Lead times were reduced from 22 days to just 19 minutes.
Lean has benefits to offer the entire insurance and risk management community. We've prioritized profits over processes and people and missed out. It's time to re-order our priorities.  
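As a rough, purely hypothetical illustration of the first step, engaging employees to log and categorize wasteful activities, a simple tally by TIMWOODS category can show where the hours go each week; the activity log and figures below are invented.

```python
# Hypothetical activity log -- a minimal sketch of tallying waste by TIMWOODS category.
from collections import defaultdict

activity_log = [
    # (description, TIMWOODS category, hours this week)
    ("waiting for loss runs from the carrier", "Waiting", 3.5),
    ("re-issuing certificates with wrong holder names", "Defects", 2.0),
    ("cc'ing the whole team on routine emails", "Overproduction", 1.5),
    ("second round of proposal review with no changes made", "Overanalyzing", 2.5),
    ("hunting for the latest HR policy version", "Motion", 1.0),
]

hours_by_waste = defaultdict(float)
for description, category, hours in activity_log:
    hours_by_waste[category] += hours

# Print the biggest waste categories first -- the starting point for improvement.
for category, hours in sorted(hours_by_waste.items(), key=lambda kv: -kv[1]):
    print(f"{category:<15} {hours:4.1f} hours/week")
```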

How to Find Mobility Solutions (Part 2)

Until insurers (and agents and brokers) can operate entirely using apps on smart devices, they can't claim to have mobility solutions.

Before continuing from the "How to Find Mobility Solutions (Part 1)" post, I want to repeat my bias: I think that until insurers, and insurance agencies/brokers, can operate entirely using apps on smart devices, they can't really call themselves "mobile-next."

Potential insurance mobility solutions

Focus on enabling producers to use a smart device that has the requisite apps to:
  • Manage their day (and week and month) -- seeing a list of sales opportunities, setting up appointments, finding meeting locations and going to meetings, using the native calendar/GPS/mapping capabilities of the smart device
  • Get notices about traffic conditions and suggested alternative routes to take if the producer is driving to a meeting
  • Get alerts about severe weather
  • Pull information about the customer from an agency management system or a carrier's customer relationship management (CRM) system before the meeting
  • Note comments about the progress of each sale after each meeting, whether by using the keyboard, stylus (if applicable) or voice entry
  • See charts showing progress-to-date or progress-to-goals
  • Pull all relevant forms into a "potential sale area" on the device -- forms related to the sale of a specific line of insurance and required by the insurance company or regulators
  • View the status of each sale in process and see the steps the carrier still needs to complete, with time estimates of each step
  • Get a quote for any insurance products the producer is allowed to sell
  • Walk a prospect through a policy application form either on the producer's device or by sending it to the prospect's smart device
  • Coordinate a 3-way video session with a subject-matter expert, the prospective client and the producer to answer questions the prospect or producer might have about the insurance product
  • Start a video session with a customer-service representative (CSR) or other colleague in the agency or in the carrier to ask questions or collaborate on an issue - from campaign management to new products to new requirements triggered by new regulations
  • Complete the policy application form, including getting the prospect's e-signature if that can be done at the moment. If completion isn't possible at the time of the meeting with the prospect, then enable the producer to store the policy application and filled-in data on the producer's smart device and also upload the information to the relevant agency or carrier systems
  • Get alerts about any of the producer's customers filing a claim, including the "when, where and why" of the claim
  • See how much time until the next meeting takes place (this is specifically for a smart watch) and get an alert (sound or haptic touch on the wrist) when the producer is close to or at a meeting location.
I realize this is only a starter list of mobile applications for a producer. What would you add?
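One way to make this starter list concrete is to treat it as a data contract between the producer's device and the agency or carrier systems. The sketch below is a hypothetical, heavily simplified model of a meeting-prep payload; none of the type or field names come from an actual carrier API.

```python
# Hypothetical, simplified model of a producer's meeting-prep payload -- not a real carrier API.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ProspectSummary:
    name: str
    lines_of_interest: list[str]          # e.g. ["BOP", "Workers Comp"]
    crm_notes: str                        # pulled from the agency management or CRM system

@dataclass
class Meeting:
    prospect: ProspectSummary
    location: str
    starts_at: datetime
    required_forms: list[str]             # forms for the line of insurance being quoted
    open_claim_alerts: list[str]          # the "when, where and why" of any recent claims

def minutes_until(meeting: Meeting, now: Optional[datetime] = None) -> int:
    """Smart-watch style countdown to the next meeting."""
    now = now or datetime.now()
    return max(0, int((meeting.starts_at - now).total_seconds() // 60))
```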

Barry Rabkin


Barry Rabkin is a technology-focused insurance industry analyst. His research focuses on areas where current and emerging technology affects insurance commerce, markets, customers and channels. He has been involved with the insurance industry for more than 35 years.

It’s Time for a Data Breach Warning Label

Warning labels are required on food and credit card contracts. It's time to make companies list data breaches and how they were handled.

The breach at Home Depot is only the most recent in a torrent of high-profile data compromises. Data and identity-related crimes are at record levels. Consumers are in uncharted territory, which raises a question: Is it time to do for data breaches and cybersecurity what the nutritional label did for food? I believe we need a Breach Disclosure Box, and that it can be a powerful consumer information and education tool.

Once just a normal part of doing business, data breaches today can sap a company’s bottom line -- and that's the best-case scenario. At their worst, data breaches represent an extinction-level event. The real-world effects for consumers can be catastrophic. Because there is a patchwork of state and federal laws related to data security—some good, some bad, all indecipherable—and none that work together, it’s impossible to know just how safe your personally identifiable information is, and has been, at the places where you shop and with the companies and professional organizations where you do business. Data security, identity-related consumer issues and privacy are all areas screaming for big-picture solutions. This is a situation in search of a paradigm shift—one that produces tools that enable consumers to make informed choices.

There is a precedent that could serve as a template. It was passed in 1988, though not implemented until 2000. You may recognize its name—it’s called the Schumer Box. This is the law that put the fine print of credit terms and conditions in your face—bigger, bolder and easier to understand. You see it all the time featured in those countless pleas for your credit business that land in your email and your mailbox. The Schumer Box is simple. It requires that financial services companies provide certain information to the consumer when making a pitch for their business—information like long-term rates, the annual percentage rate for purchases and the cost of financing—and that the information be displayed in a standardized fashion. The Schumer Box is to credit cards what the nutritional label is to food.

A Concise Disclosure for Breaches

The Breach Disclosure Box that I am proposing would need to be simple, too. While I believe it is important to create a system that informs consumers about breaches, bear in mind that all breaches are not alike. There are breaches where the only piece of compromised information was a credit card number, which can be easily replaced and for which the consumer had zero liability. Then there are breaches involving Social Security numbers, detailed banking data or personal health information. These are very different situations. But they all share one thing in common: Something about you is “out there” and can be used by a criminal to commit either a crime against you or in your name. The “solution” — regardless of a breach’s severity — is the same. I place “solution” in scare quotes because it’s a misnomer to talk about solutions and identity-related crime in the same breath. There is no solution to the pandemic, only containment strategies and best practices. The Breach Disclosure Box would be a crucial part of data-related best practices at the consumer level, where it’s all about the 3 M's: Minimizing your exposure, monitoring your public records and financial accounts and managing any damage that occurs from data compromises.
Best practices can mean the difference between having a bad day and being financially ruined (or worse), and knowledge of a company’s data security track record can help consumers be better-informed about the risks they’re taking – and ultimately to decide if the risk is worth it. The Breach Disclosure Box would also be a catalyst for companies to step up their game on data security as well as design and implement a breach preparedness plan that promotes an urgent, transparent and empathetic response to any compromise of consumer and employee data. While the following list of Breach Box disclosures could be longer or shorter, the basic idea of a Breach Disclosure Box is essential to consumer safety in this ever-changing and crafty world of data-related crime and data breaches. The box should list:
  • How many times has this company been breached within the past five years?
  • If there has been a breach, what kind(s) of information was exposed?
  • Does this company encrypt all consumer and employee data?
  • Does this company have a breach notification policy?
  • What did the company offer affected consumers?
  • What type(s) of information are customers obligated, or not obligated, to provide?
  • Best practices for avoiding victimization (The 3 M’s)
The contents of the Breach Disclosure Box would ultimately have to be framed by lawmakers and interested parties intent on limiting the amount of ink spilled (or bytes used) to comply with whatever the legislation looks like when it leaves committee; but this bipartisan issue goes way beyond blue state-red state politics. When it comes to data-related crime, we’re all in the same state—a state of emergency.
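To show how compact such a disclosure could be, here is a hypothetical sketch of the Breach Disclosure Box as a structured record. The field names simply mirror the bullet list above; they are illustrative, not drawn from any actual legislation.

```python
# Hypothetical structure mirroring the bullet list above -- not an actual legal standard.
from dataclasses import dataclass

@dataclass
class BreachDisclosureBox:
    breaches_past_five_years: int
    data_types_exposed: list[str]          # e.g. ["payment card numbers"] or ["SSNs", "health records"]
    encrypts_all_consumer_and_employee_data: bool
    has_breach_notification_policy: bool
    remedies_offered_to_affected_consumers: str
    information_customers_must_provide: list[str]
    consumer_best_practices: str = "The 3 M's: minimize exposure, monitor accounts, manage damage"

# A hypothetical retailer's disclosure, rendered in one standardized record.
retailer = BreachDisclosureBox(
    breaches_past_five_years=1,
    data_types_exposed=["payment card numbers"],
    encrypts_all_consumer_and_employee_data=False,
    has_breach_notification_policy=True,
    remedies_offered_to_affected_consumers="12 months of credit monitoring",
    information_customers_must_provide=["name", "payment card"],
)
print(retailer)
```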

Alternative Strategies for Provider Networks

Although dealing with provider networks is wildly complex, there are concrete steps that employers can take to cut their medical costs.

In this second article regarding sustainability of provider networks and managing health plan costs, we will focus on carve-out programs, integration of provider delivery models and direct contracting. As referenced in the first article, the Affordable Care Act (ACA) has hurt re-pricing through preferred provider networks (PPNs). Claim amounts billed by specialty and institutional providers have escalated to such a level that preferred provider organizations (PPOs) have lost much of their appeal. As a result, commercial accountable care organizations (ACOs), direct employer/provider contracting, narrow network arrangements and cost-to-charge methodologies have gained significant market share. While the self-funded industry has begun applying many of these alternatives, a high percentage of employers are unwilling to fully embrace some of these changes. Instead, we are seeing intermediate steps through the application of network carve-outs, integration of existing PPOs with specialty care vendors and limited direct contracting. What outcomes do these intermediate steps offer an employer? To start with, better control of healthcare consumption and lower overall claim costs.

Let’s explore the basics of network carve-out programs. The simplest carve-out is an organ transplant product that removes claims from the underlying medical excess coverage and places them with a separate policy. In most cases, the transplant policy includes a centers of excellence network where the procedure must be completed for 100% of the claim to be eligible for reimbursement. When a non-participating facility provides the service, reimbursement may be limited to a lower percentage of the bill and will in many cases have caps. These products may include individual deductibles, waiting periods and lifetime maximums. Less common carve-out solutions are non-risk bearing and target specific treatment types, such as renal dialysis or surgical events. A renal or surgical carve-out is accomplished through a change in plan document provisions that move the service to a non-network benefit. This can be a challenge when dealing with national PPOs, which typically include these service providers in their networks. When considering any form of carve-out program, the client should take care to avoid any reference to a specific disease state and mind the gaps that could potentially exist between the plan document, underlying medical stop loss policy and carve-out policy or provision.

The industry is buzzing with the term “transparency,” yet most people are unable to determine the actual cost of service provided to patients. Solutions include the integration of existing PPO networks and specialty care providers through direct contracting and domestic tourism. Additionally, a number of surgical centers are now publishing fee schedules and treatment outcomes online. This disclosure is enticing patients to acquire services in these facilities. The result is the creation of carve-out referral agreements for self-funded employers with fees significantly lower than the most aggressive PPO contract. If the initial reports are accurate, then, in addition to significant savings, patients are experiencing shorter recovery times and fewer complications than through traditional networks. We are also seeing an integration of specialty care providers with traditional networks as a cost-effective tool.
In our experience, clients have integrated direct contracts with oncologists, orthopedists, surgical centers, dialysis centers and pain management clinics to more effectively manage care and cost. This approach may be challenged by traditional PPO networks, but the outcome is worth the effort. In a number of cases, we have found it effective to integrate PPOs for institutional services only and contract directly with medical groups based on a capitated model. In other situations, we have contracted with a PPO network for professional services tied to the Medicare Resource-Based Relative Value Scale (RBRVS) and re-priced the institutional claims on a cost-to-charge or reference-based pricing scenario. We will discuss reference-based pricing more in a coming article. For PPOs to remain relevant, they must adapt to these emerging innovative solutions. For some, innovation will start with direct contracting on behalf of our client health plans.

The process of direct contracting can be relatively painless when working with an independent practice association (IPA) or multispecialty medical group. The purpose of these groups is to establish and oversee patient protocols, referrals and outcomes management on behalf of their member providers with health plans and health maintenance organizations (HMOs). Medical groups typically contract with payers through discounted fee for service, Medicare RBRVS or capitation (pre-payment). While direct contracting can take many forms, our discussion will focus on provider engagement through capitation arrangements.

Since ACA's implementation, providers have become more receptive to the assumption of risk through direct capitation agreements with employer groups. In its purest form, capitation is essentially a monthly retainer paid to the provider for services to be rendered to the covered member. The provider is then responsible for delivering care, with the goal of making a profit on the monthly pre-payment. For this to be effective, the provider must have a patient population whose utilization and medical histories support this methodology.

Some may ask how capitation is possible without an HMO license. In some states, such as California, laws allowing the creation of HMOs do not require licensing for pre-payment arrangements when risk-sharing between various medical groups and institutions does not exist. Therefore, if a medical group contracts without sharing in a profit or risk pool with other unrelated practices, capitation may be allowed. This approach has been implemented and successfully tested through the Department of Managed Healthcare in California. With this in mind, I would caution employers against running out to look for a willing medical group. The challenge is to find the right medical group that can meet all of the client’s healthcare needs.

Will the capitated approach work with institutional providers? The simple answer is yes, though in our experience the process is difficult because many facilities struggle to clearly identify the cost of care, and hospitals do not control the direction of care. In settings where capitated institutional models are not practical, we have utilized hospital-only PPO carve-outs and reference-based reimbursement solutions with varying degrees of success. Providers are rushing to establish community risk assumption models, resulting in the elimination of traditional insurance contracts. We will address the provider direct model more in the following article.
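To make the capitation mechanics concrete: in its purest form, the plan pays the medical group a fixed per-member-per-month (PMPM) amount regardless of utilization. The sketch below compares a hypothetical capitated arrangement against an expected fee-for-service spend; all figures and function names are invented for illustration.

```python
# Hypothetical figures -- a minimal sketch of capitation vs. expected fee-for-service spend.
def annual_capitation_cost(members: int, pmpm_rate: float) -> float:
    """Capitation: a fixed monthly pre-payment per covered member."""
    return members * pmpm_rate * 12

def expected_ffs_cost(members: int, encounters_per_member: float, avg_cost_per_encounter: float) -> float:
    """Fee-for-service: pay per encounter, so cost tracks utilization."""
    return members * encounters_per_member * avg_cost_per_encounter

members = 1_200
cap = annual_capitation_cost(members, pmpm_rate=55.00)
ffs = expected_ffs_cost(members, encounters_per_member=4.2, avg_cost_per_encounter=185.00)

print(f"Capitated professional spend:   ${cap:,.0f}")
print(f"Expected fee-for-service spend: ${ffs:,.0f}")
print(f"Projected difference:           ${ffs - cap:,.0f}")
```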
While we have focused on direct contracting through capitation, I want to briefly introduce another successful approach that integrates current PPO contracting methods with HMO-type protocol management and measurements. The measurements may include average length of stay, bed days per thousand, re-admission and encounter frequency, delivery setting, prescription dispensing and adherence to published standards of care. Practice management providers may participate in profit sharing even in a self-funded plan. This model is not commonly available through third-party administrators (TPAs) because most systems are not equipped to support the protocol and outcomes management required for risk-sharing models. The TPAs with the greatest potential for administering these programs are those that are owned by hospital or provider organizations and that manage risk on behalf of HMO contracts. That being said, we have identified several TPAs that offer these services to self-funded employer plans.

The topic of provider contracting will be debated for years to come, and the number of opinions is as great as the number of options they represent. The challenge for us today is to move the needle of cost management and improved outcomes forward.

John Youngs


John Youngs is the chairman and CEO of OneSource StopLoss Insurance Services. He entered the insurance industry in 1983, working as a broker, and moved to the insurer side in 1989, with a focus on large group self-funded, group life and long-term disability and development of community health plans, the precursor of accountable care organizations.

Chasing the Right Numbers on Claims

Metrics are great, but only if they're the right numbers, based on the right goals, and aren't distorted by the time they reach the daily staff.

Managing a claims operation is challenging. There are so many moving parts, dynamics and procedures. Information comes gushing in like a fire hose, making it difficult for many companies to effectively assemble and organize it. It's crucial to help claims divisions focus on the right numbers instead of chasing numbers that have no value. Most claims leaders know that there are a few factors that affect the majority of claim outcomes. However, many times organizations will mistakenly target metrics “for metrics' sake,” at the expense of common sense. Traditionally, a claims supervisor or branch manager will receive metric targets from senior leadership. Unfortunately, the intent of these goals is skewed dramatically by the time they reach front-line personnel. For example, let’s take a company that wants to improve customer service by inspecting vehicle damage the same business day. While this is a noble idea and has the potential to increase customer satisfaction, branch level managers are often forced to abandon rational thinking to meet a specific “inspection metric” or quota. Managers will chase the numbers to obtain an inspection, often having staff appraisers take photos of damaged vehicles over fences or taking shortcuts in an attempt to meet requirements. This often leads to compromised accuracy and raises the question -- “Does it really make sense?” It does to the manager who needs to meet goals and protect her job but does it truly increase customer satisfaction? Not necessarily. Having a goal at the top doesn’t mean that the numbers will retain their true meaning by the time they get to the daily staff. It’s crucial to focus on figures that actually create better claim outcomes and customer experiences. Here’s another example of how differing goals within a claims organization can skew overall results when managers are forced to manage to the wrong numbers: Let’s say your insured damages another vehicle and that claimant decides to go through his own carrier for repairs. Now the carrier sends in a subrogation demand that includes excessive rental, overlapping operations, duplicate invoices and mathematical errors. Would it be a good idea to just pay what is being asked without reviewing for accuracy? Well, for some insurers that don’t have the staffing or the expertise in the subrogation department, quite often an excessive demand like this might just be rubber-stamped. The subrogation department may be overseen by an individual who has been compartmentalized away from day-to-day claims. If this manager’s goals and metrics don’t include accuracy, he may just pay this overinflated demand. Chasing the wrong numbers can give the misperception that the manager is achieving goals, but the best possible outcome wasn’t achieved. So what’s the answer? The key is matching numbers to desirable outcomes that make sense. Eliminate any metrics that provide little value and only serve to create busywork. With the wealth of data that companies are able to gather and analyze, the focus should be on information that has a direct impact on customer retention and quality service. One must carefully focus on the right numbers to add value and help push the organization forward to achieving that ideal balance of client satisfaction and operational efficiency.
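The subrogation example lends itself to a simple automated sanity check before anything is rubber-stamped: flag duplicate invoices, arithmetic errors and rental days that exceed the repair window. The sketch below is a hypothetical illustration of that kind of review, not a production rules engine.

```python
# Hypothetical illustration of reviewing a subrogation demand -- not a production rules engine.
from collections import Counter

def review_demand(line_items: list[dict], demand_total: float, repair_days: int, rental_days: int) -> list[str]:
    """Return a list of issues to resolve before paying the demand."""
    issues = []

    # Duplicate invoices submitted more than once.
    dupes = [inv for inv, n in Counter(item["invoice_no"] for item in line_items).items() if n > 1]
    if dupes:
        issues.append(f"duplicate invoices: {', '.join(dupes)}")

    # Mathematical errors: line items should sum to the demanded total.
    computed = round(sum(item["amount"] for item in line_items), 2)
    if computed != round(demand_total, 2):
        issues.append(f"demand total ${demand_total:,.2f} != line-item sum ${computed:,.2f}")

    # Excessive rental relative to the actual repair window.
    if rental_days > repair_days:
        issues.append(f"rental billed for {rental_days} days vs. {repair_days} repair days")

    return issues

# Usage with invented numbers: one repair invoice submitted twice, plus excessive rental.
demand = [
    {"invoice_no": "R-1001", "amount": 2_450.00},   # repair
    {"invoice_no": "R-1001", "amount": 2_450.00},   # same invoice submitted twice
    {"invoice_no": "RENT-77", "amount": 840.00},    # 21 rental days
]
print(review_demand(demand, demand_total=5_740.00, repair_days=9, rental_days=21))
```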