
The New Cyberthreat You Face at Work

Cyberthreats are becoming far more sophisticated, using insider knowledge to fool senior executives into giving up access to their computers.

The latest, greatest swindlers in the cybercrime racket know you’re onto their digital three-card monte, and they’ve made a few adjustments, putting yet another wrinkle in the corporate-hacking game by targeting top-level employees for major profits. These hackers appear to be based in North America or Western Europe, and they know a great deal about the companies and industries they’ve been cracking. They could be “white-collar hackers” or just good students of character. It really doesn’t matter. Here’s what counts: They are hatching cyberthreats so nuanced you may not see the hack that takes out your company ’til the smoke clears.

These hackers may have worked for your company, or one like it. They are going to know how your teams communicate. They’ll use the lingo and shorthand that you see every day. Emails may be super simple, like, “I need another pair of eyes on this spreadsheet about [term of art only people in your business would know].” They may know what you are likely to be talking about after certain kinds of industry news releases, and they’ll have a good idea of what times of day get busy for you, so that you are more distracted and less likely to think before you click.

“The attacks are becoming much more sophisticated than anything we’ve seen before,” says Jen Weedon, a threat intelligence officer at the Silicon Valley-based cybersecurity firm FireEye. The New York Times reported on one such group of hackers targeting senior executives at biotech companies with the goal of garnering insider information to game the stock market. FireEye has been tracking the group, which the company calls Fin4, for a year and a half. (The “Fin” designation is assigned by the company to groups whose main goal is to monetize proprietary information.) “Fin4 has reached a threshold of capability that sets them apart,” Weedon told me during a phone conversation. “They are very thoughtful about who they target. They go after specific companies and are a lot more scoped in their approach.”

Attacks of this kind may start with the studied e-impersonation of trusted colleagues, business associates or anyone from a constellation of contacts—compliance officers, regulators, legal or financial advisers—with the single purpose of getting someone in a senior position to personally, unwittingly hand over the keys to the castle. Once Fin4 is in, sensitive—potentially lucrative—information can be accessed and put to use.

“They will send a very convincing phishing email,” Weedon said. “It may prompt a link that looks just like Outlook.” The target enters her credentials to see the attachment, not realizing that she is not in Outlook at all. There may even be a legitimate document on the other side of that fake login page, but it’s a trap. Once the hacker gets into a key person’s inbox, he resets the Outlook settings to send any messages containing the words “hacked” or “malware” directly to the user’s trash folder, giving the cyber-ninja more time in the system to collect information about mergers and acquisitions, compliance issues, press releases, non-public market-moving information—anything that can be used to make a smarter stock market trade.

According to Weedon, the group has been able to infiltrate email accounts at the CEO level. Once they’ve gained access, the hackers may simply collect everything in the CEO’s inbox, or take an attachment found there and plant malware that then spreads throughout the company, exposing still more information.
The difference here is that the hack relies on legitimate credentials to gain access, so it’s a much lighter touch, with potentially much more information being compromised. If the hackers forgo malware, there aren’t necessarily any traces of the compromise at all.

The “old” way these breaches worked—one still very much practiced by Chinese and Russian groups—involved the use of general information, kinda-sorta knowledge of the target’s business and hit-or-miss English. Because these softer attacks are less specific and involve more variables, the dodge is easier to spot. It’s more likely that a lower-level employee will fall for it, and in most cases these targets don’t have the kind of access to information that can cause major damage. Having gained whatever access is possible through their mark, old-school hackers move laterally into the organization’s environment, whether by recording keystrokes to exploit privileged employee credentials or by blasting a hole in the company firewall. They might as well be Bonnie and Clyde robbing a bank. The goal is to siphon off information that can be turned into an easy profit, but the process leaves traces.

What’s so worrisome about Fin4 is that the hackers can come and go—gaining access to everything and anything pertaining to your company—and you may never know it. For the numerous healthcare and biotech companies they targeted, the only real-life consequence could be an advantageous trade that somehow anticipated the announcement of a new drug, or shorted a stock associated with a failed drug trial.

If you are the target of choice, you will have to be exceptionally well trained by a cutting-edge information security professional and completely tuned in to the subtleties of your workflow to avoid getting got. These fraudsters will have at their fingertips the kinds of information that only an insider should know, and the bait they dangle in front of you will be convincing. While the art is very different, the basic mechanism is the same: Company-killing compromises require human error. While more common hacks rely on a weakest link that can be exploited, the more hackers evolve, the more we all must evolve with them.
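
The inbox-rule trick Weedon describes is one of the few artifacts this kind of intrusion leaves behind. As a purely illustrative sketch (this is not FireEye's methodology, and the rule export format below is a hypothetical simplification rather than any mail platform's real API), the snippet shows how a security team might sweep exported mailbox rules for the telltale pattern of keyword-triggered deletion:

```python
# Illustrative sketch: flag mailbox rules that silently discard warning emails.
# The rule dictionaries below use a hypothetical, simplified export format --
# a real deployment would pull rules from its mail platform's admin tooling.

SUSPICIOUS_KEYWORDS = {"hacked", "malware", "phishing", "compromised"}

def find_suspicious_rules(rules):
    """Return rules that route keyword-matching mail straight to the trash."""
    flagged = []
    for rule in rules:
        keywords = {k.lower() for k in rule.get("subject_or_body_contains", [])}
        action = rule.get("action", "").lower()
        if keywords & SUSPICIOUS_KEYWORDS and action in {"delete", "move_to_trash"}:
            flagged.append(rule)
    return flagged

if __name__ == "__main__":
    exported_rules = [
        {"owner": "ceo@example.com",
         "subject_or_body_contains": ["hacked", "malware"],
         "action": "move_to_trash"},
        {"owner": "ceo@example.com",
         "subject_or_body_contains": ["newsletter"],
         "action": "move_to_folder"},
    ]
    for rule in find_suspicious_rules(exported_rules):
        print("Review rule for", rule["owner"], "->", rule["subject_or_body_contains"])
```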

The Misconceptions About Millennials

Insurers have important misconceptions about Millennials as customers. They are not all technology geeks with blind trust in social media.

When it comes to successfully engaging with a new generation of customers (and employees), there’s very little doubt that insurers have their work cut out for them. Members of the Millennial generation generally consider insurance to be boring, and the reputation of insurance brands among this group is low. So how can insurance companies bridge this gap and find a way to meet the challenges that this new generation of customers presents?

Perhaps the first thing to do is to challenge existing preconceptions of this group. Many insurers may well be oversimplifying and mythologizing the digital and financial behavior and attitudes of Millennials. Indeed, contrary to popular opinion, the vast majority of Millennials are not technology geeks. What this means for insurers is that developing and offering an app isn’t going to have the impact expected among this group. Technology for technology’s sake will not interest Millennials; they have to see clear value.

More broadly speaking, insurers still have much to do when it comes to connecting with Millennials. It’s clear that younger customers view insurance brands as solid, safe and staid: guarantors when something goes wrong. However, they also see insurers as faceless organizations that have little understanding of their needs. The successful insurance brands of the future will be those that can provide the established, safe reputation that Millennials have come to expect from insurers, alongside an understanding of their lifestyles that aligns with the way they interact with one another.

It’s also interesting to consider, in this context, how Millennials make strategic decisions about financial management and, specifically, how they buy insurance. What many insurers may not realize is that many Millennials rely on word-of-mouth recommendations and advice from family and friends, which can bring the reputation and brand of the insurer to the fore. For this reason, establishing brand reputation and using word-of-mouth campaigns will be key. Because the customer journey of the Millennial is less certain, it will also be increasingly important for insurers to invest in a sound omni-channel strategy. They are dealing with customers – or at the very least, potential customers – who are savvy across a diverse range of channels, and who will dip in and out of them at regular intervals before they make a purchasing decision, so it can be almost impossible for insurers to know exactly which channel a given customer will use or prefer.

What is particularly striking, however, is how small a part social media plays for Millennials when it comes to how they experience customer service. Contrary to popular belief, most seem to have fenced off social media interaction into their personal world and are not convinced that this is where they’ll engage with insurers on customer service issues. Perhaps we should give greater credit to Millennials’ understanding that social media attacks can backfire and that public castigation is a waste of energy. In any case, when it comes to complaints, insurers should consider that Millennials are probably no different from any other generation: They ask for an efficient and effective response to direct complaints. While less concern should be given to Millennials causing reputational damage via social media "flaming," they are likely to take decisive action earlier, with a third willing to switch immediately.
It’s important to remember that Millennials aren’t looking for digital-only channels and that they place great value on personalization and self-service. Millennials want just-in-time advice and support, delivered right at the moment they need it. They do not want "just in case" advice and support that is delivered at some inappropriate moment (and through an inappropriate channel) and that may not be the right content (which they will have forgotten by the time they need to apply it anyway).

Perhaps the most significant consideration is the extent to which Millennials might be willing to share personal data in exchange for a discount or a reduced premium. This seems to justify experiments in telematics and may be the basis for insurers to innovate around newer technologies like wearables. All of this should strongly influence the technology choices insurers make to keep their businesses responsive to customers. Systems that embed consistent best practice into every interaction, giving the optimum outcome for both the insurer and the customer as an individual, are critical.

In summary, while not all Millennials are the same, they share similar traits – namely, they want what they want, when they want it (just in time), and they want all of it. With this in mind, there seems very little doubt that the most successful insurers when it comes to dealing with Millennials will be those that are authentic and trustworthy and that are able to offer pricing at the "right" level. Those insurers that can incorporate all of these facets into a personalized service, which sees and leverages every previous interaction and anticipates the next requirement "like magic," will be those that bridge the generational insurance gap and get ahead.

Nigel Walsh


Nigel Walsh is a partner at Deloitte and host of the InsurTech Insider podcast. He is on a mission to make insurance lovable.

He spends his days:

Supporting startups. Creating communities. Building MGAs. Scouting new startups. Writing papers. Creating partnerships. Understanding the future of insurance. Deploying robots. Co-hosting podcasts. Creating propositions. Connecting people. Supporting projects in London, New York and Dublin. Building a global team.

2015 ROI Survey on Customer Experience

Shares of companies that did the best job on customer experience outperformed average companies by 35 percentage points.


Six years ago, we launched the Customer Experience ROI Study in response to a sad but true reality: Many business leaders pay lip service to the concept of customer experience – publicly affirming its importance, but privately skeptical of its value. We wondered... how could one illustrate the influence of a great customer experience, in a language that every business leader could understand and appreciate?

And so the Customer Experience ROI Study was born, depicting the impact of good and bad customer experiences, using the universal business “language” of stock market value. It’s become one of the most widely cited analyses of its kind and has proven to be an effective tool for opening people’s eyes to the competitive advantage accorded by a great customer experience.

This year’s study provides the strongest support yet for why every company – public or private, large or small – should make differentiating its customer experience a top priority. Thank you for your interest in our study. I wish you the best as you work to turn more of your customers into raving fans.

THE CHALLENGE

What’s a great, differentiated customer experience really worth to a company? It’s a question that seems to vex lots of executives, many of whom publicly tout their commitment to the customer, but then are reluctant to invest in customer experience improvements. As a result, companies continue to subject their customers to complicated sales processes, cluttered websites, dizzying 800-line menus, long wait times, incompetent service, unintelligible correspondence and products that are just plain difficult to use.

To help business leaders understand the overarching influence of a great customer experience (as well as a poor one), we sought to elevate the dialogue. That meant getting executives to focus, at least for a moment, not on the cost/benefit of specific customer experience initiatives but, rather, on the macro impact of an effective customer experience strategy. We accomplished this by studying the cumulative total stock returns for two model portfolios – composed of the Top 10 (“Leaders”) and Bottom 10 (“Laggards”) publicly traded companies in Forrester Research’s annual Customer Experience Index rankings. As the following vividly illustrates, the results of our latest analysis (covering eight years of stock performance) are quite compelling:

THE RESULTS

[Graph: 8-Year Stock Performance of Customer Experience Leaders vs. Laggards vs. S&P 500 (2007-2014)]

Comparison is based on performance of equally weighted, annually readjusted stock portfolios of Customer Experience Leaders and Laggards relative to the S&P 500 Index. Leaders outperformed the broader market, generating a total return that was 35 points higher than that of the S&P 500 Index. Laggards trailed far behind, posting a total return that was 45 points lower than that of the broader market.
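
For readers who want to see the mechanics behind a comparison like this, here is a minimal sketch of how cumulative total return is computed for an equally weighted, annually rebalanced portfolio versus an index. The yearly figures are invented placeholders, not data from the study.

```python
# Minimal sketch of the portfolio math described above.
# Annual returns are illustrative placeholders, NOT figures from the study.

def cumulative_return(annual_returns):
    """Equally weighted, annually rebalanced portfolio:
    each year's portfolio return is the simple average of its members' returns,
    and the yearly returns compound into a cumulative total return."""
    total = 1.0
    for year in annual_returns:
        avg = sum(year) / len(year)   # equal weighting, rebalanced each year
        total *= 1.0 + avg            # compound the yearly return
    return total - 1.0                # cumulative total return

# Hypothetical three-year example: two "Leader" stocks vs. a one-stock index proxy.
leaders_by_year = [(0.12, 0.08), (-0.20, -0.15), (0.30, 0.25)]
index_by_year = [(0.10,), (-0.25,), (0.22,)]

print(f"Leaders: {cumulative_return(leaders_by_year):+.1%}")
print(f"Index:   {cumulative_return(index_by_year):+.1%}")
```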

THE OPPORTUNITY

It’s worth reiterating that this analysis reflects nearly a decade of performance results, spanning an entire economic cycle, from the pre-recession market peak in 2007 to the post-recession recovery that continues today. It is, quite simply, a striking reminder of how a great customer experience is rewarded over the long term, by customers and investors alike. The Leaders in this study are enjoying the many benefits accorded by a positive, memorable customer experience:

  • Higher revenues – because of better retention, less price sensitivity, greater wallet share and positive word of mouth.
  • Lower expenses – because of reduced acquisition costs, fewer complaints and the less intense service requirements of happy, loyal customers.

In contrast, the Laggards’ performance is being weighed down by just the opposite – a poor experience that stokes customer frustration, increases attrition, generates negative word of mouth and drives up operating expenses. The competitive opportunity implied by this study is compelling, because the reality today is that many sources of competitive differentiation can be fleeting. Product innovations can be mimicked, technology advances can be copied and cost leadership is difficult to achieve let alone sustain. But a great customer experience, and the internal ecosystem supporting it, can deliver tremendous strategic and economic value to a business, in a way that’s difficult for competitors to replicate.

LEARN FROM THE LEADERS

How do these Customer Experience Leading firms create such positive, memorable impressions on the people they serve? It doesn’t happen by accident. They all embrace some basic tenets when shaping their brand experience – principles that can very likely be applied to your own organization:

  1. They aim for more than customer satisfaction. Satisfied customers defect all the time. And customers who are merely satisfied are far less likely to drive business growth through referrals, repeat purchases and reduced price sensitivity. Maximizing the return on customer experience investments requires shaping interactions that cultivate loyalty, not just satisfaction.
  2. They nail the basics, and then deliver pleasant surprises. To achieve customer experience excellence, these companies execute on the basics exceptionally well, minimizing common customer frustrations and annoyances. They then follow that with a focus on “nice to have” elements and other pleasant surprises that further distinguish the experience.
  3. They understand that great experiences are intentional and emotional. The Leading companies leave nothing to chance. They understand the universe of touchpoints that compose their customer experience, and they manage each of them very intentionally – choreographing the interaction so it not only addresses customers’ rational expectations, but also stirs their emotions in a positive way.
  4. They shape customer impressions through cognitive science. The Leading companies manage both the reality and the perception of their customer experience. They understand how the human mind interprets experiences and forms memories, and they use that knowledge of cognitive science to create more positive and loyalty-enhancing customer impressions.
  5. They recognize the link between the customer and employee experience. Happy, engaged employees help create happy, loyal customers (who, in turn, create more happy, engaged employees!). The value of this virtuous cycle cannot be overstated, and it’s why the most successful companies address both the customer and the employee sides of this equation.



Jon Picoult


Jon Picoult is the founder of Watermark Consulting, a customer experience advisory firm specializing in the financial services industry. Picoult has worked with thousands of executives, helping some of the world's foremost brands capitalize on the power of loyalty -- both in the marketplace and in the workplace.

Why Insurers Need to Transform

In the past, competition came from within the industry. Now, pressure is coming from outside forces like digitization. The only option is to transform.

Would insurers rather be enticed and pulled into the future by the motivation and promise of transformational technologies, or pushed into transformation by circumstances that lie outside of their control? This is a question every CEO and CIO should face and answer as they evaluate the next steps in modernization that will lead to their secure futures.

In the past, the forces that had a bearing on insurers and their businesses were primarily competitive, lying within the insurance industry itself. Today, however, pressures coming from outside the industry, such as a digital way of life and widespread consumerization, are weighing on the capabilities of entrenched systems and processes. The insurance industry as a whole is grappling with prototypes that will keep it relevant. A close look at the business with an eye on the future will yield a greater understanding of the reasons insurers need to transform.

In general terms, transformation is a matter of business priorities. Insurers need to transform to improve the business and meet business imperatives. This idea isn’t new. But insurance competitiveness is now at the forefront of industry concerns, and remaining competitive and growing the business is the main reason that transformation must occur.

Why do insurers pursue transformation? Transformation meets these common business needs and goals:
  • The world is competitive. The company needs to compete and win.
  • Transformation enables the organization to grow.
  • Transformation can create optimal customer experiences.
  • An insurer needs good data, better data organization and segmentation and transformed views of data to understand its business.
  • Accelerating product development will make the company more agile.
  • Modern technology costs less to maintain.
  • Reducing technology debt will create efficiency in technology spend and reduce technical risks.
  • Creating business process efficiencies and reducing manual business processes will allow business resources to spend time on growing the business rather than administering it.
  • Eliminating the risk of business interruption by pursuing a guided, framework approach to modernization will maintain legacy system integrity during the process.
Transforming to compete is a necessity. The modern insurance model isn’t going to allow old paradigms to remain unchallenged. Competitors are becoming more agile, more efficient and faster at every point of their operations, including customer service. They are learning faster than ever, as well, using increased access to higher-quality data to improve their data insights. All of this is being fueled by rapidly improving technologies for data gathering, data handling and analytics.

Improving the customer experience is a goal for transformation efforts because it advances a number of business improvement strategies (including competitiveness and growth). Nearly every process and technology within the organization is tied to the customer, if not directly, then indirectly. Most insurers agree that creating one brand experience across devices and channels leads to greater customer satisfaction. Transforming this area is also likely to improve the quality of data an insurer receives from its policyholders.

Transforming the gathering and use of data will accomplish a number of strategic business goals. It will lead to better service, greater distribution effectiveness, improved pricing assumptions, better claims experiences, less fraud (harmful to the company) and a lower risk of security breaches (harmful to the customer AND the organization’s reputation).

Transforming product development and creating efficiencies will enable growth. Transformation within core systems will unify data and environments, bring agility to areas such as product development and result in a simplification that saves time, effort and resources (as well as reducing fears and anxieties over legacy breakdowns). These internal modernizations will serve the business both by simplifying architectures and by opening up more areas to business users. Growing levels of configurability and new methods of handling regulatory compliance can revolutionize an organization’s ability to compete. The efficiencies found in reducing manual processes and simplifying environments will also allow the organization to shift some resources away from administrative tasks and into growth pursuits.

Reducing or removing technology debt is a benefit strong enough to justify transformation all on its own. We have seen numerous examples of insurers whose choice to modernize ended up saving long-term maintenance costs, as well as the potential costs of being forced to change in a hurry later.

Keeping the reasons for transformation in sight: Transformation isn’t an end unto itself. It is a journey that needs continual monitoring and alignment. If an organization stays focused on its business imperatives and doesn’t allow transformation to diverge from those imperatives, it stands a much greater chance of improving the business with each transformation effort.

William Freitag


William Freitag is executive vice president and leads the consulting business at Majesco. Prior to joining Majesco, Freitag was chief executive officer and managing partner of Agile Technologies (acquired by Majesco in 2015). He founded the company in 1997.

Finding Actionable Provider Ratings

Accurate provider ratings can be found in workers' comp claims data, but it must be integrated, not left in the silos where it typically exists.


Under the tongue-in-cheek title "When Yelp reviews are better than hospital rating systems," Jason Beans of Rising Medical Solutions exposes the inconsistencies of standard hospital reviews. He cites a Health Affairs study that points out that traditional rating systems, those the healthcare industry has relied on for years, rarely come up with the same results for the same hospital.

Provider ratings score hospital performance in an effort to determine quality and safety in hospitals. The Health Affairs study concludes that discrepancies among provider ratings systems are likely explained by the fact that each uses its own rating methods, defines quality differently and stresses different measures of performance. Apparently, no standards for quality and safety in hospitals are available.

This raises the question of how scoring systems for rating other medical providers differ from those of hospital scoring systems. More specifically, what about those used to score provider performance in workers' compensation?

Lest the conclusion be that all provider performance scoring systems lack credibility, it might be instructive to at least loosely compare hospital rating systems with physician scoring in workers' compensation.

Not Similar

The conditions, methodology and approaches are significantly different. The hospital rating systems cited by Health Affairs evaluate general health in acute care settings. To measure cost, they measure an episode of care on a per diem (per day) basis for individual hospital stays, adjusted by diagnosis and procedures. Often, subjective reports are used, as well.

On the other hand, measures of quality performance in workers' compensation are unique to the industry, and the measurable variables are numerous. How a medical provider acknowledges and influences distinctive industry factors, along with the success of the medical treatment itself, is an indicator of quality performance.

Episode of Care

One major difference is that the episode of care in workers' compensation is not per diem, but is defined by the scope of the claim. An episode of care (claim) runs from the date of injury to claim closure and includes all treatment, medical providers, vendors, events and outcomes that occur during that time. An episode may or may not include hospitalization, but when it does, those costs and events are included with the total claim. In other words, the episode of care is highly definable in workers' compensation. It is broad and comprehensive.

Quality Indicators

A number of non-medical indicators found in the data reflect unique conditions in workers' compensation that are influenced by treating providers. Measures of quality include return to work and indemnity costs, neither of which is medical treatment per se, but both are strongly influenced by the treating provider and affect the cost of the claim. Consequently, these factors must be included in evaluating performance.

Frequency and duration of treatment, as well as duration of the claim, are indicators of provider performance. Providers can contain or increase the costs described by these factors. Functional outcomes, described in the data as disability ratings at the conclusion of the claim, are also measures of treatment success.

Clinical factors and treatment processes are important quality indicators, of course, and must be included in the evaluation and scoring. Abuse of Schedule II drugs is, for example, a major cost driver in workers' compensation. Dispensing of medications is another.

Source Data

Each data-rich claim contains the information necessary to evaluate medical provider performance for workers' compensation. Importantly, the data must be integrated from the silos of bill review, claims system and PBM (pharmacy) to achieve a comprehensive picture of the claim and medical providers' involvement.

Workers' compensation can be more complex than general health because it is a legal system rather than a defined benefit. Every claim has administrative aspects as well as medical assessment and treatment. It's all in the data.
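
To make the integration step concrete, here is a hypothetical sketch that joins records from the three silos by claim number and rolls up a few of the indicators discussed above. The field names and sample values are invented for illustration; they do not reflect any particular bill review, claims or PBM system.

```python
# Hypothetical sketch: join siloed workers' comp data by claim and roll up
# a few provider-level indicators (field names and values are illustrative).

from collections import defaultdict

claims = [  # from the claims system
    {"claim_id": "C1", "provider": "Dr. A", "indemnity_cost": 12000, "days_to_rtw": 45},
    {"claim_id": "C2", "provider": "Dr. A", "indemnity_cost": 3000, "days_to_rtw": 10},
    {"claim_id": "C3", "provider": "Dr. B", "indemnity_cost": 25000, "days_to_rtw": 90},
]
bills = [  # from bill review
    {"claim_id": "C1", "medical_cost": 8000},
    {"claim_id": "C2", "medical_cost": 1500},
    {"claim_id": "C3", "medical_cost": 22000},
]
pharmacy = [  # from the PBM
    {"claim_id": "C1", "schedule_ii_scripts": 0},
    {"claim_id": "C3", "schedule_ii_scripts": 4},
]

def provider_summary():
    """Merge the three sources on claim_id and aggregate by treating provider."""
    med = {b["claim_id"]: b["medical_cost"] for b in bills}
    sched2 = {p["claim_id"]: p["schedule_ii_scripts"] for p in pharmacy}
    rollup = defaultdict(lambda: {"claims": 0, "total_cost": 0, "rtw_days": 0, "schedule_ii": 0})
    for c in claims:
        row = rollup[c["provider"]]
        row["claims"] += 1
        row["total_cost"] += c["indemnity_cost"] + med.get(c["claim_id"], 0)
        row["rtw_days"] += c["days_to_rtw"]
        row["schedule_ii"] += sched2.get(c["claim_id"], 0)
    return rollup

for provider, row in provider_summary().items():
    print(provider,
          "avg cost/claim:", row["total_cost"] / row["claims"],
          "avg days to RTW:", row["rtw_days"] / row["claims"],
          "Schedule II scripts:", row["schedule_ii"])
```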

Objective Data

Scores of medical provider quality indicators can be found in workers' compensation data. The beauty is that the data describes what actually took place during the course of the claim, not what should have happened or an opinion about it. It is concrete and objective. Yelp can't help.

Huge Change in Home Insurance

After years of discussions and negotiations, ISO is finally closing what can be a catastrophic gap in the coverage of home insurance.

ISO has filed countrywide perhaps the biggest homeowners insurance change in at least 40 years. If you sell, underwrite, adjust, risk manage or regulate home insurance, you must read this article about this change. In fact, even if you work in commercial lines but you or someone you know owns a home, please read this article. In addition, share this article with everyone you believe needs to understand the nature of this change.

The Most Important Home Insurance Change in 40 Years

The Rise of the Robo-Advisers?

Robo-advisers are winning investment clients and have big implications for financial services firms, including insurers.

The robots are here. Not the humanoid versions that you see in Hollywood movies, but the invisible ones that are the brains behind what look like normal online front-ends. They can educate you, advise you, execute trades for you, manage your portfolio and even earn some extra dollars for you by doing tax-loss harvesting every day. These robo-advisers are not just for do-it-yourself or self-directed consumers; they're also for financial advisers, who can offload some of their more mundane tasks onto the robo-advisers. This can enable advisers to focus more on interacting with clients, understanding their needs and acting as a trusted partner in their investment decisions.

It's no wonder that venture capital money is flowing into robo-advising (also called digital wealth management, a less emotionally weighted term). Venture capitalists have invested nearly $500 million in robo-advice start-ups, including almost $290 million in 2014 alone. Many of these companies are currently valued at 25 times revenue, with leading companies commanding valuations of $500 million or more. This has motivated traditional asset managers to create their own digital wealth management solutions or establish strategic partnerships with start-ups. Digital wealth management client assets, from both start-ups and traditional players, are projected to grow from $16 billion in 2014 to roughly $60 billion by the end of 2015, and to $255 billion within the next five years. However, this is still a small sum considering that U.S. retail asset management assets total $15 trillion and U.S. retirement assets total $24 trillion.

What has caused this recent "gold rush" in robo-advice? Is it just another fad that will pass quickly, or will it seriously change the financial advice and wealth management landscape? To arrive at an answer, let's look at some of the key demographic, economic and technological drivers that have been at play over the past decade.

Demographic Trends

The need for digital wealth management and the urgent need to combine low-cost digital advice with face-to-face human advice have arisen in three primary market segments, which many robo-advisers are targeting:
  • Millennials and Gen Xers: More than 78 million Americans are Millennials (those born between 1982 and 2000), and 61 million are Gen Xers (those born between 1965 and 1981); accordingly, this segment's influence is significant. These groups demand transparency, simplicity and speed in their interactions with financial advisers and financial services providers. As a result, they are likely to use online, mobile and social channels for interactive education and advice. That said, a significant number of them are new to financial planning and financial products, which means they need at least some human interaction.
   
  • Baby Boomers: Baby boomers, numbering 80 million, are still the largest consumer segment and have retail investments and retirement assets of $39 trillion. Considering that this segment is either at or near retirement age, the urgency to plan for their retirement as well as draw down a guaranteed income during it is critical. The complexity of planning and executing this plan typically goes beyond what today's automated technologies can provide.
   
  • Mass-Affluent & Mass-Market: Financial planning and advice has largely been aimed at high-net-worth (top 5%) individuals. Targeting mass-affluent (the next 15%) and mass-market (the next 50%) customers at an affordable price point has proven difficult. Combining automated online advice with the pooled human advice that some of the digital wealth management players offer can provide some middle ground.
Technological Advances

Technical advances have accompanied demographic developments. The availability of new sources and large volumes of data (i.e., big data) has meant that new techniques are now available (see "What comes after predictive analytics?") to understand consumer behaviors, look for behavioral patterns and better match investment portfolios to customer needs.
  • Data Availability: The availability of data, including personally identifiable customer transactional level data and aggregated and personally non-identifiable data, has been increasing over the past five years. In addition, a number of federal, state and local government bodies have been making socio-demographic, financial, health and other data more easily available through open government initiatives. A host of other established credit and market data companies, as well as new entrants offering proprietary personally non-identifiable data on a subscription basis, complement these data sources. If all this structured data is not sufficient, one can mine a wealth of social data on what customers are sharing on social media and learn about their needs, concerns and life events.
   
  • Machine Learning & Predictive Modeling: Techniques for extracting insights from large volumes of data also have been improving significantly. Machine learning techniques can be used to build predictive models to determine financial needs, product preferences and customer interaction modes by analyzing large volumes of socio-demographic, behavioral and transactional data. Big data and cloud technologies facilitate effective use of this combination of large volumes of structured and unstructured data. In particular, big data technologies enable distributed analysis of large volumes of data that generates insights in batch-mode or in real-time. Availability of memory and computing power in the cloud allows start-up companies to scale on demand instead of spending precious venture capital dollars setting up an IT infrastructure.
   
  • Agent-Based Modeling: Financial advice; investing for the short-, medium- and long-term; portfolio optimization; and risk management under different economic and market conditions are complex and interdependent activities that require years of experience and extensive knowledge of numerous products. Moreover, agents have to cope with the fact that individuals often make investment decisions for emotional and social reasons, not just rational ones.
Behavioral finance takes into account the many factors that influence how individuals really make decisions, and human advisers are naturally skeptical that robo-advisers will be able to match their skill in interpreting and reacting to human behavior. While this will continue to be true for the foreseeable future, the gap is narrowing between an average adviser and a robo-adviser that models human behavior and can run scenarios based on a variety of economic, market or individual shocks. Agent-based models are being built and piloted today that can model individual consumer behavior, analyze the cradle-to-grave income/expenses and assets/liabilities of individuals and households, model economic and return conditions over the past century and simulate individual health shocks (e.g., the need for assisted living care). These models are assisting both self-directed investors who interact with robo-advisers and human advisers.

Evolution of Robo-advisers

We see the evolution of robo-advisers taking place in three overlapping phases. In each phase, the sophistication of advice and its adoption increases.
  • First Generation or Standalone Robo-Advisers: The first generation of robo-advisers targets self-directed end consumers. They are standalone tools that allow investors to a) aggregate their financial data from multiple financial service providers (e.g., banks, savings, retirement, brokerage), b) provide a unified view of their portfolio, c) obtain financial advice, d) determine portfolio optimization based on life stages and e) execute trades when appropriate. These robo-advisers are relatively simple from an analytical perspective and make use of classic segmentation and portfolio optimization techniques.
   
  • Second Generation or Integrated Robo-Advisers: The second generation of robo-advisers is targeting both end consumers and advisers. The robo-advisers are also able to integrate with institutional systems as "white labeled" (i.e., unbranded) adviser tools that offer three-way interaction among investors, advisers and asset managers. These online platforms are variations of the "wrap" platforms that are quite common in Australia and the UK, and offer a cost-effective way for advisers and asset managers to target mass-market and even mass-affluent consumers. In 2014, some of the leading robo-advisers started "white labeling" their solutions for independent advisers and linking with large institutional managers. Some larger traditional asset managers also have started offering automated advice by either creating their own solutions or by partnering with start-ups.
   
  • Third Generation or Cognitive Robo-Advisers: Advances in artificial intelligence (AI) based techniques (e.g., agent-based modeling and cognitive computing) will see second generation robo-advisers adding more sophisticated capability. They will move from offering personal financial management and investment management advice to offering holistic, cradle-to-grave financial planning advice. Combining external data and social data to create "someone like you" personas; inferring investment behaviors and risk preferences using machine learning; modeling individual decisions using agent-based modeling; and running future scenarios based on economic, market or individual shocks has the promise of adding significant value to existing adviser-client conversations.
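
To give a flavor of the scenario-running capability these later-generation robo-advisers aim for, here is a deliberately simplified sketch that projects a portfolio under random yearly returns plus occasional crash-style shocks. The return, volatility and shock assumptions are illustrative placeholders, not any firm's model.

```python
# Simplified sketch of the kind of scenario simulation robo-advisers run.
# Return/volatility assumptions and the shock model are illustrative only.

import random

def simulate_balance(start_balance, annual_saving, years,
                     mean=0.06, vol=0.15, shock_prob=0.05, shock_size=-0.30, seed=None):
    """Project one path of a portfolio with normally distributed yearly returns
    plus an occasional crash-style shock."""
    rng = random.Random(seed)
    balance = start_balance
    for _ in range(years):
        ret = rng.gauss(mean, vol)
        if rng.random() < shock_prob:      # rare market shock
            ret += shock_size
        balance = max(0.0, balance * (1 + ret) + annual_saving)
    return balance

def run_scenarios(n=10_000, **kwargs):
    """Summarize the distribution of outcomes across n simulated paths."""
    outcomes = sorted(simulate_balance(**kwargs) for _ in range(n))
    return {"p10": outcomes[n // 10],
            "median": outcomes[n // 2],
            "p90": outcomes[9 * n // 10]}

print(run_scenarios(start_balance=50_000, annual_saving=10_000, years=30))
```
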
One could argue that, with the increasing sophistication of robo-advisers, human advisers will eventually disappear. However, we don't believe this is likely to happen anytime in the next couple of decades. There will continue to be consumers (notably high-net-worth individuals with complex financial needs) who seek human advice and rely on others to affect their decisions, even if doing so is more expensive than using an automated system. Because of greater overall reliance on automated advice, human advisers will be able to focus much more of their attention on human interaction and building trust with these types of clients.

Implications for Financial Service Providers

How should existing producers and intermediaries react to robo-advisers? Should they embrace these newer technologies or resist them?
  • Asset Managers & Product Manufacturers: Large asset managers and product manufacturers who are keen on expanding shelf space for their products should view robo-advisers as an additional channel for acquiring specific types of customers - typically the self-directed and online-savvy segments, as well as the emerging high-net-worth segment. They also should view robo-advisers as a platform to offer their products to mass-market customers in a cost-effective manner.
   
  • Broker Dealers and Investment Advisory Firms: Large firms with independent broker-dealers or financial advisers need to seriously consider enabling their distribution with some of the advanced tools that robo-advisers offer. If they do not, then these channels are likely to see a steady movement of assets - especially of certain segments (e.g., the emerging affluent and online-savvy) - from them to robo-advisers.
   
  • Registered Independent Advisers and Independent Planners: This is the group that faces the greatest existential threat from robo-advisers. While it may be easy for them to resist and denounce robo-advisers in the short term, it is in their long-term interest to embrace new technologies and use them to their advantage. By outsourcing the mechanics of financial and investment management to robo-advisers, they can start devoting more time to interacting with the clients who want human interaction and thereby build deeper relationships with existing clients.
   
  • Insurance Providers and Insurance Agents: Insurance products and the agents who sell them also will feel the effects of robo-advisers. The complexity of many products and related fees/commissions will become more transparent as the migration to robo-adviser platforms gathers pace. This will put greater pressure on insurers and agents to simplify and package their solutions and reduce their fees or commissions. If this group does not adopt more automated advice solutions, then it likely will lose its appeal to attractive customer segments (e.g., emerging affluent and online-savvy segments) for whom their products could be beneficial.
Product manufacturers, distributors and independent advisers who ignore the advent of robo-advisers do so at their own risk. While there may be some present-day hype and irrational exuberance about robo-advisers, the long-term trend toward greater automation and the integration of automation with face-to-face advice is undeniable. This situation is not too dissimilar to automated tax advice and e-filing. When the first automated tax packages came out in the '90s, some industry observers predicted the end of tax consultants. While a significant number of taxpayers did shift to self-prepared tax filing, there is still a substantial number of consumers who rely on tax professionals to file their taxes. Nearly 118 million of the 137 million tax returns in 2014 were e-filings (i.e., electronically filed tax returns), but tax consultants filed many of them. A similar scenario for e-advice is likely: A substantial portion of assets will be e-advised and e-administered in the next five to 10 years, as both advisers and self-directed investors shift to using robo-advisers.

Anand Rao


Anand Rao is a principal in PwC’s advisory practice. He leads the insurance analytics practice, is the innovation lead for the U.S. firm’s analytics group and is the co-lead for the Global Project Blue, Future of Insurance research. Before joining PwC, Rao was with Mitchell Madison Group in London.

How to Extend ERM to IT Security

There are numerous frameworks for both IT security and for ERM, but it's important to combine the two initiatives to manage risk better.

What is SERMP? It's an idea born out of necessity, when I needed a way to help my colleagues in the information technology security field explain to others in the organization the level of risk they faced and the progress being made in managing it. IT security and ERM frameworks are numerous and readily available, but I wanted a way to bring the two together. After many editions and revisions, the Security Enterprise Risk Management Program, or SERMP, was born.

The SERMP is a customized framework that creates a repeatable process that can be executed by non-IT professionals. Working with others in your organization, you form a team, develop a vision and scope and populate your customized framework tool over time. As an ERM practitioner, you don’t need to be an IT expert to be well-suited to implement a SERMP. You know how to identify and articulate risk, quantify or qualify the degree of risk and identify appropriate mitigations that can reduce uncertainty and create greater opportunity. In short, you are taking knowledge and skills that you use every day and applying them to a different environment.

The SERMP is initiated to establish a model to assess, track and monitor all security risks and initiatives empirically. It allows the organization to be confident that it is focused on the right things at the right time and to align new security risks and initiatives with the proper emphasis and investment. At its core, the SERMP is an enterprise risk management (ERM) model, but it expands on that model by incorporating a holistic framework approach, with a strict emphasis on empirical support for statements. This allows leadership to have very specific and targeted discussions regarding risk and impact, with defensible data to support key decisions.

So, where do you start? Playing off the idea of crowdsourcing, you build your team based on the following characteristics: #1 Willingness… Okay, there are really no other characteristics needed. Because the SERMP framework breaks down IT security into components at a level that limits scope, and because it is repeatable, allowing for brief bursts of time commitment, the knowledge required to execute each segment is also reduced. If someone is willing to join the team, give of their time and learn about IT security, they can contribute -- even better if they have some impact on IT security. Many people have (or should have) an impact on IT security but, until engaged in SERMP, may not even realize that they do. Examples:
  • Facilities: physical security
  • Human Resources: access control and training
  • Communications: awareness and messaging
  • Risk Management: risk frameworks and cyber insurance
  • Procurement: contracts and vendor management
  • Project Managers: process documentation
  • Audit: frameworks and assurance
  • Compliance: governance and policy

It is highly recommended to have at least one IT security member on the SERMP team, but it is not required. The IT security staff (internal or external) will need to provide time to meet with SERMP team members, but they do not have to be working members of SERMP. Anyone else willing to volunteer?

How much time is this going to take? SERMP is about gathering and analyzing information over time. You can move quickly or slowly and expand or contract your scope as your resources allow, given the complexity of the IT structure and the organization's environment. I recommend doing small bursts of effort, but on a regular basis:
  • Initial introductory meeting to explain the concept (30 – 60 minutes)
  • Planning meeting to develop a charter that includes vision and scope (1 – 2 hours)
  • 30-minute meetings every two weeks with the SERMP team to discuss progress
  • Quarterly report out to leadership and appropriate committees, i.e. risk or audit committee (30 minutes)
  • Team-member monthly time commitment: five hours per team member to gather information, populate the framework and attend the 30-minute meetings
The above does not account for the time that you, as the leader of the project, will spend in oversight and administration, but because you control the pace and scope, you can work the program into your schedule as time and resources permit.

Establishing a charter or similar document is helpful in having “recruiting” discussions for your SERMP team. Example charter:

The Security Enterprise Risk Management Program (SERMP) is initiated to establish a model to assess, track and monitor all security risks and initiatives empirically, and allows the organization to be confident that it is focused on the right things at the right time and can add and align new security risks and initiatives with the proper emphasis and investment. SERMP is a continuing program, but the major phases are:
  • Establish Approach
  • Enterprise Current State Assessment
  • Risk Initiative Planning and Prioritization
  • Risk and Initiative Progress: Quarterly reporting
  • Annual Review
What are the challenges and issues we are trying to solve? Security controls and investments were implemented without a measurable understanding of their effectiveness or appropriateness. Have we invested our security dollars in the right places? Without a structured framework, we are guessing at worst and, at best, have a siloed understanding of the security posture.

What are the goals of the program? Establish a repeatable framework to:
  • Confidently understand our current security posture
  • Identify our key security risks and priorities
  • Determine security remediation strategy
  • Align remediation initiatives with strategy
  • Establish empirical key risk indicators (KRIs) and key performance indicators (KPIs)
The complexity of security is organized into 12 separate domains (grounded in ISO 27000 but customized for the organization). Each domain has a lead, who is responsible for understanding the holistic posture of its scope, measured across the entire enterprise.

[Chart: the 12 security domains]

For each domain, there will be a strategy covering:
  • Vision: value and scope; scope and definition
  • Policy: mapping of controls and standards
  • Risk: assumptions; observations and metrics
  • Metrics: baseline, KRIs and KPIs
  • Initiatives & Roadmap: initiatives, plan and three-year roadmap
  • Programs & Services: what is in place
  • Partners: upstream and downstream
A reporting tool is used to track the SERMP. It houses the information for each domain, plus how the organization is managing the program: Establish, Assess, Treatment, Monitor and Review activities, and metric tracking for Risk Statement, Risk Impact, Key Risk Indicators (KRIs), Risk Remediation Initiatives, Current State (KPI), Target State (KRI) and Projects. The reporting tool not only serves as a repository for the information gathered but also provides the team with a framework that breaks down the elements to be gathered and analyzed into manageable components. In the above charter, ISO 27000 is referenced, but other or multiple standards may be used.
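
As a purely illustrative sketch of what one record in such a reporting tool might look like (this is not the author's actual tool; the field names and sample values simply mirror the elements listed above and are hypothetical), here is a minimal data model:

```python
# Illustrative sketch of a per-domain SERMP tracking record.
# Field names mirror the elements listed above; sample values are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DomainRecord:
    domain: str                      # one of the 12 security domains
    risk_statement: str
    risk_impact: str                 # e.g., "high", "medium", "low"
    key_risk_indicators: List[str] = field(default_factory=list)
    remediation_initiatives: List[str] = field(default_factory=list)
    current_state_kpi: str = ""      # where we are today
    target_state_kri: str = ""       # where we want the risk indicator to be
    projects: List[str] = field(default_factory=list)

access_control = DomainRecord(
    domain="Access Control",
    risk_statement="Former employees may retain system access after termination.",
    risk_impact="high",
    key_risk_indicators=["% of accounts deprovisioned within 24 hours"],
    remediation_initiatives=["Automate the HR-to-IT offboarding feed"],
    current_state_kpi="72% deprovisioned within 24 hours",
    target_state_kri="99% deprovisioned within 24 hours",
    projects=["Identity management upgrade"],
)
print(access_control.domain, "-", access_control.risk_statement)
```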

Does CGL Cover for Data Breach?

A highly unusual case finds for the insurers but underscores that CGL covers many cyber issues -- if the insureds are willing to do battle.

In a highly anticipated May 26 decision, the Connecticut Supreme Court ruled that two commercial general liability (CGL) insurers, Federal Insurance and Scottsdale Insurance, are not required to cover losses in connection with the mysterious disappearance of computer tapes containing employment-related data, including the Social Security numbers, of approximately 500,000 current and former IBM employees in Recall Total Information Management, Inc. v. Federal Ins. Co.[1] Although the insurers in Recall Total won this particular battle, Recall Total’s precedent value as insurer ammunition in the industry’s war against data breach coverage under CGL policies is severely limited by a highly unusual fact pattern. Recall Total can reasonably be read to assist insureds facing more typical kinds of data breaches, like the Target breach and many others. Below is a brief summary of the facts, the key coverage issue, the ruling and five takeaways.

The Facts

The facts of Recall Total are unusual, to say the least: The computer tapes at issue, which belonged to IBM, fell off the back of a transportation subcontractor’s van near a highway exit ramp.[2] About 130 of the tapes were then removed from the roadside by an unknown person and never recovered.[3] In the wake of this highway misadventure, IBM incurred more than $6 million in expenses to address the incident, including notification, call center services and credit monitoring.[4] IBM sought indemnification from its vendor, Recall Total Information Management (Recall), which had contracted with IBM to transport off-site and store the computer tapes at issue.[5] Recall settled with IBM and, in turn, sought indemnification from its transportation subcontractor, Executive Logistics (Ex Log), which lost the tapes after they fell off its van during transit. Ex Log agreed to pay more than $6.4 million to Recall and assigned to Recall its rights under a $2 million primary CGL policy and a $5 million umbrella policy following a coverage tender and denial.[6] Ex Log and Recall then initiated coverage litigation.[7]

Key Coverage Issue: Was There a “Publication”?

Ex Log’s CGL policy at issue, similar to the current ISO standard form CGL policy,[8] states in relevant part that the insurer “will pay damages that the insured becomes legally obligated to pay … for … personal injury.”[9] The policy defines the key term “personal injury” to include “injury … caused by an offense of ... electronic, oral, written or other publication of material that ... violates a person’s right to privacy.”[10]

The Ruling

The intermediate appellate court, in a decision adopted by the Connecticut Supreme Court, appeared ready to find, or at least was not averse to finding, “publication” satisfied if there was any evidence of access to the data.
Based on the unique facts, however, the intermediate appellate court determined that the “publication” requirement was not satisfied because there was no evidence that the data on the tapes, which could not be read by a personal computer, “was ever accessed by anyone”[11] -- let alone used for “any improper purpose.”[12] As the intermediate appellate court stated, there was not even any evidence that the party who took the tapes “even recognized that the tapes contained personal information.”[13] Under these unique facts, and given that no IBM employee had suffered any injury, the court determined that it was “unable to infer that there has been a publication” and concluded that “[a]s the complaint and affidavits are entirely devoid of facts suggesting that the personal information actually was accessed, there has been no publication.”[14] In a brief per curiam opinion, the Connecticut Supreme Court affirmed on the basis that there was no “publication,” noting that “[t]here is no evidence that anyone ever accessed the information on the tapes or that their loss caused injury to any IBM employee.”[15]

Takeaways
  1. The “Access” Lacking in Recall Total Is Present in Many Data Breach Cases
Recall Total is of limited utility to insurers seeking to avoid CGL coverage for data breaches given its peculiar factual setting. As the decision makes abundantly clear, it hinged on the fact that there was no evidence of access to the sensitive data. In fact, there was no evidence that the data could be accessed -- or even that the party who took the tapes was aware that they contained sensitive data. This is in stark contrast to a typical data breach fact pattern, in which there is no question that sensitive information was accessed. In breaches like Target, and innumerable others, information is specifically identified and targeted by the actors taking it, and then used for criminal activity. In those cases, there is abundant evidence that the data in question was accessed.
  2. Other Courts Have Found the CGL “Publication” Requirement Satisfied Without Proof of “Access” in the Data Breach Context
Although “access” to data may be required under Connecticut law, courts in other jurisdictions have appropriately determined that the CGL “publication” requirement can be satisfied without proof that data was accessed. In one recent case involving the alleged posting of confidential medical records on the Internet, for example, the Eastern District of Virginia determined that “publication” does not require proof of “access”: [T]he issue is not whether a third party accessed the information because the definition of “publication” does not hinge on third-party access. Publication occurs when information is “placed before the public,” not when a member of the public reads the information placed before it. By Travelers’ logic, a book that is bound and placed on the shelves of Barnes & Noble is not “published” until a customer takes the book off the shelf and reads it.[16] The bottom line: access to data storage devices alone, including laptops, may suffice to satisfy the “publication” requirement in other jurisdictions -- and even in Connecticut under a different set of facts.
  3. Insureds Must Be Prepared to Fight to Secure CGL Coverage
The insurance industry has made it abundantly clear that it does not want to cover “cyber” and data privacy-related exposures under CGL policies. Although there is potentially valuable coverage under CGL policies, insureds should expect that they will need to fight to secure it. Insurers routinely assert, among other things, that there has been no “publication” of data. The good news is that if insureds decide to fight for coverage, they may well prevail. Many courts have upheld coverage for data breaches and other claims alleging violations of privacy rights in a variety of settings.[17]
  4. Insureds Should Be Aware of New CGL “Data Breach” Exclusions
Insurance Services Office (ISO), the insurance organization responsible for drafting standard-form CGL language, recently promulgated a series of data breach exclusionary endorsements.[18] The exclusions became effective in most states in May 2014 and began appearing on new placements and renewals, in various forms, almost immediately.[19]

Although it is important to be aware of new, potentially limiting coverage terms, it also is important to recognize that the applicable policy in a data breach situation -- where breaches often are discovered long after the “occurrence” that triggers coverage -- may predate the newer exclusions. Where policies do contain the newer exclusions, insureds should not assume that they necessarily void coverage. Coverage will depend on myriad factors, including the particular facts of the case, specific policy language and applicable law.

The very existence of the exclusions, moreover, illustrates the insurance industry’s awareness that there is valuable potential data breach coverage under CGL policies. Indeed, when ISO filed the newer exclusions, it acknowledged that there currently may be coverage for data breaches under CGL policies and advised that the new exclusions may be a “reduction in personal and advertising injury coverage”:

"At the time the ISO CGL and CLU policies were developed, certain hacking activities or data breaches were not prevalent, and, therefore, coverages related to the access to or disclosure of personal or confidential information and associated with such events were not necessarily contemplated under the policy. As the exposures to data breaches increased over time, stand-alone policies started to become available in the marketplace to provide certain coverage with respect to data breach and access to or disclosure of confidential or personal information. . . . To the extent that any access or disclosure of confidential or personal information results in an oral or written publication that violates a person’s right of privacy, this revision may be considered a reduction in personal and advertising injury coverage."[20]

The implication is that the insurance industry understood there was CGL data breach coverage in the absence of the new exclusions.
  5. Organizations Are Advised to Consider Cyber Insurance
Given the insurance industry’s clear indication that it does not want to cover data breaches under CGL policies, organizations are advised to consider purchasing cyber insurance. In addition to providing defense and indemnity coverage in connection with claims arising out of a data breach, among many other types of cybersecurity and data privacy-related exposures, cyber policies generally cover a range of “crisis management” expenses, such as attorney “breach coach” fees, notification to potentially affected individuals, forensics, credit monitoring, call centers, ID theft protection and public relations efforts, which often are required after a breach of any consequence.

Cyber insurance coverage can be extremely valuable, but choosing the right insurance policy presents a real and significant challenge. There is a diverse and growing array of cyber products in the marketplace, each with its own insurer-drafted terms and conditions that vary dramatically from insurer to insurer -- and even between policies underwritten by the same insurer. Because of the nature of cyber insurance and the risks it is intended to cover, a placement should include the involvement and input not only of a capable risk management department and a knowledgeable insurance broker, but also of in-house legal counsel, information technology professionals and compliance personnel, among other key internal players -- and insurance coverage counsel well-versed in this challenging and dynamic line of coverage.

[1] --- A.3d ----, 2015 WL 2371957 (Conn. May 26, 2015), aff’g 83 A.3d 664 (Conn. App. Ct. 2014).
[2] Recall Total, 83 A.3d at 667.
[3] Id.
[4] Id. at 668.
[5] Id.
[6] Id.
[7] Id.
[8] The current standard industry form states that the insurer “will pay those sums that the insured becomes legally obligated to pay as damages because of ‘personal and advertising injury,’” which is defined to include “[o]ral or written publication, in any manner, of material that violates a person’s right of privacy.” ISO Form CG 00 01 04 13 (2012), Section I, Coverage B, §1.a., §14.e.
[9] Recall Total, 83 A.3d at 672.
[10] Id.
[11] Id. at 673.
[12] Id.
[13] Id. at n.9 (emphasis added).
[14] Id. at 672 (emphasis added).
[15] Recall Total, 2015 WL 2371957, at *1.
[16] Travelers Indem. Co. of America v. Portal Healthcare Solutions, LLC, 35 F. Supp. 3d 765, 771 (E.D. Va. 2014).
[17] See, e.g., Hartford Cas. Ins. Co. v. Corcino & Assocs., 2013 WL 5687527, at *2 (C.D. Cal. Oct. 7, 2013) (upholding coverage in a data breach case for statutory damages of $1,000 per person under the CMIA and statutory damages of as much as $10,000 per person under the California Lanterman-Petris-Short Act under a policy that covered damages that the insured was “legally obligated to pay as damages because of ... electronic publication of material that violates a person’s right of privacy”).
[18] One of the exclusionary endorsements, entitled “Exclusion - Access Or Disclosure Of Confidential Or Personal Information,” adds the following exclusion to the standard-form CGL primary policy:

This insurance does not apply to:

Access Or Disclosure Of Confidential Or Personal Information

“Personal and advertising injury” arising out of any access to or disclosure of any person’s or organization’s confidential or personal information, including patents, trade secrets, processing methods, customer lists, financial information, credit card information, health information or any other type of non public information. This exclusion applies even if damages are claimed for notification costs, credit monitoring expenses, forensic expenses, public relations expenses or any other loss, cost or expense incurred by you or others arising out of any access to or disclosure of any person’s or organization’s confidential or personal information. CG 21 08 05 14 (2013).
[19] See Roberta Anderson, “Coming To A CGL Policy Near You: Data Breach Exclusions,” Law360, April 23, 2014.
[20] ISO Commercial Lines Forms Filing CL-2013-0DBFR, at pp. 3, 7-8 (emphasis added).

Next Tsunami of Work Comp Payments

Work comp payments may expand greatly because of a California decision awarding death benefits related to a drug overdose.

2009 was a milestone year in workers' comp. In that year, the Centers for Medicare and Medicaid Services (CMS) formally announced that it would review future prescription drug treatment in Workers' Compensation Medicare Set-Aside (WCMSA) proposals based on "appropriate medical treatment as defined by the treating physician." While U.S. society at large and the Centers for Disease Control and Prevention (CDC) had already taken notice of the prescription drug epidemic, this new requirement more clearly highlighted high-cost drug regimens that were doing more clinical harm than good.

Yes, the monthly drug costs were already known to be expensive. Yes, reserves often had to be raised annually. But until the workers' comp industry had to follow explicit rules to calculate the lifetime cost associated with continued inappropriate polypharmacy regimens, the problem hadn't really registered. The new requirement dramatically changed the ability to settle and close a claim, so addressing the overuse and misuse of prescription drugs, primarily related to non-malignant chronic pain, became a white-hot priority. The financial exposure highlighted by the WCMSA was a tsunami that changed the contours of the claims shoreline.

Well, another milestone has been reached in workers' comp. I have been talking about it over the past three years as well, because I could see the riptide indicators of the next tsunami. And now the surge is about to hit the shore. This next workers' comp tsunami? Death benefits paid because of drug overdoses.

Such benefits have already been affirmed in a handful of states, among them Pennsylvania (James Heffernan), Tennessee (Charles Kilburn) and Washington (Brian Shirley). Death benefits have been denied in other states, including Connecticut (Anthony Sapko) and Ohio (John Parker). I'm sure this is not a complete list. It shows how individual circumstances and jurisdictional rules can drive different decisions, but what is not up for debate is that payers face an issue with injured workers dying from overdoses (intentional or unintentional) of prescription drugs paid for by workers' comp.

The game-changer could be a new decision in California, South Coast Framing v. WCAB. The full Supreme Court decision can be found here, and a good article that gives additional context can be found on WorkCompCentral (subscription required).

To summarize: Brandon Clark died on July 20, 2009. The autopsy reported that his death "is best attributed to the combined toxic effects of the four sedating drugs detected in his blood with associated early pneumonia." Elavil, Neurontin and Vicodin were prescribed by his workers' comp physician, while Xanax and Ambien were prescribed by his personal doctor. Of that list, the four sedating drugs are Elavil, Vicodin, Xanax and Ambien -- obviously a mixture of workers' comp and "personal" drugs. The qualified medical evaluator (QME) ascribed the overdose to the additive effect of Xanax and Ambien, not to the workers' comp drugs. However, he allowed that Elavil and Vicodin could have contributed (the deposition quotes on pages three and four remind me of a Monty Python skit, as he tried, inartfully, not to provide apportionment).

So ... what is the strength of causality between the industrial injury and the death? Tort law is much more precise in its understanding: cause in fact and proximate cause. Workers' comp (which is no-fault) is not tort, and neither is its definition of causality -- a contributing cause of the injury.
Did Clark misuse or overuse the drugs through willful misconduct? Possibly. Should one of his physicians have recognized the additive sedative effects of the drug combination and done something different? Probably. Was Clark trying to address continued legitimate pain that originated with his workplace injury? Likely. Is this a tragedy? Definitely.

So the decision came down to whether the workers' comp drugs (Elavil and Vicodin) could have been part of why Clark died. The Court of Appeal concluded that Elavil only "played a role" and was not a "significant" or "material factor." The Supreme Court found substantial evidence that Elavil and Vicodin contributed, to some degree, to his death. It therefore awarded death benefits to Clark's wife and three children.

What does this mean? At least in California, it means that the bar for establishing causality (did workers' comp drugs somehow contribute?) is not as high as you might have expected. There is no further debate, because this is a Supreme Court decision.

Does that mean more death benefits are to come in California? In a highly litigious state where representation is commonplace, and where prescription drug use for chronic pain is an overwhelming problem? Hmmm ... My "magic eight ball" is in for maintenance, but my educated guess (I am not an attorney) would be yes.

What about other states? Well, every state has different rules and case history, but because trends often start in California, and because the Supreme Court was articulate in its decision-making process, it's possible this will cause a re-examination by all parties. The fact that some states already have established case law granting death benefits could have a compounding effect. Therefore, it's a definite maybe.

This may be an isolated case that has no repercussions in California or elsewhere. On the other hand ... Consider this your RED FLAG warning for the riptide that precedes the tsunami. And you thought paying for drugs was expensive!