
The Aging Workforce and Succession Plans

The exodus of Baby Boomers from the workforce will create huge knowledge gaps, but few insurers have taken notice.

In 1969, Neil Armstrong became the first man to set foot on the moon, marking the culmination of a $24 billion NASA space program. Ten years later, NASA sheepishly admitted it could not return to the moon even if it wanted to -- it couldn't remember how. This is a perfect example of the "knowledge gap": the loss of critical information when employees leave their place of employment. In NASA's case, all the key people involved in the original Apollo 11 project had retired…and no one thought to jot down what they knew. To make matters worse, blueprints for the Saturn V, the only rocket powerful enough to travel to the moon, were lost.

Even though this NASA fumble took place 30 years ago, the same scenario is playing out in spades as Baby Boomers (those born between 1946 and 1964) reach retirement age. Most employers have made no effort to capture the Boomers' knowledge before they leave. In the next 20 years, 76 million Boomers will sing the Johnny Paycheck song as they walk out the door, taking with them an entire generation's worth of knowledge that can never be replaced. There is an inconvenient truth in this potential calamity: most companies aren't ready for the aftermath.

Boomers make up more than one third of the nation's workforce. They fill many of its most skilled and senior jobs. Thanks to their near-workaholic habits, they are among the most aggressive, creative and demanding workers in the market today. Economists predict their exit will leave a great, sucking hole in the workplace universe. Companies need to bear in mind that the coming wave of retirements will be larger than at any other time in U.S. history. With 76 million Boomers leaving the workforce and only 46 million Generation Xers (those born between 1965 and 1980) available to take the newly vacant roles, there will be a deficit of 30 million workers.
So while the Millennials (also known as Generation Y -- those born between 1981 and 1995) number approximately 100 million, the oldest of them are still too young and inexperienced to step into leadership roles. A study earlier this decade by the Bureau of Labor Statistics reported that more than 17% of Boomers holding executive and managerial positions were expected to leave their careers by 2010. While some companies have begun scrambling to hire trainees and close the potential knowledge gap created by the Boomer exodus, most haven't even taken notice, according to Elizabeth Kearney, founder and president of Kearney & Associates, a nationwide alliance of experts who specialize in this trend. In fact, according to the Institute for Corporate Productivity (i4cp), only 29% of responding organizations report that they incorporate retirement forecasts into their knowledge transfer practices. Furthermore, i4cp found that only a third add "skills gap analysis" to those forecasts; less than half say they train their managers to identify critical skills; only 23% are educated in critical skills transfer; and most companies admit they do not formally measure the effectiveness of their knowledge transfer practices.

Cornerstone OnDemand has released a whitepaper finding that most organizations, particularly larger ones, are not ready for the pending talent shortage caused by the looming retirement of Boomers. The paper, titled "Managing Talent in the Face of Workforce Retirement," summarizes key findings of Knowledge Infusion's "2010 Talent Readiness Assessment," which indicates, among other things, that:
  • Organizations with more than 2,500 employees indicated that approximately one in five workers are over the age of 55;
  • More than 50% of respondents said the retiring workforce will cause a knowledge/skill gap; and yet,
  • Less than 30% of organizations that responded had a knowledge retention plan in place.
David DeLong, author of the book Lost Knowledge: Confronting the Threat of an Aging Workforce, recently pointed out that there are direct and indirect costs associated with lost knowledge. Direct costs occur through the loss of workers with specific knowledge through retirement and attrition. When these experts are no longer around, it accentuates the indirect costs of knowledge loss: poor documentation and storage. A holistic approach is necessary to deal with an aging workforce and knowledge retention problems, according to DeLong. The approach combines effective knowledge transfer practices, knowledge recovery initiatives, strong knowledge management technologies and, finally, more effective HR processes and practices to deal with the problem on a more systemic level. Here are four things DeLong recommends companies do to deal with aging workforce problems:
  • Harvest critical information now and make it available at point-of-need. Companies should begin by identifying where they are most at risk from the loss of knowledge and experience. This involves, in part, establishing performance management and career development processes that identify employees with the most critical knowledge and expertise. For example, to sustain business after the 9/11 attacks, Delta Air Lines was forced to make workforce cuts to remain competitive. This meant that Delta had less than two months to identify which of the 11,000 laid-off employees had jobs for which no backups or replacements had been trained, and then capture that knowledge before it walked out the door. Supervisors worked with a team from Delta’s learning services unit to narrow the list down to those veterans whose departure would represent a critical job loss. Once these outstanding performers were identified, they were interviewed about their roles at the company. This way, Delta retained as much critical knowledge as possible on very short notice.
  • Use real-time collaboration tools to enable workers to interact with colleagues. As collaboration and knowledge management have grown, relevant technologies and tools have become increasingly sophisticated. Things like workspace portals are revolutionizing knowledge management and collaboration solutions by giving workers access to enterprise data and applications, productivity and virtual collaboration tools, and documented knowledge, all of it personalized.
  • Use advanced e-learning techniques. Performance simulation gives employees the opportunity to practice, in real time, the key skills and competencies they must acquire to address knowledge drain.
  • Employ better workforce planning and targeted knowledge retention initiatives to address the brain drain that now threatens entire industries.
"Companies need to proactively assess their organizations and determine a plan of action before this threat becomes a reality," said Adam Miller, president and CEO of Cornerstone OnDemand. "Understanding the overall goals of the organization and which employees are key to achieving these goals -- including their role, skills and level within the company -- is important to implementing a retention plan."

Not all employers are ignoring the inevitable. The i4cp study found a number of up-and-coming practices in use or under consideration. "Communities of practice" are utilized by a third of all responding companies to transfer knowledge, and the use of Webcasts and services such as "Lunch and Learn" sessions and SharePoint is on the rise. Harvesting the knowledge is only part of the equation. The captured knowledge must then be reformatted into a usable database with easy access for the employer. It does no good to house the data in a three-ring binder and then place it on a dusty shelf, never to be seen again.

Northrop Grumman has been at the forefront of knowledge management for many years. In 1997, with the Cold War behind them, thousands of NG engineers who had helped design and maintain the B-2 bomber were asked to leave the integrated systems sector. In a short period, 12,000 workers filed out the door, leaving only 1,200 of an original staff of 13,000 employees to help maintain the current fleet of bombers. The 12,000 took with them years of experience and in-depth knowledge about what was the most complex aircraft ever built. Without appropriate measures, this could have been a disaster of epic proportions. Instead, before the exodus, NG formed a "Knowledge Management Team" that identified the top experts and videotaped interviews with them. To this day, the company uses a variety of tools to retain and transfer knowledge from its engineers -- before they retire.
The company has implemented document management systems, as well as common work spaces to record how an engineer did her job for future reference. NG also brings together mature and young engineers across the country to exchange information via e-mail or in-person about technical problems. No company wants to be in the position in which NASA found itself -- having to explain why it can’t recreate the single greatest event in modern history. If employers don’t plug the knowledge gap prior to the great Boomer exodus, it’s going to be more than just Houston that has a problem.

Daniel Holden


Dan Holden is the manager of corporate risk and insurance for Daimler Trucks North America (formerly Freightliner), a multinational truck manufacturer with total annual revenue of $15 billion. Holden has been in the insurance field for more than 30 years.

How to Apply ERM to Cyber Risks

Insurance and reinsurance are not alternatives to ERM; cyber risks must be assessed and mitigated like all other risks.

The advent of new technologies has enabled risk stakeholders to perform enhanced data analytics and gain more insight into the customer, risk assessment, financial risk management and quantification of operational risk. Companies manage many risks aligned to their risk profile and risk appetite. They do so through risk awareness and risk assessment. The visionaries and early adopters do so dynamically, using mathematics (stochastic or actuarial methods) and simulations of the future based on historical loss data to correlate all the risks of the enterprise into one holistic view. Factors to consider include:

Cyber risk. Operational risk affects every organization on an equal basis and is often quantified as a percentage of gross written premiums. Cyber risks are no different from any other risk in terms of risk management and risk transfer. However, IT departments, even with the best of intentions, can increase cyber risk through their strategy -- and there is no silver bullet to protect the company. Keyless signature infrastructure (KSI) enables companies to plan data breach strategies in which systems administrators are no longer involved in the security process. This will bring great comfort to risk managers who see new technology being introduced that would otherwise increase cyber risk.

Risk mitigation. Insurance and reinsurance are not alternatives to enterprise risk management (ERM). Risk transfer programs should be used to address structural residual risk. In EY's experience, companies can identify risks and adopt leading practices to ease the process of finding the right cover at the right price -- with the correct reinsurance optimization. The insurance industry should insist upon this enterprise level of risk mitigation before it issues cover for large risks and data breaches.

Risk modeling. The exercise in Figure 1 uses a robust industrial risk modeling tool to look at cyber risk.
The red is the tail value at risk (TVaR), the area that needs to be mitigated by risk transfer mechanisms. Reinsurance, the most obvious mechanism, is not a replacement for leading-practice risk management. The assumption is that data integrity standards have already been adopted, so we are looking at the residual risk mitigation that follows that implementation.

The bottom graph shows the situation prior to reinsurance, where small claims are aggregated and a long tail cuts into the company's risk-based capital limits. The top graph shows a leaner risk situation after the application of reinsurance, bringing it back into the comfort zone. The standard deviation process will depend on how the regulator views cyber risk and solvency. Currently, solvency models are geared on average to a 1-in-200-year event, which may be suitable for earthquake and other peril risks but is likely to differ for cyber risks and to vary by country risk appetite.

Other risk transfer mechanisms. In addition to reinsurance, cyber captives are used to address continuing risk. A point worth noting is the potential to mathematically create a "cyber index" in the same manner that weather and stock market indices appear in macroeconomic models representing market risk exposure correlation to other enterprise risks. This cyber index could be created from the data patterns of the cyber catastrophe models and other data and then used as a threshold to trigger a data breach claims process following notification of a breach.

Special-purpose vehicles (SPVs). This risk transfer approach is used in conjunction with capital market investors and sponsors, and it is similar to the catastrophe bond investments that protect countries from earthquake risk. It creates a bond shared by government and private industry to pay and share claims by loss bands in the event of a large or black-swan event.
While these partnerships are very effective, such bonds often have a 10-year span; a shorter-lived vehicle would be more suitable for cyber.

Sidecars. For natural catastrophes, these two-year vehicles have been referred to as sidecars: an SPV derivative of a captive in which investors invest in a risk via A-rated hedge funds. If the event has not taken place within a given time frame, investors receive their money back with interest. This makes cyber risk part of an uncorrelated portfolio investment for chief investment officers. Investors can also base investment on the severity level of the attack, so investments are not lost on all events. It will take time for this SPV approach to evolve alongside reinsurance and captives, but with good data quality, proper event models, ratings and adoption of KSI and other standards in the IT space, the capability to use capital markets to transfer cyber risks will emerge. Data integrity standards would increase investor confidence in such SPVs. For the full report on which this article is based, click here.
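The tail-value-at-risk concept discussed above can be sketched numerically. The following Python example is a minimal illustration, not taken from the EY report: it simulates hypothetical annual cyber losses, computes the 99.5% TVaR, and shows how an excess-of-loss reinsurance layer (the attachment point and limit are arbitrary assumptions) trims the tail that would otherwise cut into risk-based capital.

```python
import random

def tvar(losses, level=0.995):
    """Tail value at risk: the mean of losses beyond the `level` quantile."""
    s = sorted(losses)
    tail = s[int(level * len(s)):]
    return sum(tail) / len(tail)

def simulate_year():
    """One hypothetical year: zero to five breach events with heavy-tailed severities."""
    events = random.randint(0, 5)
    return sum(random.paretovariate(1.8) * 1_000_000 for _ in range(events))

random.seed(1)
gross = [simulate_year() for _ in range(100_000)]

# Excess-of-loss reinsurance: the reinsurer pays the slice of each annual
# loss between the attachment point and attachment + limit; the cedent
# retains everything below and above the layer.
attachment, limit = 5_000_000, 20_000_000
net = [g - min(max(g - attachment, 0.0), limit) for g in gross]

print(f"Gross TVaR(99.5%): {tvar(gross):,.0f}")
print(f"Net TVaR(99.5%):   {tvar(net):,.0f}")
```

In a real solvency exercise, the loss distribution would come from a calibrated catastrophe model rather than a toy Pareto severity, and the 1-in-200-year (99.5%) level would follow the regulator's standard.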

Shaun Crawford


Shaun Crawford leads Ernst & Young's $1.4 billion global insurance business. He has been in the financial services industry for 27 years, having worked in both consulting and line management with the majority of European life assurers and U.K. retail banks at some point.

Has OSHA Become a Friend to Insurers?

A little-known regulation may provide a powerful tool outside of workers' comp law that can reduce fraud and lost work time.

It may be possible for employers to take a whole new approach to workers' comp cost containment based on an OSHA regulation that allows an employer to require injured workers to undergo a prompt medical exam outside of the workers' comp system and to obtain the release of prior medical records.

Most employers are unaware that they can utilize this little-known and virtually untried regulation, which allows employers to pay for second medical opinions under OSHA recordkeeping requirements. The regulation can be found in §§ 1904.7(b)(3)(ii) and (b)(4)(viii).

There are two major facets to this regulation. First, employers must pay 100% of the medical exam costs outside of the workers' comp system. Second, insurance companies and third-party administrators (TPAs) cannot schedule or pay for such exams, because they cannot work outside the state workers' comp system.

The costs of such exams are not included in an employer's overall workers' comp claim costs, nor are they included in experience modification calculations. The costs for such a program would have to come out of another budget, such as risk management or safety.

Of major significance is that, while the regulation states such exams are outside workers' comp regulations, with proper procedure they and the related medical records are discoverable. They may be released and used in workers' comp claim adjudication.

This approach would not necessarily require any change in how medical professionals provide exams for injured workers. What would change is how these exams are scheduled and paid for: outside workers' comp.

A key issue is that these employer-directed exams would have to be "contemporaneous," which OSHA defines as "no change in workers' condition" between the medical exams.

There would be a very short window, in my opinion, to utilize this OSHA prompt medical exam. These exams would need to be scheduled at the same time as the initial injury reporting.

The intent of OSHA is to allow employers to choose between two conflicting medical opinions (employee medical provider vs. employer medical provider) as to whether an injury or illness is "recordable" under OSHA regulations based on "authoritative" medical opinion.

OSHA regulations are silent on two fronts:  1) the actual timelines beyond "contemporaneous" and 2) whether these medical exams and prior medical records can be used through subpoena to question the need for continuing medical treatment and lost time under state workers' comp.

But I see no reason why the results of such an exam could not be used even after a claim is determined to be "recordable" under OSHA regulation, because the prompt medical exam and second medical opinions and reports are "discoverable" under proper procedure in state work comp systems, according to OSHA.

The employer can simply say, "We paid for a prompt medical exam under OSHA regulations, and this is what we found out." The employer would then have the right to share this information with its insurance company or TPA because the injured worker must agree to the exam and release of prior medical records.

This approach would be a great tool for employers in states such as Illinois whose workers' comp laws allow the employee to select the medical provider.

The recent federal court case in Illinois decided against FedEx may well have had a different outcome if the company had used federal OSHA regulations to support its policy of requiring employees to promptly report medical care.

Illinois state workers' comp law, among many others, clearly gives the employee the right to select the treating medical provider. Most people in the industry would say, "case closed!" But OSHA regulations (federal law) clearly also give employers the right to schedule a prompt medical exam and to choose between two conflicting medical opinions to determine the "most authoritative." OSHA also refers to Department of Transportation exams as an example of intermediary exams available to employers. Those exam records and results are not part of the comp record, but, with proper procedure and use of subpoena, records may become discoverable in work comp cases.

Employers have always felt powerless in states that allow the injured worker to select the treating medical provider, such as Illinois and New York. By using OSHA regulations, employers may very well have a powerful management tool in their arsenal that they didn't even know about to address potential fraud, abuse and inappropriate medical care and lost-work time.

Hence, I believe a little-known, rarely utilized tool is available to employers under OSHA, outside of state workers' comp, and it could be very powerful.

Stay tuned.

Daniel Miller


Dan Miller is president of Daniel R. Miller, MPH Consulting. He specializes in healthcare-cost containment, absence-management best practices (STD, LTD, FMLA and workers' comp), integrated disability management and workers’ compensation managed care.

9 Technologies That Will Change Insurance

If driverless cars end personal auto insurance, how will that affect other products? How do we assess the risk of a 3D-printed structure?

"We're at maybe 1% of what is possible. Despite the faster change, we're still moving slow relative to the opportunities we have." This compelling statement from Larry Page, CEO and co-founder of Google, epitomizes the power and potential of emerging technologies. Yet most insurers have difficulty comprehending how fast emerging technologies are being introduced. And the pace is gathering speed, having a profound impact on our lives, our businesses and our industry.

Moore's Law tells us that computing power doubles every 18 to 24 months, but even that seems irrelevant compared with the power of emerging technologies, because they are coming faster, and they are more formidable than ever before. This rapidly accelerating pace comes at a time when the convergence of advancing technologies, increasing customer expectations and access to capital for new technology start-ups is magnifying the extremes, and the impact on the insurance industry is more game-changing than ever before. Never before has technology advancement had as much influence as what we are experiencing now. Technologies promise breakthroughs that will challenge long-held business assumptions and shift the boundaries between business and industry -- creating completely new businesses and industries.

SMA is actively tracking nine emerging technologies: 3D printing, the Internet of Things (IoT), drones/aerial imagery, driverless vehicles, wearable devices, "gamification," artificial intelligence, semantic technologies and biotechnology. We are following them from a perspective inside the industry as well as taking an "outside-the-industry" view. Not surprisingly, adoption is being led by the Internet of Things, followed by artificial intelligence (AI), drones/aerial imagery and then gamification. The insurance industry's rapid adoption is impressive. Five of the nine technologies are projected to arrive at or go well beyond the tipping point within three years.
All nine are projected to surpass the tipping point within five years. Adding to the momentum, individuals and companies that are part of SMA's Innovation Ecosystem and represent outside-the-industry perspectives see an even faster rate of adoption and greater potential for the transformation of insurance. This underscores that the insurance industry is on the crest of a massive wave of change.

Over the next five years, these emerging technologies, just like the Internet, smartphones and social media before them, are expected to drive new business models and foster the formation of companies from unexpected combinations of companies and industries -- capturing the customer relationship and revenue. The astounding influence of these technologies -- over a relatively short period -- will begin to delineate a new generation of market leaders within and outside the insurance industry. Who will be the next Facebook, Uber or eBay?

So how should insurers respond to this rapid adoption? Insurers must quickly begin to develop strategies and experiment with and invest in these technologies today. If not, many insurers will be placed at significant risk, because there is typically a minimum two-year lag between leaders and the mainstream and a minimum four- to five-year lag between leaders and laggards. And given the pace of adoption of these technologies by insurance customers, the lag carries more potential for damage than it did in the past. Consider that Apple introduced the iPhone just seven years ago, in June 2007. The result has been massive destruction and transformation that has created new leaders while forcing others into increasing irrelevance. While it may be difficult to grasp the sheer magnitude of the change coming from the emerging technologies, remember that Larry Page of Google says we are only seeing 1% of the potential.
Insurers must aggressively find a way to engage these technologies and uncover their potential, first to stay in the game, and then to win it. To do so, insurers must have modern core systems as a foundation for integrating these technologies. Consider these questions:
  • How will product liability need to be redefined for driverless vehicles?
  • If individuals or businesses no longer need auto insurance, what is the impact on other products? Multi-policy discounts? Will the driverless car encourage shopping for alternative options? Will it drive commoditization into other products?
  • How will insurers assess the value and risk of a 3D-printed structure, body organs or vehicle parts?
  • How will biotechnology-based agriculture change risk factors?
  • How will drones help underwriting and claims? Can drones also provide resources needed during catastrophes, creating new services and value?
  • Could gamification be a new channel to drive increased market penetration through engagement and education about life insurance, health, medical, liability, home, umbrella and more?
These are but a few of the implications for insurance. They are inter-related and complex. They underscore the significant disruption that is coming, and coming fast, as represented by the five of nine emerging technologies that will reach the tipping point within three years … and some much sooner. Insurers that have not begun to pilot these technologies are already lagging and will struggle to keep up with this accelerated pace of adoption -- not just by today's competitors, but also by tomorrow's competitors, as well as their customers. That poses a question: Will you remain relevant, or become the next Kodak, Blockbuster Video, Borders or CNN of insurance -- the iconic brand that dies?
The coming years hold unparalleled opportunities for innovation and matchless potential for becoming market leaders that leverage emerging technologies to increase customer value, engagement and loyalty to insurers. As Steve Jobs stated, "Everyone here has the sense that right now is one of those moments when we are influencing the future." The question to you is: Will you influence the future or be a remnant of the past? This article is adapted from a new research report, Emerging Technologies: Reshaping the Next-Gen Insurer.

Denise Garth


Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.

What the Apple Watch Says About Innovation

The watch is designed to bolster the iPhone -- but innovators must be willing to move beyond old products and business models.

Now that the dust has settled on the long-anticipated unveiling of the Apple Watch, a major obstacle to its success is coming into view: the iPhone.

The Apple Watch has been the subject of breathless anticipation for years because, as Tim Cook said at its introduction, it represents “the next chapter in Apple’s story.” Conceived three years ago, shortly after Steve Jobs’ passing, the Watch is the embodiment of multiple dramatic arcs and aspirations. It is the first major product developed under Tim Cook and Jony Ive outside of Jobs’ shadow—and thus has huge personal and legacy implications for both men. The Watch is also Apple’s attempt to catalyze and dominate the wearables category. Given the intense competition in the smartphone market and the widespread view that new killer products, platforms and ecosystems will emerge somewhere at the intersection of the Internet of Things and wearable computing, the Watch is central to Apple’s post-iPhone strategy.

It might seem that the iPhone should be the Apple Watch’s greatest asset. Apple is positioning the Watch as a jaw-dropping, must-have peripheral to the iPhone. Millions of iPhone-toting Apple fans are sure to queue up upon the Watch’s 2015 launch to buy it. But do not mistake early adopters for market validation. For billions of other potential customers, the Watch’s close linkage and tethering to the iPhone could be a fundamental weakness.

In the short term, Apple must convince existing customers that they need a Watch in addition to their iPhone. Apple, however, has yet to offer a convincing case for this. Long-rumored groundbreaking health apps built on Watch-mounted sensors have not materialized—disappointing many healthcare watchers (including me). That leaves Apple competing against more narrowly focused wearable devices like the Fitbit and Pebble—but at multiple times the price and fractions of the battery life. Apple is also touting Apple Pay as a killer app that will attract consumers to the Watch.
But, while Apple Pay is an intriguing service-oriented strategy for Apple, there is no need for consumers to buy an Apple Watch to use it. Apple Pay will work fine with just the iPhone. For now, it seems that Apple has higher hopes for the Watch as a fashion accessory than as a category-defining killer app. But even that highbrow aspiration has ample skeptics who question the Watch’s fashion chops and business potential.

In the long term, when and if compelling apps emerge for the Watch, Apple will have to convince Watch enthusiasts that they need an iPhone in addition to the Watch. This might not seem like a limiting factor, given that there are more than 300 million active iPhone users. But imagine if the iPhone were just a peripheral to the Mac, thereby limiting its addressable market to Mac owners. Or imagine if the iPhone had to be tethered to the iPod. Do not such scenarios, in retrospect, sound implausibly shortsighted? Both the Mac and the iPod were great products with loyal followings at the iPhone’s introduction. Apple, however, did not limit the iPhone to its predecessors’ market niches. As shown in Figure 1, the result was a blockbuster that lifted Apple far beyond those earlier products. The iPhone has grown to represent more than half of Apple’s revenues and perhaps even more of its profits.

[Figure 1 — Apple Device Sales]

Now the iPhone has a loyal following but a small share of the smartphone market. Will Tim Cook limit the Apple Watch’s success to iPhone owners, or will Cook free it to dominate the potentially larger wearable devices space? Freeing the Watch is a strategic imperative. History tells us that market-leading technology products like the iPhone inevitably fade. The companies that depend on them must innovate into the succeeding categories or fade as well. Kodak, Polaroid, IBM, DEC, Nokia, Motorola, Blackberry, Intel, Sony, Dell and Microsoft are among those fading or faded companies.
All of those other companies underutilized disruptive advances in information technology for (at best) incremental enhancements to their dominant products. By doing so, they missed out on new killer products, business models and industries that coalesced around the new platforms enabled by those technology advances. Thus, Kodak wasted decades trying to deploy digital photography (which it invented) as an enhancer to its dominant film-driven businesses. Microsoft was slow to the web and the cloud and killed its early e-reader and tablet devices because of internecine struggles over how those new categories related to its Windows and Office businesses. The list goes on: IBM did not lead in minicomputers. DEC and every other leading minicomputer maker missed out on personal computers. Motorola and Nokia were killed by smartphones, and Blackberry is near death. Limiting the Watch to a peripheral role in the iPhone-centric ecosystem would repeat the same mistake made by those earlier market-leading technology companies.

That’s not to say there is not a lot of money to be made in the defend-the-cash-cow approach. Just look at the more than $650 billion in revenue and nearly $250 billion in earnings that Steve Ballmer delivered in his tenure as Microsoft CEO. Ballmer achieved those impressive numbers by defending and milking Microsoft’s dominant Office and Windows products. Ballmer, Microsoft and its investors missed out, however, on the market value created by Google, Apple, Facebook, Twitter and others that capitalized on search, big data, cloud computing, mobile devices and social media. Ballmer’s inability to grow beyond the core products that he inherited stagnated Microsoft’s market value for a decade. Likewise, Tim Cook could nurse Apple’s iPhone-driven revenue stream for a long time. I doubt, however, that Tim Cook would be satisfied with a value-creation legacy comparable to Steve Ballmer’s.
It is too early to dismiss the Apple Watch’s potential to transcend the iPhone. We’ll get a measure of Apple’s foresight when it releases the software development kit (SDK) for the Watch. That will show how fundamentally tethered the Watch is to the iPhone and whether Apple has laid the groundwork for the Watch to stand alone at some point. The real gut check for Tim Cook is further out in time, when technology and creativity enable wearable devices like the Watch not only to stand alone from the iPhone but also to replace it. Will Tim Cook allow the Watch to cannibalize iPhone sales—as Apple previously allowed the iPhone to eat away at the iPod and risked the iPad's doing the same to the Mac? Or will Apple stagnate as competitors and new entrants out-innovate it? Will Apple fade away as the riches from new killer apps, devices, ecosystems and business models that coalesce around emerging wearables-centric platforms flow to others?

A Better Way Than Predictive Modeling

Issues indicating a workers' comp claim is becoming a major problem can be addressed in real time -- if the right systems are in place.

Even though it's obvious that early intervention drives better outcomes, few systems are in place to guarantee early intervention into problematic workers’ compensation claims. One widely acclaimed effort to identify troublesome claims early is predictive modeling, which conducts elaborate analysis of data using advanced mathematical methods. But finding those claims remains a guessing game. Not every future problematic claim will be tagged, because many do not meet the modeling criteria. Predictive modeling can be helpful but is imperfect and costly. A more practical methodology for identifying costly claims is to monitor the data on a concurrent basis to uncover dicey conditions in them. Rather than predicting which claims will be risky, the conditions that portend risk and cost in claims are isolated and then identified as they occur. Technology is used to find the claims that bear those conditions whenever they occur throughout the course of the claim. All claims are monitored, so none are missed. No guesswork is involved. The data monitoring approach is powerful, but, as with predictive modeling, only when the next step is taken. Organizations that undertake a process for identifying risky claims early stand to lose the entire benefit unless they also structure procedures for intervention. The appropriate persons must be notified about problematic claims immediately, and those persons must carry out the recommended procedures.
Whether the alert recipients are claims adjusters, medical case managers or someone else, they must follow intervention procedures. Using the data monitoring approach, each condition that is sought in the data should be associated with a prescribed intervention procedure. Follow-up procedures should be specific so that analyses can be made of their effects. For instance, when the condition identified in a claim is the third prescription for a Schedule II drug, the system might be set to alert the medical director. A standard method for intervening with the treating doctor is then followed. If medical treatment is extended past a designated point for low back strain, an alert would be sent to the claims adjuster, whose procedure is to engage a medical case manager to investigate. The investigation procedure is standardized. The important thing is that intervention procedures are analyzed in advance and clearly stated so the actions are consistent across the organization and over time, regardless of who actually carries out the intervention. Medical case management as a strategy has been undervalued because each intervention was handled individually, meaning that results could not be measured. But it's possible to standardize and categorize tactics so outcomes can be measured and compared. Claims adjusters sometimes fail to refer cases to medical case management for a variety of reasons. When early intervention procedures are standardized, the referral is automatically made by the system, thereby eliminating the burden of referral for claims adjusters. Efficiency is good. Simply stated, early intervention is more effective than late intervention. The problems have not yet morphed into catastrophes and are usually easier to solve. When systems and procedures are established to automate problem-claim identification and follow-up procedures, best results can be pinpointed. The approach that produced those results can be continued.
Approaches that produce lesser results can be modified. The upward spiral of quality improvement is continuous.
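The monitor-and-intervene loop described above can be sketched as a small rule table in code. This is a hypothetical illustration: the condition names, thresholds and alert recipients below are invented for the example, not drawn from any actual claims system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    schedule_ii_scripts: int = 0   # count of Schedule II prescriptions filled
    diagnosis: str = ""
    treatment_weeks: int = 0

# Each watched condition is paired in advance with an alert recipient and
# a prescribed intervention procedure, so responses stay consistent
# regardless of who carries out the intervention.
RULES = [
    ("third Schedule II prescription",
     lambda c: c.schedule_ii_scripts >= 3,
     "medical director",
     "intervene with treating physician per standard protocol"),
    ("extended low-back-strain treatment",
     lambda c: c.diagnosis == "low back strain" and c.treatment_weeks > 6,
     "claims adjuster",
     "engage medical case manager to investigate"),
]

def monitor(claims):
    """Check every claim against every rule on each data refresh."""
    alerts = []
    for claim in claims:
        for name, triggered, recipient, procedure in RULES:
            if triggered(claim):
                alerts.append((claim.claim_id, name, recipient, procedure))
    return alerts

claims = [
    Claim("WC-1001", schedule_ii_scripts=3),
    Claim("WC-1002", diagnosis="low back strain", treatment_weeks=8),
    Claim("WC-1003"),  # no risky condition, so no alert is generated
]
alerts = monitor(claims)
for a in alerts:
    print(a)
```

Because every claim passes through every rule on each run, no claim is missed and no prediction is required, which is the core of the concurrent-monitoring argument.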

Karen Wolfe

Profile picture for user KarenWolfe

Karen Wolfe

Karen Wolfe is founder, president and CEO of MedMetrics. She has been working in software design, development, data management and analysis specifically for the workers' compensation industry for nearly 25 years. Wolfe's background in healthcare, combined with her business and technology acumen, has resulted in unique expertise.

What Insurers Need to Know About Bitcoin

The technology behind the crypto-currency can be applied in ways that mean just about anyone with the title "broker" should be concerned.

A bitcoin (lowercase b), as a currency, has several flaws that will continue to limit its ability to replace money as we know it. There are millions of words published on the subject, so I’ll leave it to the reader to assess arguments on both sides. However, Bitcoin (uppercase B) as a protocol for the transfer of value is an extremely important innovation that the insurance industry would be wise not to ignore. This article looks at the issue from the point of view where the insurance industry meets the engineering profession; this combination could be where some of the most important and valuable new opportunities arise.

The Block Chain Protocol (BCP)

The Block Chain Protocol is a brilliant innovation that cannot be un-invented – it is here to stay, and it will appear in many forms long after it sheds its association with so-called crypto-currencies. Bitcoin was designed to solve an age-old problem: the possibility of spending a promissory note, such as currency, multiple times. In the case of virtual currency, the problem is especially acute because a currency created on a computer can be easily copied by a computer. The BCP can be compared to a train leaving the station. When the train arrives, the door opens and everyone piles in. After a predetermined amount of time, the doors close. While the doors are closed – and only while the doors are closed – the people write contracts for each other to agree upon. When the doors open, everyone piles out, but the contracts stay. Soon after, the doors close forever. After the doors close, absolutely no changes can be made, ever. Any changes must be renegotiated as part of a new “block” in a continuing “chain” of transactions. This prevents someone from printing “money”, i.e., issuing the same contract to many recipients.
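The "doors close forever" idea can be made concrete with a minimal hash-chained ledger. This is a toy sketch of the chaining principle only, not the actual Bitcoin implementation: each block's hash covers its contents and the previous block's hash, so any retroactive change breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's contents plus the previous block's hash."""
    payload = json.dumps(
        {k: block[k] for k in ("index", "contracts", "prev_hash")},
        sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(index, contracts, prev_hash):
    block = {"index": index, "contracts": contracts, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False          # block contents were altered after sealing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False          # link to the prior block is broken
    return True

chain = [make_block(0, ["genesis"], "0" * 64)]
chain.append(make_block(1, ["Alice pays Bob 5"], chain[0]["hash"]))
chain.append(make_block(2, ["Bob pays Carol 2"], chain[1]["hash"]))
assert chain_is_valid(chain)

# Attempt to reissue the same funds by rewriting a sealed block:
chain[1]["contracts"] = ["Alice pays Bob 500"]
assert not chain_is_valid(chain)  # tampering is immediately detectable
```

Any change to a sealed block must instead be negotiated as a new block appended to the chain, which is exactly the double-spend protection the article describes.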
Today, this function is performed by a legal system, brokers and intermediaries such as banks and credit agencies – it is easy to see how these institutions would be concerned that an upstart technology that is fully decentralized, with no CEO or corporate structure, could literally exterminate their brokerage fees. (While I used a mechanical analogy of doors and trains, the BCP operates using time stamps and cryptography to manage identities, ownership, vetting, etc.) The big deal with bitcoin as a currency is that the value of a contract can be cast in time. The “crypto-currency” simply represents, outside of the block, the value of the exchange inside of the block. Many people, including the media, get hung up on the idea of currency because that is something that obviously concerns everyone in the age of impending financial doom. However, one must not be fooled by hype nor remain complacent and hope the bitcoin issue will go away. The BCP is here to stay, and there are thousands of block chains in existence, not just bitcoin's. Yes, this means threats to the status quo, but there are also great opportunities for those who learn how to use smart contracts to transmit value without institutional friction. The part that the insurance industry should be concerned with is the ability to transmit contracts. When contracts are executed on a block chain and locked cryptographically, they are called “smart contracts.” The seminal work on smart contracts was written by Nick Szabo and introduced in his 1997 primer, The Idea of Smart Contracts. The remainder of this article will focus on one very important type of smart contract: the adjudicated smart contract partnering the insurance industry and the engineering profession.

The Oracle

Adjudicated contracts are contracts involving three parties: the insurer, the insured and the adjudicator. The insurance adjuster should immediately come to mind, but the work of the adjudicator is much more flexible.
In an insurance claim, there is often a forensic investigation involved. In many cases, the investigation may reveal design failures, quality defects, poor workmanship and moral hazard. When a payout is warranted, claim money is drawn for reconstruction and remediation per a contract. The insurance industry depends on actuarial statistics and forensics to manage these risks. What if forensics could be performed and actuarial data compiled before the failure occurs? Adjudication can be integrated directly into the performance contracts as the project is designed and built. Licensed professional engineers can “flip the switch” that releases funding or seals coverage for specific perils as they oversee the design/build contracts during the design, construction and service life of a property. This would allow insurance companies to price risk and adjust exposure pools with extreme accuracy.

Assurance by Design

In other words, it is possible to develop Block Chain Smart Contracts. My firm is doing this for the engineering, construction and property management industries. The concept is to codify current standard contract templates, such as AIA contracts, into a series of smart contracts on a cryptographic block chain. Contractual events will correspond to payment milestones underwritten by bank and insurance institutions. As each milestone is reached, the professional engineer will verify the proof of work and flip the switch that releases the contract to the next insurable component.

The Insurance Industry Is Threatened

Today, many insurance companies are not too concerned with construction risk as long as it is priced correctly. What the insurance industry may not realize is that if too many good properties are subsidizing too many bad properties, private parties with good properties will use these adjudicated contracts to self-insure.
For example, if a 250-unit, high-rise condo spends $4 million on a new potable water system and the insurance premiums are not discounted accordingly, the condo could now easily form its own risk-sharing pool with communities known to have new water systems. With Block Chain Protocol technology and readily available data, almost anyone can now form an insurance pool. The challenge for the insurance industry, then, is to use new technologies to build more and better insurance products – both by improving the legacy tools those products are built on and by rapidly adopting the new technological solutions now available. The Block Chain Protocol may be one of the most important innovations of the digital age. Pretty much anyone with the job title of “broker” should be seriously concerned. History provides countless examples of companies and industries that failed to adapt to change. For this reason, the insurance industry should take the Block Chain Protocol very seriously. The technology is simple by design and requires only some creative adjustment and strategic partnerships to assimilate into the business plan. Nobody will do it for us. We need to do it ourselves.
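The milestone-release mechanism described above can be sketched as a simple escrow object. This is a hypothetical illustration: the milestone names, dollar amounts and license identifier are invented, and a real adjudicated smart contract would be recorded on a block chain rather than held in a single program object.

```python
class AdjudicatedContract:
    """Escrowed milestones released only by the adjudicating engineer."""

    def __init__(self, milestones, adjudicator):
        # milestones: ordered mapping of milestone name -> escrowed amount
        self.milestones = list(milestones.items())
        self.adjudicator = adjudicator
        self.released = []   # append-only log of releases, never rewritten
        self.next = 0        # index of the next milestone due

    def verify_and_release(self, engineer, milestone_name):
        if engineer != self.adjudicator:
            raise PermissionError("only the adjudicating engineer may release funds")
        name, amount = self.milestones[self.next]
        if name != milestone_name:
            raise ValueError(f"milestones release in order; next is {name!r}")
        self.released.append((name, amount))
        self.next += 1
        return amount

contract = AdjudicatedContract(
    {"foundation": 250_000, "framing": 400_000, "plumbing": 150_000},
    adjudicator="PE-Lic-12345",   # hypothetical engineer's license ID
)

# The engineer verifies the proof of work and "flips the switch":
paid = contract.verify_and_release("PE-Lic-12345", "foundation")
print(paid)
```

The insurer, the insured and the adjudicator each interact with the same contract state, which is what lets a risk-sharing pool operate without a traditional broker in the middle.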

Dan Robles

Profile picture for user DanRobles

Dan Robles

Daniel R. Robles, PE, MBA is the founder of The Ingenesist Project (TIP), whose objective is to research, develop and publish applications of blockchain technology related to the financial services and infrastructure engineering industries.

How Leadership Will Look in 20 Years

Leaders will emphasize asking the right questions, not getting the answers right -- after all, Google will have all the answers.

Let’s face it: most of us are addicted to technology futurism. Who doesn’t enjoy speculating about what technology marvels will be commonplace in the coming decades? Will it be 3D printing? Artificial intelligence? The “singularity”? All are buzzwords of the emerging technology future. But what about leadership? If we don’t get leadership right, all the bright shiny objects in the future will dangle beyond our reach. Will the tenets of great leadership change over time, and, if so, what will leadership look like 20 years from now? Here are six major shifts I believe will mark how the most effective leaders will behave in 20 years:

1. Questions, Not Answers. Today’s leaders are addicted to answers. Corporations reward being right at the expense of just about everything else. We promote those who choose correctly, and those who don’t mysteriously disappear from the org chart. But with technology advances, answers are quickly becoming a commodity. Today you can Google just about anything – just imagine how efficient “search” will be in 20 years. Internal systems will capture corporate learning like never before, allowing you to tap deep into the set of corporate experiences. Of much greater value will be the ability to ask the right questions. In a chaotic situation, winning requires focus, and knowing where to focus will be determined by the questions you are asking. In the future, your effectiveness as a leader will be defined by your ability to ask the right questions.

2. Employee Pull. For nearly 100 years, leadership has been a top-down game. The Industrial Revolution brought about scale, and the only way leaders knew to manage this scale was through hierarchy. It was assumed that individuals could only effectively manage between 8 and 15 people, so as companies added more people they added more layers. But today’s marketplace moves and changes at great speed, and the inherent slowness of larger hierarchy is rapidly being trumped by the need for speedy, market-based decisions. Rather than having the “leaders on high” determining strategy and operational decisions and pushing them down through the organization, tomorrow’s winning organizations will delegate decision-making authority to the “edge.” Decisions will not be pushed from the central command – they will be pulled from the edges of the organization, where the employees are closest to customers, and increasingly working directly in partnership with them. The most effective leaders will be those who embrace this extreme empowerment, while still effectively managing quality and risk.

3. Customer Pull. Like employee pull, the entire traditional “push” system of marketing will be turned on its head. Instead of executives in a wood-paneled conference room deciding on products and services to offer and then pushing those offerings out to the market, customers will make more and more of these decisions. Technology will make customization much more prevalent. Our experience when visiting websites will be unique to each of us. 3D printing will allow us to “materialize” what we consume in quantities of one. The Four P’s of marketing – price, product, place, promotion – will be controlled by a fifth and overriding factor – personalization. Leaders in this new marketplace will be those able to set up and manage systems of tools with which customers can interact and create. Great marketers will be replaced by great marketplace managers.

4. Chaos Learning. In global business, the last 20 years have been marked by leaders in pursuit of the elimination of variance. We ask our consultants to simplify the world into a 2×2 matrix, identify “best practices,” write detailed policies and procedures that limit behavior choice and hope that the current version of market reality lasts long enough for these changes to be effective. But stable reality is being replaced by constant change, and at an accelerating pace. Tomorrow’s most effective leaders will embrace this new, chaotic world. Planning will be replaced by intelligent reaction. Leaders will anticipate where the next disruption may come from and prepare for multiple scenarios (windshield, not the bug!). Instead of relying on proven static methods and processes, leaders will focus on building a learning capability, being comfortable with ambiguity, continually working within a changing landscape and anticipating and reacting to it with agility.

5. Focus on Growth. To put it bluntly, efficiency will become ubiquitous. It is going to be a price of entry in nearly every industry. Most activities that are currently labeled “operations” will be shifted to computers and robots (read: unemployment). And we will all have access to the same tools. As a result, the battle for competitive advantage will be fought and won based on growth – rewarding those companies that can consistently invent and commercialize new products and businesses. Leaders who excel will understand and reward the skills and behaviors that create growth and innovation. Examples include “fail fast” experimentation, rapid prototyping and continuous iteration.

6. Purpose. Purpose is important for leaders today. It will become increasingly vital in the future. We are entering a great age of empowerment for both employees and customers. Employees will demand the right to choose who they work for, which tribes they join and which products they associate with. Purpose will be the basis of much of this choice, and the greatest leaders will rally around missions that offer the chance to have dramatic positive impact.
So there you have my futurist vision: 20 years from now, great leaders will ask the right questions, let employees pull information and customers pull their desired products and services, organize for chaos, foster the behaviors of growth and guide the entire system toward a positive purpose. I can hardly wait.

Rick Smith

Profile picture for user RickSmith

Rick Smith

Rick Smith is a speaker, serial entrepreneur and best-selling author. He is perhaps best known as the founder of World 50, the world's premier global senior executive networking organization. World 50 contributors include Robert Redford, Bono, Alan Greenspan, President George W. Bush, Francis Ford Coppola and more than 100 other iconic leaders. More than half of the Fortune Global 1000 are World 50 customers.

Post-SB 863: Now How Do We Contain Costs?

With the bill's failure to cut workers' comp costs, California should take three steps to encourage the right behavior by healthcare providers.

Several recent articles and publications have highlighted the challenges we continue to face in California workers’ compensation. Following the "state of the state" report in August by the Workers Compensation Insurance Rating Bureau (WCIRB), Mark Walls noted in an article that the challenges in California continue to mount as California now accounts for 25% of U.S. workers’ comp premiums, with some of the highest medical costs in the nation. The recent Oregon report noted that California now has the most expensive comp system in the nation, having risen from third most expensive in 2012 to the No. 1 spot -- a dubious distinction that should serve as a continued call to action. As Walls so aptly noted, we in California need to move beyond the notion that we are always going to be different. We cannot continue to mark our “progress” against our own past performance, overlooking the sobering comparison to other states. If we do, we’ll see the return of television commercials touting nearby states as welcoming alternatives for employers. With no shortage of reforms over the past 15 years, Mark’s comment about our focus on reducing frictional costs in the system without really addressing medical provider behavior rings true. The recent reform attempted to tackle the frictional costs, particularly the costs of liens and utilization review (UR) disputes. It was assumed that the lien filing fee and statute of limitations on liens would reduce the extraordinary burdens and costs that were expended to both litigate and settle these expensive and often unjustified charges. It was also thought that independent medical reviews (IMRs) would speed the delivery of necessary medical care and would keep UR disputes out of the courts. Although there certainly appear to be fewer liens, the problem has not been solved. In addition to some inevitable liens for disputed medical treatment, we continue to see liens filed after bills are reduced to conform to the approved fee schedule.
In a state with a fee schedule, why should an employer be forced to litigate or settle a lien for charges that exceed the fee schedule? We know we can resist the lien, have a bill reviewer testify at a lien trial and have a good chance of prevailing. Unfortunately, though, the cost of winning is very high, including the cost of the hearing and the larger cost of keeping a claim open, delaying a settlement and maintaining a reserve. This is the very real dilemma that often causes payers to settle a lien that is not owed, rather than defending against it. What if the prevailing party were reimbursed for the full cost of a lien hearing? Perhaps that would persuade claimants to carefully evaluate their liens before proceeding, while also forcing the defense to evaluate the validity of the lien before allowing the lien to go to trial. The other significant attempt at reducing the frictional costs was the introduction of independent medical review. As a claims administrator that limits the use of utilization review by empowering examiners to approve significant numbers of diagnostics and treatments, what have we seen? More than 97% of the UR decisions submitted to IMR have been upheld. Yet, for those 97%, our clients have incurred the added expense (IMR is not inexpensive), and the claims process was delayed while the IMR process was completed. Some oversight is definitely healthy and necessary. The challenge is in finding a less costly, less time-consuming method of ensuring that injured workers are treated fairly -- a method that actually changes provider behaviors so that the injured workers who are treated by high-performing providers are not swept up in a system of reviews and re-reviews. Although no solution is likely to satisfy all constituents, there must be something we can do to provide incentives for the right provider behaviors. What about using all the medical bill reviews and other data to analyze provider behavior and “certifying” providers?
The consequences could be:

1. A fee schedule “add-on” or bonus for the top quartile of providers
2. A six-month “bye” from utilization review for the top 50% of providers
3. Some sort of added oversight for providers performing below the 50th percentile

This is certainly not as easy as it sounds. Perhaps some representative providers would have some suggestions. Perhaps we should engage them in a discussion. But it doesn’t seem that there can be any harm in considering a “pay for performance” model. The answers may lie in the data, and they may not. The answers may also lie in the programs of one or more of the 49 states that offer less costly workers’ compensation coverage to employers. It certainly behooves us to look everywhere until we find those answers.
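The tiered consequences above amount to mapping each provider's percentile rank to a treatment. A minimal sketch, with invented provider names, outcome scores and tier labels purely for illustration:

```python
def percentile_rank(all_scores, score):
    """Fraction of providers scoring at or below this score."""
    return sum(s <= score for s in all_scores) / len(all_scores)

def consequence(rank):
    if rank >= 0.75:
        return "fee-schedule bonus"   # top quartile of providers
    if rank >= 0.50:
        return "six-month UR bye"     # top half of providers
    return "added oversight"          # below the 50th percentile

# Illustrative outcome scores compiled from bill-review and claims data:
providers = {"Dr. A": 92, "Dr. B": 78, "Dr. C": 61, "Dr. D": 45}
scores = list(providers.values())
tiers = {name: consequence(percentile_rank(scores, s))
         for name, s in providers.items()}
print(tiers)
```

The hard part, of course, is not the tier mapping but agreeing on a defensible outcome score, which is where engaging representative providers would matter.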

Judy Adlam

Profile picture for user JudyAdlam

Judy Adlam

Judy Adlam is president and CEO of LWP Claims Solutions, an organization that leverages a culture of teamwork and excellence to consistently deliver results that are far superior to industry standards. She is a chartered property casualty underwriter (CPCU) and a senior claims law associate (AEI).

5 Issues for Boards on Risk Appetite

To avoid common misconceptions, management must have extensive experience with planned and monitored risk taking.

Many have struggled to find and articulate a risk appetite. It is actually not too hard to find, if you know where to look. It is right there – on the border. Risk appetite is the border between the board and management. Once management has proposed a risk appetite and the board has approved it, then management is empowered to take risks. As long as the risks are within the risk appetite, management does not need to inform the board until after taking those risks. If management plans to take risks that are outside of the risk appetite, then executives must go to the board in advance for permission. That, of course, is just the bare minimum communication with the board about risk. There are five topics that make up a good level of board communications:

1. Risk appetite and plan
2. Risk position and profile
3. Top-risk mitigation and capabilities
4. Emerging risks
5. Major changes to risk environment and risk plan

The first and last items are the subject here. The other topics will be covered in later posts. Notice that the first item on the list above is appetite AND plan. Before discussing risk appetite, both management and the board need to be very familiar with the company’s historic levels of risk and the intentions for risk level. If there is no history of risk planning, it is totally premature to even discuss risk appetite. Management doubtless has vast experience with risk taking, including risk taking that ended up creating losses or other undesirable consequences. But unless there has been experience of planned and monitored risk taking, there is a natural propensity to start with the presumption that, in the past, the highest-risk activities were those that ended in losses and that activities that did not end in losses were lower-risk. While losses are a good indication of one sort of risk, they are not the only way to assess risk. Imagine the risk of an earthquake in a specific area.
There have been no earthquake losses there in living memory. But that doesn’t mean that there is no risk. There was a devastating earthquake there just 150 years ago, so there is certainly some potential for future events. Risk is not loss, and loss is not risk. Risk is the potential for loss. It only exists in advance of an event. Loss is the negative outcome of an event. Risk appetite sits on another border: the border between regular and extraordinary mitigation. For each of the major risks of a firm, we have a regular process for control, mitigation and treatment of the risk that we have and that we acquire. We also should have some idea of what we might do if the level of risk gets out of hand. For example, a life insurer writing variable annuities might have a hedging program that is used to mitigate unwanted equity market risk. A P&C insurer might have a reinsurance program to lay off excess aggregations of property risk. A bank might have a securitization program to mitigate the portion of mortgage risk that it does not want to keep. In all three cases, an unexpected jump in closing rate or a new, very successful distributor might suddenly cause the level of residual risk after normal mitigation to become excessive. Usually, this is evidenced by a weakening solvency margin. The company must go into extraordinary mitigation mode. That means that for the risk that has become excessive, or for another risk if the firm has a nimble risk steering function, there will need to be some major change in operations to bring the level of risk back into line. The choices for these extraordinary mitigations may be simple adjustments to the normal mitigation processes: a shift in hedging targets, a drop in the reinsurance retention or an increased emphasis on securitizing all tranches.
But most often these extraordinary mitigations involve real changes to plans, such as a change in pricing structure or risk acceptance procedures, a change in product or distribution strategy to discourage the least profitable or highest-risk sales or a change in a share buy-back plan. In the most extreme cases, there might be a need to temporarily shut down the source of the excessive risk. Unexpected losses might also cause a sudden shift downward in risk capacity and therefore in risk appetite. In such cases, extraordinary mitigations will favor options that might speed the rebuilding of capital. In the most extreme cases, the final-stage mitigation would be to sell an entire operation along with its embedded risk exposures. Almost none of those extraordinary mitigation choices are decisions that management would prefer to make. But good managers have some advance idea of the priority order in which they might apply those tactics, as well as the triggers for such actions. Those triggers are the boundary for risk taking. They are reflective of the risk appetite. So if you recognize that risk appetite is this boundary condition, you realize that the talk you hear in some places of “allocating risk appetite” is not the approach that you want to take. What you really need is a risk target that is allocated. The risk target is your plan. It is not totally “efficient,” but there should be a buffer between the risk target and the risk appetite. That buffer allows for the fact that we do not control, and may not even immediately notice, all of the things that might cause our risk level to fluctuate. But we need a risk target, because the risk appetite is really the border that we hope not to cross.
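The target-buffer-appetite relationship described above can be sketched numerically. The figures and trigger messages are illustrative only, not a prescribed ERM standard.

```python
def risk_status(position, target, appetite):
    """Compare the current risk position with the plan and the boundary."""
    assert target < appetite, "the target (plan) should sit inside the appetite"
    if position <= target:
        return "on plan"
    if position <= appetite:
        # Inside the buffer: the appetite is not breached, but the
        # fluctuation should be noticed and managed back toward target.
        return "in buffer: normal mitigation, monitor closely"
    # Beyond the boundary the board approved in advance:
    return "appetite breached: extraordinary mitigation, inform board"

# e.g. risk measured as capital-at-risk in $ millions
print(risk_status(80, 100, 130))
print(risk_status(115, 100, 130))
print(risk_status(140, 100, 130))
```

The gap between 100 and 130 here is the buffer: it absorbs the fluctuations we neither control nor immediately notice, so that ordinary volatility does not itself constitute a breach of appetite.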

Dave Ingram

Profile picture for user DaveIngram

Dave Ingram

Dave Ingram is a member of Willis Re's analytics team based in New York. He assists clients with developing their first ORSA (own risk and solvency assessment), presenting their ERM programs to rating agencies, developing and enhancing ERM programs and developing and using economic capital models.