A Better Way to Diagnose Back Pain

Tools commonly used in workers' comp, including MRIs, can be overly sensitive and lead to overtreatment.

Neck and back disorders account for an estimated one third of all work-related injuries in the private sector. In only about 5% of cases is back pain associated with serious underlying pathology requiring diagnostic confirmation and directed treatment, yet magnetic resonance imaging (MRI) is, controversially, often used for diagnosis. New technology can specifically diagnose muscle-related back pain and produce better outcomes.

According to the Centers for Disease Control and Prevention, back pain is the single most common reason Americans seek medical attention, and a U.S. Department of Health study showed that managing this type of health disorder costs $850 billion annually. About 20% to 40% of the working population is estimated to experience back pain at some point, with a recurrence rate of 85%. The majority of back pain stems from musculoskeletal disorders (MSD), which are treatable with medication and physical therapy.

MRI is frequently used to diagnose back pain, yet it is overly sensitive in identifying the cause unless its findings correlate with an objective clinical exam. A February 2012 article in the European Spine Journal found that a considerable number of cases of lumbar disc herniation (HNP) and spinal stenosis diagnosed through MRI may have been classified incorrectly. MRI is highly sensitive in exposing structural abnormalities of the spine but not specific enough to accurately identify the cause of back pain. Although commonly used to diagnose back pain, MRI is costly, often ineffective and contributes to overuse. In fact, lumbar spine scans have risen dramatically in recent years and account for about a third of all MRIs done in some regions, despite the poor correlation between their findings and clinical signs and symptoms. In addition, at least two studies assessing MRI findings in patients without back pain have raised concerns.
In 2001, Spine published a study of 148 patients, all asymptomatic; yet MRI scans showed that 83% had moderate desiccation of one or more discs, 64% had one or more bulging discs and 32% had at least one disc protrusion. The second study, published in the New England Journal of Medicine in 1994, found that only 36% of 98 asymptomatic subjects had normal MRI results. The evidence indicates that abnormal MRI scans are common in patients regardless of whether they experience back pain.

Spine surgeons, knowing that MRI can be overly sensitive and non-specific in diagnosing back pain, also use discography, a provocative and invasive test, to attempt to pinpoint the cause of pain accurately. A review of many studies of this tool makes clear that even discography can be overly sensitive and often inaccurate, both in identifying the cause of back pain and in predicting the outcome of surgery. In addition, because it is invasive, discography can actually contribute to further injury in certain patients. Imaging-based diagnosis of acute back pain often leads to surgery, and complications from unnecessary surgery can prolong back pain or lead to permanent disability.

Because costly imaging studies often fail to produce positive health outcomes for patients with back pain, X-ray, MRI and CT scans should be reserved primarily for patients with neurogenic disorders or other serious underlying conditions. And because the majority of back pain is musculoskeletal in nature, the primary tools used to diagnose it are ineffective; what is needed is a tool that effectively diagnoses a musculoskeletal disorder. Electrodiagnostic Function Assessment (EFA) is an emerging technology: a non-invasive and safe diagnostic device registered with the FDA. It can distinguish among spinal, neurogenic and MSD conditions, which can greatly help physicians reach a specific diagnosis.
This is especially true for workplace injuries, where MSD conditions are prevalent and difficult to diagnose and treat, given that the complaints are often subjective. The following are two cases in which EFA technology, in combination with a neurosurgeon's evaluation, was used to reach accurate diagnoses and appropriate treatments:

In the first case, a 34-year-old patient sustained a work-related injury from repetitively using an air-powered grinder. As a result of a court-ordered independent medical exam (IME), the patient went to a neurosurgeon with complaints of bilateral, radiating neck pain and numbness in his right hand. An EFA examination found that his resting readings were within normal limits for all muscle groups evaluated; the EFA did indicate non-significant spine and muscular irritation, with chronic muscular weakness. The patient then underwent an MRI, which was abnormal, showing diffuse stenosis but no herniated discs or neural impingement. The IME doctor deemed he was not a surgical candidate and recommended conservative, site-specific physical therapy and muscle relaxants. The EFA and the neurosurgeon prevented unnecessary surgery and helped direct appropriate care that closed the case satisfactorily.

The second case involved a 30-year-old mechanic who sustained a work-related injury, straining his neck while opening the hood on a semi. The EFA revealed no muscular irritation but did reveal spinal pathology in the neck that could be clinically significant. In addition, the EFA findings indicated acute neck pain, increased curving of the spine and loss of range of motion. In this case, the IME neurosurgeon requested an MRI, which confirmed the EFA findings and further showed a herniated disc consistent with the patient's symptoms and exam. The patient failed physical therapy, and appropriate surgery was recommended.
The patient underwent surgery and had an excellent outcome.

In both of these cases, the administering physicians were able to make exceedingly accurate diagnoses because they had the correct tools available to them; this would not have been possible without the assistance of the EFA. By using the appropriate diagnostic tool, each physician was able to render a more accurate diagnosis and appropriate treatment, which not only helped the patient but also lowered healthcare and workers' compensation costs. The use of MRI or other imaging technologies alone to diagnose the causes of back pain can be misleading and inaccurate in localizing pain generators. A more accurate diagnosis can be made when imaging is used in conjunction with EFA findings, so that appropriate site-specific treatments can be provided, leading to better patient outcomes and improved healthcare.

The authors invite you to join them at the NexGen Workers' Compensation Summit 2015, to be held Jan. 13 in Carlsbad, CA. The conference, hosted by Emerge Diagnostics, is dedicated to past lessons from, the current status of and the future of workers' compensation. It is an opportunity for companies to network and learn, as well as to contribute personal experience to the general knowledge base for workers' compensation. Six CEU credits are offered.

Comment from Brent Nelson, Area Medical Director/Medical Director Occupational Medicine AZ at NextCare Urgent Care:
Very interesting article. As a physician treating and managing providers who treat work-related injuries, I am often surprised at the number of referrals I see for advanced imaging for back/neck pain. I was trained in an industrial-athlete model for treating musculoskeletal injuries, and one of the key points in the model is that an MRI or other advanced imaging should only be ordered to confirm a diagnosis, not to find one. When this method is employed, imaging is used less, and the findings are usually accurate and directly related to the complaint. When an MRI is ordered simply for back pain that is not responding to treatment as well as expected, and the provider does not have a clear idea of what the problem may be, ambiguous findings may serve only to muddy the waters, increase the cost of treatment and possibly even result in unnecessary procedures. A bulging or ruptured disk without nerve impingement, an annular tear, facet arthropathy, etc. are findings that may exist in asymptomatic populations and may not be the cause of the pain. A very detailed and thorough examination should always be performed at each visit, and this, coupled with a detailed history, should lead to an accurate diagnosis. The quality of physical therapy must also be assessed when patients do not return to baseline as quickly as expected. Is the patient being treated by a physical therapist with experience in sports medicine? These PTs tend to have better outcomes for back and neck pain. Is there an indication for kinesio taping? Would an IFC/stim unit help breach a plateau? These are all considerations in treatment that may help with resolution prior to an MRI. And again, an MRI should be ordered to confirm a diagnosis and is most often indicated for a persisting radiculopathy or for an injury that may have resulted in an acute facet injury (not the same as degenerative changes in the facet joint).
Simple X-rays when conservative treatment begins to fail can give hints as to underlying degenerative issues, which mean the patient will take a little longer to return to baseline, and can help prevent advanced imaging from being ordered prematurely. In short, the physical exam should give a good physician an idea of the problem, and advanced imaging should be ordered only when one wants to confirm a suspected diagnosis. The importance of knowledgeable physicians and therapists working in collaboration, and involving the carrier during the process, is often overlooked (and oftentimes hard to find). The majority of the time, the patient's answers to questions and an appropriate physical exam will give one the answers to the questions about the origin of pain and the indicated treatment.

Frank Tomecek

Frank J. Tomecek, MD, is a clinical associate professor of the Department of Neurosurgery for the University of Oklahoma College of Medicine-Tulsa. Dr. Tomecek is a graduate of DePauw University in chemistry and received his medical degree from Indiana University. His surgical internship and neurological spine residency were completed at Henry Ford Hospital.

Are Obamacare Wellness Programs Soon to Be Outlawed?

The EEOC filed a suit that could end workplace programs and expose brokers and vendors to similar lawsuits.

The Equal Employment Opportunity Commission (EEOC) filed suit Aug. 20 against a Wisconsin company, Orion Energy Systems, which severely penalized and then fired an employee who refused to participate in the type of wellness program now encouraged by the Affordable Care Act. The EEOC is arguing that there was “no business necessity” for this program and that the exam and other intrusive screening were “not job-related.” If the EEOC proves that the standards it cites (part of the Americans with Disabilities Act) apply to wellness programs, there will be strong implications for the Insurance Thought Leadership community. This could spell the end of workplace wellness generally and, specifically, could expose your clients with penalty-based or mandatory medical wellness programs to similar lawsuits.

Although the White House is probably hoping the EEOC loses, winning this suit should be a layup. It can easily be shown that medical wellness programs are not job-related and have no business necessity. Quite the contrary: The three most basic provisions of “medicalizing” the workplace with wellness -- health risk assessments (HRAs), biometric screens and enforced doctor visits -- are more likely to harm employees than benefit them.

HRAs make employees disclose things like whether they routinely examine their testicles (which men are not supposed to do) or whether women intend to become pregnant. These HRAs then also give feedback with no basis in medicine, such as recommending a prostate cancer test that the federal government strongly advises against and perpetuating the myth that all women under 50 should get regular mammograms.

Biometric screens pose an even greater risk to health than HRAs. Although medical societies are urging fewer screenings to avoid overdiagnosis and overtreatment, employer human resource departments can’t get enough of them, thanks to incentives created by Obamacare.
The inevitable consequence: More people identified for “early intervention” to treat clinically insignificant conditions. For instance, an overzealous Nebraska colonoscopy screen caused the state’s vendor to trumpet that it had saved the lives of 514 state employees who never had cancer in the first place. Yet instead of calling for an investigation, the state promoted this and other equally fallacious results successfully enough to win the C. Everett Koop wellness award for its program.

Biometric screens usually include weigh-ins and penalties for refusing to participate (or, sometimes, for not losing weight). Shaming people into losing weight is unhealthy and unproductive, and body image issues reinforced by workplace “biggest loser” contests affect 20 million women and can be fatal. Meanwhile, weight has only a slight effect on health spending during the working years, and, if economic incentives could generate sustained weight loss, Oprah Winfrey would have kept her weight off instead of giving up her lucrative Optifast endorsement contract. Medical science has no clue what causes obesity. Some novel theories are being proposed, but, whatever the cause, Obamacare-inspired fines are not the cure.

Forcing employees to go to the doctor when they aren’t sick is perhaps the most curious and expensive wellness requirement. The clinical literature is quite clear about the futility of this custom, which may do more harm than good. Obviously, checkups can’t save money if all they do is increase diagnoses and treatments with no offsetting benefit to actual health. Perhaps employee checkups are job-related in a few fields – public safety, airlines, sports, adventure travel – but otherwise it’s hard to see how worthless checkups improve an employee’s ability to answer the phone or do most other typical job-related tasks.
The ADA standard is “business necessity,” meaning these hazards and punishments might be acceptable if money were being saved or morale were being improved, but – as the book Surviving Workplace Wellness shows – quite the opposite is true. No wellness vendor has ever shown savings that weren’t obviously made up, and most won’t defend their own claims. Even Nebraska somehow “found” huge savings despite all those unnecessary cancer treatments and no meaningful change in employee health – savings claims that its vendor now refuses to defend. Further, the wellness industry’s own recently published analysis shows no savings. Likewise, morale impacts are so negative that CVS and Penn State employees rose up in revolt against their programs. Increasing employee resistance also explains why employers have needed to almost triple fines since 2009 (now averaging $594) against employees who refuse to allow their companies to pry, poke and prod them.

Perhaps Orion Energy’s defense could be that trying to control employee health behaviors and fining employees who eat too many Twinkies is a “business necessity” because it shows employees who’s the boss. There is, after all, no provision in employment law that requires employers to be nice. That defense might win the suit but would also generate some headlines worthy of late-night talk shows. Still, it’s hard to imagine any other defense succeeding.

Insurance brokers and consultants need to follow developments closely. If the suit succeeds, you’ll need to caution your clients to scale back on “playing doctor” with employees, and certainly on penalties for non-compliance. Orion’s penalty was draconian – a few hundred dollars in fines is probably still OK. Focusing wellness efforts on less sexy issues like serving healthy food and getting employees to exercise more should also keep your clients out of trouble. The worst development would be a flood of these lawsuits, but we at ITL will follow up with what you can do to avoid being one of the targets.

The 4 Major Sources of Change for Insurance

To plan, it is helpful to examine how other industries have either successfully navigated large changes and prospered or have failed to do so and disappeared.

The insurance industry faces disruption from a host of new technological and social phenomena. To plan for these, it is helpful to first examine how other industries have either successfully navigated large-scale changes and prospered, or have failed to do so and disappeared. This article examines past and future market disruptors, providing case studies of businesses that have failed or succeeded in navigating large-scale changes. By reviewing these cases, business leaders in the insurance industry will get a sense of how to prepare for inevitable disruptions.

Four Major Sources of Change and How to Deal with Them

Many of the recent and impending market disruptions fall into the following categories:

Source #1: Disruptive Innovations

Some innovations completely displace old markets or create new ones. This can be devastating for businesses if they fail to adapt quickly. The best example of this is the Kodak/Fujifilm rivalry that played out during the advent of digital media. Both companies were in the same dire straits: Inexpensive digital photos would soon replace lucrative camera film, decreasing profits by more than a quarter. However, when digital media eventually took over, Fujifilm was able to thrive, while Kodak nearly faded out.

As Fujifilm clearly demonstrated, the best way to handle disruptive innovation is through radical flexibility. In a 2012 article, The Economist summarized the rivalry’s outcome as follows: “Kodak acted like a stereotypical change-resistant Japanese firm, while Fujifilm acted like a flexible American one.” While Kodak was complacent, Fujifilm developed new products, sold intellectual property such as chemical compounds and sought new markets for film. By the time Kodak had gone into bankruptcy proceedings, Fujifilm had diversified enough to remain competitive, at one point growing to some $12.6 billion in market value while Kodak’s shrank to less than $220 million.
Source #2: Technological Upheaval

Some new technologies change the way businesses operate from within. The best example of this is analytics software. Analytics refers to the use of sophisticated mathematical techniques to produce new value from data. The adoption of analytics will become virtually universal. According to technology-research firm IDC, big-data technology and services will grow at a 27% compound annual rate through 2017, to more than $32 billion worldwide. A study conducted by MIT Sloan Management Review and IBM found that organizations that excel in analytics usually outperform new adopters of analytics by three to one.

To manage technological upheaval, businesses are thinking creatively about the possibilities presented by new technology. For example, Sky Italia, a satellite TV provider, uses analytics to predict what kind of content its customers want to see, based not only on their viewing habits but on their social media activity. Casinos use analytics to gauge customer behavior based on such fine points as when patrons order drinks, where they play the most and even when they smile.

Source #3: Consumer-Culture Shifts

Digital technology has a widespread impact on culture that affects customer/vendor relationships. One prominent outcome is that buyers are moving much further down the sales funnel before interacting with salespeople. For example, in 2012, Ernst & Young completed a global survey of 30,000 banking customers and found that those who were unhappy with their banks were twice as likely to switch to a competitor as they were in 2011. Because accounts can be transferred with just a few clicks of the mouse, banks now have to work harder to keep their customers from leaving. Further, banking clients are increasingly performing their own research without input from bankers.
The same is true for B2B customers, as the CEO of one B2B company described in an interview with Forbes: “My sales team has called on every possible client, and they don’t know where to go next.” According to member-based business advisory company CEB, buyers now go through about 57% of the purchasing process before ever talking to sales. To react to changes in consumer culture, marketers must replace the old sales models with “facilitated buying” strategies. Vendors are increasingly interacting with prospects right where they are and providing more value on the front end. By acting as buyers’ guides rather than salespeople, sales teams grow relationships through trust. This is why content marketing strategies are displacing traditional advertising in many marketing budgets.

Source #4: Price-Determination Fluctuations

In the present consumer culture, price determination has become more elastic and complex. As such, many businesses are reinventing their pricing structures. The health insurance and health benefits industries are examples where large-scale pricing shifts are taking place. Because of the Affordable Care Act, health benefits brokers will now have to disclose their commissions, which will give clients more negotiating leverage. Those brokers who have the most technical skill and who can flexibly price products and services are having the most success.

Another factor contributing to pricing shifts is the spending habits of “millennials.” These people, ages 13 to 30, are increasing in purchasing power by about 3% per year. Their spending is unpredictable, is mostly digital and will account for nearly one-third of total spending by 2020.

To meet consumer demands for pricing options, businesses are becoming more inventive. For example, the Silicon Valley start-up Uber offers a crowd-sourced, taxi-like service that employs “surge pricing.” Under this model, Uber services cost more when demand is high and the supply of cars is low.
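The surge model described above can be sketched as a simple demand/supply multiplier. This is a hypothetical illustration only, not Uber's actual algorithm: the function name, the formula and the 3x cap are all assumptions made for the example.

```python
# Hypothetical sketch of surge pricing: the fare multiplier rises as
# demand (ride requests) outstrips supply (available cars).
# The formula and cap are illustrative assumptions, not Uber's method.

def surge_multiplier(requests: int, available_cars: int,
                     cap: float = 3.0) -> float:
    """Return a fare multiplier >= 1.0 based on the demand/supply ratio."""
    if available_cars <= 0:
        return cap  # no supply at all: charge the maximum allowed surge
    ratio = requests / available_cars
    # No surge while supply meets demand; otherwise scale with the ratio,
    # capped so fares stay within a policy limit.
    return min(max(1.0, ratio), cap)

base_fare = 10.0
print(base_fare * surge_multiplier(requests=50, available_cars=100))   # 10.0 (no surge)
print(base_fare * surge_multiplier(requests=200, available_cars=80))   # 25.0 (2.5x surge)
print(base_fare * surge_multiplier(requests=500, available_cars=50))   # 30.0 (capped at 3x)
```

The cap reflects the policy constraint a real operator would face: however far demand outruns supply, the multiplier cannot exceed a set limit.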
“Sympathetic” pricing is another new pricing trend, with humanistic intentions. According to business trend firm Trendwatching.com, waning consumer loyalty brought on by digital empowerment has made businesses eager to show consumers that they care. This has led to a series of warmer and fuzzier relationship-building strategies. For instance, “painkiller” pricing is an emerging strategy meant to provide relief; an example is bars giving discounts to patrons who have been served a ticket that day. Another example is “compassionate” pricing, which typically involves sliding scales for lower-income customers. Finally, “purposeful” pricing is meant to effect social change – such as offering free public transport to alleviate inner-city traffic.

Conclusion

For most industries, disruption is inevitable. Oftentimes, the businesses most accustomed to success will have the most trouble adapting. The first essential step in planning for disruptions is to gain a basic understanding of what the incoming challenges will look like. Once this is accomplished, insurance businesses can begin applying lateral and creative planning strategies to successfully navigate the change.

John Paul Nettles

John Paul Nettles is a consultant at GeauxPoint, a Baton Rouge, La.-based management consulting firm. John helps clients improve business performance. Typically, John’s clients experience 20% increases in efficiency, as well as other benefits. John’s professional background also includes journalism, technical writing and content marketing.

CHS Data Breach Underscores 3 'Don't's

For one, DON’T use the terms “identity theft” and “data breach” interchangeably. Why? Because they aren’t interchangeable.

In the wake of the data breach at Community Health Systems (CHS) that affected 4.5 million patients, many organizations feel that their customers are suffering from “breach fatigue” – that they see the CHS data breach as just one of many. While it’s true that the CHS breach adds to an already long list of breaches in the health/medical sector, it is not one to ignore.

If it feels almost commonplace to hear about a data breach involving a medical or health entity, there’s a reason. And it’s important. Medical/health entities are in first place this year in number of breaches -- 43% of all the breaches reported by the ITRC are in this category. When the 4.5 million records from CHS were added to the list, health/medical also moved into first place in total records breached. (For more information on breaches and how the ITRC categorizes them, visit the ITRC Breach Report.)

Why are there so many breaches in the health/medical sector? This is a complex question, and there is no single answer. One reason may be the value of the type of data available in our medical records. It’s not necessarily payment card details or medical history that make the hackers salivate. It’s your Social Security number, or SSN. Having your SSN exposed through a breach by your medical or healthcare provider does not just leave you vulnerable to medical identity theft; it can leave you vulnerable to all types of identity theft. The SSN remains the holy grail for identity thieves, and, in the CHS case, it appears that they got away with 4.5 million of them.

It is of critical importance that we all react appropriately to this news. While we certainly don’t want to see panic ensue, we don’t want apathy to take hold, either. Inaccurate reporting, headlines and storytelling could cause an unnecessary frenzy that would be wholly counterproductive.
So, here are three important “don’ts” when it comes to breaches in general and this breach in particular:

DON’T use the terms “identity theft” and “data breach” interchangeably. Why? Because they aren’t interchangeable. To state that all of the victims of this data breach are victims of identity theft (or even that they will be) is inaccurate, yet we frequently see this stated. Victims of a breach have had their personal identifying information (PII) compromised -- meaning that it has been exposed outside the sphere in which they granted access to it. Victims of identity theft have had their PII used to obtain money, goods or services without their authorization or knowledge.

DON’T offer tips, or resources, that are inappropriate to a particular breach. All breaches are not the same, as the exposure of different types of PII carries different types of risk. Offering blanket tips may seem like the right move, especially when there is little concrete information, but it can cause even more confusion. In the case of the CHS breach, checking credit reports is wholly appropriate, because breach victims have had their SSNs compromised. But in the case of a breach where payment card information (not SSNs) was compromised, offering credit monitoring services only further confuses the public.

DON’T minimize the value of the notification process. Issuing notification letters to affected individuals remains important. The topic of breach fatigue has been broached as more and more breaches hit the news, and lately there have been suggestions that notification is no longer the answer. This stance doesn’t take into account that, for some consumers, the letter they receive will be their first exposure to the issue. Notification letters serve as an opportunity to educate customers about the immediate issue as well as the broader ones. Letters can often be the impetus for better identity management, password hygiene, etc.
Whenever a large breach hits the airwaves, the ITRC phone lines light up with consumers seeking information about data breaches and how they can protect themselves. Without notification, there would be a huge missed opportunity.

Eva Velasquez

Eva Velasquez is the President and CEO of The Identity Theft Resource Center. Velasquez has more than 500 hours of specialized training in the investigation of economic crimes and has been a presenter at numerous conferences across the state, including the PACT (Professionals Achieving Consumer Trust) Summit, the California District Attorney’s Association Consumer Protection Conference and the California Consumer Affairs Association annual conference.

Florida Work Comp Comes Full Circle

Still, despite the recent ruling, the challenge to courts is to not sit in judgment over what is fundamentally a legislative decision.

The recent Florida 11th Judicial Circuit Court decision in Florida Workers Advocates v. State of Florida, No. 11-13661-CA25 (2014), written by Circuit Court Judge Jorge Cueto, represents, in essence, a constitutional challenge to workers’ compensation that has come full circle. While during the early part of the 20th century there were a host of challenges to state workers’ compensation systems by employers, it has taken almost a century for workers to raise their own constitutional claims.

The interest this case has triggered across the country should be tempered by the fact that this is a trial-court-level opinion and that the Florida Supreme Court already has a constitutional challenge to the workers’ compensation system on its docket. This latest case, undoubtedly, will be added to the appellate mix. (See: Westphal v. City of St. Petersburg, Case No. 1D12-3563.)

As part of the reform process, stakeholders in every state workers’ compensation system have to come to grips with issues that require revisiting the original bargain. The inciting incident is inevitably the high cost to employers and the perceived abuses of the system by lawyers, medical providers and others. Seldom is the issue whether injured workers are being paid too much per se in terms of impairment or temporary or permanent indemnity benefits.

The challenge to the courts, whether in Florida or anywhere else, is to not sit in judgment over what is fundamentally a legislative decision. As the California Court of Appeal stated more than a decade ago, “…policy concerns, expressed in a parade of horribles—delay or denial of benefits, delay in employees' return to work, litigation explosion, increased claims costs, increased strain on government benefit programs, defense solicitation of ‘bought’ medical opinions—are better addressed to the legislature.” Lockheed Martin Corp. v. Workers' Comp. Appeals Bd. (2002) 96 Cal.App.4th 1237, 1249, 117 Cal.Rptr.2d 865.
When the legislature enacts changes to the workers’ compensation system, it is not up to the courts to overturn such actions based on whether they comport with the courts’ version of what a good workers’ compensation system ought to be. As the California Court of Appeals also stated: “The judiciary, in reviewing statutes enacted by the legislature, may not undertake to evaluate the wisdom of the policies embodied in such legislation; absent a constitutional prohibition, the choice among competing policy considerations in enacting laws is a legislative function.” Bautista v. State of California (2011)201 Cal.App.4th 716, 733. Even though Judge Cueto cited New York Central R. Co. v. White 243 U.S. 188 (1917), a decision arising from when New York’s system came under immediate scrutiny almost a century ago, to support his finding that exclusive remedy was now unconstitutional, the U.S. Supreme Court in that case also found: “If the employee is no longer able to recover as much as before in case of being injured through the employer's negligence, he is entitled to moderate compensation in all cases of injury, and has a certain and speedy remedy without the difficulty and expense of establishing negligence or proving the amount of the damages. Instead of assuming the entire consequences of all ordinary risks of the occupation, he assumes the consequences, in excess of the scheduled compensation, of risks ordinary and extraordinary. On the other hand, if the employer is left without defense respecting the question of fault, he at the same time is assured that the recovery is limited, and that it goes directly to the relief of the designated beneficiary.”  White 243 U.S. 
at 201 (1917). The Court in White set out the boundaries for any constitutional analysis of a state workers’ compensation system when it said, in dicta, “This, of course, is not to say that any scale of compensation, however insignificant on the one hand or onerous on the other, would be supportable.” That language underscores the wide range of actions a state legislature may take when creating and changing benefits in a workers’ compensation system and how best they are to be delivered. Such discretion – and deference – is at the heart of the concept of separation of powers. Judge Cueto held that the Florida legislature has crossed this constitutional Rubicon. It will be up to the Florida Supreme Court, ultimately, to decide on which side of the bank its workers’ compensation system is now docked.

Mark Webb


Mark Webb is owner of Proposition 23 Advisors, a consulting firm specializing in workers’ compensation best practices and governance, risk and compliance (GRC) programs for businesses.

M&A Surges in P&C Claims, Technology

The surge will continue, creating advantages for well-capitalized players and raising challenges for smaller competitors.

One of the most active sectors for mergers and acquisitions (M&A) today is the insurance claims and technology sector. An unprecedented number of powerful forces are converging to drive M&A activity in the North American Property & Casualty (P&C) insurance claims and technology “ecosystem” to historically high levels, including:

  • Claims supply chain rationalization and consolidation
  • Rising adoption and deployment of big data and analytics solutions
  • Insurance product commoditization and the resulting business transformation
  • An influx of private equity capital (already raised and seeking to be deployed in the sector)
  • Expectations of a continuation of a steadily improving economy, with the prospect of lingering low interest rates

We expect these forces to amplify and further increase competition among well-capitalized strategic players and private equity participants that seek to create scalable and defensible positions in the industry. The implications for smaller, less-capitalized, regional or technology-challenged competitors are meaningful.

Claims Supply Chain Consolidation

The area in which we expect the greatest potential for increased activity in the second half of 2014 and in 2015 is within the claims supply chain. The P&C claims ecosystem is composed of thousands of small, local, independent firms, as well as larger regional, national and global vendors and business partners that provide mission-critical products and services to the claims operations of the P&C insurance industry, including:

  • Insurance technology and IT services, system integrators, core system and claims management software solutions and database and information providers, including communication, repair estimating and body shop management systems
  • Claims technology vendors (document management, compliance, data quality, payment systems, etc.)
  • Collision and auto glass repairers
  • Collision repair parts suppliers
  • Insurance replacement rental car providers
  • Third-party administrators and claims business process outsourcing firms
  • Claim services, including independent auto and property adjusters and appraisers and catastrophe services
  • Insurance defense attorneys
  • Auto and casualty claims management solution providers
  • Salvage vehicle auctioneers and towing services
  • Insurance staffing firms
  • Insurance claims investigation firms

One of the subsectors most affected by these factors is the highly fragmented and inefficient collision repair and parts business. Many of these businesses are local, privately owned shops with limited technology capabilities and management talent. National consolidation, often driven by private equity, can lead to expense rationalization, upgraded information technology systems, improved management, the ability to respond to upstream customer pressure and improved pricing.

  • In the collision repair consolidator space, the largest participants include Boyd/Gerber Collision and Glass (more than 300 U.S. and Canadian locations, owned by Canadian publicly traded Boyd Group Income Trust); ABRA Auto Body and Glass (234 locations, acquired in August by Hellman & Friedman); Service King (177 locations, including the 62-store Sterling Collision Repair Centers acquired from Allstate Insurance, and which recently changed hands from The Carlyle Group to Blackstone Group); and Caliber Collision (168 U.S. locations, recently acquired by OMERS Private Equity from ONCAP, both Canadian private equity funds).
  • Franchisor repair consolidation is being led by CARSTAR (410 U.S. and Canadian locations, funded by Champlain Capital), Driven Brands/Maaco (500 U.S. and Canadian locations, owned by Harvest Partners) and Fix Auto (345 locations). These companies continue to implement their growth strategies through a combination of franchising and building banner networks.
  • Since its founding in 1998, LKQ has consolidated the automotive repair alternative parts market in North America and elsewhere to become the largest provider of alternative collision replacement parts and a leading provider of recycled engines and transmissions, with annual revenue exceeding $6 billion. In 2014, LKQ acquired Keystone Automotive, a leading distributor of aftermarket parts and equipment.

Over the next twelve months, we expect to see further consolidation within the collision repair industry. This could include the combination of two or more of the largest consolidators or the simultaneous aggregation of multiple regional operators, resulting in the industry’s first truly national collision repair network. Another important trend is the development of an electronic parts procurement and e-commerce solution for the large, and still highly fragmented and inefficient, North American auto repair parts supply chain. Within the next year, we expect that a significant open-market solution will emerge from among the many existing electronic parts procurement providers.

Claims Information Provider Expansion and Consolidation

North American insurance industry auto and property claims operations, including their auto, collision repair and property partners, primarily utilize the products and services of three claims information providers, each of which has expanded its offerings into automotive claims-related markets:

  • Private equity-backed CCC Information Services (Leonard Green & Partners plus TPG Capital), a database, software, analytics and solutions provider to the auto insurance claims and collision repair markets, recently acquired Auto Injury Solutions, a provider of auto injury medical review solutions.
This follows CCC's earlier acquisition of Injury Sciences, which provides insurance carriers with scientifically based analytic tools to help identify fraudulent and exaggerated injury claims associated with automobile accidents.

  • The continuing series of U.S. and foreign automotive service industry and data acquisitions by Solera includes the insurance and services division of PPG (including Lynx, GTS & Glaxis), AutoPoint (U.S.) and AutoSoft (Italy) in 2014, HyperQuest (U.S.) in 2013 and Distribution Services Technologies, Services Repair Solutions, Serinfo (Chile), Pusula Otomotiv (Turkey), EziWorks/CarQuote (Australia) and APU Solutions in 2012. Since its initial public offering in 2007 (originally backed by private equity firm GTCR), Solera has completed more than 25 acquisitions globally and grown its annual revenue to more than $1 billion.
  • Mitchell International, a provider of technology, connectivity and information solutions to the P&C claims and collision repair industries, recently acquired Fairpay Solutions. Fairpay’s service offering includes workers’ compensation, liability and auto cost containment and payment integrity services, which will expand Mitchell’s solution suite of bill review and out-of-network negotiation services and complements its acquisition of National Health Quest in 2012. Mitchell was acquired in 2013 by KKR.

Over the next twelve months, we expect these information providers to expand in several directions through internal product development supplemented by strategic acquisitions. 
This expansion will likely include: (i) deeper integration with claims management core systems, (ii) introduction of new tools and services utilizing advanced analytics for use cases across the entire auto and property claims process, (iii) deeper and wider integration with third-party companies in the auto and property claims supply chain, specifically including collision repair parts procurement, and (iv) further development of auto casualty and workers’ compensation medical management networks and services and cost-containment solutions. For smaller providers in the claims supply chain, now may be the time to consider combining with a larger, better-capitalized player, especially given the trend toward vendor management by insurance companies. A “going it alone” strategy will be increasingly risky as larger, national players garner more market share by offering better pricing, superior technology solutions and greater geographic coverage than “mom and pop” operations. If you are interested in discussing your strategic alternatives, please feel free to contact StoneRidge Advisors. Our transaction experience and in-depth knowledge of the claims sector make us the ideal financial advisor for the owners/entrepreneurs operating in this sector. This article was previously published by StoneRidge Advisors, which holds the copyright.

Stephen Applebaum


Stephen Applebaum, managing partner, Insurance Solutions Group, is a subject matter expert and thought leader providing consulting, advisory, research and strategic M&A services to participants across the entire North American property/casualty insurance ecosystem.

What Happens When Home Prices Plunge?

Families insure homes against fire, cars against collision, personal property against theft. Why not insure the equity in their homes?

In 2008, many homeowners were stunned by the sharp drop in housing prices. Built-up home equity vanished. In a rising housing market, protecting against a loss in home equity seemed unnecessary. And while homeowners can’t turn back the hands of time, they could mount a defense against a drop in home equity with insurance that makes it possible for individuals and families to more comfortably take on the risk of buying or owning a home. Home ownership has largely defined the American Dream. For more and more individuals and families, the purchase of a home was becoming a reality during the early part of the past decade. By 2004, when home ownership peaked, slightly more than 69% of families had homes. And as home ownership rose, so did the wealth of families. The dream began to dim, however, in the middle of the decade when home prices slipped. The hint of trouble materialized in 2007 when the median price fell more than 8%, according to the S&P/Case-Shiller U.S. National Home Price Index (HPI). The following year, median prices tumbled more than 18%, a rate that accelerated during the first few months of 2009 before easing. Over the past few years, home prices have started to rebound. But if a family bought a home in the summer of 2006 and sold it at the end of 2011, it would have been deeply out-of-the-money if its price followed the median price of a home, which sank in value by more than 34% (or $83,600) over that time. Unlike the supply-and-demand principle that governs buying behavior of many other assets, falling home prices tend to scare off home buyers, which in turn sets off a downward spiral of events. The initial drop in prices shocks the market, and risk-averse home buyers shy away because of the volatility. This leads to even deeper declines in prices, which only exacerbates buyers’ hesitations to purchase a home. Falling home values are also significant because most families rely on a mortgage to purchase their homes. 
Typically accounting for approximately 75% of the family’s total debt, a mortgage makes the purchase of a home a highly leveraged proposition that is extremely sensitive to changes in the value of the housing market. A small drop in home prices can translate into a large loss in the family’s net worth years after the purchase. This is precisely what has happened to many families. From 2007 through 2010, median family net worth plunged nearly 39%. While declines in financial or business assets contributed to the drop, the collapse of home prices had a far more devastating impact on family net worth. This situation was especially true for families living in the West, whose median net worth took a nosedive of more than 55%, largely because of the crash in home prices. Also hard-hit were families whose homes accounted for a greater share of their assets. For example, the median net worth for a family headed by an individual between the ages of 35 and 44 fell more than 54% from 2007 through 2010, according to the Federal Reserve. If the recent drop in home prices had been an anomaly, the risk in home equity could perhaps be managed as other rare misfortunes are. Families could pick up the pieces and move on. But volatility in home prices seems to be part and parcel of home ownership. During the mid-1980s, the oil-patch crisis in Texas drove down home prices. Decreases in defense spending in the late 1980s and early 1990s led to a downturn in the economy of Southern California, which eventually rippled into the housing market. The early and mid-1990s also saw drops in home prices in New England. Across the U.S., more than 50% of home buyers in the early 1990s found themselves in markets that experienced a decline in home prices over the five years following purchase, according to Ian Ayres, Townsend professor at Yale Law School, and Barry Nalebuff, Steinbach professor at Yale School of Management. 
Deteriorating housing prices can transform what seems like a rosy picture of financial health into a torn remnant of security, as unrealized wealth a family thinks it possesses vanishes into thin air. Retirement, a child’s education or other long-planned activities may need to be postponed, because there is often little or no time to readjust savings and spending to compensate for the loss in wealth. But what if a homeowner could be protected from a drop in a home’s equity? Families insure their homes against fire, their cars against collision, their personal property against theft. In a day and age when forecasting home prices has become as fickle as predicting the weather, home equity insurance would be a way to reduce some of the risk in a loss of value for a family’s largest investment. A home equity insurance product could pay a homeowner if a drop in the housing market were to occur when a house is sold. The payout might be based on a housing price index that is tied to local home pricing trends. For example, an individual who buys a home for $250,000 and decides to sell it five years later when market prices have declined 4% would receive $10,000 based on the change in the index at the time of sale. For a nominal monthly premium, homeowners could insure against a possible decline in home equity for years after purchasing a house. While an index-based approach does not track the changes in the price of an individual home, it is granular enough to compensate homeowners for price changes within a reasonable local area. Index-based payments also would discourage homeowners from slacking off on the maintenance of their properties because payment of a claim would be limited to the decline in the housing index for the areas of their residence. If a home is allowed to deteriorate and then sold in a rising market or even a mild slump, the homeowner would probably receive little or no compensation. 
Nor could the homeowner simply take a low-ball offer just to move the sale, then try to collect the difference from the insurer. But for homeowners who look at the purchase of a home as a path to economic security, home equity insurance provides a means to that end. Homeowners can’t reverse the losses they sustained in the housing market during the crisis, but with home equity insurance they would have a viable way of protecting themselves against a future decline in the value of what is likely their largest investment: their homes.

Behind the scenes of home equity insurance

In purchasing the coverage, a consumer would be, in fact, buying a put on home prices. In this sense, home equity products would straddle the lines between insurance and security markets, but the techniques used to manage home equity risk are effectively the same as for many other insurance products. Making home equity insurance affordable for consumers involves understanding the real estate cycle, which is characterized by prolonged periods of stable or rising prices that are often followed by severe downturns, when home equity claims would spike. The challenge for insurers is to funnel the experience of this wide spectrum of events into a premium that consumers will find reasonable—a process that is somewhat similar to rate-making for other lines. But there are some important differences. Because of the line’s volatility, insurers would need to consider the full range of losses over the product’s entire cycle rather than a typical year’s losses in developing a rate. This is because a normal loss year does not fully reflect the risk inherent in a deep downturn in the real estate market or, for that matter, the potentially high profitability during a booming market. For this reason, as with similarly volatile catastrophic coverage, losses need to be more carefully incorporated in rate-making for home equity insurance than they are for other lines with more stable risk profiles. 
Balancing the extremes to arrive at a reasonable rate can be a tricky process without a considerable amount of experience in estimating volatility. Because of the small sample size of historical losses in home equity, as well as the immense uncertainty underlying house prices (which would be the primary driver of claims frequency for this coverage), pricing this type of coverage would need to rely on predictive forecasts of future home prices, rather than allowing past experience alone to dictate prices. This is true of most catastrophic coverage, and home equity insurance would be no exception. This potentially introduces even more uncertainty, in the form of model risk, which would need to be managed and incorporated into the price. And as with other insurance coverages, product features can be designed to control for moral hazard. An index-based approach that uses trends in metropolitan statistical areas (MSAs) to determine claims payments can control for the variability of individual home prices that might occur because of poor maintenance or speculation. Aside from specific product features, home equity insurance could subject insurers to steep losses during a severe downturn in the housing market. But insurers are no strangers to the possibility of catastrophic loss. In home equity insurance, as with most other insurance lines, diversification is key. Despite reports of unprecedented price drops in certain markets, even the recent downturn in housing was muted in some areas of the country. While housing markets in California, Florida, Nevada, New York and the District of Columbia, among other locations, tumbled from the end of 2006 through the beginning of 2008, nearly two dozen other markets continued to see a rise in housing prices, according to Eli Beracha, assistant professor of finance, East Carolina University, and Mark Hirschey, Anderson W. Chandler professor of business at the University of Kansas. 
In nearly a dozen other states, the decline was modest or negligible. And, as with other lines of business, insurers can reinsure a portion of their risk. Contracts, however, will have to demonstrate that the risk assumed by the reinsurer is commensurate with the premium ceded by the insurer. With the launch of housing futures and options trading by the Chicago Mercantile Exchange (CME) in May 2006, the use of options and futures has also become a more feasible technique for managing home equity risk. Using standardized contracts, insurers can hedge their home equity risks by selling housing futures or buying puts on housing future contracts. Trading gains can be used to offset claims payments. The CME housing futures contracts are based on tradable S&P/Case-Shiller HPIs that are updated monthly rather than quarterly, as are the Case-Shiller standard indexes. Futures and options are offered for Boston, Chicago, Denver, Las Vegas, Los Angeles, Miami, New York, San Diego, San Francisco, Washington, D.C., and a 10-city composite. Because options aren’t sold for every MSA, an insurer would need to model its investments based on its mix of business throughout the country to achieve a good match between its risk and its hedging strategy. But it would be possible to reasonably hedge an insurer’s risk, while providing a product that offers homeowners a new way to protect their investments. Public awareness of the risk of home ownership has rarely been greater than it is now, and all the pieces are in place to design and offer a product to meet consumers’ needs. Insurers only need to take the next step.
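The index-based payout and the futures hedge described above are easy to make concrete. The Python sketch below is purely illustrative: the function names and index levels are hypothetical, chosen only to mirror the article's $250,000 home and 4% decline example, and no actual rating model or CME contract mechanics are implied.

```python
# Hypothetical sketch of an index-based home equity payout and a simple
# put-style hedge on a housing price index. All names and parameters are
# illustrative, not an insurer's actual product or hedging program.

def equity_claim_payment(purchase_price: float,
                         index_at_purchase: float,
                         index_at_sale: float) -> float:
    """Pay the percentage decline in the local housing index, applied to the
    original purchase price; pay nothing if the index held steady or rose."""
    decline = (index_at_purchase - index_at_sale) / index_at_purchase
    return purchase_price * max(decline, 0.0)

def put_payoff(strike: float, index_at_expiry: float, notional: float) -> float:
    """Payoff of a put on the housing index, scaled to a dollar notional."""
    return notional * max(strike - index_at_expiry, 0.0) / strike

# The article's example: a $250,000 home, local index down 4% at sale.
claim = equity_claim_payment(250_000, 100.0, 96.0)
print(claim)  # 10000.0

# If the insurer had bought index puts struck at the purchase-date level,
# the trading gain would offset the claim payment.
hedge_gain = put_payoff(strike=100.0, index_at_expiry=96.0, notional=250_000)
print(claim - hedge_gain)  # 0.0
```

Because payments key off the index rather than the individual sale price, the same functions also show why a poorly maintained home sold in a rising market generates no claim: the index decline, and therefore the payout, is zero.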

Leighton Hunley


Leighton Hunley is a financial consultant in the Milwaukee office of Milliman. He joined the firm in 2002 and has held the position of consultant within Milliman’s credit risk practice since 2007. Leighton’s areas of expertise include mortgage guaranty insurance, student loans, home equity insurance and debt service analyses.

5 Misperceptions About 'Opt-Out'

Much of the discussion on the Oklahoma Option is accurate but not complete. It is easy to absorb a part of the story.

The Oklahoma Option, which went into effect early in 2014, is ramping up. More employers and insurers are electing to participate. And more workers’ compensation professionals around the country want to understand the “opt-out” concept. They are thinking about their assets in Texas or Oklahoma or about the feasibility of introducing opt-out into other states. Much of the discussion about opt-out is accurate but not complete. It is easy to absorb a part of the story. Consider the following five common ways in which it’s very easy to be half-right. One:  For employers, opt-out is basically about saving money. Conversations with corporate risk managers have shown me that saving on claims costs is usually not the only, nor even the leading, reason why employers choose to opt out. Employers appear to have two complementary goals with opt-out. One is to reduce claims costs, with a target in the range of 30% to 50%.  The other is to simplify management headaches over a benefit program perceived to be excessively complicated and rife with risk of misbehavior. Corporate risk managers, while not questioning the need for a separate work injury benefit system, are quick to say that injured employees can easily slack off in recovering from injury. They say doctors may not be motivated to get their patients back to work. Permanent partial disability benefits, in particular, are often cited as subject to gaming. In sum, risk managers see a lot of moral hazard that opt-out appears to them to severely curtail. Brokers and risk advisers, on the other hand, appear to be more comfortable talking about reducing the cost of risk. It’s interesting to listen and respond to both messages. Two: Employees of the opting-out employer benefit from better medical care. Advocates of opt-out often make this claim. (There are NO pro-labor advocates of opt-out, at least yet.) 
The claim might be justified, were employers to measure quality of medical care and show, perhaps, that care under opt-out adheres more closely to evidence-based care guidelines. The quality-of-care argument stems from opt-out employers negotiating service terms with well-regarded medical providers and avoiding providers and types of care they consider questionable. The employer might decide to ban chiropractic care, for instance, and use teaching hospital-based surgeons who otherwise may not treat injured workers. Medical care quality can be in the eye of the beholder, especially along the dimension of doctor skills in listening to and communicating with her patients. What is not measured so judgmentally is the package of disability benefits accorded to opt-out employees. Inspection of the typical package in Texas ERISA plans suggests that the common opt-out benefit is superior to the workers’ compensation system when one compares after-tax income of a cohort of opt-out cases with the same cohort in the conventional system if (a big if) the injuries are well-managed for timely return to work. Opt-out benefit packages in Texas ERISA plans typically pay indemnity benefits from the first day of disability. They do not impose caps on weekly benefits. The large majority of workers who use at least one day of disability return to work within a brief period. For them, these benefit features amount to real money. This assumes that the median duration of injury for all with at least one shift of lost time is about two weeks, and the median duration for all with disabilities lasting at least a week is about five weeks. Compensation for injuries with long durations appears to be much less favorable to opt-out workers, if you assume that duration of disability will be the same in each system. However, one does not have to assume that durations will be the same. Three: Opt-out claims management is a variant of workers’ comp adjusting. 
A striking aspect of opt-out claims management is that some leading opt-out claims managers do not hire former workers’ compensation adjusters. Opt-out managers may even decide to not hire people with any claims experience, in any line of insurance. It’s useful to consider whether opt-out claims management is more similar to absence or even productivity management than to workers’ compensation. The opt-out claims adjuster has to have the skills and confidence to use a lot more discretion than that to which workers’ compensation adjusters are acculturated. She needs to understand the employer’s benefit plan, not workers’ compensation. She needs to act quickly and with initiative. Over time, opt-out claims management may, at least for the larger employer, merge with absence benefit and ADA accommodation management, further removing it from affinity with workers’ compensation in the conventional sense. Four: Negligence liability is a major stumbling block. Opt-out employers in Texas are subject to negligence liability. An injured worker can seek economic and non-economic damages from her employer if the employer is found in any way negligent for the injury. As a defense attorney in Dallas told me, one will likely find in every severe injury some degree, however slight, of employer action or non-action that arrives in the courtroom dressed by the plaintiff attorney in negligence clothes. Large Texas employers usually insure for this exposure and require their employees to sign mandatory arbitration agreements to cover negligence complaints. Some don’t; for instance, Ben E. Keith, a prominent member of the opt-out community, incurred in 2012 an $8 million jury award over a severely scarifying injury a worker suffered on one of his first days with the company. The company had not adopted mandatory arbitration. The Oklahoma Option extends the exclusive remedy protections of the workers’ compensation system. 
What will happen if the Oklahoma Supreme Court nullifies that protection? The answer depends in part on how one views the Texas experience with negligence liability. Many opt-out employers in Texas who know several jurisdictional systems appear to feel comfortable managing this exposure. Five: States can mandate minimum benefits for opt-out, as Oklahoma has. The Oklahoma Option requires the employer to offer the same or greater benefits than the workers’ compensation system. A lot of details need to be worked out, but there will remain a fundamental problem with the Oklahoma Option minimum benefit requirement. At first blush, it conflicts with a substantial body of court decisions that have confirmed the exemption of ERISA plans from state interference. At the very heart of ERISA plans is their immunity from state legislatures, regulatory agencies and courts. Oklahoma law offers a means to opt out without setting up an ERISA plan, but, given the success of ERISA plans in Texas, it is likely that employers, at least the large ones, will elect to go the ERISA route in Oklahoma. Not being literate in legal discourse, I am the last person to suggest how this apparent conflict will be resolved, but resolved it must be, and in federal court. And I expect to see some interesting arguments in favor of retaining Oklahoma’s requirement in its original or a revised form. A California case, Golden Gate Restaurant Association v. City and County of San Francisco, offers a clue. As one commentator noted, in San Francisco, a city ordinance requires employers to make certain expenditures toward employee healthcare. Employers have a number of options for compliance, including paying a specified amount to a public insurance program that their uninsured employees can access at discounted premium rates or setting up an ERISA-governed health insurance plan of their own. At the last iteration of litigation, the local government prevailed. 
The Oklahoma Option’s chances to withstand a court challenge over its benefit requirements may well depend on how the benefits are construed (this is not clear to me) and the manner in which Oklahoma employers are free to elect opt-out over the conventional system.

Getting the story right

The workers’ compensation community needs to prompt a full, transparent discussion of opt-out to get to a complete story, for which chapters are being written every day.
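The benefit comparison behind point Two can be made concrete with a toy calculation. Every number in this Python sketch is hypothetical: the replacement rates, the weekly cap and the waiting period are invented stand-ins for plan and statutory parameters, not actual Texas or Oklahoma figures; the point is only that first-day, uncapped indemnity dominates for short durations.

```python
# Hypothetical comparison of indemnity for a short-duration injury under a
# Texas-style ERISA opt-out plan (pays from day one, no weekly cap) versus a
# conventional comp system (waiting period, capped weekly benefit).
# All rates, caps and waiting periods below are invented for illustration.

def optout_benefit(weekly_wage: float, weeks_out: float,
                   replacement_rate: float = 0.85) -> float:
    # Pays from the first day of disability; no cap on the weekly benefit.
    return weekly_wage * replacement_rate * weeks_out

def comp_benefit(weekly_wage: float, weeks_out: float,
                 replacement_rate: float = 2 / 3,
                 weekly_cap: float = 900.0,
                 waiting_weeks: float = 1.0) -> float:
    # Conventional system: waiting period before indemnity starts,
    # and the weekly benefit is capped.
    weekly = min(weekly_wage * replacement_rate, weekly_cap)
    return weekly * max(weeks_out - waiting_weeks, 0.0)

# Median short-duration case from the article: about two weeks of lost time.
wage = 1200.0
print(optout_benefit(wage, 2))  # 2040.0
print(comp_benefit(wage, 2))    # 800.0
```

Run with longer durations, the gap narrows or reverses, which mirrors the article's caveat that compensation for long-duration injuries appears much less favorable to opt-out workers.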

Peter Rousmaniere


Peter Rousmaniere is a journalist and consultant in the field of risk management, with a special focus on work injury risk. He has written 200 articles on many aspects of prevention, injury management and insurance. He was lead author of "Workers' Compensation Opt-out: Can Privatization Work?" (2012).

New Way to Audit Digital Assets

"Keyless" data integrity standards let insurers identify the cause of a breach and mitigate the risk of escalation -- in real time.


In the real world, it would be considered reasonable and appropriate to require an independent audit of digital assets to be insured. In cyberspace, this is more challenging. Insurers have to rely on the insured to tell the truth about what assets have been affected by a breach. Integrity standards for data enable insurance companies to conduct an independent audit of what digital assets exist (e.g., client data, intellectual property) prior to a breach, thus preventing fraudulent claims. One aspect of a data integrity standard is keyless signature infrastructure, known as KSI. KSI is a disruptive new technology standard that can effectively address some of the issues insurers face in the rapidly emerging cyber liability domain. It can enable mutual auditability of information systems to allow stakeholders to know the cause of a breach, mitigate the risk of breach escalation in real time and provide indemnification against subrogation and other legal claims. The concept of a digital signature for electronic data is very straightforward: a cryptographic algorithm is run on the data, generating a "fingerprint" of the data: a tag, or keyless signature, that can then be used at a later date to make certain assertions, such as signing time, signing entity (identity) and data integrity. KSI offers the first Internet-scale digital signature system for electronic data using only hash-function-based cryptography. The main innovations are:

  1. A distributed delivery infrastructure designed for scale
  2. Signature verification that requires no cryptographic keys
  3. Independent verification of the properties of any signed data, without trusting the service provider or enterprise that implements the technology
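The fingerprinting idea behind keyless signatures can be sketched in a few lines. This is a minimal illustration using a plain SHA-256 hash; the function names are hypothetical, and a real KSI service additionally aggregates hashes into a globally distributed, timestamped hash tree, which this sketch omits.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Compute a SHA-256 "fingerprint" of the data -- an illustrative
    # stand-in for a keyless signature.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, recorded_fp: str) -> bool:
    # Re-hash the data and compare it with the fingerprint recorded
    # at signing time; any change to the data breaks the match.
    return fingerprint(data) == recorded_fp

record = b"client-data-v1"
fp = fingerprint(record)            # recorded when the asset is signed
assert verify(record, fp)           # untouched data verifies
assert not verify(b"tampered", fp)  # any modification fails verification
```

Because verification only re-runs a public hash function, no secret key is needed to check integrity -- which is the property the "keyless" in KSI refers to.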

Other features include:

  • Unlike digital certificates, keyless signatures never expire; the historical provenance of the signed data is preserved for the lifetime of the data, and people are not required in the signing process.
  • Use of keyless signatures strengthens legal non-repudiation for data at rest.
  • There are no keys to be compromised or revoked, which fundamentally changes the security paradigm. If data integrity relies on secrets such as keys, or on trusted personnel, then when those trust anchors are exploited, liability for the data they protect becomes unlimited: there is no way to determine what has happened to the data signed by those private keys or maintained by those personnel. Evidence can be eliminated, data can be changed without oversight and log/event files can be altered -- the exploiters can present whatever picture they want you to see. Keyless signatures remedy this problem.
  • During a breach, active integrity can be provided through cyber alarms and correlated with other network events by auditors, the network operations center and security operations center(s). Active integrity means real-time, continuous monitoring and verification of data signed with keyless signatures. It gives a real-time understanding of the coherence and reliability of technical security controls and of whether a digital asset has integrity.
  • Underwriting cyber policies becomes much simpler and more efficient, because there is transparent evidence certifying the integrity of the data, the technical security controls protecting the information and the rules governing the transmission, modification or state of the insured asset(s).
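The "active integrity" idea above -- continuously re-verifying stored data against previously recorded signatures -- can be sketched as follows. The `IntegrityMonitor` class and its hash baseline are hypothetical illustrations; a production KSI deployment anchors signatures externally, so the baseline itself cannot be silently rewritten by an insider.

```python
import hashlib

def digest(blob: bytes) -> str:
    # SHA-256 stand-in for a keyless signature of a stored asset.
    return hashlib.sha256(blob).hexdigest()

class IntegrityMonitor:
    """Record a hash baseline for stored assets, then re-verify on
    demand and report any asset whose content has changed."""

    def __init__(self, assets: dict[str, bytes]):
        # Baseline taken when the assets are signed/insured.
        self.baseline = {name: digest(blob) for name, blob in assets.items()}

    def check(self, assets: dict[str, bytes]) -> list[str]:
        # Names of assets whose current hash no longer matches -- the
        # "cyber alarm" that fires on tampering.
        return [name for name, blob in assets.items()
                if self.baseline.get(name) != digest(blob)]

store = {"policy.db": b"v1", "audit.log": b"entry-1"}
mon = IntegrityMonitor(store)
store["audit.log"] = b"entry-1 (altered)"  # simulated tampering
assert mon.check(store) == ["audit.log"]
```

Run continuously against live storage, a check like this is what turns a silent log alteration into an immediate, attributable alarm.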

A “managed security service” built on KSI marks a new era for insurers. As they seek organizational intelligence about digital assets to make real-time policy adjustments, they can also draw concrete conclusions about the insured assets' risks, threats, exposures and the cyber landscape affecting clients. Claims processing and disputes become simpler, because the technology preserves the forensic traceability and historical provenance of each digital asset, enabling rapid determination of when and how a breach or manipulation occurred and who or what was involved. Hackers and malicious insiders cannot cover their tracks. Moreover, proving negligence becomes possible: negligent acts can be quickly detected and proven in the event the service provider does not comply with the contracts it maintains in force with the enterprise.

Most breaches today go unnoticed until long after they occur and the damage has been done. Active integrity involves continuous verification of the integrity of data in storage using keyless signatures. It is the equivalent of having an alarm on your physical property and a motion detector on every asset -- one that cannot be disabled by insiders. Because of the volatile nature of electronic data, any hacker knows how to delete or manipulate logs to cover his tracks and attribute his activity to an innocent party, which is why attribution of crimes on the internet is so difficult.

Integrity is the gaping security hole. A loss of integrity, introduced by malware, viruses or malicious insiders, is what leads to data breaches. Public key infrastructure (PKI) will never be the solution to integrity, nor is it usable for large-scale authentication of data at rest. The forensic evidence of keyless signatures makes legal indemnification issues easy to resolve, establishing who touched, modified, created or transmitted a digital asset, and what, where and when.
This places the onus on the “use” of data rather than its collection, providing auditability across service providers and the internet. Privacy is maintained, but there is also transparency and accountability for how data is used. Every action can be traced back to the original source that is legally responsible. This simplifies service-level agreements, pinpoints liability in the event of accidental or malicious compromise and indemnifies independent data providers against legal claims.

This article is an excerpt from an EY report titled "Cyber Insurance, Security and Data Integrity; Part 1: Insights into cyber security and risk -- 2014." For the full report, click here


Shaun Crawford

Shaun Crawford leads Ernst & Young's $1.4 billion global insurance business. He has been in the financial services industry for 27 years, having worked in both consulting and line management with the majority of European life assurers and U.K. retail banks at some point.

How to Protect Your Mobile Data

Beware of "Free Wi-Fi." Cyber thugs set up sites known as "evil twins" that can steal your signal and leave you vulnerable.

Beware of “Free Wi-Fi” or “Totally Free Internet”: the offer is probably too good to be true. Such hotspots are likely set up by thieves to trick you onto a malicious website.

AT&T and Xfinity have provided many hotspots across the country where travelers can get free Wi-Fi. Sounds great, right? But these services also make it a piece of cake for thieves to gain access to your online activities and snatch private information. AT&T sets mobile devices to connect automatically to “attwifi” hotspots. On an iPhone, you can switch this feature off; some Androids lack the option. Cyber thugs can set up fake hotspots, called “evil twins,” name them “attwifi,” and your smartphone may connect to them automatically.

For Xfinity’s wireless hotspots, you log in through a web page with your account ID and password. Once you have connected to a particular hotspot, it will remember you; if you want to connect again later that day, any “xfinitywifi” hotspot will automatically get you back on. If someone creates a phony Wi-Fi hotspot and calls it “xfinitywifi,” smartphones that have previously connected to the real Xfinity network could connect automatically to the phony hotspot -- without the user's knowing and without requiring a password.

None of this means that security is absent or weak on AT&T’s and Xfinity’s networks; there is no intrinsic flaw. They are simply so common that they have become vehicles for crooks.

Smartphones generate Wi-Fi probe requests. When you turn on a device’s Wi-Fi adapter, it searches for every network you have ever connected to -- as long as you never “told” the device to disregard it. A hacker can set an attack access point to respond to every probe request, and your device will then try to connect to every Wi-Fi network it has ever joined. This also raises privacy concerns, because the SSIDs carried in these probe requests can be used to track the user’s movements.
An attack can occur at any public Wi-Fi network: attackers can force users off their existing Wi-Fi connection and onto the attacker’s network. Two ways to protect yourself:

  1. Turn off “Automatically connect to Wi-Fi” on your mobile device, if you have that option.
  2. Use Hotspot Shield software to encrypt all the data on your laptop, tablet or mobile device.

Robert Siciliano

Robert Siciliano is CEO of IDTheftSecurity.com. He is fiercely committed to informing, educating and empowering Americans so they can be protected from violence and crime in the physical and virtual worlds. Media outlets, executives in the C-Suite of leading corporations, meeting planners and community leaders turn to him to get the straight talk they need to stay safe in a world in which physical and virtual crime is commonplace.