
Claims Litigation: a Better Outcome?

Insurance companies have historically struggled with the challenges posed by claims litigation and the threat of attorney involvement across multiple lines of business. According to the Insurance Information Institute, 39 cents of every dollar of loss costs in commercial multi-peril goes toward defense and cost containment. For medical professional liability, the figure rises to 43 cents, and for product liability it is as high as 77 cents. Even for workers’ compensation (WC), where the employee gives up the right to sue the employer for injuries that happen in the workplace, the figure is 13 cents.

In 2014, the California Workers’ Compensation Institute performed an analysis of attorney involvement in California WC claims. Over the six-year period studied, attorneys were involved in 12% of all claims (including medical-only cases), 38% of lost-time claims and 80% of permanent disability claims. Although the report discussed multiple efforts by lawmakers to reform California’s WC laws to help reduce costs, the report noted: “Despite those efforts, the litigation rate has nearly doubled for all workers’ compensation claims, and more than tripled for claims involving lost time.”

With so many dollars at risk, it’s no wonder that companies are investing in claims system technology and advanced analytics to help reduce the impact of litigation spend on their bottom line. This article will share how advanced analytics and data mining can be used early in the life cycle of a claim to identify litigation-prone claims and triage them appropriately.

Setting the Stage

Cases with heavy litigation expenditures typically involve multiple parties connected in complex ways, with differing and sometimes opposing incentives. The ultimate costs of litigation are driven by numerous factors, including the duration of settlement discussions and trial (if applicable), the cost of medical experts, discovery, depositions, attorney fees, the responsiveness of the plaintiff attorney, the impact of high/low agreements, the appeals process and more.

Insurance litigation therefore comes with a number of challenges that have historically made it difficult to predict litigation outcomes (e.g., dismiss, defend, settle, alternative dispute resolution, probability of winning). Traditional approaches have tended to focus on historical reporting and backward-looking data analyses to understand litigation rates, costs and trends. However, such hindsight-focused measures are reactive in nature. In many situations, it has been difficult to segment litigation outcomes, especially in the early days of a claim’s life cycle, when an adjuster can make a real difference in the trajectory of a claim. For that reason, a number of innovative insurers have begun shifting to more predictive, forward-looking solutions, including predictive analytics.

See also: Power of ‘Claims Advocacy’  

The Inspiration for Litigation Analytics

Insurance companies have largely been using data analytics to attack claim severity in lines such as WC, medical professional liability, general liability and auto liability bodily injury. By matching claim complexity with the appropriate resource skill set as early as first notice of loss (FNOL), insurers have introduced significant efficiencies that help reduce claim durations and costs. Claim predictive models have helped insurers better segment and triage high-severity workers’ compensation and bodily injury claims, driving up to a 10-point reduction in claims spend.

Models focused on claim severity can naturally be extended to other business areas, including medical management, special investigative unit (SIU) referrals and litigation management. We have seen such claim-cost models used by extension in these areas because more severe claims also tend to be the most complex. For example, the most expensive 10% of bodily injury claims as predicted by these severity models can turn out to be as much as six times more likely to go to litigation, and more expensive to litigate. In WC, the most expensive 10% of claims can turn out to be as much as three times more likely to go to litigation, and even more expensive to litigate. Clearly, there is plenty of segmentation power to be gained – even more so if the models are specifically developed to predict litigation.

Data Used

Data is the first building block of any analytics journey. The ability of actuaries and data scientists to effectively identify litigation-prone claims can be attributed to the power of advanced analytics, the growth of big data, and inexpensive computing power and storage. The data used in developing litigation models are similar to those used for claim-severity models: internal and external third-party data, structured and unstructured data, directly pulled fields and synthetically created variables. The number and diversity of the data sources used – sometimes in excess of a thousand candidate variables – provide unique information for segmentation and analysis, helping to answer the question: Which combinations of complex patterns make a claim more prone to litigation?

Some of the data factors typically used in litigation models are quite intuitive and include claimant age and gender, accident jurisdiction, claim history, etc. Unstructured data such as the injury description and accident narrative are often valuable sources of indicators and behavioral clues that correlate strongly with future litigation likelihood. Text mining can be used to delve into such unstructured, free-form data and help identify co-morbidities that significantly drive up claim severity. Additionally, third-party data on the claimant’s lifestyle and habits adds a layer of information that further helps to segment the litigation propensity of the claim.
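To make the text-mining idea concrete, below is a minimal sketch in Python (pandas) of keyword-based flags for co-morbidity mentions in free-form injury narratives. The table, column names and term list are illustrative assumptions only; production models would use far richer natural-language processing.

```python
import pandas as pd

# Hypothetical claims extract; the table and column names are illustrative.
claims = pd.DataFrame({
    "claim_id": [101, 102, 103],
    "injury_description": [
        "Lower back strain while lifting boxes; history of diabetes",
        "Slip and fall in parking lot; wrist fracture",
        "Neck pain after rear-end collision; prior depression noted",
    ],
})

# Simple keyword flags for co-morbidity mentions that tend to drive severity.
comorbidity_terms = ["diabetes", "obesity", "depression", "hypertension"]
for term in comorbidity_terms:
    claims[f"mentions_{term}"] = claims["injury_description"].str.contains(
        term, case=False, na=False
    )

print(claims.filter(like="mentions_"))
```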

Analytics Techniques Used

A number of modeling techniques can be used to predict the likelihood that a claim will move to litigation, and several generally perform well when used in a robust, end-to-end modeling process that actively involves the end users from day one. From multivariate predictive modeling and machine learning techniques to neural networks, various methodologies are available to identify the most predictive variables. However, as we noted in the article titled “The Challenges of Implementing Advanced Analytics,” it is important to balance building a high-precision statistical model with being able to interpret and consume its results. Our experience has shown that it is more valuable to leverage less complex models that are easily interpretable to end users than to pursue highly precise, complex models that are hard to consume and understand.

Models are typically trained on historical data with a defined target variable (i.e., what the model is trying to predict). Example target variables include a binary 0-1 field (indicating whether a claim has moved to litigation, “1,” or not, “0”), litigation dollars (how expensive claims already in litigation turn out to be), a proxy for either, or a combination of both. Models are also validated on a holdout sample of claims to assess their robustness.
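As a minimal sketch of how such target variables and a holdout sample might be assembled (the claim extract and field names below are hypothetical, not from any production system):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical claim history; field names are illustrative assumptions.
claims = pd.DataFrame({
    "claim_id":        [1, 2, 3, 4, 5, 6],
    "litigated":       [1, 0, 1, 0, 0, 1],
    "defense_dollars": [42_000.0, 0.0, 8_500.0, 0.0, 0.0, 95_000.0],
})

# Binary target: did the claim move to litigation ("1") or not ("0")?
claims["target_binary"] = claims["litigated"]

# Dollar target: litigation spend on claims already in litigation.
litigated_only = claims[claims["litigated"] == 1]
target_dollars = litigated_only["defense_dollars"]

# Holdout sample reserved to assess the robustness of the fitted model.
train, holdout = train_test_split(claims, test_size=0.3, random_state=42)
```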

Models can be built using data available at FNOL, or day one, helping insurers take expedited business actions and make important decisions early in the life cycle of the claim. As additional data becomes available over time, these models benefit from the added information to refine their predictions in the weeks and months that follow.

See also: 2 Steps to Transform Claims, Legal Group  

Claims Systems Are Differentiators

With the newest claims systems being implemented, insurance companies are achieving better claim outcomes and spending less on loss adjustment expense. The days when claims systems were mere record-keeping solutions are over. The newest technology lets claimants verify the status of their claims directly, regardless of time of day or location, through self-service portals and intuitive websites. But these capabilities are not just for “external” system users. “Internal” system users can now leverage advanced analytics and spend less time on administrative tasks (e.g., manually populating spreadsheets), shifting their focus to working with insureds and improving the claims experience.

Litigation Models in Action

A number of models can be built to identify which claims could be more complex and involve litigation. As an example, an insurance company could build a model that answers the following question: Of the claims that go to litigation, which are likely to be the most expensive? A high score means the claim has a high likelihood of costing the insurance company a great deal in litigation expenses, suggesting that the most experienced internal resources and attorneys should be focused on it.

Data used and target variables

For the case study at hand, a population of more than 10,000 bodily injury claims spanning multiple accident years was studied. For each claimant, characteristics of the claim, claimant, accident, injury and suit (if the claim was litigated) were collected and recorded in a database. External third-party data, such as vehicle identification number (VIN) details and geo-demographic and behavioral data at the household and census-block level, were also added to capture more information.

The target variable (i.e., what the model is trying to predict) was calculated as all dollars spent on litigation, including attorney fees and expenses. A predictive model was then built employing a standard train/test/validation methodology.
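Below is an illustrative sketch of that train/test/validation flow on synthetic data. The Gamma GLM with a log link is our assumption of one plausible choice for a positive dollar target; the article does not name the specific algorithm used.

```python
import numpy as np
from sklearn.linear_model import GammaRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for day-1 claim features and litigation dollars; real
# models would use the claim, claimant, accident and third-party fields above.
X = rng.normal(size=(1_000, 5))
y = np.exp(1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.2, size=1_000))

# Standard train / test / validation partition (here 60/20/20).
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=1)
X_test, X_val, y_test, y_val = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

# A Gamma GLM with a log link is one common choice for positive dollar targets.
model = GammaRegressor(alpha=1.0).fit(X_train, y_train)
print("validation score (fraction of deviance explained):", model.score(X_val, y_val))
```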

Model results and output

The resulting models exhibited strong segmentation across the holdout sample. For example, litigation costs for the highest-scoring 10% of claims were almost double those of the average population, while the lowest-scoring 10% of claims had litigation costs less than half those of the average claim. This strong segmentation is all the more impressive considering it was achieved at day one, not weeks or months into the life of the claim.

The model contained about 30 predictive variables, some of which were intuitive and readily available (e.g., claimant age and gender, accident location and type – whether parking lot or intersection, etc.). The model also included information sourced from third-party vendors (e.g., census employment statistics) and proxies for behavioral factors (e.g., the distance between the accident location and the claimant’s residence, the time lag before a claim is reported, etc.). External geo-demographic data about the claimant were also beneficial (e.g., population density in the ZIP code of residence), as were data from the National Highway Traffic Safety Administration (NHTSA) on fatal accident statistics for the accident ZIP code.

Bringing Models to Life

Building a predictive model like the one described above is important but only beneficial if the model helps change behaviors, decisions and actions. The insights derived from these models help insurance companies take direct action on their claim triage strategies, attorney selection and defense strategies. Business rules can be carefully crafted to help claim examiners in their decision-making process. When an adjuster understands that a high-scoring claim has a higher risk of moving to litigation and costing more, defense strategies can be adjusted accordingly. From the assignment of external defense counsel to settle-or-defend decisions based on case dynamics, insurance companies can alter their event management, resource allocation and escalation decisions earlier in the life cycle of the claim.

See also: Rethinking the Claims Value Chain  

Carpe Diem With Analytics

The insurance claims landscape is becoming more complex, competitive, fast-moving and disrupted. There is little doubt that adopting big data, data science and analytics is important to becoming more agile in this environment, helping insurance companies make better decisions within days of receiving a claim. With the underwriting cycle indicating another period of softening rates, and interest rates hovering at record-low levels, tapping savings in litigation spend might be just what the doctor ordered for insurance companies brave enough to seize the opportunity. As Larry Winget said in his book It’s Called Work for a Reason, “Knowledge is not power; the implementation of knowledge is power.” The knowledge and analytics exist today to improve litigation costs. We believe the time has come to implement that knowledge.

As used in this document, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting.

This communication contains general information only, and none of Deloitte Touche Tohmatsu Limited, its member firms, or their related entities (collectively, the “Deloitte Network”) is, by means of this communication, rendering professional advice or services. Before making any decision or taking any action that may affect your finances or your business, you should consult a qualified professional adviser. No entity in the Deloitte Network shall be responsible for any loss whatsoever sustained by any person who relies on this communication.


How to Help Reverse the Opioid Epidemic

Across the U.S., the number of reported events exemplifying the opioid and heroin epidemics continues to skyrocket. U.S. Government Publishing Office data shows that usage of both prescribed stimulants and prescribed opiates increased 19-fold in just the two decades since 1994(1). On Dec. 18, 2015, the U.S. Centers for Disease Control and Prevention (CDC) released a report showing drug overdose deaths reached record highs in 2014, fueled in large part by the abuse of narcotic painkillers and heroin. In 2014, more than 47,000 Americans died from drug overdoses, an increase of more than 14% from 2013. About 61% of those deaths involved the use of opioids. From 2000 to 2014, the report noted, nearly half a million people died from overdoses in the U.S. In 2014, there were approximately one and a half times more drug overdose deaths than deaths from motor vehicle crashes!(2)

A very worrisome statistic and trend…

For workers’ compensation insurers, opioid use in treating chronic pain has also exploded over the past two decades. Although there appear to be some signs that opioid use is finally cresting, insurers still have a long way to go in helping to ensure that physicians and the injured workers they treat are fully educated on the pros and cons of using opioids with various types of injuries and pain. As the Risk & Insurance article “Paying for Detox – The Opioid Epidemic Is Addressed by Detoxification Programs” notes, some workers’ compensation insurers have been funding tapering and detoxification programs to help dependent or addicted patients wean themselves off the very medications that were designed to ease their pain(3). Unfortunately, recidivism is common, with experts noting that it can take several attempts to wean someone off narcotics.

This article will highlight some of the challenges in front of us and share some innovative ideas on potential ways to help prevent opioid dependency and addiction before the habits requiring tapering and detoxification programs are ever formed.

The Challenge in Front of Us

In January 2011, USA Today shared a powerful story about David Fridovich, a three-star Green Beret general who has become an advocate for warning soldiers about the epidemic of chronic pain and the use of narcotic pain relievers sweeping through the U.S. military(4). Much like others across the country who have suffered a severe back injury, the general began taking narcotics for chronic pain in 2006. Over time, the general became addicted to narcotics. During one 24-hour period the general took five dozen pain pills. After going through a detoxification program, the general has been helping other soldiers avoid the complications he faced because he was unaware of the addictive nature of the pills he was taking.

In a recent book about the opioid and heroin epidemic in the U.S., Dreamland author Sam Quinones shares his research on the history of how we ended up where we are today. From a workers’ compensation perspective, he tells the story of a prison guard who injured his back during a fight with an inmate. The doctor, who took the guard off work for six months, also prescribed opioids to be taken twice a day for 30 days. After becoming severely addicted, the guard said, “It really humbles you. You think you’re doing stuff the way it’s supposed to be done. You’re trusting the doctor. After a while, you realize this isn’t right, but there really isn’t anything you can do about it. You’re stuck. You’re addicted.”

Both stories illustrate how the use of painkillers can lead to dependency and addiction without warning. They also highlight the critical role prescribing physicians play in educating patients about the warning signs and addictive nature of opioid prescriptions. As part of this education process, prescribing guidelines and analytics can play an important role in driving better outcomes.

Opioid Prescribing Guidelines

For workers’ compensation insurers, it is critical to understand the opioid prescribing guidelines that underlie the way physicians are treating injured workers. The more the insurers can help educate physicians on best practices, the better off insurance companies may be in helping to prevent any issues that may arise because of unnecessary or excessive opioid prescribing.

The CDC worked with the National Institute on Drug Abuse, the Substance Abuse and Mental Health Services Administration and the Office of the National Coordinator for Health Information Technology to review existing opioid prescribing guidelines for chronic pain. Their review and analysis of eight prescribing guidelines highlighted a number of important provider actions, such as the review of pain history, medical and family history, pregnancy, prescription drug monitoring programs (PDMP), urine drug screening, evaluation of alternatives to opioids, rational documentation, tapering plans, referrals for medication-assisted treatment, evidence review, conflicts of interest and more(5). In January, Kentucky Attorney General Andy Beshear announced his support for national guidelines for prescribing opiates for chronic pain, stating: “In Kentucky, we face a crushing epidemic of addiction. One of my core missions as attorney general is to better address the drug problem faced by our Kentucky families and workforce.”(6) In his speech, the attorney general mentioned that he was joining other state attorneys general in voicing support for the CDC guidelines for prescribing opiates for chronic pain.

California’s “Division of Workers’ Compensation Guideline for the Use of Opioids to Treat Work-Related Injuries” documented treatment protocols for three specific pain categories:

  1. Opioids for acute pain (pain lasting as much as four weeks from onset)
  2. Opioids for subacute pain (one to three months)
  3. Opioids for chronic pain and chronic opioid treatment (three months or more)(7)

The guidelines state that, in general, opioids are not indicated for mild injuries such as acute strains, sprains, tendinitis, myofascial pain and repetitive strain injuries. Just as important, the guidelines clearly warn physicians to consider and document relative contraindications (e.g., depression, anxiety, past substance abuse, etc.). The document provides an abbreviated treatment protocol for the three pain categories, addressing important topics like prescribing a limited supply of opioids, documentation, accessing California’s PDMP, monitoring opioid use, evaluating the use of non-opioid treatments, completing opioid use, educating patients on opioid usage and potential adverse effects, responsibly storing and disposing of opioids, tracking pain level, screening for the risk of addiction, testing urine for drugs and more.

At the end of the day, it is important for workers’ compensation insurers and physician employees to clearly understand the opioid prescribing guidelines that help physicians achieve a proper balance between treating workers’ pain and keeping them safe from any adverse impacts of excessive opioid usage. With more insurance companies leveraging early physician peer-to-peer outreach to open a dialogue between the insurance company physician and the treating physician, knowing prescribing guidelines and sharing that knowledge will be more important than ever in improving outcomes and return to work.


The Inspiration for Using Analytics

For more than a decade, Deloitte Consulting’s Advanced Analytics & Modeling practice has been developing claim predictive solutions designed to help insurance companies, self-insureds and third-party administrators better segment and triage predicted high-severity from low-severity claims, enabling business decisions and actions that can help drive loss cost savings of as much as 10% of an organization’s annual claims spending. (See the Claims Magazine articles “Analytics on the Cloud: Transforming the Way Claims Leverages Advanced Analytics” (2011)(8), “Enhancing Workers’ Comp Predictive Modeling With Injury Groupings” (2012)(9), “Reaping the Financial Rewards of End-to-End Claims Analytics” (2014)(10) and “The Challenges of Implementing Advanced Analytics” (2014)(11).) A large part of the claims modeling success is attributed to gaining actionable insights as early as first notice of loss, before adverse chain reactions can set in, and shortly thereafter with the three-point contact investigation, where additional information is learned about the patient’s history and co-morbidities.

The authors, having observed the success of predicting claims complexity outcomes early in the claim’s lifecycle, became excited about the application of similar models to help identify early warning signs of future excessive opioid usage by injured workers. With as much as 60% of workers’ compensation spending going toward medical costs, one-fifth of that related to prescription drugs(12), we believed the use of predictive models… combined with physician peer-to-peer outreach and proper prescribing guidelines… could help workers’ compensation insurers improve the lives of the injured workers while significantly reducing medical expenditures. The following sections explain the analytics journey undertaken to help move the needle on this issue.

Defining the Target Variable: Predicting Future Excess

An important part of any analytics journey is defining the target variable (i.e., what we are trying to understand and predict). Excessive opiate usage is difficult to ascertain, as higher consumption may indeed be necessary for the most severe injuries. Therefore, various candidate target variables were tested. Many versions of opioid supply days were examined (i.e., ultimate total supply days across all opiate drugs prescribed to, and consumed by, the injured worker). Variations of opiate prescription counts were also considered (i.e., the ultimate count of opiate prescriptions through the life cycle of the claim). Similarly, supply units were analyzed (i.e., the ultimate sum of all individual opiate pills prescribed to, and consumed by, the injured worker from the day of the injury until claim closure). Figure 1 illustrates the calculation of total supply days for three different opiates that were prescribed to, and consumed by, an injured worker over the duration of his workers’ compensation claim:


Figure 1. Supply Day Illustration
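As a sketch of the calculation Figure 1 illustrates, assuming a simple, hypothetical prescription-history table, total supply days accumulate per claim as follows:

```python
import pandas as pd

# Hypothetical prescription history for one claim; fields are illustrative.
rx = pd.DataFrame({
    "claim_id":    [7, 7, 7],
    "drug":        ["opiate_A", "opiate_B", "opiate_C"],
    "supply_days": [30, 60, 15],
})

# Total supply days per claim: the sum of supply days across all opiate
# prescriptions filled and consumed over the life of the claim.
total_supply_days = rx.groupby("claim_id")["supply_days"].sum()
print(total_supply_days)  # claim 7 -> 105 total supply days
```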

Methodology and Data Considered

Using predictive analytics and data science, a number of algorithms were built, tested, iterated and fine-tuned to identify, within like-injury cohorts (i.e., claimants who sustained the same injury), those who consumed more opiates than their peers. Various thresholds of “excess” were analyzed by injury and venue, thus controlling for differences that affect the prescription base.

Testing these algorithms showed that segmentation was similar across the different target variables. However, total supply days exhibited the most robustness from a modeling perspective and had the most intuitive interpretation (i.e., the number of days an injured worker consumes opioids).

The algorithms used more than eight years of lost-time workers’ compensation claims to accumulate sufficient data credibility. Claims were selected across various injury groups where at least one opiate prescription was filled and consumed. The data was organized as a longitudinal study, observing each claimant over time and quantifying his or her consumption of opiates. Comparing this usage with that of like-injury counterparts, across thousands of cases and hundreds of attributes, is what helped the model shed light on claimants who consumed excessive amounts of opioids relative to the entire population.
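A minimal sketch of this like-injury comparison appears below; the data, injury-group labels and “excess” threshold are illustrative assumptions, as the actual cohort definitions and thresholds were proprietary to the study.

```python
import pandas as pd

# Hypothetical closed-claim extract; the injury_group labels stand in for
# the proprietary injury groupings described below.
df = pd.DataFrame({
    "claim_id":     [1, 2, 3, 4, 5, 6],
    "injury_group": ["spine_med_hi"] * 3 + ["sprain_low"] * 3,
    "supply_days":  [400, 90, 60, 30, 10, 20],
})

# Compare each claimant with like-injury peers: consumption relative to the
# cohort median is one simple way to surface "excess" usage.
cohort_median = df.groupby("injury_group")["supply_days"].transform("median")
df["relative_consumption"] = df["supply_days"] / cohort_median
df["excess_flag"] = df["relative_consumption"] > 2.0  # illustrative threshold
print(df[["claim_id", "relative_consumption", "excess_flag"]])
```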

Over the years, Deloitte healthcare practitioners and claims professionals used ICD-9 codes that describe a disease or condition, as well as National Council on Compensation Insurance (NCCI) nature-of-injury and body-part codes, to create more than 70 proprietary injury groups that are factored into the model to provide enhanced segmentation within like-injury claims.(13) For illustration purposes in this article, we present results for the injury group representing medium- and high-complexity spinal disorders (e.g., ICD-9 codes 722.0 – displacement of cervical intervertebral disc without myelopathy, 722.10 – displacement of lumbar intervertebral disc without myelopathy, 724.9 – other unspecified back disorders, etc.). We selected medium- and high-complexity spinal disorder claims because they are significantly more severe than the average workers’ compensation claim, and, as expected, these claimants typically have more prescriptions filled. In addition, the models aren’t run on just any injury group. For example, an injury group containing low-complexity injuries such as finger cuts and minor open wounds would not be part of our analysis; claimants with these injuries do not require opioids, given the nature of the injury, so it would not make sense to include such groups in the model.

Predictive variables

The information attributes used to understand excessive consumption were sourced from data similar to that used in developing our claim-severity models. They are large in number and varied in coverage, including claimant data (e.g., claimant age, gender, job classification, years of employment, wage, claim filing lag, cause and nature of injury), prior claims data (e.g., prior frequency and type of claims), employer information (e.g., financial characteristics, years in business), injury circumstances (e.g., location, type, body part injured), three-point contact information (e.g., co-morbidities, early medical services) and standard external third-party data sources (e.g., lifestyle, behavioral and geo-demographic data).

Modeling Results

The lift curves shown in Figure 2 illustrate the segmentation achieved by using multivariate equations to predict total supply days. Each claim below was scored using the model, which generated scores from 1 to 100, with lower scores corresponding to smaller predicted supply days and higher scores corresponding to larger predicted supply days. This score is represented on the x-axis of Figure 2, where each “decile” refers to a group of claims that compose 10% of the data. The actual supply days are tracked and plotted on the y-axis in the appropriate decile.
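The decile construction behind a lift curve like Figure 2 can be sketched as follows on synthetic scores and outcomes (the real model’s scores and supply days are not reproduced here):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic stand-in: a 1-100 model score and the actual supply days that
# were ultimately observed for each scored claim.
scored = pd.DataFrame({"score": rng.uniform(1, 100, size=2_000)})
scored["actual_supply_days"] = np.maximum(
    scored["score"] * 12 + rng.normal(scale=50, size=2_000), 0
)

# Bucket claims into score deciles and track actual supply days in each
# bucket -- exactly what a lift curve such as Figure 2 plots.
scored["decile"] = pd.qcut(scored["score"], 10, labels=list(range(1, 11)))
lift = scored.groupby("decile", observed=True)["actual_supply_days"].mean()
print(lift)  # comparing decile 10 with decile 1 gives the "18x"-style lift
```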

Figure 2. Lift Curve – GLM model

As one can see from Figure 2, injured workers predicted to fall in decile 10 have more than 18 times the supply days of workers predicted to fall in decile 1. Injured workers who scored in decile 10 consume, on average, more than three and a half years of opioid supply days! This very wide segmentation suggests that individuals sustaining the same injury can still vary significantly in their future consumption of opioids… and this variation ranges from a couple of months to more than three and a half years.

In Figure 3, we compare two 24-year-old male claimants with very similar injuries but drastically different predicted outcomes.

Figure 3. Similar Injuries, Drastically Different Outcomes

As one can see from Figure 3, the claimant scoring in decile 10 has a number of variables that correlate with the potential for excessive opioid use. Given the combination of co-morbidities, worker health, reporting lags, employer business conditions and additional attributes collected on the individual from external sources (e.g., lifestyle and behavioral data), it is possible for the insurance company to identify and analyze the early drivers that may lead to future excessive opioid use in the first few days after receiving notice of the claim.

With more than 60 predictive variables in the model (e.g., co-morbidities, prior claims history, job classes, injury causes, business characteristics, claim characteristics, etc.), the most influential categories and reason codes driving the score represent “eyeglasses” for the insurance company physician. The model helps the physician weigh multiple pieces of information together but doesn’t replace his judgment. Analogously, many of us wear eyeglasses to read a dinner menu, but those eyeglasses do not order the food for us.

Armed with a plethora of facts and the opioid prescribing guidelines, a physician can open a dialogue with the treating physician to help guide the discussion in a direction that best benefits the injured worker. The physician, using the prediction from the model, can tailor appropriate decisions and actions – from low touch or regular prognosis for the first claimant above, to a much more closely managed case for the second individual.

Figure 4 provides a drill-down into the actual versus predicted supply days achieved in the highest-scoring 30% of medium- to high-complexity spinal disorder claims for the train/test data and validation data. Using the train/test/validation approach, the models were trained and enhanced using approximately 70% of the claims data. The validation results shown below were derived from the remaining 30% of the claims data that was held in “cold storage.” Using this kind of blind-test validation data helps ensure that the model’s estimated “lift” (i.e., segmentation power) is true and unbiased.

Figure 4. Highest Score Drill-Down

Approximately 60% of claims scoring in deciles 8, 9 and 10 exceed one year in supply days. For a quarter of these claims, the injured workers take in excess of four years’ worth of opioid supply days. At the far end of the spectrum, roughly 4% of medium- to high-complexity spinal disorder claims scoring in deciles 8, 9 and 10 will exceed a decade’s worth of opioids in supply days.

One Last Check

In addition to the generalized linear models (GLMs) discussed above, which focused on predicting actual supply days, we also ran a logistic regression model focused on predicting which claimants would take more than a year’s supply of opioids. Using classical statistical measures of precision (i.e., how many of the positively classified results are relevant), recall (i.e., how accurate the model is at detecting positives) and specificity (i.e., how good the model is at avoiding false alarms), we achieved the following results: a precision of 59%, a recall of 64% and a specificity of 72%.(14) As one last test of the logistic regression model’s segmentation power, we calculated the area under the receiver operating characteristic (ROC) curve. At almost 80%, it represented a good model from a statistical perspective. Although illustrative, we prefer the GLM presented above.
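For readers who want to compute the same diagnostics on their own data, here is a compact sketch on synthetic labels. The metric definitions match footnote 14; scikit-learn supplies the standard implementations.

```python
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, roc_auc_score)

rng = np.random.default_rng(7)

# Synthetic stand-in: 1 = claimant exceeded a year's supply of opioids.
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.35 * y_true + rng.normal(0.35, 0.2, size=500), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

precision = precision_score(y_true, y_pred)  # relevant share of flagged claims
recall = recall_score(y_true, y_pred)        # share of true positives detected
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)                 # how well false alarms are avoided
auc = roc_auc_score(y_true, y_prob)          # area under the ROC curve
print(f"precision={precision:.2f} recall={recall:.2f} "
      f"specificity={specificity:.2f} AUC={auc:.2f}")
```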

Behavioral Economics and Nudges

All across the country, physicians and medical boards are spreading the word about the responsible prescribing of opioids. State and federal agencies are toughening criminal and administrative penalties for doctors and clinics that traffic in prescription drugs. Governors across the country are forming opioid working groups that include senior Health and Human Services professionals, attorneys general, drug courts, hospital professionals, elected officials and more.

Research shows that a number of factors can help insurance companies better understand the severity of claims early in the life cycle of a claim. Two studies by the National Council on Compensation Insurance, Inc. (NCCI) highlight the effect of obesity on workers’ compensation claims. According to “Reserving in the Age of Obesity,” a Nov. 1, 2010, NCCI study by Chris Laws and Frank Schmid, the ratio of medical costs per claim for obese versus nonobese claimants deteriorates over time, from 2.8 at the end of one year, to 4.5 at the end of three years, to 5.3 at the end of five years.(15) In a follow-up study from May 29, 2012, “Indemnity Benefit Duration and Obesity,” authors Frank Schmid, Chris Laws and Mathew Montero found that claim duration for obese claimants is more than five times that of nonobese claimants, after controlling for primary International Classification of Diseases (ICD)-9 code, injury year, state, industry, gender and age for temporary total and permanent total indemnity benefit payments.(16) Deloitte’s claim predictive models have shown that the number of medical conditions at the time of injury plays a significant role in determining ultimate severity and the potential for excess opioid usage (e.g., claims with three or more existing medical conditions are 12 times more costly than claims with no existing medical conditions).

With energy and momentum building around addressing the opioid epidemic, insurance companies can leverage behavioral economics and data-driven nudges to help treating physicians improve outcomes and return to work. Leveraging prescribing guidelines and the model results and reason codes that help explain the top five drivers behind the model prediction, insurance company physicians can be more strategic in shaping the discussions they have with treating physicians. For the highest-scoring claims, the insurance company may want to use a mix of peer-to-peer contact and data-driven nudges (e.g., “did you know that 95% of physicians we work with follow the state prescribing guidelines and only prescribe 30 days of opioids for this type of claim,” “for injuries of this type, physicians we work with usually prescribe less than x milligrams of strength,” etc.). For lower-scoring claims, the insurance company may touch base with the treating physician but skip any reference to data-driven nudges.


Conclusion

In the end, it is important for workers’ compensation insurers and their medical professionals to clearly understand opioid prescribing guidelines and the internal and external factors that could affect the opioid usage and habits of their injured workers. A Business Insurance white paper titled “Opioid Abuse and Workers’ Comp – How to Tackle a Growing Problem,” described the challenge well: “Monitoring or managing opioid abuse is another key step for workers’ comp managers. It’s not enough to simply dive into the data and look for claimants who appear to be using lots of opioids. Nor is preventing doctors from prescribing opioids a desirable action. The goal is to find claimants who are struggling with a problem they never intended to have, and support those claimants in solving that problem.”(17)

However, our hope is that through the use of predictive analytics (i.e., the ability to identify, in the first few days of receiving a claim, individuals most likely to become high consumers of opioids), prescribing guidelines and physician peer-to-peer outreach, we can help increase insurers’ and treating physicians’ awareness as they work to help prevent injured workers from struggling with dependency and addiction before the behaviors or habits ever form.

As former British Prime Minister Benjamin Disraeli once said, “What we anticipate seldom occurs; what we least expect generally happens.” The science and passion exist today to better anticipate opioid trends and help prevent opioid dependency and addiction before they happen.


Copyright © 2016 Deloitte Development LLC. All rights reserved.

[1] James W. Harris, PhD, CSO Vatex Explorations LLC, www.gpo.gov

[2] http://www.cdc.gov/media/releases/2015/p1218-drug-overdose.html, http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6450a3.htm?s_cid=mm6450a3_w

[3] http://www.riskandinsurance.com/paying-detox/

[4] http://usatoday30.usatoday.com/news/military/2011-01-27-1Adruggeneral27_CV_N.htm

[5] http://www.cdc.gov/drugoverdose/prescribing/common-elements.html

[6] http://harlandaily.com/news/6473/cdc-guidelines-will-help-ky-with-rx-drug-abuse

[7] http://www.dir.ca.gov/dwc/ForumDocs/Opioids/OpioidGuidelinesPartA.pdf

[8] http://www.propertycasualty360.com/2011/02/22/leveraging-analytics-in-workers-comp-claims-handli

[9] http://www.propertycasualty360.com/2012/07/23/enhance-workers-comp-predictive-modeling-with-inju

[10] http://www.propertycasualty360.com/2014/02/03/reaping-the-financial-rewards-of-end-to-end-claims

[11] http://www.propertycasualty360.com/2014/10/01/the-challenges-of-implementing-advanced-analytics

[12] www.ncci.com

[13] http://www.propertycasualty360.com/2012/07/23/enhance-workers-comp-predictive-modeling-with-inju

[14] Precision measures the ratio of true predicted positives to the sum of true predicted positives and false predicted positives. Recall, also referred to as sensitivity, measures the ratio of true predicted positives to the sum of true predicted positives and false predicted negatives. Specificity measures the ratio of true predicted negatives to the sum of true predicted negatives and false predicted positives.

[15] https://www.ncci.com/Articles/Documents//II_research-age-of-obesity.pdf

[16] https://www.ncci.com/Articles/Documents/II_Obesity-2012.pdf

[17] http://www.businessinsurance.com/article/99999999/WP05/120509952

3 Steps to Improve Cyber Security

In the recent science fiction film Inception, protagonist Dominic Cobb infiltrated his victims’ dreams to gain access to business secrets and confidential data. He would then use this knowledge to influence things in his (or his client’s) favor. Cobb’s success depended on his ability to manipulate victims through a greater understanding of their human vulnerabilities. Just like Cobb, cyber crime perpetrators begin by identifying their targets’ vulnerabilities and gathering the intelligence required to breach their systems. Armed with this intelligence, they navigate their targets’ complex systems, establish a covert presence and often remain undetected for a long time.

It is clear that the growth in cyber crime has continued, if not accelerated, in the financial services industry. U.S. financial services companies lost on average $23.6 million from cybersecurity breaches in 2013, the highest average loss across all industries. This number is 44% higher than in 2012, when the industry ranked third, after the defense and the utilities and energy industries. While this trend is not to be ignored, these actual losses are sometimes not meaningful to firms’ income statements. The potentially greater impact from cyber crime is on customer and investor confidence, reputational risk and regulatory impact, which together add up to substantial risks for financial services companies. A recent global survey of corporate C-level executives and board members revealed that cyber risk ranked as the world’s third corporate-risk priority in 2013. Interestingly, the same survey in 2011 ranked cybersecurity as only the 12th-highest priority.

In Inception, although Cobb succeeded in conning most of his victims, he faced stiff resistance from Mr. Fischer, whose strong automated self-defense mechanisms jeopardized the attackers’ plans several times. However, every time Cobb’s team faced an obstacle, they persevered, improvised and launched a new attack. Real-life cyber attacks are, of course, far more complex in many ways than the challenges and responses between Cobb and Fischer. That said, the film does provide an interesting analogy that in many ways illustrates the problems that financial services companies face when dealing with cyber crime.

The interplay between attacker and victim is, indeed, a cat-and-mouse game in which each side perpetually learns and adapts, leveraging creativity and knowledge of the other’s motives to develop new offensive tactics and defensive postures. The relatively static compliance- or policy-centric approaches to security found in many financial services companies may be long outdated. The question is whether today’s industry can create a dynamic, intelligence-driven approach to cyber risk management not only to prevent, but also to detect, respond to and recover from, the potential damage that results from these attacks. As such, transformation into a secure, vigilant and resilient cyber model will have to be considered to effectively manage risks and drive innovation in the cyber world.

The evolving cyber threat landscape

Although cyber attackers are aggressive and likely to relentlessly pursue their objectives, financial services companies are not passive victims. The business and technology innovations that financial services companies are adopting in their quest for growth, innovation and cost optimization are, in turn, presenting heightened levels of cyber risks. These innovations have likely introduced new vulnerabilities and complexities into the financial services technology ecosystem. For example, the continued adoption of Web, mobile, cloud and social media technologies has likely increased opportunities for attackers. Similarly, the waves of outsourcing, offshoring and third-party contracting driven by a cost-reduction objective may have further diluted institutional control over IT systems and access points. These trends have resulted in the development of an increasingly boundary-less ecosystem within which financial services companies operate, and thus a much broader “attack surface” for the threat actors to exploit.

Cyber risk is no longer limited to financial crime

Complicating the issue further is that cyber threats are fundamentally asymmetrical risks, in the sense that small groups of highly skilled individuals with a wide variety of motivations and goals can often exact disproportionately large amounts of damage. Yesterday’s cyber risk management focus on financial crime was — and still is — essential. However, in discussions with our clients, we hear that they are now targets not only of financial criminals and skilled hackers but increasingly of larger, well-organized threat actors, such as hacktivist groups driven by political or social agendas, and nation-states seeking to create systemic havoc in the markets. An illustrative cyber threat landscape for the banking sector suggests the need for financial services firms to consider a wide range of actors and motives when designing a cyber risk strategy. This requires a fundamentally new approach to the cyber risk appetite and the corresponding risk-control environment.

The speed of attack is increasing while response times are lagging

Threat actors are increasingly deploying a wider array of attack methods to keep one step ahead of financial services firms. For example, criminal gangs and nation-states are combining infiltration techniques in their campaigns, increasingly leveraging malicious insiders. As reported in a Deloitte Touche Tohmatsu Limited (DTTL) survey of global financial services executives, many financial services companies are struggling to achieve a level of cyber risk maturity required to counter the evolving threats. Although 75% of global financial services firms believed that their information security program maturity is at level three or higher, only 40% of the respondents were very confident that their organization’s information assets were protected from an external attack. And that is for the larger, relatively more sophisticated financial services companies. For mid-tier and small firms, the situation may be much worse, both because resources are typically scarcer and because attackers may see them as easier targets. In a similar vein, the Snowden incident has perhaps increased attention on insider threats, as well.

Multipronged approach can supplement traditional technologies that may now be inadequate

Given that 88% of attacks succeed in less than a day, it might be tempting to think that the solution lies in increased investment in tools and technologies to prevent these attacks from being successful. However, the lack of threat awareness and response suggests that more preventative technologies alone are likely to be inadequate. Rather, financial services companies can consider adopting a multipronged approach that incorporates a more comprehensive program of cyber defense and response measures to deal with the wider array of cyber threats.

Financial services firms have traditionally focused their investments on becoming secure. However, this approach is no longer adequate in the face of the rapidly changing threat landscape. Put simply, financial services companies should consider building cyber risk management programs to achieve three essential capabilities: the ability to be secure, vigilant and resilient.

— Enhancing security through a “defense-in-depth” strategy

A good understanding of known threats and controls, industry standards and regulations can guide financial services firms to secure their systems through the design and implementation of preventative, risk-intelligent controls. Based on leading practices, financial services firms can build a “defense-in-depth” approach to address known and emerging threats. This involves a number of mutually reinforcing security layers both to provide redundancy and potentially slow down the progression of attacks in progress, if not prevent them.

— Enhancing vigilance through effective early detection and signaling systems

Early detection, through enhanced programs to detect both emerging threats and the attacker’s moves, can be an essential step toward containing and mitigating losses. Incident detection that incorporates sophisticated, adaptive signaling and reporting systems can automate the correlation and analysis of large amounts of IT and business data, as well as various threat indicators, on an enterprise-wide basis. Financial services companies’ monitoring systems should work 24/7, with adequate support for efficient incident handling and remediation processes.

— Enhancing resilience through simulated testing and crisis management processes

Resilience may become more critical as destructive attack capabilities gain steam. Financial services firms have traditionally planned for resilience against physical attacks and natural disasters; cyber resilience can be treated in much the same way. Financial services companies should consider their overall cyber resilience capabilities across several dimensions. First, systems and processes can be designed and tested to withstand stresses for extended periods. This can include assessing critical online applications for their level of dependence on the cyber ecosystem to determine vulnerabilities. Second, financial services firms can develop good playbooks to triage attacks and rapidly restore operations with minimal service disruption. Finally, robust crisis management processes can be built with participation from various functions, including business, IT, communications, public affairs and other areas within the organization.

For the full report on which this article is based, click here.

Kevin Bingham is sharing this excerpt on behalf of the report’s authors, his colleagues Vikram Bhat and Lincy Francis Therattil. They can be reached through him.

The Moment of Truth for Telematics Is Near

Personal auto carriers are rapidly approaching a moment of truth when it comes to usage-based insurance (UBI) programs, in which a driver’s behavior is monitored via a telematics device. That goes both for insurers that have already launched such products, as well as those that have remained on the sidelines for a variety of reasons.

Early adopters of UBI are gaining a wealth of first-hand experience and insights that stand to provide a long-lasting competitive edge against insurers that until now have been undecided about whether or when to follow suit, as well as those either unwilling or unable to do so. The trailblazers are rapidly collecting a critical mass of data that can be analyzed to reveal driver behaviors that provide a basis for greater precision in underwriting and pricing.

For example, current rating methods would likely rate two drivers identically if they had the same credit scores, automobiles and demographics and lived in areas with similar geographic profiles. However, what if we knew through telematics observation that one of the insured persons drives her car one-tenth as much as the other, or at less risky times of the day? In that case, an insurer would be in a position to potentially leverage this new experiential information and underwrite the respective risks posed by the two drivers differently, as well as price coverage more accurately.

Having such first-hand driving data at their underwriters’ disposal could give existing UBI carriers a considerable leg up over those not using telematics, should the nonusers remain on the sidelines for long. For example, standard carriers could lose profitable policyholders who are cherry-picked by UBI-capable insurers that have acquired the capability to discern driver risk more granularly. Trying to catch up with the frontrunners in the UBI race is also likely to be costly—even more so as time goes on and the early birds get a bigger head start.

Of course, early adopters still face many challenges in executing a viable telematics program. For one, widespread consumer acceptance is no certainty, given privacy concerns for some and skepticism among others as to whether having their driving so closely scrutinized will benefit them in the end, or perhaps even be used against them in a number of ways—and not just by their auto insurer. Indeed, a January 2014 survey by the Deloitte Center for Financial Services exploring consumer use of mobile devices in financial services reveals that about half of the overall driving population is not open to the idea of UBI—at least for the moment.

In addition, while regulators have been supportive in the early stages of telematics development, down the road their acceptance may depend on a number of factors, including the eventual impact on rates for those who fail to meet whatever standards are attributed to “less risky” drivers. There may also be regulatory resistance if drivers face higher prices just because they choose not to be monitored, for whatever reason.

Wherever a carrier stands on the subject, we may have already reached the point of no return when it comes to telematics and UBI. The genie is out of the bottle. The industry as a whole is not likely to go back to relying only on its traditional methods of assessing auto risks. A growing number of carriers will likely adopt behavioral-based telematics as a way to at least supplement traditional underwriting factors.

Indeed, before too long the use of sensory technologies that permit behavioral underwriting by insurers is likely to be expanded beyond auto insurance into homeowners, life and health coverages, and perhaps even non-auto commercial lines as well, such as workers’ compensation. Smart homes, biometric monitoring, wearable technologies and the Internet of Things are all developing trends that could support and accelerate such initiatives.

But even if UBI is merely part of the natural evolution of auto insurance underwriting in an increasingly data-driven age, carriers of all stripes will likely need a strategy to respond to those that embrace telematics. Some will decide to go along for the ride, while the rest will have to figure out alternative routes to survive and prosper.

How big is the market for UBI products?

For a variety of reasons, UBI programs based on telematics data-gathering will probably not be for every driver. Indeed, our general hypothesis that only certain segments will permit their driving to be monitored by insurers was validated by Deloitte’s recent survey, which examined mobile technology experience, perceptions and expectations among financial services consumers. The survey, conducted in January 2014, drew 2,193 respondents representing a wide variety of demographic groups, broken down by age and income, split evenly in terms of gender.

Respondents were asked about their willingness to be monitored by auto insurers through an app on their smartphone, as opposed to having to install an additional piece of equipment into their vehicle, or having a car in which such equipment was already included by the manufacturer.

While most drivers who have signed up for telematics programs are currently monitored by a special device installed in their vehicle, going forward such technology could be largely displaced by a mobile app. Not only would the use of smartphones for telematics monitoring lower insurer costs for device distribution, retrieval and data transmission, but the technology would also enable consumers to get more immediate feedback.

The survey identified three distinct groups among respondents when asked whether they would agree to allow an insurer to track their driving experience through their mobile device if it meant they would be eligible for premium discounts based on their performance. They were:

Eager beavers: More than one in four said they would allow such monitoring, without stipulating any specific minimum discount in return.

Fence sitters: The same percentage of respondents were a bit more cautious, noting they might get on board with UBI if the price was right, given a high enough discount to make it worth their while.

Naysayers: A little less than half said they would not be interested in having their driving monitored under any circumstances.

Among those who were open to the idea of telematics monitoring, about one in five expect a discount of 10% or less, with the vast majority anticipating 6% to 10%. About half expect between 11% and 20% (with 27% anticipating between 11% and 15%), while nearly one-third think they would be entitled to discounts greater than 20%.

When broken down by various demographic factors, age was the biggest differentiator. Nearly two-thirds of respondents aged 21–29 were willing to give UBI a go, compared with only 44% of those 60 or older. More than twice as many in the 21–29 age category than in the 60-or-older group (35% vs. 15%) said yes to telematics without stipulating a particular discount. This trend was somewhat less pronounced but still significant when comparing respondents under 30 with those in the 46–59 segment, among whom only 24% would allow monitoring with no stipulated discount.

Younger respondents were also less likely to expect a discount of greater than 20%: 26% of the under-30 crowd, compared with 38% of those aged 46–59. This could be because fewer older consumers are open to the idea of monitoring in the first place (perhaps out of “Big Brother” concerns, or because they did not grow up in a fully Web-connected environment) and would therefore demand a bigger financial incentive before allowing an insurer to monitor their driving. Or it could be that the older segment, earning more on average than the youngest segment, is less likely to be won over by a discount that is relatively small in dollar terms.

Income was not a differentiating factor, which was surprising, considering that one might expect those with less discretionary income to place more emphasis on how much they would save on their auto insurance premiums by signing on to a UBI program. Yet only about 30% of both the highest (above $100,000) and lowest (below $50,000) income groups surveyed said the size of the discount would determine whether they would allow their driving to be monitored. Expectations about the size of the discount in return for signing on were also similar across income segments.

However, given that higher-income consumers are generally considered more valuable to insurers, seeing a significant segment of that coveted group open to the idea of UBI regardless of the size of the discount could be a positive factor for telematics marketers.

While gender did not make a major difference in whether a respondent would allow insurers to monitor their driving, women did generally expect a higher discount for doing so, with 59% anticipating a rate break of 16% or higher (including 34% who expect more than 20%) compared with 48% among men (with 28% looking for a discount of 20% or higher).

What are the implications?

Looking at the big picture, with nearly half of the respondents in this survey indicating that UBI is not for them, a bifurcated market may eventually develop. Those who choose to be monitored would represent a separate class of drivers, underwritten with telematics data that supplements traditional pricing factors at first and perhaps later supplants them. In the end, serving the “naysayers” may become a specialty market niche for some carriers.

Still, this research, along with our interviews with insurer executives and media reports of UBI programs being tested or rolled out across the country, appears to indicate that there is indeed a significant consumer segment ready, willing and able to at least test-drive telematics-based auto insurance programs. But that doesn’t mean the road to achieving growth and profitability through telematics is without speed bumps, potholes and other potential hazards.

Kevin Bingham is sharing this excerpt on behalf of the authors, Sam Friedman and Michelle Canaan, who can be reached through him. The full report, Overcoming Speed Bumps on the Way to Telematics, can be found here.

The FIO Report on Insurance Regulation

The December 2013 issuance of the Federal Insurance Office (FIO) report, How to Modernize and Improve the System of Insurance Regulation in the United States, may in hindsight be regarded as more momentous an occasion for the industry and its regulation than the muted initial reaction might suggest. History’s verdict most likely will depend on the effectiveness of the follow-up to the report by both the executive and legislative branches, but current trends in financial services regulation may serve to increase the importance and influence over time of the FIO even in the face of inaction in Washington.

Insurance regulation has traditionally been the near-exclusive province of the states, a right jealously guarded by the states and secured by Congress in 1945 after the Supreme Court ruled insurance could be regulated by the federal government under the Commerce Clause of the Constitution.

Any fear that the FIO report would call for an end to state regulation proved unfounded, but industry members might be well-advised to prepare for the eventualities that may result as the FIO uses both the soft power of the bully pulpit and the harder power of the federal government to achieve its aims. As the designated U.S. insurance representative in international forums that increasingly shape financial services regulation, and as an arbiter of standards that could be imposed on the states, the FIO and this report should not be ignored.

Having met with the FIO’s leadership team, we believe there are concerns that uniformity at the state level cannot be achieved without federal involvement. We further believe the FIO plans to work to translate its potential into an actual impact in the near future, making a clear-eyed understanding of the report and what it may herald for insurers a prudent and necessary step in regulatory risk management.

The concerns

The biggest surprise about the FIO report may well have been that there were no surprises. There were no strident calls for a wholesale revamp of the regulatory system, and praise for the state regulatory system was liberally mingled among the criticisms.

The lack of any real blockbusters in the details of the FIO report may seem to lend implicit support to those who foresee a continuation of the status quo in insurance regulation. But, taken as a whole, this report and the regulatory atmosphere in which it has been released should be considered a subtle warning of changes that may yet come.

The report may quietly help to usher in an acceleration of the current evolution of insurance regulation. The result could be a regulatory climate that offers more consistency and clarity for insurers and reduces the cost of regulation. It could also be a climate that imposes more stringent regulatory requirements and increases both the cost of compliance and capital requirements. Most likely, the result will be a hybrid of the two.

Either way, preparing to influence and cope with any possible changes portended in the report would be preferable to ignoring the portents.

Part of the disconnect between the short-term reception and the long-term impact of this report may stem from the report’s implicit recognition of the lack of political will needed to enforce any real changes in current U.S. insurance regulation, especially any that would require increased expenditures or personnel at the federal level. In our current economic and political environment, plugging gaps in state regulation with measures that would require federal dollars may quite reasonably be construed to be off the table.

But the difference between identified problems and feasible solutions may offer an opportunity. States, industry and other stakeholders could act together to bring needed reform to the insurance regulatory system in a way that adds uniform national standards to regulation, reduces the possibility of regulatory arbitrage and maintains the national system of state-based regulation, all while recognizing the industry’s strengths and needs and not burdening the industry with unnecessary, onerous regulation.

There is much to praise in the current state regulatory system. A generally complimentary federal report on the insurance industry and the fiscal crisis of the past decade noted, “The effects of the financial crisis on insurers and policyholders were generally limited, with a few exceptions…The crisis had a generally minor effect on policyholders…Actions by state and federal regulators and the National Association of Insurance Commissioners (NAIC), among other factors, helped limit the effects of the crisis.”

While the financial crisis demonstrated the effectiveness of current insurance regulation in the U.S., it is also evident that, as in any enterprise, there are areas for improvement. There are niches within the industry – financial guaranty, title and mortgage insurance come to mind – where regulatory standards and practices have proven less than optimal.

There are also national concerns that affect the industry. The lack of consistent disciplinary and enforcement standards across the states for agents, brokers, insurers and reinsurers is one obvious concern. Similarly, the inconsistent use of permitted practices and other solvency-related regulatory options could lead to regulatory arbitrage. At a time when insurance regulators in the U.S. call for a level playing field with rivals internationally, these regulatory differences represent an example of possible unlevel playing fields at home that deserve regulatory attention and correction.

A Bloomberg News story in January 2014, for example, quoted one insurer as planning to switch its legal domicile from one state to another because the change would allow, according to a spokeswoman for the company, a level playing field with rivals related to reserves, accounting and reinsurance rules.

For insurers operating within the national system of state-based regulation, one would hope that such a level playing field would extend across domiciles, with no insurer disadvantaged because of its domicile in any of the 56 jurisdictions.

But perhaps one of the greatest challenges to the state-based system of regulation is the added cost of that regulation, partly engendered by duplicative requests for information and regulatory structures that have not been harmonized among states. How to respond to that may represent the biggest gap in the FIO report. It may also be the biggest opportunity for both insurers and regulators to rationalize the current regulatory system and ensure the future of state-based regulation.

Cost

The FIO report notes that the cost per dollar of premium of the state-based insurance regulatory system “is approximately 6.8 times greater for an insurer operating in the United States than for an insurer operating in the United Kingdom.” It quotes research estimating that our state-based system increases costs for property-casualty insurers by $7.2 billion annually and for life insurers by $5.7 billion annually.

According to the report, “regulation at the federal level would improve uniformity, efficiency and consistency, and it would address concerns with uniform supervision of insurance firms with national and global activities.”

Yet the report does not recommend replacing state-based regulation with federal regulation; rather, it contemplates a hybrid system that remains primarily state-based but includes some federal involvement.

At least one rationale for this is clearly admitted in the report. As it says, “establishing a new federal agency to regulate all or part of the $7.3 trillion insurance sector would be a significant undertaking … (that) would, of necessity, require an unequivocal commitment from the legislative and executive branches of the U.S. government.”

The result of that limitation is a significant difference between diagnosis and prescription in the FIO report. Having diagnosed the cost of the state-based regulatory system as an unnecessary $13 billion annual burden on policyholders (the combined $7.2 billion property-casualty and $5.7 billion life estimates cited above), the FIO offers policy recommendations that may, for the most part, be characterized as the policy equivalent of “take two aspirin and call me in the morning.”

Still, as the Dodd-Frank Act showed, even Congress can muster the will to impose regulatory solutions if a crisis becomes acute and broad enough. Unlikely as that may now seem, the threat of radical federal surgery should not be what is required for states to move toward addressing the recommendations of the FIO report.

Indeed, actions of the NAIC over the past few years have addressed much of what is in the FIO report. Now the NAIC, industry and other stakeholders can take the opportunity provided by the report to work to resolve some of the issues identified in it. Any resulting increase in federal reluctance to become involved in insurance regulation would be only a side benefit. The real goal should be a regulatory system that is more streamlined, less duplicative, more responsive, more cost-efficient and more supportive of innovation.

Kevin Bingham has shared this article on behalf of the authors of the white paper on which it is based: Gary Shaw, George Hanley, Howard Mills, Richard Godfrey, Steve Foster, Tim Cercelle, Andrew N. Mais and David Sherwood. They can be reached through him. The white paper can be downloaded here.