20 Work Comp Issues to Watch in 2016

In an “Out Front Ideas with Kimberly and Mark” webinar broadcast on Jan. 12, 2016, we discussed our thoughts around the issues that the workers’ compensation industry should have on its radar for 2016. What follows is a summary of 20 issues that we expect to affect our industry this year.

  1. Election Cycle

Everyone knows that this is a presidential election year. But election time also means governor and insurance commissioner seats are available. State insurance commissioners are elected in 11 states and appointed in the other 39. In the coming election, there are 12 gubernatorial seats and five insurance commissioner positions to be decided. The workers’ compensation industry needs to be paying attention to these elections because the insurance commissioners can have significant influence over procedures, policies and enforcement in their states.

  2. Viability of Workers’ Compensation

It is important for all of us to consider the continuing viability of workers’ compensation. Is the grand bargain still doing what it was established to do? There is a growing debate around the gaps and shortcomings of workers’ compensation. Our industry needs to engage in a critical analysis of these issues.

  3. Federalization

In October 2015, 10 high-ranking Democrats on key Senate and House committees sent a letter to the Department of Labor asking it to conduct a critical review of state workers’ compensation systems. Some are concerned that this is a sign we could see federal government involvement in state workers’ compensation systems.

In some ways, the federal government is already involved in workers’ compensation. For instance, OSHA has a tremendous impact on workers’ compensation. Medicare Secondary Payer Compliance is another example of federal law affecting the system.

Recent criticisms of workers’ compensation have focused on the vast benefit differences between states. There is also growing concern that workers who are permanently disabled are pushed off workers’ compensation and onto Social Security disability. With Social Security facing solvency concerns, lawmakers will be receptive to discussions on how to keep workers’ compensation from shifting long-term claims to the federal government.

This is a substantial issue to watch in the coming years, and there is a significant chance that the federal government will suggest minimum benefit recommendations to the states at some point. This could especially affect states that have hard caps on the total amount of indemnity benefits that an injured worker can receive.

  4. Affordable Care Act

The Affordable Care Act (ACA) will continue to be a subject of discussion in 2016.

The implementation date of the tax on high-cost, employer-sponsored health plans, dubbed the “Cadillac tax,” was recently delayed from 2018 to 2020. It imposes an excise tax of 40% on the value of health plans above $10,200 for individual coverage and $27,500 for a family. Regardless of the delay, employer-sponsored benefit plans have evolved over the past five years in preparation to avoid the additional tax. Formerly rich benefit plans have been scaled back to fit within the ACA’s requirements, often replaced by higher-deductible plans with reduced benefits.
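
As a rough illustration of the mechanics, here is a minimal sketch of the excise tax calculation using the thresholds above; the sample plan value is hypothetical.

```python
# Minimal sketch of the "Cadillac tax" calculation, using the
# thresholds cited above; the sample plan value is hypothetical.
CADILLAC_RATE = 0.40
THRESHOLDS = {"individual": 10_200, "family": 27_500}

def cadillac_tax(plan_value: float, tier: str) -> float:
    """Return the 40% excise tax owed on plan value above the threshold."""
    excess = max(0.0, plan_value - THRESHOLDS[tier])
    return CADILLAC_RATE * excess

# A hypothetical $12,000 individual plan owes tax on the $1,800 excess:
print(cadillac_tax(12_000, "individual"))  # 720.0
```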

NCCI and WCRI have both conducted studies on how the ACA has affected workers’ compensation. Results have not conclusively tied treatment delays or actual cost shifting to workers’ compensation. We believe continuing studies by these organizations and others are important to evaluate the impact of ACA on workers’ compensation.

Other issues that should be monitored include consolidation of health systems, providers and insurers. In 2015, there was more than $700 billion of consolidation in the healthcare marketplace. This is driven, in part, by the ACA, because scale and size assist providers with efficiency, purchasing power and the need to provide a continuum of care.

Another issue where the ACA could affect workers’ compensation is changing reimbursement models. Medicare is looking to shift into a value-based reimbursement model, and many state fee schedules are based on Medicare rates.

Although not specifically related to the ACA, a healthcare topic to keep an eye on is drug pricing. Drug pricing will continue to be a topic of discussion among the media, PBMs, employer benefit managers, health plan experts and the political arena. Prescription drug pricing increased more than 10% in 2015, and this trend is expected to continue. This has an impact on the cost of workers’ compensation claims.

  5. Holes in Workers’ Compensation

What many people do not realize is that workers’ compensation protections are not available to all workers within the U.S. In 14 states, smaller employers with five employees or fewer do not have to secure coverage. In 17 states, there is no legal requirement for coverage of agricultural workers. Half of the states do not require coverage for domestic workers, and five states specifically exclude coverage for these employees. There are also states that create exceptions for certain types of workers, such as state employees in Alabama. Finally, we have seen from court cases around the country that occupational diseases that take several years to develop are often barred by the statute of limitations, leaving workers with no recourse for benefits.

These holes are yet one more thing that critics point to when talking about the inadequacy of workers’ compensation. The occupational disease issue is particularly concerning because it is very easy to question the fairness of barring a claim under the statute of limitations and, at the same time, denying the injured worker the ability to pursue a claim in civil court under the exclusive remedy protections of workers’ compensation. This is another area where we will not be surprised to see the federal government give recommendations.

  6. Blurred Lines Between Workers’ Compensation and Group Health

The employee health model is evolving. Employers are finding the need to provide a consistent healthcare experience for their workforce and plan members. Employers would like to find a model that provides both quality care and consistency for their employees, regardless of whether the need for treatment arises from a work injury or at home. Because a healthy workforce is a productive workforce, employers also feel that there is a need to tie health and productivity together.

We will continue to see health systems build accountable care organizations (ACOs) and enter the health plan, insurance and risk-bearing arena with the goal of directly selling to and partnering with employers. ACOs are an attractive model for employers because they support a healthier workforce, extending a culture-of-health philosophy from the workplace into the home for employees and their families.

Mental health is a top driver of absence across employers and not simply a health cost concern. Mental healthcare should be as important as physical healthcare and is currently a focus of population health and employer programs. Employers are looking for healthcare models that consider the whole person and offer consistent, engaging behavioral health and wellbeing programs for the workforce.

Workers’ compensation key stakeholders should be a part of the evolving health model discussions and early stage planning so as not to be left in the dark as health models change.

  7. Options to Workers’ Compensation

We all know that Texas has a unique system that allows employers to completely opt out of workers’ compensation benefits. The term “opt-out” refers only to the Texas system. Employers in Oklahoma have an option to workers’ compensation that allows them to develop a private benefit plan that replaces state-mandated workers’ compensation. It is this option concept that is now spreading to other states. Bills on this issue will be reintroduced in Tennessee and South Carolina this year, and other states have begun preliminary discussions.

Some employers feel that they can provide better benefits to their injured workers at a lower cost with these option programs. Others are concerned that these programs lack the controls and oversight of state workers’ compensation. One thing is certain: This issue is not going away any time soon. Perhaps these discussions around options to workers’ compensation can lead to discussions about workers’ compensation reform, including employer medical control, increasing thresholds of compensability and reducing the bureaucracy of the workers’ comp system.

  8. Evolving Claims Model

There are significant discussions around the evolving claims model. The industry realizes that we need to focus more on the injured worker as a consumer. The model needs to focus more on advocacy, but what does this really mean? Should there be a person who assists the injured worker in understanding the claims process, or is there a need to change the culture of our industry to be less adversarial?

Other parts of the evolving model involve who actually touches the claim. Are there elements that could be automated? Should there be more specialization with different individuals performing different tasks instead of the current model where the claims adjuster is a generalist performing multiple tasks across multiple jurisdictions?

The claim handling model also needs to adapt to new technology and the ways in which different generations want to communicate. Some injured workers prefer text instead of e-mail or phone calls. Some want to access claims information through an app on their mobile device, available 24/7, the moment they want it. The model must evolve to take full advantage of new technology and communication methods.

The March 15 “Out Front Ideas with Kimberly and Mark” webinar will focus on the evolving claims model and include guests who are passionate about an advocacy-based design.

  9. Florida Supreme Court

Over the last two years, four cases challenging the constitutionality of various aspects of the Florida workers’ compensation statutes have made it to the state’s Supreme Court. The first of those cases, Padgett, ended in late December when the Supreme Court declined to review it. That case had been thrown out on procedural grounds during the appeal process, so the Court of Appeals and Supreme Court never addressed the underlying constitutional challenge.

There are three cases still to be decided:

  • Westphal, which deals with caps on temporary disability benefits.
  • Castellanos, which addresses limitations on attorney fees.
  • Stahl, which focuses on post-MMI medical co-payments and the elimination of permanent partial disability payments.

The expectation is that the Florida Supreme Court will address all of these cases in 2016, but nobody knows when that will occur.

  10. Bureaucracy

Workers’ compensation is one of the most highly regulated lines of insurance, and regulators are increasingly aggressive in pursuing fines and penalties. Every form filed and every payment transaction is an opportunity for a penalty. EDI allows regulators to automate the fines and penalties. Some states perform retrospective audits on activity five to 10 years in the past. The IMR process in California adds administrative cost to claims without necessarily improving outcomes, and states with self-imposed penalties may be driving up the cost of doing business beyond the benefit of the penalty payment. Lobbying is becoming an increasingly important area for payers and service providers to consider.

The significant costs associated with the bureaucracy of workers’ compensation regulations are not improving outcomes on claims. Most of the money collected from fines and penalties is paid to the states, where it may cover the operating costs of state workers’ compensation divisions rather than being paid to the injured worker or medical provider.

This topic is an important issue to watch in 2016 and will be the topic of our Feb. 9 “Out Front Ideas with Kimberly and Mark” webinar.

  11. Regulatory Change

There are four states in particular that we should be keeping an eye on in terms of potential regulatory reforms in 2016:

New York

Employers in New York are continuing to push for additional workers’ compensation reforms to reduce their costs because the savings projected with the last round of reforms never fully materialized. Whether there is enough momentum to get a bill through this year remains to be seen, but the efforts are there.

Florida

In Florida, the situation is going to depend on what the state Supreme Court does with the cases mentioned earlier. If any of those cases punch holes in the constitutionality of the workers’ compensation law, then the legislature is going to need to address this. Again, this is a waiting game.

Illinois

Illinois Gov. Rauner has made it a priority to enact workers’ compensation reforms to reduce employer costs. But his efforts have been blocked by the state legislature, and there is a budget stalemate in the state. There has been much political back-and-forth on this budget and the workers’ compensation reforms. It remains to be seen if the governor has the political muscle to get his legislation passed.

California

Ever since the Schwarzenegger workers’ compensation reforms of 2004, and continuing with SB 863, signed by Gov. Brown, the California legislature has been trying to undermine these reforms. Every year, the legislature passes multiple bills, and every year Gov. Schwarzenegger and then Gov. Brown have vetoed them. Gov. Brown is committed to preserving his workers’ compensation reforms, and there are three years left in his term. Once he is gone, there is concern about what could happen with workers’ compensation in California. But, for now, significant change is not expected.

  12. Talent Acquisition

Talent acquisition and retention is probably the biggest issue facing the entire insurance industry. Consider:

  • 25% of the insurance industry workforce will retire by 2018 (McKinsey)
  • There are 2.3 million workers in the insurance industry. More than 1 million will retire in the next 10 years, and 400,000 positions will be left open by 2020 (Deloitte and Jackson Group)
  • Workers over the age of 45 represent 48% of the insurance workforce

Are we doing enough with colleges to show the career opportunities in the insurance industry? Although more colleges and universities are offering risk management programs, the reality is that there are very few of these programs nationwide. Our industry needs to support these programs with both grants and internship opportunities.

In workers’ compensation, we need to be looking at the role of the examiner. Are there tasks we could automate to reduce workload? Millennials say they want to work with purpose. The role of the claims adjuster is to assist injured workers in their recovery. Could we be doing more to highlight the positive aspects of the claims adjuster role to make it more attractive to millennials?

We also need to be looking at ways to be flexible with work schedules and at whether someone is tied to the home office or able to work from a remote location. Finally, we need to continue to focus on promoting diversity and inclusion within our workforce.

In May, we will be doing an “Out Front Ideas with Kimberly and Mark” webinar devoted to this topic.

  13. Market Conditions

You cannot forecast the coming year for the workers’ compensation industry without talking about rates. Recently, for the first time in years, the Fed increased interest rates. This is good news, but the change is still insignificant and will not have a material impact on the workers’ comp industry. Because investment opportunities are limited for carriers, they continue to be very diligent with their underwriting. What does this mean for rates? Right now, the market is relatively stable. Accounts with good loss histories could see steady to slightly decreased rates, while accounts with poor loss histories will likely see slight increases. Overall, significant rate changes across the nation are not expected in the coming year.

  14. Predictive Analytics

Predictive analytics has been a buzzword in our industry for a number of years. Most data models identify at-risk claims that may benefit from additional intervention, such as nurse case management or a more skilled adjuster. The goal of the intervention is to change the trajectory of the claim, to do something different than in similar prior claims, so the result improves on past experience. Although most payers report having predictive analytics and a variety of models available, there are limited published results on their outcomes and effectiveness. Watch in 2016 to see if organizations begin sharing outcomes as a way to market their business or to provide industry thought leadership on what is working and should be considered to drive success.

There is a need to evolve predictive analytics and big data models so that some human tasks are automated. Instead of just identifying cases where intervention is necessary, we should also identify claims where minimal intervention is needed. This approach frees resources and focuses attention on the claims that will benefit from the touch. Future claims models will benefit from analytics using learning models similar to IBM Watson-type smart analytics.
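
As a hedged illustration of this triage idea, here is a minimal sketch of a claim-scoring model; the features, thresholds and synthetic training data are invented for illustration and do not represent any payer’s actual model.

```python
# Minimal sketch of predictive claim triage on synthetic data;
# the features and thresholds are invented, not any payer's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: e.g. claimant age, days since injury, prior claims.
X = rng.normal(size=(500, 3))
y = rng.integers(0, 2, size=500)  # 1 = claim later became high-cost

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Route high-risk claims to nurse case management or a skilled adjuster;
# route the clearly low-risk claims to an automated, low-touch path.
high_touch = risk > 0.7
low_touch = risk < 0.2
print(high_touch.sum(), "claims flagged for intervention;",
      low_touch.sum(), "eligible for automation")
```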

  15. OSHA

OSHA continues to be a challenge for employers. Going into 2016, OSHA has increased reporting and recordkeeping requirements. It is also increasing its focus on certain industries, including healthcare, and employers are seeing a significant increase in fines. This is an area that is constantly evolving.

Our April 5 “Out Front Ideas with Kimberly and Mark” webinar will focus on these continuing developments and discuss the continuing issues that employers should track.

  16. Utilization Review

There is industry buzz, and there are sidebar conversations, around utilization review (UR) and the current approach deployed by employers, payers and service providers. Physicians are asking more than ever how they can help streamline treatment requests, obtain decisions electronically and more quickly, and provide timely, appropriate care for patients.

Utilization review should ensure that injured workers receive appropriate care within the right setting and for the correct duration. But what is the right UR model? Should all treatment be subject to UR, or only select treatment requests? Is UR strictly a process of reviewing the treatment request and medical documentation against guidelines of care, or should it be collaborative among adjusters, providers and injured workers? Are denials of care driving up litigation unnecessarily? Do utilization review referral triggers change if the physician providing care is part of a high-performance network or known to be a top-performing physician? These questions are being raised by industry veterans and newcomers alike and are worthy of review and further dialogue.

In the consumer-driven health world where we find ourselves, injured workers have a greater interest in understanding treatment options and outcomes. If not part of UR, does your case management or claim model provide education on medical treatment options, including outcomes awareness? Transparency is becoming increasingly important to consumers.

  17. Exclusive Remedy

Plaintiff attorneys are always trying to find ways around the exclusive remedy protections of workers’ compensation, and these efforts are becoming increasingly successful. In early January 2016, the California Court of Appeal allowed an injured worker to pursue a civil claim against a utilization review provider because the provider failed to warn him about the potential risks of medication withdrawal.

More and more, judges are allowing such litigation to survive motions to dismiss or for summary judgment based on workers’ compensation exclusive remedy protections. This creates enormous costs for employers and carriers, which must spend hundreds of thousands of dollars or more defending such lawsuits while facing the risk of a jury award that could be worth millions. In addition, an employer’s liability award based on the “intentional actions” of the employer may have issues with insurance coverage. The entire industry should be paying close attention to this area of increased litigation around exclusive remedy.

  18. ICD-10

The ICD-10 medical classification came along last year with a lot of hype and a significant amount of work to update systems and train teams. There was concern that the new diagnosis codes would result in slowed claims processes and treatment decisions. Thus far, workers’ compensation key stakeholders report little to no impact from the change. This may be because states did not mandate the use of ICD-10 for workers’ compensation and most organizations continue to accept ICD-9. Bill review receipt-to-pay timeframes have not lengthened, and e-billing rejections did not increase; these were the two areas to watch after the ICD-10 go-live.

In 2019, Medicare plans to roll out an incentive-based reimbursement model tied to patient outcomes (MACRA). The American Medical Association believes this will be a significant reimbursement change for physicians. Changes to Medicare reimbursement could impact workers’ compensation because some state fee schedules are Medicare based.

History has proven Medicare does not always follow through with what it says it is going to do in terms of changing reimbursement models, but the MACRA implementation is an issue worth monitoring.

  19. Marijuana

Thus far, New Mexico has been the only state allowing medical marijuana for treatment under workers’ compensation. But as the use of medical marijuana spreads, it is inevitable that we will see other states take on this issue. The answer is simple: If states put language in their statutes barring medical marijuana under workers’ compensation, that solves the problem. Some medical marijuana states have already indicated that insurance is not responsible for covering medical marijuana. State legislators and regulators can stop this before it becomes a legitimate problem.

The bigger issue is employment practice concerns. Many expect the federal government to reclassify marijuana as a Schedule II drug, possibly by the end of the current administration. Once that happens, it will no longer be an “illegal” drug. Employers are going to need to adapt and drug test for impairment rather than just testing for the presence of the drug. Standards will need to be developed on what constitutes “impairment” with marijuana. The science needs to catch up with the realities of this new normal when it comes to marijuana in the U.S.

  20. On-Demand Economy

The on-demand economy is creating new concerns about what constitutes an employee/employer relationship. Is an Uber driver an employee of Uber or an independent contractor? What about a repair person you hire through Angie’s List?

While the on-demand economy is a newer dynamic, determining what constitutes an independent contractor vs. an employee has been a challenge for the workers’ compensation industry for many years. In July 2015, the Department of Labor issued an interpretive memorandum indicating that the DOL feels “most workers classified as independent contractors are employees under the Fair Labor Standards Act’s broad definitions.”

So perhaps the issue to watch here is not so much the on-demand economy, but instead whether we are going to see the Department of Labor push for fewer and fewer workers to be classified as independent contractors. This could have a significant impact on many industries as well as significantly changing the business model of services like Uber and Lyft.

Why Credit Monitoring Doesn’t Work

Chances are you have received a letter stating that your personal data may have been compromised. Perhaps you were one of the 80 million people with an Anthem health insurance plan. Maybe you were one of the 21 million current or former employees of the federal government, or you could have been one of the 40 million who shopped at Target. There are countless examples where organizations failed to protect sensitive data and then were required to notify the affected individuals.

These notifications typically reveal how the breach happened, what steps are being taken to prevent another incident and what a company is doing to protect you from identity theft. Most organizations offer some form of credit monitoring and ID theft remediation services. Some states are beginning to mandate at least one year of credit monitoring under certain circumstances.

The Limits of Credit Monitoring

Offering credit monitoring seems to be a necessary post-breach strategy, and the very least a company would do. However, a deeper dive into what it does – and what it does not do – is long overdue.

Credit monitoring immediately notifies an individual that an attempt was made to obtain some form of credit in her name. Credit restoration services are usually offered when identity theft occurs. This is a valuable service that restores a victim’s good credit, saves time and alleviates stress.

Credit monitoring does not prevent identity theft. The only way to prevent an identity thief from accessing a victim’s credit is to either place a 90-day fraud alert on a credit file or freeze credit lines.

  • Fraud alerts require potential creditors to contact individuals before opening lines of credit. To activate a fraud alert, individuals are required to notify one of the three bureaus (Equifax, Experian or Trans Union) and to repeat the process every 90 days to maintain the fraud alert status.
  • Freezing credit can be accomplished by contacting all three credit bureaus and requires each one to place a freeze on an individual’s credit file. Each bureau provides a PIN that can be used to lift the freeze later. There may be a nominal fee based on state of residence, which typically ranges from $5 to $15. Some states may require an additional fee to lift the freeze. A credit freeze may cost less than credit monitoring and identity theft restoration services. In fact, it has been widely reported that the Office of Personnel Management spent $133 million on three years of credit monitoring for the 21 million individuals affected by its 2015 data breach.
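
For a rough sense of scale, here is a back-of-the-envelope comparison using the OPM figure above and the per-bureau freeze fees just cited:

```python
# Back-of-the-envelope comparison using the figures cited above: OPM
# reportedly spent $133 million on three years of monitoring for 21
# million people; freeze fees typically run $5 to $15 per bureau.
opm_total = 133_000_000
affected = 21_000_000
print(f"Monitoring: ${opm_total / affected:.2f} per person for 3 years")

# A freeze must be placed at each of the three bureaus; fees vary by state.
for fee in (5, 15):
    print(f"Freeze at ${fee}/bureau: ${fee * 3} one-time (plus any lift fees)")
```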

Legal Ramifications of Offering Credit Monitoring

Offering credit monitoring can cost an organization even more than the dollars spent. In Remijas v. Neiman Marcus, the plaintiffs alleged that 350,000 payment cards were affected when hackers gained access to Neiman Marcus networks. Even though only a small fraction of the cards were affected by fraudulent activity, the Seventh Circuit Court of Appeals granted the plaintiffs legal standing, allowing the class action to proceed, because card holders had a legitimate fear of future identity theft. Because Neiman Marcus offered credit monitoring to the card holders after the breach, the court concluded that the company was conceding that future identity theft was entirely possible.

The state regulatory environment, coupled with recent appellate court decisions, leaves organizations in a difficult position. States are beginning to require credit monitoring following a data breach. Organizations that do not offer credit monitoring face scrutiny by attorneys general, potential fines for non-compliance and a public relations fiasco. Yet those that offer credit monitoring will incur significant costs and, as evidenced in Remijas v. Neiman Marcus, may actually hurt their defense in a class action lawsuit.

A Better Way to Protect Your Identity

A more rational approach to identity protection is needed. Organizations and state regulators reacting to data breaches involving sensitive data elements need to address ways to prevent identity theft. As of this writing, organizations cannot legally freeze a consumer’s credit on his behalf and have little means to prevent identity theft for him. However, with the full support of state officials, a more efficient process to freeze credit could better protect identities and mitigate costs.

Failing ACA Co-Ops? Not a Surprise

During the congressional deliberations that led to the Patient Protection and Affordable Care Act, strong support emerged for a government-run health plan to compete with private carriers. The “public option” failed but did create political space for the concept of consumer-owned, non-profit, health insurance co-operatives. The co-ops found their way into the ACA, but now, as a group, are in big trouble. Eight of the nation’s 23 health co-ops are going out of business, and more may follow.

The Case for Health Co-ops

Then-Sen. Kent Conrad championed health co-operatives during the healthcare reform debate. He saw them as health plans owned by local residents and businesses, modeled after the electrical co-ops in his home state of North Dakota. They would receive start-up money from the federal government but otherwise would compete against private carriers on a level playing field.

Co-op advocates hoped they would bring competition to markets dominated by too few private carriers. Advocates also expected these non-profits to provide individual consumers and small businesses additional affordable health insurance choices. Had they focused on the first goal, health co-ops might be in a better place today. Unfortunately, too often they sprang up in states where competition was already strong.

The ACA set up a roughly $6 billion fund to help get “consumer-operated and -oriented plans” up and running. The long-term financial viability of health co-ops was to flow from premiums paid by those they insured and from the “Three Rs,” programs established by the ACA “to assist insurers through the transition period, and to create a stable, competitive and fair market for health insurance.” Specifically, these were the ACA’s reinsurance, risk adjustment and risk corridor programs.

It’s Tough Being New

A (not so) funny thing happened on the way to the health co-ops’ solvency. Starting a health insurance plan is difficult, and failure is always an option. (I know. I was executive vice president at start-up SeeChange Health, an insurer that failed last year.) New carriers, by definition, have no track record and no data concerning pricing, provider reimbursements, claim trends and the like. The first foray into the market is an educated guess. Worse, new plans usually have a small membership base. This provides little cushion against the impact of miscalculations or unwelcome surprises.

A new health plan launching in the midst of the industry’s transition to a post-ACA world faced exponentially greater difficulties. In 2013, when most of the health co-ops launched, no one knew what the market would look like in 2014. Exchanges, metallic plan requirements, guarantee issue of individual coverage and more were all happening at once. Were employers going to stop offering coverage? How were competitors going to price their offerings? Would provider networks be broad or narrow? The questions were endless; the answers at the time scarce. In a speech during the lead-up to 2014, I described the situation as carriers “playing chicken on tractors without headlights in a dark cave while blindfolded — at night.”

This is the world into which ACA-seeded health co-ops were born. That they now face serious financial problems should surprise no one. They saw themselves as “low-cost alternatives” in their markets. If they were going to err in setting prices, it was not going to be by setting premiums too high.

Besides, if they priced too low, they were protected by the risk corridor program. As described by the Centers for Medicare & Medicaid Services, which manages the ACA’s financial safety net, the “risk corridors program provides payments to insurance companies depending on how closely the premiums they charge cover their consumers’ medical costs. Issuers whose premiums exceed claims and other costs by more than a certain amount pay into the program, and insurers whose claims exceed premiums by a certain amount receive payments for their shortfall.”

The majority of the nation’s health co-operatives saw claims exceeding premiums. With the co-ops on the “shortfall” side of the equation, the government was to come to their rescue like the proverbial cavalry with the money needed to keep them going.

Except the cavalry is a no-show. Too few carriers had too little claims surplus to cover the too large losses of too many health plans. Only 12.6 cents on the dollar due under the risk corridor program is expected to make it to plans on the shortfall side of the equation, the Centers for Medicare and Medicaid Services (CMS) announced on Oct. 1.
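
As a sketch of how that proration works, the toy calculation below mirrors the mechanism CMS describes; the industry-wide figures are illustrative ones chosen to be consistent with the announced 12.6-cents-on-the-dollar rate, not numbers reported in this article.

```python
# Toy model of the risk corridor shortfall. The industry-wide figures
# are illustrative, chosen to match the 12.6-cents-on-the-dollar
# payout rate CMS announced; they are not from this article.
paid_in = 362_000_000      # illustrative: collections from profitable plans
owed_out = 2_873_000_000   # illustrative: claims from shortfall-side plans

rate = paid_in / owed_out  # ~0.126, i.e. 12.6 cents on the dollar
print(f"Payout rate: {rate:.1%}")

# Applied to a plan owed $16.2 million (the Colorado HealthOP figure
# discussed below), the prorated payment is roughly $2 million:
print(f"Expected payment: ${16_200_000 * rate:,.0f}")
```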

The Math Always Wins

Several of the health co-ops were in financial trouble before this news. Losing millions of dollars in expected relief doomed more. As of today, the dollars and cents have failed to add up for CoOportunity Health (the co-op in Iowa and Nebraska), the Kentucky Health Cooperative (which also served West Virginians), Louisiana Health Cooperative, Health Republic Insurance of New York, Health Republic Insurance of Oregon, the Nevada Health CO-OP, Community Health Alliance (a Tennessee co-op) and the Colorado HealthOP. To use the Colorado situation as an example, the Colorado HealthOP needed $16.2 million; it expects to receive $2 million.

Do these failures mean health insurance co-ops are a bad idea? Not necessarily. What they point to is that health co-ops may have been better off focusing on bringing competition to markets where there were too few plans, not joining a pack where there were enough. Even then, the collapse of the risk corridor program may have doomed them, but they’d have stood a better chance.

As noted above, Sen. Conrad modeled the health co-operatives on electrical co-ops found in some rural communities. Where too few customers make it unprofitable for traditional utilities to invest in the infrastructure required, consumers, seeking electricity, not profits, come together to extend the grid.

Those implementing the ACA should have followed this model. Instead of funding 23 health co-operatives, the administration should have offered seed money to fewer co-ops located where they would be the alternative in the market, not just another one. This may have allowed them to extend financial support long enough to at least partially offset the risk corridor shortfall. Then, just maybe, we could have avoided the “surprise” of failing health co-ops.

Home Insurers Ignore Opportunity in Flood

Recently, Munich Re announced its plan to step into the U.S. inland flood market to offer a competitive flood coverage endorsement for participating carriers. This is the second notable entry of international capital into an arena dominated by the federal government.

Munich Re is known as a conservative giant of international reinsurance, so it might seem odd that it is joining the National Flood Insurance Program (NFIP) in covering U.S. flood. A quick look at the opportunity shows why the plan makes sense.

U.S. inland flood insurance is an untapped source of non-correlated premium unlike any other in the world. The market is dominated by an incumbent market maker that is in trouble because it offers an inferior product that cannot price risk correctly (this paper nicely summarizes the problems at NFIP). So, here is what the new entrants are seeing:

  1. Contrary to industry beliefs, flood is insurable. The tools are present to accurately segment risk.
  2. Carriers offering flood capacity will differentiate themselves from competitors. This will give them a leg up on the competition in a market that is highly homogeneous. Carriers not offering flood will likely disappear.
  3. The market is massive, with potentially 130 million homes and tens of billions of dollars at stake.

Let’s go into details.

Capital Into a Ripe Market

The U.S. Flood Market

As most readers of Insurance Thought Leadership already know, many carriers have flood on the drawing board right now. The Munich Re announcement was not really a surprise. We all know there will be more announcements coming soon.

Let’s summarize the market reasons for the groundswell of private insurance in U.S. flood.

The most obvious characteristic of the market is the size. For the sake of this post, we’ll just consider homes and homeowner policies. Whether one considers the number of NFIP policies in force as the market size (about 5.4 million policies in 2014), the number of insurable buildings (133 million homes) or something in between, there is clearly a big market. And the NFIP presents itself as the ideal competitor – big, with a mandate not necessarily compatible with business results.

So, there is no doubt that a market exists. Can it be served? Yes, because the risk can be rated and segmented.

Low-Risk Flood Hazard

To be clear: A low-risk flood property has a profile with losses estimated to be low-frequency and low-severity. In other words, flood events would rarely happen and would not cause much damage if they did. For many readers, joining the words “low-risk” and “flood” together is an oxymoron. We strongly disagree. Common sense and technology can both illustrate how flood risk can be segmented efficiently and effectively into risk categories that include “low.”

Let’s start with common sense. Flood loss occurs because of three possible types of flood: coastal surge, fluvial/river or rain-induced/pluvial (here is more information on the three types of flood). The vast majority of U.S. homeowners are not close enough to coastal or river flooding to have a loss exposure (here is a blog post that explores the distribution of NFIP policies). Thus, the majority of American homeowners are only exposed to excess surface water getting into the home. We’d be willing to wager that most of the ITL readership does not purchase flood insurance, simply because they don’t need it. That is the common-sense way of thinking of low-risk flood exposure.

How does the technology handle this?

There is software available now that can be used to identify low-risk flood locations (as defined by each carrier), supported by the necessary geospatial data and analytics. Historically, this was not the case, but advances in remote sensing and computing capacity (as we explored here) make it entirely reasonable now, with location-based flood risk assessment the norm in several European countries. Distance to water, elevations, localized topographical analyses and flood models can all be used to assess flood risk with a high degree of confidence. In fact, claims are now best used as a handy ingredient in a flood score rather than as a prime indicator of flood risk.
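
To make the idea concrete, here is a minimal sketch of a location-based flood score of the kind described above; the inputs, weights and thresholds are invented for illustration and are not InsitePro’s or any vendor’s actual model.

```python
# Minimal sketch of a location-based flood score; the thresholds and
# scoring weights are invented and not any vendor's actual model.
from dataclasses import dataclass

@dataclass
class Location:
    dist_to_coast_km: float         # distance to coastal surge exposure
    dist_to_river_km: float         # distance to fluvial (river) exposure
    elevation_above_local_m: float  # elevation relative to surroundings

def flood_score(loc: Location) -> str:
    """Classify flood risk from geospatial inputs (illustrative only)."""
    score = 0
    if loc.dist_to_coast_km < 5:
        score += 2   # coastal surge exposure
    if loc.dist_to_river_km < 1:
        score += 2   # river flooding exposure
    if loc.elevation_above_local_m < 0:
        score += 1   # sits below surroundings: surface-water pooling risk
    return {0: "low", 1: "moderate"}.get(score, "high")

# A home far from coast and river, elevated above its surroundings:
print(flood_score(Location(60.0, 12.0, 0.8)))  # "low"
```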

How to Deliver Flood Insurance in the U.S.

Deliver Flood Insurance to What Kind of Market?

Readers must be wondering about the size of the market, because we offered two distinctly different possibilities above: Is it about 5 million to 10 million possible policies, or 130 million? The difference is huge; it is the difference between a niche market and a mass market.

The approach taken by flood insurers thus far is for a niche market. The current approach probably has long-term viability in high-risk flood, and the early movers that are now underwriting there are establishing solid market shares, cherry-picking from the NFIP portfolio.

On a large scale, though, the insurance industry’s approach needs to be for a mass market.

Here is a case study describing the mass market opportunity:

  1. The property is in Orange County, CA, where the climate is temperate and dry, almost borderline desert. El Niño might be coming, but that risk can be built in.
  2. Using InsitePro (see image below), you can see that the property is miles and miles away from any coastal areas, rivers or streams. More importantly, the home is elevated against its surroundings, so water flows away from the property, which is deemed low-risk.
  3. The area has no history of flooding, and this particular community has one of the most modern drainage systems in the state.

[Image: Screenshot of InsitePro, courtesy of Intermap Technologies; FEMA zones in red and orange.]

  4. Using Google Maps street view, we can estimate that the property is two to three feet above street level, which adds another layer of safety. Also, this view confirms that the area is essentially flat, so the property is not at the bottom of a bathtub.
  5. And, as with most homes in California, this property has no basement, so if water were to get into the house it would need to keep rising to cause further damage.

To an underwriter, it should be clear that this home has minimal risk from flooding. As a sanity check, she could compare losses from flood for this property (and properties like it in the community) to other hazards such as fire, earthquake, wind, lightning, theft, vandalism or internal water damage. How do they compare? What are the patterns?

For this specific home, the NFIP premium for flood coverage is $430, which provides $250,000 in building limit and $100,000 in contents protection. The price includes the $25 NFIP surcharge.

This is a mind-boggling amount of premium for the risk involved. Consider that for roughly the same price you can get a full homeowners policy that covers all of these perils: fire, earthquake, wind, lightning, theft AND MORE! It is crazy to equate the risk of flood to the risk of all those standard homeowner perils, combined! We provided this example to show that, even without all the mapping and software tools available for pricing, we can quickly conclude that NFIP pricing for these low-risk policies is absurdly high. Whatever the price “should” be for these types of risks, it MUST be a fraction of the price of a traditional homeowner’s policy. Don’t believe that either? Consider that Lloyd’s is marketing its low-risk flood policies as “inexpensive,” and brokers tell us privately that many base-level policies will be 50% to 75% less expensive than NFIP equivalents.
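
As a quick back-of-the-envelope check, using only the NFIP quote above and the discount range brokers cite:

```python
# Back-of-the-envelope check using the figures above: a $430 NFIP
# premium buys $250,000 building + $100,000 contents coverage, and
# brokers cite 50% to 75% discounts for private low-risk policies.
nfip_premium = 430
coverage = 250_000 + 100_000

rate = nfip_premium / (coverage / 1000)
print(f"NFIP implied rate: ${rate:.2f} per $1,000 of coverage")

for discount in (0.50, 0.75):
    print(f"At {discount:.0%} off: about ${nfip_premium * (1 - discount):.0f}")
```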

The news gets even better. There are tens of millions of houses like this case example, with technology now available to quickly find them. These risks aren’t the exception; these risks can be a market in their own right. Let the mental arithmetic commence!

Summary: Differentiate or Die!

The Unwanted Commodity

Most consumers of personal lines products don’t have the time or the ability to evaluate an insurance policy to determine whether it provides good value. Regrettably, most agents and brokers don’t have the time to help them either. So, when shopping for a product that they hope they will never use and that they are incapable of truly understanding, consumers will focus on the one thing they do understand: price.

Competing on price becomes a race to the bottom (yay! – another soft market) and to death. But there is an opportunity here – carriers that compete on personal lines/homeowner insurance with benefits that are immediately apparent (like value, flexibility, service, conditions and, inevitably, price) have a rare chance to stake out significant new business, or to solidify their own share.

The flood insurance market is real, and it’s big enough for carriers to establish a healthy and competitive environment where service and quality will stand out, along with price. Carriers that would like to avoid dinosaur status can remain relevant and competitive, with no departure from insurance fundamentals – rate a risk, price it and sell it. It’s obvious, right?

Which carriers will be decisive and bold and begin to differentiate by offering flood capacity? Which carriers will evolve to keep pace or even lead the pack into the next generation of homeowner products? More importantly, which of you will lose market share and cease to exist in 10 years because you didn’t know what innovation looks like?

Reducing Losses From Extreme Events

The number of presidential disaster declarations in the U.S. has dramatically increased over the past 50 years. Figure 1 depicts the total number of presidential disaster declarations and those that were triggered by flooding events (inland flood and storm surge from hurricanes). This pattern highlights the need to encourage those at risk to invest in loss reduction measures prior to a disaster rather than waiting until after the event occurs. Insurance coupled with other risk management programs can play an important role, as it is designed to spread and reduce risk. Each policyholder pays a relatively small premium to an insurer, which can then cover the large losses suffered by a few. Ideally, those who invest in loss prevention measures are rewarded by having the price of their coverage reduced to reflect their lower expected claims payments.

[Figure 1: Total number of presidential disaster declarations and those triggered by flooding events.]
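
In the simplest terms, a risk-based premium is the expected claim cost plus a loading, with mitigation rewarded through a lower expected loss. Here is a minimal sketch with invented numbers:

```python
# Minimal sketch of risk-based pricing with a mitigation discount;
# the probability, loss amounts and loading are invented.
def premium(loss_prob: float, expected_loss: float, loading: float = 0.3) -> float:
    """Expected annual claims plus an expense/profit loading."""
    return loss_prob * expected_loss * (1 + loading)

base = premium(0.01, 50_000)       # a 1-in-100-year event causing a $50,000 loss
mitigated = premium(0.01, 25_000)  # hypothetical: mitigation halves the loss
print(f"Base premium: ${base:.0f}; after mitigation: ${mitigated:.0f}")
```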

Insurance against low-probability, high-consequence (LP-HC) events presents a special challenge for individuals at risk, insurers and regulators, for good reason. Decision-makers have limited experience with these events, and even experts are likely to conclude that there is considerable uncertainty as to the probability of these events occurring and their resulting consequences. As a result, insurance decisions often differ from those recommended by normative models of choice.

Consider the following examples:

Example 1: Most homeowners in flood-prone areas do not voluntarily purchase flood insurance, even when it is highly subsidized, until after they suffer flood damage. If they then do not experience losses in the next few years, they are likely to cancel their policy. Demand for earthquake insurance in California increased significantly after the Northridge earthquake of 1994, the last severe quake in the state; today relatively few homeowners have coverage.

Example 2: Prior to the terrorist attacks of Sept. 11, 2001, actuaries and underwriters did not price the risk associated with terrorism, nor did they exclude this coverage from their standard commercial policies. Their failure to examine the potential losses from a terrorist attack was surprising given the truck bomb that al Qaeda detonated below the North Tower of the World Trade Center in 1993, the 1995 Oklahoma City bombing and other terrorist-related events throughout the world. Following 9/11, most insurance companies refused to offer coverage against terrorism, considering it to be an uninsurable risk.

Example 3: State insurance regulators sometimes have restricted insurers from setting premiums that reflect risk, in part to address equity and fairness issues for those in need of homeowners’ insurance. For example, following Hurricane Andrew in 1992, the Florida insurance commission did not allow insurers to charge risk-based rates and restricted them from canceling existing homeowners’ policies. After the severe hurricanes of 2004 and 2005 in Florida, the state-funded company Citizens Property Insurance Corp., which had been the insurer of last resort, offered premiums in high-risk areas at subsidized rates, thus undercutting the private market. Today, Citizens is the largest provider of residential wind coverage in Florida.

The three examples indicate that insurance today is not effectively meeting two of its most important objectives:

  • providing information to those residing in hazard-prone areas as to the nature of the risks they face;
  • giving incentives to those at risk to undertake loss reduction measures prior to a disaster.

The insurance industry played both of these roles very effectively when the factory mutual companies were founded in the 19th century, as detailed in Box 1. This paper proposes a strategy for insurance to take steps to return to its roots. The examples and empirical data presented here are taken primarily from experience in the U.S.; however, the concepts have relevance to any country that uses insurance to protect its residents and businesses against potentially large losses.

The next three sections explore the rationale for the actions taken by each of the interested parties illustrated in the above three examples by focusing on their decision processes prior to and after a disaster. I then propose two guiding principles for insurance and outline a long-term strategy with roles for the private and public sectors if these principles are implemented. Reforming the National Flood Insurance Program (NFIP) to encourage mitigation for reducing future losses while providing financial protection to those at risk is a target of opportunity that should be seriously considered. The concluding section suggests directions for future studies and research so that insurance can play a central role in reducing losses from extreme events.


DECISION PROCESSES

Intuitive and deliberative thinking

A large body of cognitive psychology and behavioral decision research over the past 30 years has revealed that individuals and organizations often make decisions under conditions of risk and uncertainty by combining intuitive thinking with deliberative thinking. In his thought-provoking book Thinking, Fast and Slow, Nobel laureate Daniel Kahneman has characterized the differences between these two modes of thinking. Intuitive thinking (System 1) operates automatically and quickly with little or no effort and no voluntary control. It is often guided by emotional reactions and simple rules of thumb that have been acquired by personal experience. Deliberative thinking (System 2) allocates attention to intentional mental activities where individuals undertake trade-offs and recognize relevant interdependencies and the need for coordination.

Choices are normally made by combining these two modes of thinking and generally result in good decisions when individuals have considerable experience as a basis for their actions. With respect to LP-HC events, however, there is a tendency to either ignore a potential disaster or overreact to a recent one, so that decisions may not reflect expert risk assessments. For example, after a disaster, individuals are likely to want to purchase insurance even at high prices, while insurers often consider restricting coverage or even withdrawing from the market. In these situations, both parties focus on the losses from a worst-case scenario without adequately reflecting on the likelihood of this event occurring in the future.

Impact of intuitive thinking on consumer behavior

Empirical studies have revealed that many individuals engage in intuitive thinking and focus on short-run goals when dealing with unfamiliar LP-HC risks. More specifically, individuals often exhibit systematic biases such as the availability heuristic, where the judged likelihood of an event depends on its salience and memorability. There is thus a tendency to ignore rare risks until after a catastrophe occurs. This is a principal reason why it is common for individuals at risk to purchase insurance only after a disaster.

Purchase of flood insurance

A study of the risk perception of homeowners in New York City revealed that they underestimate the likelihood of water damage from hurricanes. This may explain why only 20% of those who suffered damage from Hurricane Sandy had purchased flood insurance before the storm occurred.

An in-depth analysis of the entire portfolio of the NFIP in the U.S. revealed that the median tenure of flood insurance was between two and four years, while the average length of time in a residence was seven years. For example, of the 841,000 new policies bought in 2001, only 73% were still in force one year later. After two years, only 49% were in force, and eight years later only 20%. Similar patterns were found for each of the other years in which a flood insurance policy was first purchased.
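
Translating those retention percentages into policy counts for the 2001 cohort illustrates the scale of the drop-off:

```python
# Retention of the 2001 cohort cited above: 841,000 new policies,
# with 73%, 49% and 20% still in force after 1, 2 and 8 years.
cohort = 841_000
for years, share in [(1, 0.73), (2, 0.49), (8, 0.20)]:
    print(f"After {years} year(s): {int(cohort * share):,} policies in force")
```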

One reason that individuals cancel their policies is that they view insurance as an investment rather than a protective activity. Many purchase coverage after experiencing a loss from a disaster but feel they wasted their premiums if they have not made a claim over the next few years. They perceive the likelihood of a disaster as so low that they do not pay attention to its potential consequences and conclude they do not need insurance. A normative model of choice, such as expected utility theory, implies that risk-averse consumers should value insurance, as it protects them against large losses relative to their wealth. Individuals should celebrate not having suffered a loss over a period rather than canceling their policy because they have not made a claim. A challenge facing insurers is how to convince their policyholders that the best return on an insurance policy is no return at all.
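
To make the normative argument concrete, here is a toy expected-utility calculation; the wealth, loss, probability and premium figures are invented for illustration.

```python
# Toy expected-utility comparison with invented figures: a log-utility
# (risk-averse) homeowner with $100,000 in wealth faces a 1% annual
# chance of a $50,000 loss and is offered full coverage for $650,
# a 30% loading over the $500 expected loss.
from math import log

wealth, loss, p, premium = 100_000, 50_000, 0.01, 650

eu_uninsured = p * log(wealth - loss) + (1 - p) * log(wealth)
eu_insured = log(wealth - premium)  # certain outcome: loss fully covered

# Despite the loading, the insured (certain) outcome has higher expected
# utility, so the risk-averse homeowner should keep the policy even in
# years with no claim.
print(f"Uninsured EU: {eu_uninsured:.6f}")
print(f"Insured EU:   {eu_insured:.6f}")
```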

Purchase of earthquake insurance

Another example that reveals how the availability bias affects the choice process is the decision of California homeowners on whether to purchase earthquake insurance. Surveys of owner-occupied homes in California counties affected by the 1989 Loma Prieta earthquake showed a significant increase in the purchase of coverage. Just prior to the disaster, only 22% of the homes had earthquake insurance. Four years later, 37% had purchased earthquake insurance—a 64% increase.

Similarly, the Northridge earthquake of 1994 led to a significant demand for earthquake insurance. For example, more than two-thirds of the homeowners surveyed in Cupertino, Calif., had purchased earthquake insurance in 1995. There have been no severe earthquakes in California since Northridge, and only 10% of those in seismic areas of the state have earthquake insurance today. If a severe quake hits San Francisco in the near future, the damage could be as high as $200 billion, and it is likely that most homeowners suffering damage will be financially unprotected.

Impact of intuitive thinking on insurer behavior

Two factors play an important role in insurers’ behavior with respect to pricing and coverage decisions: the role of experience and the role of ambiguous risk. We examine each of these features in turn.

Role of experience on supply of insurance

When insurers have experienced significant losses from a particular extreme event, there is a tendency for them to focus on worst-case scenarios without adequately considering their likelihood. In some instances, because of extreme losses from hurricanes, floods, earthquakes and terrorist attacks, insurers determined that they could not continue to market coverage in the U.S. without involvement by the public sector. In these situations, either the state or federal government stepped in to fill the void.

Hurricane wind-related losses

Following catastrophic wind losses from hurricanes in Florida, insurers felt they had to significantly raise their homeowners’ premiums. Rather than using catastrophe models to justify rate increases, insurers pointed to their large losses following Hurricane Andrew in 1992 as a basis for demanding higher premiums, without considering the likelihood of another disaster of this magnitude. The insurers were denied these rate increases and reduced their supply of new homeowners’ policies.

By the beginning of 2004, most insurers viewed their Florida rates as being close to adequate except in the highest-risk areas. However, after four major hurricanes battered Florida in 2004 and two more in 2005, many insurers again began to file for major premium increases, and many of these filings were denied or approved at lower levels by the regulators. In 2007, the Florida Office of Insurance Regulation (FLOIR) took a position against any further rate increases for homeowners’ insurers and denied requests by all insurers. In December 2008, State Farm asked for a 67% increase in premiums; the request was denied by the FLOIR, leading the insurer to announce that it would no longer offer homeowners’ coverage in Florida. Five years later (March 2014), State Farm announced that it would again begin offering homeowners and renters insurance in the state on a limited basis.

Flood insurance

Following the severe Mississippi floods of 1927 and continuing through the 1960s, there was a widespread belief among private insurance companies that the flood peril was uninsurable by the private sector for several reasons: Adverse selection would be a problem because only particular areas are subject to the risk; risk-based premiums would be so high that no one would be willing to pay them; and flood losses could be so catastrophic as to cause insolvencies or have a significant impact on surplus. This lack of coverage by the private sector triggered significant federal disaster relief to victims of Hurricane Betsy in 1965 and led to the creation of the NFIP in 1968.

The NFIP subsidized premiums on existing structures in flood-prone areas to maintain property values; new construction was charged premiums reflecting risk. Even though premiums on existing property were highly subsidized, relatively few homeowners purchased coverage, leading the U.S. Congress to pass the Flood Disaster Protection Act (FDPA) of 1973. This bill required all properties receiving federally backed mortgages to purchase flood insurance. The NFIP has grown extensively in the past 40 years; as of January 2015, it had sold more than 5.2 million policies in 22,000 communities and provided almost $1.3 trillion in coverage. Insurance tends to be concentrated in coastal states, with Florida and Texas alone accounting for nearly 40% of the entire program (in number of policies, premiums and coverage). After making claims payments from Hurricane Katrina in 2005, the NFIP found itself $18 billion in debt, and its borrowing authority had to be increased from $1.5 billion to $20.78 billion. To date, the program has borrowed nearly $27 billion from the U.S. Treasury to meet its claims obligations in the aftermath of the 2004, 2005, 2008 and 2012 hurricane seasons.

In July 2012 (three months before Hurricane Sandy), Congress passed and the president signed the Biggert–Waters Flood Insurance Reform Act of 2012 (BW12), which applied the tools of risk management to the increasingly frequent threat of flooding. Among its many provisions, the legislation required that the NFIP produce updated floodplain maps, strengthen local building code enforcement, remove insurance subsidies for certain properties and move toward charging premiums that reflect flood risk.

Soon after becoming law, BW12 faced significant challenges from some homeowners who had reason to believe that the new flood maps overestimated their risk. These residents and other homeowners in flood-prone areas felt that their proposed premium increases were unjustified and that they could not afford them. In March 2014, Congress passed the Homeowner Flood Insurance Affordability Act (HFIAA14), which required the Federal Emergency Management Agency (FEMA), which operates the NFIP, to draft an affordability framework based on the recommendations of a National Academy of Sciences study on the affordability of flood insurance premiums.

Earthquake insurance

Until the San Fernando earthquake of 1971, few homeowners and businesses in California had purchased earthquake insurance even though coverage had been available since 1916. In 1985, the California legislature passed a law requiring insurers writing homeowners’ policies on one- to four-family units to offer earthquake insurance to these residents. The owners did not have to buy this coverage; the insurers only had to offer it. At that time and still today, banks and financial institutions do not require earthquake insurance as a condition for a mortgage.

The Northridge earthquake of January 1994 caused insured losses of $20.6 billion, primarily to commercial structures. Demand for earthquake insurance by homeowners rose in the three years that followed, increasing 19% in 1994, 20% in 1995 and 27% in 1996, leading private insurance companies in California to re-evaluate their seismic risk exposures. Insurers concluded that they would not sell any more policies on residential property, as they were concerned about the impact of another catastrophic earthquake on their balance sheets. The California Insurance Department surveyed insurers and found that as many as 90% of them had either stopped selling new homeowners’ policies or had placed restrictions on them. This led to the formation of a state-run earthquake insurance company, the California Earthquake Authority (CEA), in 1996.

Terrorism insurance

Following the terrorist attacks of 9/11, most insurers discontinued offering terrorism coverage given the refusal of global reinsurers to provide them with protection against severe losses from another attack. The few that did provide insurance charged extremely high premiums to protect themselves against a serious loss. Prior to 9/11, Chicago’s O’Hare Airport had $750 million of terrorism insurance coverage at an annual premium of $125,000. After the terrorist attacks, insurers offered the airport only $150 million of coverage at an annual premium of $6.9 million. This new premium, if actuarially fair, implies that the annual likelihood of a terrorist attack on O’Hare Airport is approximately 1 in 22 ($6.9 million/$150 million), an extremely high probability. The airport was forced to purchase this policy because it could not operate without coverage.
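
To see where the 1-in-22 figure comes from: if a premium is actuarially fair, it equals the annual loss probability times the coverage amount, so the implied probability is simply the ratio of the two. A minimal sketch (the function name is ours; the figures are those quoted above):

```python
def implied_annual_probability(premium: float, coverage: float) -> float:
    """If premium = probability x coverage (actuarially fair pricing),
    the implied annual probability is premium / coverage."""
    return premium / coverage

# O'Hare's post-9/11 quote: $6.9 million premium for $150 million of coverage
p = implied_annual_probability(6_900_000, 150_000_000)
print(f"Implied annual probability: {p:.3f}, about 1 in {round(1 / p)}")
# -> 0.046, about 1 in 22
```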

Concern about high premiums and limited supply of coverage led Congress to pass the Terrorism Risk Insurance Act (TRIA) at the end of 2002, which provided a federal backstop of up to $100 billion for private insurance claims related to terrorism. The act was extended in 2005 for two years, in 2007 for seven years and in January 2015 for another six years, with some modification of its provisions each time the legislation was renewed.

In return for federal protection against large losses, TRIA requires that all U.S. primary insurance companies offer coverage against terrorism risk on the same terms and conditions as the other perils covered by their commercial insurance policies. Firms are not required to purchase this coverage unless mandated by state law, as is normally the case for workers’ compensation insurance. TRIA also established a risk-sharing mechanism among the insurance industry, the federal government and all commercial policyholders in the U.S. for covering insured losses from future terrorist attacks.

Role of ambiguity

After 9/11, insurers determined that they could not offer terrorism insurance because the uncertainties surrounding the likelihood and consequences of another terrorist attack were so significant that the risk was uninsurable by the private sector alone. Because terrorists are likely to design their strategy as a function of their own resources and their knowledge of the vulnerability of the entity they want to attack, the nature of the risk is continuously evolving. This dynamic uncertainty makes the likelihood of future terrorist events extremely difficult to estimate.

Empirical evidence based on surveys of underwriters reveals that insurers will set higher premiums when faced with ambiguous probabilities and uncertain losses than for a well-specified risk. Underwriters of primary insurance companies and reinsurance firms were surveyed about the prices they would charge to insure a factory against property damage from a severe earthquake, both when probabilities and losses were well specified and when they were ambiguous. The premiums quoted for the ambiguous case were 1.43 to 1.77 times those quoted for the precisely specified risk.

A recent web-based experiment provided actuaries and underwriters in insurance companies with scenarios in which they seek advice and request probability forecasts from different groups of experts and then must determine what price to charge for coverage against flood damage and wind damage from hurricanes. The average premiums that insurers would charge were approximately 30% higher for coverage against either of these risks when the probability of damage was ambiguous and the experts were conflicted over their estimates than when the probability was well specified. The data also reveal that insurers would likely charge more under conflict ambiguity (experts disagree on point estimates) than under imprecise ambiguity (experts agree on a range of probabilities, recognizing that they cannot estimate the probability of the event precisely).

Impact of intuitive thinking on regulator behavior

Rate regulation and restrictions on coverage have had more impact on property insurance than on any other line of coverage, particularly in states that are subject to potentially catastrophic losses from natural disasters.

Homeowners’ insurance in Florida

Following Hurricane Andrew in August 1992, Florida regulators imposed a moratorium on the cancellation and nonrenewal of homeowners’ insurance policies during the coming hurricane season for insurers that wanted to continue to do any business in Florida. In November 1993, the state legislature enacted a bill providing that these insurers could not cancel more than 10% of their homeowners’ policies in any county in Florida in one year, nor more than 5% of their property owners’ policies statewide, in each of the next three years. During the 1996 legislative session, this phase-out provision was extended until June 1, 1999.

Early in 2007, Florida enacted legislation that sought to increase regulatory control over rates and roll them back to reflect an expansion of the reinsurance coverage provided by the Florida Hurricane Catastrophe Fund (FHCF). Insurers were required to reduce their rates to reflect this expanded coverage, which was priced below private reinsurance market rates. The requirement applied to every licensed insurer, even one that did not purchase reinsurance from the FHCF.

Citizens Property Insurance Corp., Florida’s state-funded insurer, was formed in 2002 and has captured a significant share of the residential property market in recent years. Consumers are allowed to purchase a policy from Citizens if a comparable policy would cost 15% more in the private market. The most serious defect of such a system is that it encourages individuals to locate in high-hazard areas, putting more property at risk than would occur under a market system; this is the principal reason not to introduce such a system in the first place. Since 2005, there have been no hurricanes causing severe damage in Florida, but should a serious disaster deplete Citizens’ reserves, the additional claims are likely to be paid from assessments (taxes) charged to all homeowners in Florida.

Earthquake insurance in California

As pointed out earlier, when insurers refused to continue to offer earthquake insurance in California, the state formed the CEA. The CEA set premiums in many parts of the state at higher levels than insurers had charged prior to the Northridge earthquake of 1994. At the same time, the minimum deductible for policies offered through the CEA was raised from 10% to 15% of the insured value of the property. Neither the state nor the insurers appear to have considered how this change would affect the demand for coverage.

This increased-price/reduced-coverage combination was not especially attractive to homeowners in the state. A 15% deductible based on the amount of coverage in place is actually quite high relative to the damage that typically occurs; most homes in California are wood-frame structures that would likely suffer relatively small losses in a severe earthquake. For example, if a house were insured for $200,000, a 15% deductible implies that earthquake damage would have to exceed $30,000 before the homeowner could collect a penny from the insurer. Given that only 10% of homeowners in California have quake insurance today, if a major earthquake were to partially damage many homes next year, the uninsured losses could be very high. It is surprising that private insurers have shown little interest in offering earthquake coverage at rates competitive with or lower than those of the CEA, even though no regulation prevents them from doing so.

GUIDING PRINCIPLES

The following two guiding principles should enable insurance to play a more significant role in the management and financing of catastrophic risks.

Principle 1—Premiums should reflect risk

Insurance premiums should be based on risk to provide individuals with accurate signals as to the nature of the hazards they face and to encourage them to engage in cost-effective mitigation measures to reduce their vulnerability. Risk-based premiums should also reflect the cost of capital that insurers need to integrate into their pricing to ensure an adequate return to their investors.

Catastrophe models have been developed and improved over the past 25 years to more accurately assess the likelihood and damages resulting from disasters of different magnitudes and intensities. Today, insurers and reinsurers use the estimates from these models to determine risk-based premiums and how much coverage to offer in hazard-prone areas.
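
As a stylized illustration of Principle 1 (not any particular insurer’s pricing model), a risk-based premium can be thought of as the expected annual loss across modeled disaster scenarios plus a loading for the cost of capital. The scenario probabilities, losses and loading below are invented for illustration:

```python
# Stylized risk-based premium: expected annual loss plus a capital-cost loading.
# Scenario figures are illustrative, not the output of a real catastrophe model.
scenarios = [
    (0.01, 50_000),     # (annual probability, insured loss in $)
    (0.002, 200_000),
    (0.0005, 400_000),
]
loading = 0.5  # illustrative loading for capital costs and expenses

expected_loss = sum(p * loss for p, loss in scenarios)
premium = expected_loss * (1 + loading)
print(f"Expected annual loss: ${expected_loss:,.0f}")  # $1,100
print(f"Risk-based premium: ${premium:,.0f}")          # $1,650
```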

If Principle 1 is applied to risks where premiums are currently subsidized, some residents will be faced with large price increases. This concern leads to the second guiding principle.

Principle 2—Dealing with equity and affordability issues

Any special treatment given to low-income individuals currently residing in hazard-prone areas should come from general public funding and not through insurance premium subsidies. Funding could come from several sources, such as general taxpayer revenue, state government or a tax on insurance policyholders, depending on the answer to the question, “Who should pay?” It is important to note that Principle 2 applies only to those individuals who currently reside in hazard-prone areas. Those who decide to locate in these regions in the future would be charged premiums that reflect the risk.

Developing long-term strategies for dealing with extreme events

Given the nature of intuitive thinking for LP-HC events, this section proposes strategies for applying the two guiding principles so that insurance in combination with other policy tools can reduce future losses from extreme events. The proposed risk management strategy involves:

  • Choice architecture to frame the problem so that the risks are transparent and key interested parties recognize the importance of purchasing and maintaining insurance while also undertaking protective measures to reduce their losses from the next disaster.
  • Public–private partnerships to assist those who cannot afford to invest in protective measures and to provide financial protection against catastrophic losses for risks that are considered uninsurable by the private sector alone.
  • Multi-year insurance to provide premium stability to policyholders and lower marketing costs to insurers and to reduce cancellation of coverage by those at risk.

Choice architecture

The term choice architecture, coined by Thaler and Sunstein, indicates that people’s decisions often depend in part on how different options are framed and presented. Framing in the context of LP-HC events typically refers to the way in which likelihoods and outcomes are characterized. One can also influence decisions by varying the reference point or by changing the order in which alternatives or their attributes are presented, or by setting one option as the no-choice default option.

Framing the risk

People are better able to evaluate low-probability risks when these are presented via a familiar concrete context. For example, individuals might not understand what a one-in-a-million risk means but can more accurately interpret this figure when it is compared to the annual chance of dying in an automobile accident (1-in-6,000) or lightning striking your home on your birthday (less than one in a billion).

Probability is more likely to be a consideration if it is presented using a longer time frame. People are more willing to wear seat belts if they are told they have a 1-in-3 chance of an accident over a 50-year lifetime of driving, rather than a 1-in-100,000 chance of an accident on each trip they take. Similarly, a homeowner or manager considering earthquake protection over the 25-year life of a home or factory is far more likely to take the risk seriously if told that the chance of at least one severe earthquake occurring during this time is greater than 1-in-5, rather than 1-in-100 in any given year.
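
The arithmetic behind the stretched time frame is simple compounding of the annual probability over the horizon, assuming independent years. A minimal sketch reproducing the 1-in-100-per-year example (the function name is ours):

```python
def prob_at_least_once(annual_prob: float, years: int) -> float:
    """Chance of at least one event over a horizon, assuming independent years."""
    return 1 - (1 - annual_prob) ** years

# A 1-in-100 annual earthquake risk over the 25-year life of a home or factory
print(f"{prob_at_least_once(0.01, 25):.3f}")  # 0.222, i.e., greater than 1-in-5
```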

Studies have shown that even just multiplying the single-year risk so the numerator is larger—presenting it as 10-in-1,000 or 100-in-10,000 instead of 1-in-100—makes it more likely that people will pay attention to the event. Studies have also found that comparisons of risks—rather than just specifying the probability of a loss or an insurance premium—are much more effective in helping decision-makers assess the need for purchasing insurance.

Another way to frame the risk so that individuals pay attention is to construct a worst-case scenario. Residents in hazard-prone areas who learn about the financial consequences of being uninsured if they were to suffer severe damage from a flood or earthquake would have an incentive to purchase insurance coverage and may refrain from canceling their insurance if they have not made a claim for a few years. One could then provide them with information on the likelihood of the event occurring over the next 25 years rather than just next year.

Insurers could also construct worst-case scenarios and then estimate the likelihood of the event’s occurrence when pricing their insurance policies. They could then determine a premium that reflects their best estimate of their expected loss while at the same time factoring in the uncertainty surrounding the risk.

Default options

Field and controlled experiments in behavioral economics reveal that consumers are more likely to stick with the default option rather than going to the trouble of opting out in favor of some alternative. Many examples of this behavior are detailed in Thaler and Sunstein’s important book, Nudge. To date, this framing technique has been applied to situations where the outcome is either known with certainty or where the chosen option (such as a recommended 401(k) plan) has a higher expected return than the other options. It is not clear whether people who failed to purchase coverage would reverse course if insurance against an extreme event were the default option, given the intuitive thinking that individuals employ for these types of risks. More empirical research is needed to understand the role that default options can play in encouraging insurance protection for LP-HC events.

Public–private partnerships

Individuals at risk may be reluctant to invest in cost-effective loss reduction measures when these involve a high upfront cash outlay. Given budgetary constraints and individuals’ focus on short time horizons, it is difficult to convince them that the expected discounted benefits of the investment over the expected life of the property exceed the immediate upfront cost. Decision-makers’ resistance is likely to be compounded if they perceive the risk to be below their threshold level of concern. Residents in hazard-prone areas may also be concerned that, if they move in the next few years, the property value of their home will not reflect the expected benefits of investing in loss reduction measures, because the new owner will not be concerned about the risk of a disaster.

Mitigation grants and loans

FEMA created the Flood Mitigation Assistance (FMA) program in 1994 to reduce flood insurance claims. FMA is funded by premiums received by the NFIP to support loss reduction measures, such as elevation or relocation of property, flood-proofing of commercial structures, or demolition and rebuilding of property that has received significant damage from a severe flood.

In July 2014, Connecticut initiated its Shore Up CT program, designed to help residential or business property owners elevate buildings, retrofit properties with additional flood protection or wind-proof structures on property that is prone to coastal flooding. This state program, the first in the U.S., enables homeowners to obtain a 15-year loan ranging from $10,000 to $300,000 at an annual interest rate of 2.75%.

More generally, long-term loans for mitigation to homes and businesses would encourage individuals to invest in cost-effective risk-reduction measures. Consider a property owner who could pay $25,000 to elevate his coastal property from three feet below Base Flood Elevation (BFE) to one foot above BFE to reduce storm surge damage from hurricanes. If flood insurance is risk-based, the annual premium would decrease by $3,480 (from $4,000 to $520). A 15-year loan for $25,000 at an annual interest rate of 2.75% would result in annual payments of $2,040, so the savings to the homeowner each year would be $1,440 (that is, $3,480−$2,040).
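
The $2,040 figure follows from the standard fixed-rate amortization formula. A minimal sketch (assuming monthly amortization at 2.75% over 15 years, which approximately reproduces the figures above):

```python
def annual_loan_cost(principal: float, annual_rate: float, years: int) -> float:
    """Annualized payment on a fixed-rate loan with monthly amortization."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * 12

loan_cost = annual_loan_cost(25_000, 0.0275, 15)   # about $2,036 per year
premium_savings = 4_000 - 520                      # $3,480 per year
print(f"Annual loan cost: ${loan_cost:,.0f}")
print(f"Net annual savings: ${premium_savings - loan_cost:,.0f}")  # about $1,440
```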

Means-tested vouchers

One way to maintain risk-based premiums while addressing issues of affordability is to offer means-tested vouchers that cover part of the cost of insurance. Several existing programs could serve as models for developing such a voucher system: the Food Stamp Program, the Low Income Home Energy Assistance Program (LIHEAP) and the Universal Service Fund (USF). The amount of the voucher would be based on current income and determined by a specific set of criteria, as outlined in the National Research Council’s report on the affordability of flood insurance. If the property owner were offered a multi-year loan to invest in mitigation measures, the voucher could cover not only a portion of the resulting risk-based insurance premium but also the annual loan cost, to make the package affordable. As a condition for the voucher, the property owner could be required to invest in mitigation.

An empirical study of homeowners in Ocean County, N.J., reveals that the amount of the voucher would likely be reduced significantly from what it would have been had the structure not been mitigated, as shown in Figure 2 for property in a high-hazard flood area (the V Zone) and a lower-hazard area (the A Zone).


Catastrophe coverage

Insurers’ withdrawal from certain markets because of lack of reinsurance capacity and other risk transfer instruments (e.g., catastrophe bonds) led to the establishment of government-backed programs such as the CEA, NFIP and TRIA.

If insurers were permitted to charge risk-based premiums, they would very likely want to market coverage against earthquakes and floods as long as they were protected against catastrophic losses. State reinsurance facilities could play an important role in this regard if premiums were risk-based using data provided by catastrophe models. One such facility exists today—the FHCF. It was established in 1993 following Hurricane Andrew to supplement private reinsurance and reimburse all insurers for a portion of their losses from catastrophic hurricanes.

TRIA provides protection to insurers against catastrophic losses from future terrorist attacks. American taxpayers will not be responsible for any payments until the total commercial losses from a terrorist attack exceed $60 billion; in other words, insurers will cover the entire loss from future terrorist attacks that are not catastrophic.
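
The loss-sharing implied by this threshold can be sketched as below. This is a deliberate simplification: real TRIA mechanics (insurer deductibles, co-shares and the $100 billion cap noted earlier) are more involved, and the function only illustrates the $60 billion trigger described above:

```python
def tria_split_sketch(total_loss_bn: float, threshold_bn: float = 60.0):
    """Simplified split of a terrorism loss (in $billions) between insurers
    and the federal backstop, using only the threshold idea in the text."""
    insurer_share = min(total_loss_bn, threshold_bn)
    federal_share = max(0.0, total_loss_bn - threshold_bn)
    return insurer_share, federal_share

print(tria_split_sketch(45.0))  # (45.0, 0.0): insurers bear a non-catastrophic loss
print(tria_split_sketch(80.0))  # (60.0, 20.0): the backstop absorbs the excess
```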

Lewis and Murdock proposed that the federal government auction a limited number of catastrophe reinsurance contracts annually to private insurers to provide them with more capacity to handle truly extreme events. The design of such contracts would have to be specified, and a more detailed analysis would have to be undertaken to determine the potential impact of such an auction mechanism on the relevant stakeholders.

Well-enforced regulations and standards

Given the reluctance of individuals to voluntarily purchase insurance against losses, one should consider requiring catastrophic coverage for all individuals who face risk. Social welfare is likely to be improved under the assumption that individuals would have wanted insurance protection had they perceived the risk correctly and not exhibited the systematic biases and simplified decision rules that characterize intuitive thinking. If the public sector were providing protection against catastrophic losses from these extreme events, it could pass regulations requiring insurance coverage for individuals at risk.

Risk-based insurance premiums could be coupled with building codes so that those residing in hazard-prone areas adopt cost-effective loss-reduction measures. Following Hurricane Andrew in 1992, Florida re-evaluated its building code standards, and coastal areas of the state began to enforce high-wind design provisions for residential housing. As depicted in Figure 3, homes that met the wind-resistant standards enforced in 1996 had a claim frequency that was 60% less than that for homes that were built prior to that year. The average reduction in claims from Hurricane Charley (2004) to each damaged home in Charlotte County built according to the newer code was approximately $20,000.

Homeowners who adopt cost-effective mitigation measures could receive a seal of approval from a certified inspector that the structure meets or exceeds building code standards. A seal of approval could increase the property value of the home by informing potential buyers that damage from future disasters is likely to be reduced because the mitigation measure is in place. Evidence from a July 1994 telephone survey of 1,241 residents in six hurricane-prone areas on the Atlantic and Gulf Coasts provides supporting evidence for some type of seal of approval. More than 90% of the respondents felt that local home builders should be required to adhere to building codes, and 85% considered it very important that local building departments conduct inspections of new residential construction.

Multi-year insurance

As a complement to property improvement loans, insurers could consider designing multi-year insurance (MYI) contracts of three to five years. The insurance policy would be tied to the structure rather than the property owner and carry an annual premium reflecting risk that would remain stable over the length of the contract. Property owners who cancel their insurance policy early would incur a penalty cost in the same way that those who refinance a mortgage have to pay a cancellation cost to the bank issuing the mortgage. With an MYI contract, insurers would have an incentive to inspect the property over time to make sure that building codes are enforced, something they would be less likely to do with annual contracts.

To compare the expected benefits of annual vs. multi-year contracts, Jaffee et al. developed a two-period model in which premiums reflect risk in a competitive market setting. They show that an MYI policy reduces the marketing costs for insurers relative to one-period policies and also eliminates the search costs to policyholders should their insurer decide to cancel coverage at the end of period 1. Even if the policyholder may learn that the cost of a one-period policy is sufficiently low to justify paying a cancellation cost, it is always optimal for the insurer to sell an MYI policy and for a consumer to purchase it. The insurer will set the cancellation cost at a level that enables it to break even on those policies that the insured decides to let lapse before the maturity date.
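
A stylized numerical comparison (the dollar figures are our own illustrations, not parameters from Jaffee et al.) shows why the multi-year contract can dominate: the marketing cost is incurred once, and the policyholder is spared a search cost if the insurer does not renew:

```python
# Stylized two-period comparison of annual vs. multi-year insurance (MYI).
# All figures are illustrative and are not taken from Jaffee et al.'s model.
risk_premium = 1_000    # annual premium reflecting risk, stable under MYI
marketing_cost = 100    # insurer's cost to originate and market a policy
search_cost = 150       # policyholder's cost of finding a new insurer
prob_nonrenewal = 0.2   # chance the annual insurer cancels after period 1

# Two one-year policies: marketing cost paid twice, plus expected search cost
annual_total = 2 * (risk_premium + marketing_cost) + prob_nonrenewal * search_cost

# One two-year MYI policy: marketing cost paid once, no search cost
myi_total = 2 * risk_premium + marketing_cost

print(f"Two annual contracts: ${annual_total:,.0f}")  # $2,230
print(f"One two-year MYI: ${myi_total:,.0f}")         # $2,100
```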

Several factors have contributed to the non-marketability of MYI for protecting homeowners’ properties against losses from fire, theft and large-scale natural disasters. Under the current state-regulated arrangements in which many insurance commissioners have limited insurers’ ability to charge risk-based premiums in hazard-prone areas, no insurance company would even entertain the possibility of marketing a homeowner’s policy that was longer than one year. Insurers would be concerned about the regulator clamping down on them now or in the future regarding what price they could charge. Uncertainty regarding costs of capital and changes in risk over time may also deter insurers from providing MYI.

For the private sector to want to market coverage once the above issues are addressed, there needs to be sufficient demand to cover the fixed and administrative costs of developing and marketing the product. To empirically test the demand for MYI, a web-based experiment was undertaken with adults in the U.S.; most were older than 30, so they were likely to have experience purchasing insurance. Participants were offered a choice between one-year and two-year contracts against losses from hurricane-related damage. A large majority of the respondents preferred the two-year contract over the one-year contract, even when it was priced above the actuarially fair level. Introducing a two-year insurance policy into the menu of contracts also increased the aggregate demand for disaster insurance.

Modifying the National Flood Insurance Program

The NFIP provides a target of opportunity to implement a long-term strategy for reducing risk that could eventually be extended to other extreme events. The two guiding principles for insurance would be used in redesigning the rate structure for the program:

  • Premiums would reflect risk based on updated flood maps so that private insurers would have an incentive to market coverage.
  • Means-tested vouchers would be provided by the public sector to those who undertook cost-effective mitigation measures. This would address the affordability issue. Homeowners who invested in loss-reduction measures would be given a premium discount to reflect the reduction in expected losses from floods. Long-term loans for mitigation would encourage investments in cost-effective mitigation measures. Well-enforced building codes and seals of approval would provide an additional rationale for undertaking these loss-reduction measures.
  • An MYI policy tied to the property would deter policyholders from canceling their policies if they did not suffer losses for several years.
  • Reinsurance and risk-transfer instruments marketed by the private sector could cover a significant portion of the catastrophic losses from future floods. Some type of federal reinsurance would provide insurers with protection against extreme losses.

The social welfare benefits of this proposed program would be significant: less damage to property, lower costs to insurers for protecting against catastrophic losses, more secure mortgages and lower costs to the government for disaster assistance.

Directions for future studies and research

In theory, insurance rewards individuals who undertake loss reduction measures by lowering their premiums. For insurance to play this role, premiums have to reflect risk; otherwise, insurers will have no financial incentive to offer coverage or will not want to reduce premiums when those at risk undertake protective measures. Charging risk-based premiums raises questions of affordability for those low-income residents in hazard-prone areas who are currently paying subsidized prices for coverage or have elected to be uninsured because of budget constraints or misperceptions of the risk. In addition, insurers may elect not to offer coverage if they are concerned about the impact that catastrophic losses will have on their balance sheet as evidenced by their decisions not to offer flood, earthquake or terrorism insurance in the U.S. without some type of back-up from the state or federal government. To determine the price of risk-based premiums, there is a need for more accurate data. In the U.S., FEMA is now updating its flood-risk maps as recommended by a Government Accountability Office (GAO) study and by recent federal legislation on the NFIP.

The impact of changing climate patterns on future flood damage, through potential sea-level rise and more intense hurricanes, also needs to be taken into account. There is evidence that federal agencies and other bodies have underestimated the risks of damage from extreme weather events because of climate change. Hurricane Sandy has stimulated studies on ways that communities can be better prepared for future disasters and has highlighted the need for a suite of policy tools, including insurance, to address the climate change problem.

Studies are also needed on ways that other policy tools, such as well-enforced building codes that encourage good construction practices, can complement insurance. Enforcing building codes for all residences in Florida could reduce by nearly half the risk-based prices of insurance under climate change projections for hurricane damage in 2020 and 2040. In this regard, Chile serves as an example for the U.S. to emulate. The country passed a law that requires the original construction company to compensate those who suffer any structural damage from earthquakes and other disasters if the building codes were not followed. Furthermore, the original owner of a building is held responsible for damage to the structure for a decade, and a court can sentence the owner to prison. Well-enforced building codes in Chile account for the relatively low death toll from the powerful earthquake (8.8 on the moment magnitude scale) that rocked the country on Feb. 27, 2010.

The challenge facing the U.S. today is how to capitalize on the concerns raised by hurricanes Katrina and Sandy and on the discussions surrounding renewal of the NFIP in 2017. The case for making communities more resilient to natural disasters by investing in loss reduction measures is critical today, given continuing economic development in hazard-prone areas. For risk-based insurance to be part of such a strategy, there is a need for support from key interested parties: real estate agents, developers, banks and financial institutions, residents in hazard-prone areas and public sector organizations at the local, state and federal levels.

The principle of risk-based premiums, coupled with concerns regarding affordability and catastrophic losses, applies to all countries that use insurance as a policy tool for dealing with risk. Studies on the roles that the private and public sectors play in sharing these losses reveal significant differences between countries. Other countries face similar problems and would do well to consider how to develop long-term strategies that have a chance of being implemented because they address short-term concerns.