
Hurricane Harvey: A Moment of Truth

The first major hurricane to make landfall in the U.S. since hurricanes Dennis, Katrina, Rita and Wilma in 2005, Hurricane Harvey will cause billions of dollars in economic damage and disrupt countless lives. In the wake of massive economic losses and untold human suffering, including loss of life, millions of individuals and businesses will turn to their insurers for help. This will be a make-or-break experience, a real moment of truth.

Insurers will be presented with a golden opportunity to justify the public’s trust and earn the respect of policyholders, regulators, legislators and others in government. But insurers also run the risk of failing to live up to expectations and incurring the wrath of voters and their elected representatives.

See also: Flood Risk: Question Is Where, Not When  

The first test may well be distinguishing damage caused by wind from damage caused by flooding, as virtually all insurance policies exclude losses due to flooding (the exception being those policies issued by the National Flood Insurance Program). Insurers will need to be careful, thorough and fair when settling claims.

Equally important, insurers will need to be perceived as having been so, and communication will be key. Insurers would be well advised to do what they can to make policyholders feel they have been treated with respect, dignity and compassion even when their claims must be denied or settled for some amount less than the claimant sought.

Moreover, insurers would be well advised to settle claims as quickly as possible without unduly sacrificing sound loss adjustment and efforts to weed out fraud and abuse.

Finally, with the media sure to draw attention to heartbreaking stories about human tragedy in Harvey’s aftermath, insurers might benefit from doing what they can to shine a light on their efforts to help individuals and businesses recover. Surely it is worth noting that, as others evacuate, insurers gear up to send large numbers of claim adjusters to work in extremely difficult conditions in hard-hit areas.

Hurricane Harvey will also lead to many other moments of truth. For example, the devastation caused by Harvey may well prove to be the first real test at extreme scale of new insurtech created to improve loss adjustment. Will use of drones, aerial imagery, artificial intelligence, digitalization, big data, predictive analytics and the like prove as beneficial as hoped? Will insurtech entrepreneurs and insurers who have invested in these technologies be vindicated? And, on a more positive note, will experience coping with Harvey reveal new opportunities to use emerging technologies to increase speed, efficiency and fairness?

Insured losses from Hurricane Harvey may also test reinsurance mechanisms, including catastrophe bonds, other insurance-linked securities and sidecars. And what about so-called hedge fund reinsurers, which sought to profit by investing insurance float using strategies like those typically employed by hedge funds? Will they continue to participate as claims mount, or will they instead seek to exit the business? Some past catastrophes triggered significant inflows of fresh capital, as investors sensed opportunities to profit from a turn in reinsurance markets. Such was the case following Katrina, Rita and Wilma in 2005. Will the “fast money” come rushing in again, and, if it does, will it prove to also be “smart money”?

All of the above raises the question, “Will Hurricane Harvey lead to a reset of catastrophe models, pricing for hurricane risk and underwriting?” Some past storms, such as Hurricane Andrew in 1992, convinced insurers that they had previously underestimated hurricane risk and thus led to dramatic resets in coastal property insurance markets, with attendant price increases and availability problems. Whether Harvey brings about such a reset seemingly depends on whether current catastrophe models did an adequate job alerting insurers to the risk of an event like Hurricane Harvey. If so, changes in coastal property insurance markets may be muted. If not, expect price increases and availability problems.

See also: Is Flood Map Due for a Big Data Make-Over?  

Last, and let’s hope least, Hurricane Harvey may test insurers’ enterprise risk management. Prior to Harvey, the property/casualty industry had ample surplus, and most insurers were well capitalized. But surplus was not evenly distributed across insurers, and only the surplus of those insurers that wrote policies covering properties struck by Harvey is available to cover claims from Harvey.

If an insurer only wrote risks in Oregon, its surplus won’t be called upon to cover claims from Harvey. Bottom line, insurers that covered properties affected by Harvey, that were aware of potential losses and that have ample financial resources to cover claims and continue operations can give themselves good grades for enterprise risk management.

On the other hand, insurers that covered properties affected by Harvey, that were surprised by their losses and that lack the resources to cover claims must give themselves failing grades for enterprise risk management.

And then there is a gray area: insurers that intelligently judged the risk of insolvency to be acceptably small, took a calculated risk and then lost that bet. Though such insurers will fail, it cannot be said that their enterprise risk management failed. Eliminating even the most remote chance of insolvency is not practical. Neither is it economically viable. Sound enterprise risk management consists of: 1) understanding risks; 2) making conscious, intelligent decisions about which risks to take, which risks to avoid, which risks to mitigate and which risks to transfer; and, 3) enforcing controls that keep operations within the bounds established by an enterprise’s appetite for risk.

The Challenges With Catastrophe Bonds

Catastrophe bonds are an increasingly important form of risk transfer for insurers. Cat bonds are a peculiarity of the U.S. reinsurance market, where about 125 to 200 natural disasters occur a year. They were first sold in the mid-1990s, after Hurricane Andrew and the Northridge earthquake highlighted the need for a new form of risk transfer. The cat bond market has been growing steadily for the past 10 years, and more than $25 billion in catastrophe bonds and other insurance-linked securities is currently outstanding, according to Artemis.

Many insurers have moved away from managing their ceded reinsurance program with spreadsheets, which are time-consuming and error-prone, in light of current regulatory and internal demands. More carriers have installed — or are planning to install — a dedicated ceded reinsurance system that provides better controls and audit trails.

See also: Is P2P a Realistic Alternative?

Besides enabling reinsurance managers to keep senior management informed, a system helps carriers comply with the recent Risk Management and Own Risk and Solvency Assessment Model Act (RMORSA). It also generates Schedule F and statutory reporting, an otherwise onerous job. And technology prevents claims leakage (reinsurance claims that fall through the cracks).

Cat bonds add a layer of complexity. The cat bond premium is a “coupon” the insurer pays to the bond buyer. There are many potential losses behind each bond, and potential recoveries can run to hundreds of millions of dollars for some insurers. Other complexities include a priority deductible, an hours clause, lines of business reinsured or excluded and attachment criteria to automatically identify subject catastrophe amounts. Without technology, tracking all this can be overwhelming.

The ceded reinsurance system can also be used to manage cat bond premiums. From a system perspective, it’s not terribly different. The same analytical split (per line of business and per insurance company in the group) applies to bonds just as it does to reinsurance treaties. With a little tweaking, a solid ceded reinsurance system should be able to handle cat treaties and bonds equally well.

While ceded premium management for cat bonds shouldn’t be difficult, claims present bigger challenges, especially when trying to automatically calculate the ultimate net loss (UNL) because additional factors and rules are often used to determine it.

For instance, it may be necessary to apply a growth-allowance factor, determine the number of policies in force when the catastrophe occurs and calculate growth-limitation factors. This allows the calculation of ceded recoveries in case of a catastrophe. Additionally, the calculation of UNL may be specific for each cat bond — and even for each corresponding peril.
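To make the moving parts concrete, here is a minimal sketch of how a system might derive a cat bond recovery from an ultimate net loss. All names, numbers and rules below are hypothetical; actual bond indentures define their own growth factors, covered perils, hours clauses and attachment terms.

```python
# Hypothetical sketch only: a simplified UNL and ceded-recovery calculation
# for a cat bond layer. Real indentures define their own factors and terms.

def ultimate_net_loss(subject_loss: float, growth_allowance: float,
                      growth_limit: float) -> float:
    """Scale the subject catastrophe loss for portfolio growth,
    capped by the growth-limitation factor."""
    return subject_loss * min(growth_allowance, growth_limit)

def ceded_recovery(unl: float, attachment: float, exhaustion: float) -> float:
    """Recovery is the slice of UNL between the layer's attachment
    and exhaustion points."""
    return max(0.0, min(unl, exhaustion) - attachment)

# Hypothetical event: a $500M subject loss, 4% portfolio growth against a
# 5% cap, on a layer attaching at $400M and exhausting at $700M.
unl = ultimate_net_loss(500e6, growth_allowance=1.04, growth_limit=1.05)
print(f"UNL: ${unl:,.0f}")                                     # $520,000,000
print(f"Recovery: ${ceded_recovery(unl, 400e6, 700e6):,.0f}")  # $120,000,000
```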

See also: Insurers: the New Venture Capitalists  

Fully automating all of this isn't necessary, because few events trigger those complexities. Once a manual workaround capturing the audit trail and the justification of the subject amounts is in place, the reinsurance system can handle the remaining calculations. Even without automating every step of the UNL calculation, it is still better to manage the whole process in an integrated information system than in multiple spreadsheets, which are unwieldy and labor-intensive.

Without the right technology, managing cat bonds is daunting. With automation, they can be managed far more effectively.

Reducing Losses From Extreme Events

The number of presidential disaster declarations in the U.S. has dramatically increased over the past 50 years. Figure 1 depicts the total number of presidential disaster declarations and those that were triggered by flooding events (inland flood and storm surge from hurricanes). This pattern highlights the need to encourage those at risk to invest in loss reduction measures prior to a disaster rather than waiting until after the event occurs. Insurance coupled with other risk management programs can play an important role, as it is designed to spread and reduce risk. Each policyholder pays a relatively small premium to an insurer, which can then cover the large losses suffered by a few. Ideally, those who invest in loss prevention measures are rewarded by having the price of their coverage reduced to reflect their lower expected claims payments.

[Figure 1: Presidential disaster declarations in the U.S. over the past 50 years, in total and those triggered by flooding events]

Insurance against low-probability, high-consequence (LP-HC) events presents a special challenge for individuals at risk, insurers and regulators, for good reason. Decision-makers have limited experience with these events, and even experts are likely to conclude that there is considerable uncertainty as to the probability of these events occurring and their resulting consequences. As a result, insurance decisions often differ from those recommended by normative models of choice.

Consider the following examples:

Example 1: Most homeowners in flood-prone areas do not voluntarily purchase flood insurance—even when it is highly subsidized—until after they suffer flood damage. If they then do not experience losses in the next few years, they are likely to cancel their policy. Demand for earthquake insurance in California increased significantly after the Northridge earthquake of 1994— the last severe quake in the state; today relatively few homeowners have coverage.

Example 2: Prior to the terrorist attacks of Sept. 11, 2001, actuaries and underwriters did not price the risk associated with terrorism, nor did they exclude this coverage from their standard commercial policies. Their failure to examine the potential losses from a terrorist attack was surprising given the truck bomb that al Qaeda detonated below the North Tower of the World Trade Center in 1993, the 1995 Oklahoma City bombing and other terrorist-related events throughout the world. Following 9/11, most insurance companies refused to offer coverage against terrorism, considering it to be an uninsurable risk.

Example 3: State insurance regulators sometimes have restricted insurers from setting premiums that reflect risk, in part to address equity and fairness issues for those in need of homeowners’ insurance. For example, following Hurricane Andrew in 1992, the Florida insurance commissioner did not allow insurers to charge risk-based rates and restricted them from canceling existing homeowners’ policies. After the severe hurricanes of 2004 and 2005 in Florida, the state-funded company Citizens Property Insurance Corp., which had been the insurer of last resort, offered premiums in high-risk areas at subsidized rates, thus undercutting the private market. Today, Citizens is the largest provider of residential wind coverage in Florida.

The three examples indicate that insurance today is not effectively meeting two of its most important objectives:

  • providing information to those residing in hazard-prone areas as to the nature of the risks they face;
  • giving incentives to those at risk to undertake loss reduction measures prior to a disaster.

The insurance industry played both of these roles very effectively when the factory mutual companies were founded in the 19th century, as detailed in Box 1. This paper proposes a strategy for insurance to take steps to return to its roots. The examples and empirical data presented here are taken primarily from experience in the U.S.; however, the concepts have relevance to any country that uses insurance to protect its residents and businesses against potentially large losses.

The next three sections explore the rationale for the actions taken by each of the interested parties illustrated in the above three examples by focusing on their decision processes prior to and after a disaster. I then propose two guiding principles for insurance and outline a long-term strategy with roles for the private and public sectors if these principles are implemented. Reforming the National Flood Insurance Program (NFIP) to encourage mitigation for reducing future losses while providing financial protection to those at risk is a target of opportunity that should be seriously considered. The concluding section suggests directions for future studies and research so that insurance can play a central role in reducing losses from extreme events.


DECISION PROCESSES

Intuitive and deliberative thinking

A large body of cognitive psychology and behavioral decision research over the past 30 years has revealed that individuals and organizations often make decisions under conditions of risk and uncertainty by combining intuitive thinking with deliberative thinking. In his thought-provoking book Thinking, Fast and Slow, Nobel laureate Daniel Kahneman has characterized the differences between these two modes of thinking. Intuitive thinking (System 1) operates automatically and quickly with little or no effort and no voluntary control. It is often guided by emotional reactions and simple rules of thumb that have been acquired by personal experience. Deliberative thinking (System 2) allocates attention to intentional mental activities where individuals undertake trade-offs and recognize relevant interdependencies and the need for coordination.

Choices are normally made by combining these two modes of thinking and generally result in good decisions when individuals have considerable experience as a basis for their actions. With respect to LP-HC events, however, there is a tendency to either ignore a potential disaster or overreact to a recent one, so that decisions may not reflect expert risk assessments. For example, after a disaster, individuals are likely to want to purchase insurance even at high prices, while insurers often consider restricting coverage or even withdraw from the market. In these situations, both parties focus on the losses from a worst-case scenario without adequately reflecting on the likelihood of this event occurring in the future.

Impact of intuitive thinking on consumer behavior

Empirical studies have revealed that many individuals engage in intuitive thinking and focus on short-run goals when dealing with unfamiliar LP-HC risks. More specifically, individuals often exhibit systematic biases such as the availability heuristic, where the judged likelihood of an event depends on its salience and memorability. There is thus a tendency to ignore rare risks until after a catastrophe occurs. This is a principal reason why it is common for individuals at risk to purchase insurance only after a disaster.

Purchase of flood insurance

A study of the risk perception of homeowners in New York City revealed that they underestimate the likelihood of water damage from hurricanes. This may explain why only 20% of those who suffered damage from Hurricane Sandy had purchased flood insurance before the storm occurred.

An in-depth analysis of the entire portfolio of the NFIP in the U.S. revealed that the median tenure of flood insurance was between two and four years, while the average length of time in a residence was seven years. For example, of the 841,000 new policies bought in 2001, only 73% were still in force one year later. After two years, only 49% were in force, and eight years later only 20%. Similar patterns were found for each of the other years in which a flood insurance policy was first purchased.

One reason that individuals cancel their policies is that they view insurance as an investment rather than a protective activity. Many purchase coverage after experiencing a loss from a disaster but feel they wasted their premiums if they have not made a claim over the next few years. They perceive the likelihood of a disaster as so low that they do not pay attention to its potential consequences and conclude they do not need insurance. A normative model of choice, such as expected utility theory, implies that risk-averse consumers should value insurance, as it protects them against large losses relative to their wealth. Individuals should celebrate not having suffered a loss over a period rather than canceling their policy because they have not made a claim. A challenge facing insurers is how to convince their policyholders that the best return on an insurance policy is no return at all.
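A stylized expected-utility calculation makes the point concrete. The numbers below are assumptions for exposition, not drawn from any study:

```python
import math

# Illustrative sketch, assuming a risk-averse (log-utility) homeowner with
# $100,000 in wealth, a 1-in-100 annual chance of a $50,000 loss, and full
# coverage available at the actuarially fair premium.
w, loss, p = 100_000, 50_000, 0.01
fair_premium = p * loss                                   # $500

eu_uninsured = (1 - p) * math.log(w) + p * math.log(w - loss)
eu_insured = math.log(w - fair_premium)

print(eu_insured > eu_uninsured)          # True: coverage raises expected utility
print(round(w - math.exp(eu_uninsured)))  # ~692: the most this homeowner should
                                          # rationally pay for the $500 policy
```

On these assumptions, the homeowner should value the policy at up to roughly $692 even though its expected claim cost is only $500; a claim-free year is the good outcome, not a wasted premium.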

Purchase of earthquake insurance

Another example that reveals how the availability bias affects the choice process is the decision of California homeowners on whether to purchase earthquake insurance. Surveys of owner-occupied homes in California counties affected by the 1989 Loma Prieta earthquake showed a significant increase in the purchase of coverage. Just prior to the disaster, only 22% of the homes had earthquake insurance. Four years later, 37% had purchased earthquake insurance, an increase of roughly two-thirds.

Similarly, the Northridge earthquake of 1994 led to a significant demand for earthquake insurance. For example, more than two-thirds of the homeowners surveyed in Cupertino, Calif., had purchased earthquake insurance in 1995. There have been no severe earthquakes in California since Northridge, and only 10% of those in seismic areas of the state have earthquake insurance today. If a severe quake hits San Francisco in the near future, the damage could be as high as $200 billion, and it is likely that most homeowners suffering damage will be financially unprotected.

Impact of intuitive thinking on insurer behavior

Two factors play an important role in insurers’ behavior with respect to pricing and coverage decisions: the role of experience and the role of ambiguous risk. We examine each of these features in turn.

Role of experience on supply of insurance

When insurers have experienced significant losses from a particular extreme event, there is a tendency for them to focus on worst-case scenarios without adequately considering their likelihood. In some instances, because of extreme losses from hurricanes, floods, earthquakes and terrorist attacks, insurers determined that they could not continue to market coverage in the U.S. without involvement by the public sector. In these situations, either the state or federal government stepped in to fill the void.

Hurricane wind-related losses

Following catastrophic wind losses from hurricanes in Florida, insurers felt they had to significantly raise their homeowners’ premiums. Rather than using catastrophe models to justify rate increases, insurers pointed to their large losses following Hurricane Andrew in 1992 as a basis for demanding higher premiums, without considering the likelihood of another disaster of this magnitude. The insurers were denied these rate increases and reduced their supply of new homeowners’ policies.

By the beginning of 2004, most insurers viewed their Florida rates as being close to adequate except in the highest-risk areas. However, after four major hurricanes battered Florida in 2004 and two more in 2005, many insurers again filed for major premium increases; regulators denied many of these requests or approved them at lower levels. In 2007, the Florida Office of Insurance Regulation (FLOIR) took a position against any further rate increases for homeowners’ insurers and denied requests by all insurers. In December 2008, State Farm asked for a 67% increase in premiums that was denied by the FLOIR, leading the insurer to announce that it would no longer offer homeowners’ coverage in Florida. Five years later (March 2014), State Farm announced that it would again begin offering homeowners and renters insurance in the state on a limited basis.

Flood insurance

Following the severe Mississippi floods of 1927 and continuing through the 1960s, there was a widespread belief among private insurance companies that the flood peril was uninsurable by the private sector for several reasons: Adverse selection would be a problem because only particular areas are subject to the risk; risk-based premiums would be so high that no one would be willing to pay them; and flood losses could be so catastrophic as to cause insolvencies or have a significant impact on surplus. This lack of coverage by the private sector triggered significant federal disaster relief to victims of Hurricane Betsy in 1965 and led to the creation of the NFIP in 1968.

The NFIP subsidized premiums to maintain property values on structures in flood-prone areas; new construction was charged premiums reflecting risk. Even though premiums on existing property were highly subsidized, relatively few homeowners purchased coverage, leading the U.S. Congress to pass the Flood Disaster Protection Act (FDPA) of 1973. This bill required all properties receiving federally backed mortgages to purchase flood insurance. The NFIP has grown extensively in the past 40 years; as of January 2015, it had more than 5.2 million policies in force in 22,000 communities and provided almost $1.3 trillion in coverage. Insurance tends to be concentrated in coastal states, with Florida and Texas alone accounting for nearly 40% of the entire program (in number of policies, premiums and coverage). After making claims payments from Hurricane Katrina in 2005, the NFIP found itself $18 billion in debt, so that its borrowing authority had to be increased from $1.5 billion to $20.78 billion. To date, the program has borrowed nearly $27 billion from the U.S. Treasury to meet its claims obligations in the aftermath of the 2004, 2005, 2008 and 2012 hurricane seasons.

In July 2012 (three months before Hurricane Sandy), Congress passed and the president signed the Biggert–Waters Flood Insurance Reform Act of 2012 (BW12), which applied the tools of risk management to the increasingly frequent threat of flooding. Among its many provisions, the legislation required that the NFIP produce updated floodplain maps, strengthen local building code enforcement, remove insurance subsidies for certain properties and move toward charging premiums that reflect flood risk.

Soon after becoming law, BW12 faced significant challenges from some homeowners who had reason to complain that the new flood maps overestimated their risk. These residents and other homeowners in flood-prone areas felt that their proposed premium increases were unjustified and that they could not afford the increased premiums they would face. In March 2014, Congress passed the Homeowner Flood Insurance Affordability Act (HFIAA14), which required the Federal Emergency Management Agency (FEMA), which operates the NFIP, to draft an affordability framework based on the recommendations of a National Academy of Sciences study addressing the affordability of flood insurance premiums.

Earthquake insurance

Until the San Fernando earthquake of 1971, few homeowners and businesses in California had purchased earthquake insurance even though coverage had been available since 1916. In 1985, the California legislature passed a law requiring insurers writing homeowners’ policies on one- to four-family units to offer earthquake insurance to these residents. The owners did not have to buy this coverage; the insurers only had to offer it. At that time and still today, banks and financial institutions do not require earthquake insurance as a condition for a mortgage.

The Northridge earthquake of January 1994 caused insured losses of $20.6 billion, primarily to commercial structures. In the three years following Northridge, demand for earthquake insurance by homeowners increased 19% in 1994, 20% in 1995 and 27% in 1996, leading private insurance companies in California to re-evaluate their seismic risk exposures. Insurers concluded that they would not sell any more policies on residential property, as they were concerned about the impact of another catastrophic earthquake on their balance sheets. The California Insurance Department surveyed insurers and found that as many as 90% of them had either stopped or had placed restrictions on the selling of new homeowners’ policies. This led to the formation of a state-run earthquake insurance company—the California Earthquake Authority (CEA)—in 1996.

Terrorism insurance

Following the terrorist attacks of 9/11, most insurers discontinued offering terrorism coverage given the refusal of global reinsurers to provide them with protection against severe losses from another attack. The few that did provide insurance charged extremely high premiums to protect themselves against a serious loss. Prior to 9/11, Chicago’s O’Hare Airport had $750 million of terrorism insurance coverage at an annual premium of $125,000. After the terrorist attacks, insurers offered the airport only $150 million of coverage at an annual premium of $6.9 million. This new premium, if actuarially fair, implies that the annual likelihood of a terrorist attack on O’Hare Airport is approximately 1 in 22 ($6.9 million/$150 million), an extremely high probability. The airport was forced to purchase this policy because it could not operate without coverage.

Concern about high premiums and the limited supply of coverage led Congress to pass the Terrorism Risk Insurance Act (TRIA) at the end of 2002, which provided a federal backstop of up to $100 billion for private insurance claims related to terrorism. The act was extended in 2005 for two years, in 2007 for seven years and in January 2015 for another six years, with some modification of its provisions each time the legislation was renewed.

In return for federal protection against large losses, TRIA requires that all U.S. primary insurance companies offer coverage against terrorism risk on the same terms and conditions as other perils provided by their commercial insurance policies. Firms are not required to purchase this coverage unless mandated by state law, which is normally the case for workers’ compensation insurance. TRIA also established a risk-sharing mechanism between the insurance industry, the federal government and all commercial policyholders in the U.S. for covering insured losses from future terrorist attacks.

Role of ambiguity

After 9/11, insurers determined that they could not offer terrorism insurance because the uncertainties surrounding the likelihood and consequences of another terrorist attack were so significant that the risk was uninsurable by the private sector alone. Because terrorists are likely to design their strategy as a function of their own resources and their knowledge of the vulnerability of the entity they want to attack, the nature of the risk is continuously evolving. This dynamic uncertainty makes the likelihood of future terrorist events extremely difficult to estimate.

Empirical evidence based on surveys of underwriters reveals that insurers will set higher premiums when faced with ambiguous probabilities and uncertain losses than for a well-specified risk. Underwriters of primary insurance companies and reinsurance firms were surveyed about the prices they would charge to insure a factory against property damage from a severe earthquake when probabilities and losses were well specified and when the probabilities and losses were ambiguous. The premiums the underwriters charged for the ambiguous case were 1.43–1.77 times higher than if underwriters priced a precise risk.

A recent web-based experiment provided actuaries and underwriters in insurance companies with scenarios in which they seek advice and request probability forecasts from different groups of experts and then must determine what price to charge for coverage against flood damage and wind damage from hurricanes. The average premiums that insurers would charge were approximately 30% higher for coverage against either of these risks if the probability of damage was ambiguous rather than well-specified and if the experts were conflicted over their estimates. The data reveal that they would likely charge more in the case of conflict ambiguity (i.e., experts disagree on point estimates) than imprecise ambiguity (i.e., experts agree on a range of probability, recognizing that they cannot estimate the probability of the event precisely).

Impact of intuitive thinking on regulator behavior

Rate regulation and restrictions on coverage have had more impact on property insurance than on any other line of coverage, particularly in states that are subject to potentially catastrophic losses from natural disasters.

Homeowners’ insurance in Florida

Following Hurricane Andrew in August 1992, Florida regulators imposed a moratorium on the cancellation and nonrenewal of homeowners’ insurance policies during the coming hurricane season for insurers that wanted to continue to do any business in Florida. In November 1993, the state legislature enacted a bill providing that these insurers could not cancel more than 10% of their homeowners’ policies in any Florida county in a given year, nor more than 5% of their property owners’ policies statewide, in each of the next three years. During the 1996 legislative session, this phase-out provision was extended until June 1, 1999.

Early in 2007, Florida enacted legislation that sought to increase regulatory control over rates and roll them back based on new legislation that expanded the reinsurance coverage provided by the Florida Hurricane Catastrophe Fund (FHCF). Insurers were required to reduce their rates to reflect this expansion of coverage, which was priced below private reinsurance market rates. This requirement applies to every licensed insurer even if an insurer does not purchase reinsurance from the FHCF.

Citizens Property Insurance Corp., Florida’s state-funded company, was formed in 2002 and has experienced a significant increase in its share of the residential property market in recent years. Consumers are allowed to purchase a policy from Citizens if a comparable policy would cost 15% more in the private market. The most serious defect of such a system is that it encourages individuals to locate in high-hazard areas, putting more property at risk than would occur under a market system; that distortion is itself the principal argument against introducing such a system in the first place. Since 2005, there have been no hurricanes causing severe damage in Florida. But should a serious disaster deplete Citizens’ reserves, the additional claims are likely to be paid from assessments (taxes) charged to all homeowners in Florida.

Earthquake insurance in California

As pointed out earlier, when insurers refused to continue to offer earthquake insurance in California, the state formed the CEA. The CEA set the premiums in many parts of the state at higher levels than insurers had charged prior to the Northridge earthquake of 1994. At the same time, the minimum deductible for policies offered through the CEA was raised from 10% to 15% of the insured value of the property. Neither the state nor the insurers considered how this change would affect the demand for coverage.

This increased price/reduced coverage combination was not especially attractive to homeowners in the state. A 15% deductible based on the amount of coverage in place is actually quite high relative to the damages that typically occur. Most homes in California are wood-frame structures that would likely suffer relatively small losses in a severe earthquake. For example, if a house was insured at $200,000, a 15% deductible implies that the damage from the earthquake would have to exceed $30,000 before the homeowner could collect a penny from the insurer. Given that only 10% of homeowners in California have quake insurance today, if a major earthquake were to occur in California next year so that many homes were partially damaged, the uninsured losses could be very high. It is surprising that private insurers have shown little interest in offering earthquake coverage at rates competitive with or lower than those of the CEA, even though there is no regulation preventing them from doing so.

GUIDING PRINCIPLES

The following two guiding principles should enable insurance to play a more significant role in the management and financing of catastrophic risks.

Principle 1—Premiums should reflect risk

Insurance premiums should be based on risk to provide individuals with accurate signals as to the nature of the hazards they face and to encourage them to engage in cost-effective mitigation measures to reduce their vulnerability. Risk-based premiums should also reflect the cost of capital that insurers need to integrate into their pricing to ensure an adequate return to their investors.

Catastrophe models have been developed and improved over the past 25 years to more accurately assess the likelihood and damages resulting from disasters of different magnitudes and intensities. Today, insurers and reinsurers use the estimates from these models to determine risk-based premiums and how much coverage to offer in hazard-prone areas.
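As a rough illustration of how model output can feed pricing, the sketch below builds a premium from a modeled average annual loss plus a capital-cost load, grossed up for expenses. The loading rule and every number are assumptions for exposition, not an industry-standard formula:

```python
# Illustrative sketch, assuming a simple standard-deviation risk load; real
# pricing uses each insurer's own capital model and expense structure.

def risk_based_premium(aal: float, loss_std: float,
                       cost_of_capital: float = 0.06,
                       expense_ratio: float = 0.25) -> float:
    risk_load = cost_of_capital * loss_std     # load for capital tied up by volatility
    return (aal + risk_load) / (1 - expense_ratio)

# Hypothetical coastal home: modeled AAL of $800/year with high loss volatility.
print(round(risk_based_premium(aal=800, loss_std=5_000)))  # ~1467 per year
```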

If Principle 1 is applied to risks where premiums are currently subsidized, some residents will be faced with large price increases. This concern leads to the second guiding principle.

Principle 2—Dealing with equity and affordability issues

Any special treatment given to low-income individuals currently residing in hazard-prone areas should come from general public funding and not through insurance premium subsidies. Funding could be obtained from several different sources, such as general taxpayer revenue, state government funds or a tax on insurance policyholders, depending on the answer to the question, “Who should pay?” It is important to note that Principle 2 applies only to those individuals who currently reside in hazard-prone areas. Those who decide to locate in these regions in the future would be charged premiums that reflect the risk.

Developing long-term strategies for dealing with extreme events

Given the nature of intuitive thinking for LP-HC events, this section proposes strategies for applying the two guiding principles so that insurance in combination with other policy tools can reduce future losses from extreme events. The proposed risk management strategy involves:

  • Choice architecture to frame the problem so that the risks are transparent and key interested parties recognize the importance of purchasing and maintaining insurance while also undertaking protective measures to reduce their losses from the next disaster.
  • Public–private partnerships to assist those who cannot afford to invest in protective measures and to provide financial protection against catastrophic losses for risks that are considered uninsurable by the private sector alone.
  • Multi-year insurance to provide premium stability to policyholders and lower marketing costs to insurers and to reduce cancellation of coverage by those at risk.

Choice architecture

The term choice architecture, coined by Thaler and Sunstein, indicates that people’s decisions often depend in part on how different options are framed and presented. Framing in the context of LP-HC events typically refers to the way in which likelihoods and outcomes are characterized. One can also influence decisions by varying the reference point or by changing the order in which alternatives or their attributes are presented, or by setting one option as the no-choice default option.

Framing the risk

People are better able to evaluate low-probability risks when these are presented via a familiar concrete context. For example, individuals might not understand what a one-in-a-million risk means but can more accurately interpret this figure when it is compared to the annual chance of dying in an automobile accident (1-in-6,000) or lightning striking your home on your birthday (less than one in a billion).

Probability is more likely to be a consideration if it is presented using a longer time frame. People are more willing to wear seat belts if they are told they have a 1-in-3 chance of an accident over a 50-year lifetime of driving, rather than a 1-in-100,000 chance of an accident on each trip they take. Similarly, a homeowner or manager considering earthquake protection over the 25-year life of a home or factory is far more likely to take the risk seriously if told that the chance of at least one severe earthquake occurring during this time is greater than 1-in-5, rather than 1-in-100 in any given year.
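These reframings rest on a single compounding formula: the chance of at least one event over n independent periods is 1 − (1 − p)^n. A minimal sketch (the trip count used for the seat-belt figure is a hypothetical assumption, included only to show the shape of the arithmetic):

```python
# Probability of at least one event over n independent periods.
def multi_year_prob(p_per_period: float, periods: int) -> float:
    return 1 - (1 - p_per_period) ** periods

print(multi_year_prob(0.01, 25))      # ~0.222: a "1-in-100" annual risk exceeds
                                      # 1-in-5 over a 25-year horizon
print(multi_year_prob(1e-5, 40_000))  # ~0.33: a 1-in-100,000 per-trip risk
                                      # approaches 1-in-3 over ~40,000 trips
```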

Studies have shown that even just multiplying the single-year risk so the numerator is larger— presenting it as 10-in-1,000 or 100-in-10,000 instead of 1-in-100—makes it more likely that people will pay attention to the event. Studies have also found that comparisons of risks— rather than just specifying the probability of a loss or an insurance premium—are much more effective in helping decision-makers assess the need for purchasing insurance.

Another way to frame the risk so that individuals pay attention is to construct a worst-case scenario. Residents in hazard-prone areas who learn about the financial consequences of being uninsured if they were to suffer severe damage from a flood or earthquake would have an incentive to purchase insurance coverage and may refrain from canceling their insurance if they have not made a claim for a few years. One could then provide them with information on the likelihood of the event occurring over the next 25 years rather than just next year.

Insurers could also construct worst-case scenarios and then estimate the likelihood of the event’s occurrence when pricing their insurance policies. They could then determine a premium that reflects their best estimate of their expected loss while at the same time factoring in the uncertainty surrounding the risk.

Default options

Field and controlled experiments in behavioral economics reveal that consumers are more likely to stick with the default option rather than going to the trouble of opting out in favor of some alternative. Many examples of this behavior are detailed in Thaler and Sunstein’s important book, Nudge. To date, this framing technique has been applied to situations where the outcome is either known with certainty or where the chosen option (such as a recommended 401(k) plan) has a higher expected return than the other options. It is not clear whether people who failed to purchase coverage would reverse course if insurance against an extreme event were the default option, given the intuitive thinking that individuals employ for these types of risks. More empirical research is needed to more fully understand the role that default options can play in encouraging insurance protection for LP-HC events.

Public–private partnerships

Individuals at risk may be reluctant to invest in cost-effective loss reduction measures when these involve a high, upfront cash outlay. Given budgetary constraints and individuals’ focus on short time horizons, it is difficult to convince them that the expected discounted benefits of the investment over the expected life of the property exceed the immediate upfront cost. Decision-makers’ resistance is likely to be compounded if they perceive the risk to be below their threshold level of concern. Residents in hazard-prone areas may also be concerned that, if they move in the next few years, the property value of their home will not reflect the expected benefits of investing in loss reduction measures because the new owner will not be concerned about the risk of a disaster.

Mitigation grants and loans

FEMA created the Flood Mitigation Assistance (FMA) program in 1994 to reduce flood insurance claims. FMA is funded by premiums received by the NFIP to support loss reduction measures, such as elevation or relocation of property, flood-proofing commercial structures or demolition and rebuilding of property that has received significant damage from a severe flood.

In July 2014, Connecticut initiated its Shore Up CT program, designed to help residential or business property owners elevate buildings, retrofit properties with additional flood protection or wind-proof structures on property that is prone to coastal flooding. This state program, the first in the U.S., enables homeowners to obtain a 15-year loan ranging from $10,000 to $300,000 at an annual interest rate of 2.75%.

More generally, long-term loans to homes and businesses for mitigation would encourage individuals to invest in cost-effective risk-reduction measures. Consider a property owner who could pay $25,000 to elevate his coastal property from three feet below Base Flood Elevation (BFE) to one foot above BFE to reduce storm surge damage from hurricanes. If flood insurance is risk-based, then the annual premium would decrease by $3,480 (from $4,000 to $520). A 15-year loan for $25,000 at an annual interest rate of 2.75% would result in annual payments of $2,040, so that the savings to the homeowner each year would be $1,440 (that is, $3,480−$2,040).
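For the curious, the sketch below reproduces this arithmetic, assuming a standard fully amortizing loan with monthly payments (the payment schedule is not stated above; monthly amortization lands near the quoted $2,040 per year):

```python
# Annual cost of a fully amortizing loan with monthly payments (assumed schedule).
def annual_loan_cost(principal: float, annual_rate: float, years: int) -> float:
    i, n = annual_rate / 12, years * 12              # monthly rate and term
    monthly = principal * i / (1 - (1 + i) ** -n)    # standard amortization formula
    return monthly * 12

cost = annual_loan_cost(25_000, 0.0275, 15)          # ~$2,036 per year
print(f"net annual savings: ~${3_480 - cost:,.0f}")  # ~$1,444, near the $1,440 cited
```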

Means-tested vouchers

One way to maintain risk-based premiums while at the same time addressing issues of affordability is to offer means-tested vouchers that cover part of the cost of insurance. Several existing programs could serve as models for developing such a voucher system: the Food Stamp Program, the Low Income Home Energy Assistance Program (LIHEAP) and Universal Service Fund (USF). The amount of the voucher would be based on current income and determined by a specific set of criteria as outlined in the National Research Council’s report on the affordability of flood insurance. If the property owners were offered a multi-year loan to invest in mitigation measure(s), the voucher could cover not only a portion of the resulting risk-based insurance premium, but also the annual loan cost to make the package affordable. As a condition for the voucher, the property owner could be required to invest in mitigation.

An empirical study of homeowners in Ocean County, N.J., reveals that the amount of the voucher is likely to be reduced significantly from what it would have been had the structure not been mitigated, as shown in Figure 2 for property in a high-hazard flood area (the V Zone) and a lower-hazard area (the A Zone).

[Figure 2: Voucher amounts for mitigated vs. unmitigated structures in a high-hazard flood area (V Zone) and a lower-hazard area (A Zone), Ocean County, N.J.]

Catastrophe coverage

Insurers’ withdrawal from certain markets because of lack of reinsurance capacity and other risk transfer instruments (e.g. catastrophe bonds) led to the establishment of government-backed programs such as the CEA, NFIP and TRIA.

If insurers were permitted to charge risk-based premiums, they would very likely want to market coverage against earthquakes and floods as long as they were protected against catastrophic losses. State reinsurance facilities could play an important role in this regard if premiums were risk-based using data provided by catastrophe models. One such facility exists today—the FHCF. It was established in 1993 following Hurricane Andrew to supplement private reinsurance and reimburse all insurers for a portion of their losses from catastrophic hurricanes.

TRIA provides protection to insurers against catastrophic losses from future terrorist attacks. American taxpayers will not be responsible for any payments until the total commercial losses from a terrorist attack exceed $60 billion. In other words, insurers will cover the entire losses from future terrorist attacks that are not catastrophic.

Lewis and Murdock proposed that the federal government auction a limited number of catastrophe reinsurance contracts annually to private insurers to provide them with more capacity to handle truly extreme events. The design of such contracts would have to be specified, and a more detailed analysis would have to be undertaken to determine the potential impact of such an auction mechanism on the relevant stakeholders.

Well-enforced regulations and standards

Given the reluctance of individuals to voluntarily purchase insurance against losses, one should consider requiring catastrophic coverage for all individuals who face risk. Social welfare is likely to be improved under the assumption that individuals would have wanted insurance protection had they perceived the risk correctly, not exhibited systematic biases and not used the simplified decision rules that characterize intuitive thinking. If the public sector were providing protection against catastrophic losses from these extreme events, it could pass regulations requiring insurance coverage for individuals at risk.

Risk-based insurance premiums could be coupled with building codes so that those residing in hazard-prone areas adopt cost-effective loss-reduction measures. Following Hurricane Andrew in 1992, Florida re-evaluated its building code standards, and coastal areas of the state began to enforce high-wind design provisions for residential housing. As depicted in Figure 3, homes that met the wind-resistant standards enforced in 1996 had a claim frequency that was 60% less than that for homes that were built prior to that year. The average reduction in claims from Hurricane Charley (2004) to each damaged home in Charlotte County built according to the newer code was approximately $20,000.

Homeowners who adopt cost-effective mitigation measures could receive a seal of approval from a certified inspector attesting that the structure meets or exceeds building code standards. A seal of approval could increase the property value of the home by informing potential buyers that damage from future disasters is likely to be reduced because the mitigation measure is in place. A July 1994 telephone survey of 1,241 residents in six hurricane-prone areas on the Atlantic and Gulf coasts provides supporting evidence for some type of seal of approval: More than 90% of the respondents felt that local home builders should be required to adhere to building codes, and 85% considered it very important that local building departments conduct inspections of new residential construction.

[Figure 3: Claim frequency for homes built to Florida’s post-1996 wind-resistant standards vs. earlier construction]
Multi-year insurance

As a complement to property improvement loans, insurers could consider designing multi-year insurance (MYI) contracts of three to five years. The insurance policy would be tied to the structure rather than the property owner and carry an annual premium reflecting risk that would remain stable over the length of the contract. Property owners who cancel their insurance policy early would incur a penalty cost in the same way that those who refinance a mortgage have to pay a cancellation cost to the bank issuing the mortgage. With an MYI contract, insurers would have an incentive to inspect the property over time to make sure that building codes are enforced, something they would be less likely to do with annual contracts.

To compare the expected benefits of annual vs. multi-year contracts, Jaffee et al. developed a two-period model where premiums reflect risk in a competitive market setting. They show that an MYI policy reduces the marketing costs for insurers relative to one-period policies and also eliminates the search costs to policyholders should their insurer decide to cancel their coverage at the end of period 1. Even if the policyholder learns that the cost of a one-period policy is sufficiently low to justify paying a cancellation cost, it is optimal for the insurer to sell an MYI policy and for the consumer to purchase it. The insurer will set the cancellation cost at a level that enables it to break even on those policies that the insured decides to let lapse before the maturity date.

Several factors have contributed to the non-marketability of MYI for protecting homeowners’ properties against losses from fire, theft and large-scale natural disasters. Under the current state-regulated arrangements in which many insurance commissioners have limited insurers’ ability to charge risk-based premiums in hazard-prone areas, no insurance company would even entertain the possibility of marketing a homeowner’s policy that was longer than one year. Insurers would be concerned about the regulator clamping down on them now or in the future regarding what price they could charge. Uncertainty regarding costs of capital and changes in risk over time may also deter insurers from providing MYI.

For the private sector to want to market coverage if the above issues are addressed, there needs to be sufficient demand to cover the fixed and administrative costs of developing and marketing the product. To empirically test the demand for MYI, a web-based experiment was undertaken with adults in the U.S.; most were older than 30, so they were likely to have experience purchasing insurance. The individuals participating in the experiment were offered a choice between one-year and two-year contracts against losses from hurricane-related damage. A large majority of the respondents preferred the two-year contract over the one-year contract, even when it was priced higher than the actuarially fair price. Introducing a two-year insurance policy into the menu of contracts also increased the aggregate demand for disaster insurance.

Modifying the National Flood Insurance Program

The NFIP provides a target of opportunity to implement a long-term strategy for reducing risk that could eventually be extended to other extreme events. The two guiding principles for insurance would be used in redesigning the rate structure for the program:

  • Premiums would reflect risk based on updated flood maps so that private insurers would have an incentive to market coverage.
  • Means-tested vouchers would be provided by the public sector to those who undertook cost-effective mitigation measures. This would address the affordability issue. Homeowners who invested in loss-reduction measures would be given a premium discount to reflect the reduction in expected losses from floods. Long-term loans for mitigation would encourage investments in cost-effective mitigation measures. Well-enforced building codes and seals of approval would provide an additional rationale for undertaking these loss-reduction measures.
  • An MYI policy tied to the property would deter policyholders from canceling their policies if they did not suffer losses for several years.
  • Reinsurance and risk-transfer instruments marketed by the private sector could cover a significant portion of the catastrophic losses from future floods. Some type of federal reinsurance would provide insurers with protection against extreme losses.

The social welfare benefits of this proposed program would be significant: less damage to property, lower costs to insurers for protecting against catastrophic losses, more secure mortgages and lower costs to the government for disaster assistance.

Directions for future studies and research

In theory, insurance rewards individuals who undertake loss reduction measures by lowering their premiums. For insurance to play this role, premiums have to reflect risk; otherwise, insurers will have no financial incentive to offer coverage or will not want to reduce premiums when those at risk undertake protective measures. Charging risk-based premiums raises questions of affordability for those low-income residents in hazard-prone areas who are currently paying subsidized prices for coverage or have elected to be uninsured because of budget constraints or misperceptions of the risk. In addition, insurers may elect not to offer coverage if they are concerned about the impact that catastrophic losses will have on their balance sheets, as evidenced by their decisions not to offer flood, earthquake or terrorism insurance in the U.S. without some type of back-up from the state or federal government. To determine the price of risk-based premiums, there is a need for more accurate data. In the U.S., FEMA is now updating its flood-risk maps as recommended by a Government Accountability Office (GAO) study and by recent federal legislation on the NFIP.

The impact of changing climate patterns on future flood damage, driven by potential sea-level rise and more intense hurricanes, also needs to be taken into account. There is evidence that federal agencies and other bodies have underestimated the risks of damage from extreme weather events because of climate change. Hurricane Sandy has stimulated studies on ways that communities can be better prepared for future disaster damage and has highlighted the need for a suite of policy tools, including insurance, to address the climate change problem.

Studies are also needed on ways that other policy tools, such as well-enforced building codes that encourage good construction practices, can complement insurance. Enforcing building codes for all residences in Florida could reduce by nearly half the risk-based prices of insurance under climate change projections with respect to hurricane damage in 2020 and 2040. In this regard, Chile serves as an example for the U.S. to emulate. The country passed a law that requires the original construction company to compensate those who suffer any structural damage from earthquakes and other disasters if the building codes were not followed. Furthermore, the original owner of a building is held responsible for damage to the structure for a decade, and a court can sentence the owner to prison. Well-enforced building codes in Chile account for the relatively low death toll from the powerful earthquake (magnitude 8.8) that rocked the country on Feb. 27, 2010.

The challenge facing the U.S. today is how to capitalize on the concerns raised by hurricanes Katrina and Sandy and discussions on the renewal of the NFIP in 2017. The case for making communities more resilient to natural disasters by investing in loss reduction measures is critical today, given continuing economic development in hazard-prone areas. For risk-based insurance to be part of such a strategy, there is a need for support from key interested parties: real estate agents, developers, banks and financial institutions, residents in hazard-prone areas and public sector organizations at the local, state and federal levels.

The principle of risk-based premiums, coupled with concerns regarding affordability and catastrophic losses, applies to all countries that use insurance as a policy tool for dealing with risk. Studies on the roles that the private and public sectors play with respect to sharing these losses reveal significant differences between countries. Other countries face similar problems and would do well to consider how to develop long-term strategies that have a chance of being implemented because they address short-term concerns.

Riding Out the Storm: the New Models

In our last article, When Nature Calls, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. Those massive losses were a direct result of an industry overconfident in its ability to gauge the frequency and severity of catastrophic events. Insurers were using only history and their limited experience as their guide, resulting in a tragic loss of years’ worth of policyholder surplus.

The turmoil of this period cannot be overstated. Many insurers went insolvent, and those that survived needed substantial capital infusions to continue functioning. Property owners in many states were left with no affordable options for adequate coverage and, in many cases, were forced to go without any coverage at all. The property markets seized up. Without the ability to properly estimate how catastrophic events would affect insured properties, it looked as though the market would remain broken indefinitely.

Luckily, in the mid-1980s, two people on opposite sides of the country were already working on solutions to this daunting problem. Both had asked themselves: If the problem is a lack of data caused by the rarity of recorded catastrophic events, could we feed the historical data we do have, along with the physics of how catastrophic events behave, into a computer and extrapolate the fuller record we need? Could we then create a catalog of millions of simulated events occurring over thousands of years and use it to tell us where and how often events can be expected, as well as how severe they could be? The answer was unequivocally yes, but with caveats.
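
The statistical heart of that idea fits in a few lines of code. The toy simulation below is a minimal sketch, assuming an invented Poisson landfall rate and an invented lognormal loss distribution (real catalogs are built from the physics of storm genesis, track and intensity, not two canned distributions); it shows how a long simulated record can answer questions the short real record cannot:

```python
import numpy as np

rng = np.random.default_rng(7)

N_YEARS = 100_000     # length of the simulated catalog, in years
LANDFALL_RATE = 0.6   # assumed mean hurricane landfalls per year (illustrative)
MU, SIGMA = 6.0, 1.8  # assumed lognormal loss parameters, $ millions (illustrative)

# Frequency: how many events occur in each simulated year.
events_per_year = rng.poisson(LANDFALL_RATE, size=N_YEARS)

# Severity: a loss for every simulated event.
event_losses = rng.lognormal(MU, SIGMA, size=events_per_year.sum())

# With 100,000 simulated years, even very rare events appear often
# enough to estimate their frequency and severity.
print(f"simulated events: {event_losses.size:,}")
print(f"median event loss: ${np.median(event_losses):,.0f}M")
print(f"99th-percentile event loss: ${np.quantile(event_losses, 0.99):,.0f}M")
```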

In 1987, Karen Clark, a former insurance executive from Boston, founded Applied Insurance Research (now AIR Worldwide). She had spent much of the 1980s with a team of researchers and programmers designing a system that could estimate where hurricanes would strike the coastal U.S., how often they would strike and, ultimately, based on input insurance policy terms and conditions, how much loss an insurer could expect from those events. Simultaneously, on the West Coast at Stanford University, Hemant Shah was completing his graduate degree in engineering and attempting to answer the same questions, focusing instead on the effects of earthquakes around Los Angeles and San Francisco.

In 1988, Clark released the first commercially available catastrophe model for U.S. hurricanes. Shah released his earthquake model a year later through his company, Risk Management Solutions (RMS). Their models were incredibly slow, limited and, according to many insurers, unnecessary. However, for the first time, loss estimates were being calculated from the actual scientific data of the day, extended with probability and statistics, rather than from the extremely limited historical record alone. These new “modeled” loss estimates were not in line with what insurers were used to seeing and certainly could not be justified by the historical record.

Clark’s model generated hurricane storm losses in the tens of billions of dollars while, up until that point, the largest insured loss ever recorded did not even reach $1 billion! Insurers scoffed at the comparison. But all of that quickly changed in August 1992, when Hurricane Andrew struck southern Florida.

Using her hurricane model, Clark estimated that insured losses from Andrew might exceed $13 billion, and she published her prediction in the face of heavy industry doubt. She was immediately derided and questioned by her peers, the press and virtually everyone else; her estimates, they said, were unprecedented and far too high. In the end, though, when actual losses recorded by Property Claim Services exceeded $15 billion, a virtual catastrophe model feeding frenzy began. Insurers quickly changed their tune and began asking AIR and RMS for model demonstrations. The property insurance market would never be the same.

So what exactly are these revolutionary models, which are now affectionately referred to as “cat models”?

Regardless of the model vendor, every cat model uses the same three components (a toy sketch follows the list):

  1. Event Catalog – A catalog of hypothetical stochastic (randomized) events, which tells the modeler how frequently catastrophic events occur and how severe they can be. The catalog holds millions of simulated events spanning thousands of years, built from recorded historical data, scientific estimation and the physics of how these events form and behave. For each event, associated hazard and local intensity data answer the questions: Where? How big? And how often?
  2. Damage Estimation – The models employ damage functions that describe the mathematical relationship between local event intensity and the resulting damage to buildings, covering structural and nonstructural components as well as contents. The damage functions were developed by experts in wind and structural engineering, are based on published engineering research and analyses, and have been validated against extensive damage surveys undertaken in the aftermath of catastrophic events and against billions of dollars of actual industry claims data.
  3. Financial Loss – The financial module calculates the final insured losses after applying all limits and deductibles to the damaged structure. These losses can be linked back to events with specific probabilities of occurrence, so an insurer knows not only what it is exposed to but also what its worst-case scenarios are and how frequently they may occur.
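
To make the three components concrete, here is a deliberately tiny sketch. Every event, damage function and policy term below is an invented assumption for illustration, not any vendor’s actual data or methodology:

```python
# 1. Event catalog: each simulated event carries an annual occurrence
#    rate and a local intensity (here, peak gust in mph) at the site.
event_catalog = [
    {"id": 1, "annual_rate": 0.010, "gust_mph": 90},
    {"id": 2, "annual_rate": 0.004, "gust_mph": 120},
    {"id": 3, "annual_rate": 0.001, "gust_mph": 155},
]

def damage_ratio(gust_mph: float) -> float:
    """2. Damage estimation: a toy damage function mapping local wind
    intensity to the fraction of the building's value destroyed."""
    if gust_mph < 50:
        return 0.0
    return min(1.0, ((gust_mph - 50) / 120) ** 2)

def insured_loss(ground_up: float, deductible: float, limit: float) -> float:
    """3. Financial module: apply the policy deductible and limit to
    the ground-up damage to get the insurer's share of the loss."""
    return min(max(ground_up - deductible, 0.0), limit)

BUILDING_VALUE = 400_000
DEDUCTIBLE, LIMIT = 10_000, 300_000

for ev in event_catalog:
    ground_up = BUILDING_VALUE * damage_ratio(ev["gust_mph"])
    loss = insured_loss(ground_up, DEDUCTIBLE, LIMIT)
    print(f"event {ev['id']}: annual rate {ev['annual_rate']:.3f}, "
          f"ground-up ${ground_up:,.0f}, insured ${loss:,.0f}")
```

Because every loss stays tied to an event with a known annual rate, the same loop that prices a single policy also tells the insurer how often each loss level should be expected.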

When cat models first became commercially available, industry adoption was slow. It took Hurricane Andrew in 1992, followed by the Northridge earthquake in 1994, to literally and figuratively shake the industry out of its overconfidence. Reinsurers and large insurers were the first to use the models, mostly because of their vast exposure to loss and their ability to afford the high license fees. Over time, however, much of the industry followed suit. Insurers that could not afford the models (or that were skeptical of them) could get access to all the major available models via reinsurance brokers, which at that time began rolling out suites of analytic solutions built around catastrophe model results.

Today, the models are ubiquitous in the industry. Rating agencies require model output based on prescribed model parameters in their supplementary rating questionnaires to understand whether insurers can economically withstand certain levels of catastrophic loss. Reinsurers expect insurers to provide modeled loss output on their submissions when applying for reinsurance. The state of Florida has even set up a commission, the Florida Commission on Hurricane Loss Projection Methodology, which consists of “an independent body of experts created by the Florida Legislature in 1995 for the purpose of developing standards and reviewing hurricane loss models used in the development of residential property insurance rates and the calculation of probable maximum loss levels.”

Models are available for tropical cyclones, extratropical cyclones, earthquakes, tornados, hail, coastal and inland flooding, tsunamis and even pandemics and certain types of terrorist attacks. The first models simulated catastrophes for U.S. perils, but models now exist for perils across Europe, Australia, Japan, China and South America.

In an effort to get ahead of the potential impact of climate change, all leading model vendors even provide U.S. hurricane event catalogs that simulate potential catastrophic scenarios under the assumption that Atlantic sea-surface temperatures will be warmer on average. And with advancing technology, open-source platforms are being developed that will let researchers working on catastrophes around the world become entrepreneurs through “plug and play” use of their models. This is the virtual equivalent of a cat modeling app store.

Catastrophe models have provided the insurance industry with an innovative solution to a major problem. Ironically, the solution itself is now an industry in its own right, as estimated revenues from model licenses now annually exceed $500 million (based on conversations with industry experts).

But how have the models performed over time? Have they made a difference in the industry’s ability to manage catastrophic loss? Those are not easy questions to answer, but we believe they have. The chaos of Hurricane Andrew and the Northridge earthquake taught the industry some invaluable lessons. After the horrific 2004 and 2005 hurricane seasons – 2004 ravaged Florida with four major hurricanes in a single year, and 2005 saw two major hurricanes strike the Gulf Coast, one of them Hurricane Katrina, the single most costly natural disaster in history – there were no ensuing major insurance company insolvencies. This was a profound success.

The industry withstood a two-year period of major catastrophic losses. Clearly, something had changed, and cat models played a significant role in that transformation. The hurricane losses of 2004 and 2005 were large and painful, but they did not come as a surprise. Using model results, the industry now had a framework for placing those losses in proper context. In fact, each model vendor has many simulated hurricane events in its catalog that resemble Hurricane Katrina. Insurers knew from the models that a storm like Katrina could happen and were therefore prepared for that possible, albeit unlikely, outcome.
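
That framework can be illustrated with a toy calculation. Given simulated annual losses, an insurer can read off the annual probability of exceeding any loss level, and the corresponding return period, which is exactly how a Katrina-sized year gets placed in context. Everything below is an invented stand-in for real model output (the lognormal parameters are assumptions, not vendor estimates):

```python
import numpy as np

rng = np.random.default_rng(11)

# Stand-in for model output: total insured loss in each of 50,000
# simulated years, in $ millions. The distribution is an assumption.
annual_losses = rng.lognormal(mean=3.0, sigma=1.4, size=50_000)

def exceedance_probability(losses: np.ndarray, threshold: float) -> float:
    """Fraction of simulated years whose total loss exceeds the threshold."""
    return float((losses > threshold).mean())

for threshold in (100, 500, 1_000):
    ep = exceedance_probability(annual_losses, threshold)
    if ep > 0:
        print(f"loss > ${threshold:,}M: annual prob ≈ {ep:.4f} "
              f"(about 1-in-{1 / ep:,.0f} years)")
    else:
        print(f"loss > ${threshold:,}M: never seen in the simulated catalog")
```

A loss that looks unprecedented against a few decades of history may sit comfortably on this curve, which is why the 2004-2005 losses hurt but did not shock.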

However, the universal use of cat models in property insurance brings other issues. Are we misusing these tools? Are we becoming overly dependent on them? Are models being treated as a panacea for vexing business and scientific questions instead of as a simple framework for understanding potential loss?

Next in this series, we will illustrate how modeling results are being used in the industry and how overconfidence in the models could, once again, lead to crisis.

When Nature Calls: The Need for New Models

The Earth is a living, breathing planet, rife with hazards that often strike without warning. Tropical cyclones, extratropical cyclones, earthquakes, tsunamis, tornados and ice storms: Severe events are part of the planet’s natural rhythm. Fortunately, the vast majority of these events are not what we would categorize as “catastrophic.” However, when nature does call, they can be incredibly destructive.

To help put things into perspective: Nearly 70% (and growing) of the world’s population currently lives within 100 miles of a coastline. When a tropical cyclone makes landfall, it is likely to affect millions of people at once and cause billions of dollars of damage. Though the physical impact of windstorms or earthquakes is regional, the risk associated with such events, including the economic aftermath, is not. Often, the economic repercussions are felt globally, in both the public and private sectors. We need only look back to Hurricane Katrina, Superstorm Sandy and the recent tsunamis in Japan and Indonesia to see the toll a single catastrophe can take on populations, economies and politics.

However, because actual catastrophes are so rare, property insurers are left incredibly under-informed when attempting to underwrite coverage and are vulnerable to catastrophic loss.

Standard actuarial practices only compound the problem: with so little historical data, the likelihood of underpricing rises dramatically. If underwriting teams lack the tools to know where large events will occur, how often they will occur and how severe they will be, then risk management teams must blindly cap their exposure. Insurers without the proper tools cannot fully understand the implications of thousands of claims arriving from a single event, so risk managers place arbitrary capacity limits on geographic exposures, resulting in unavoidable misallocation of capital.

However, insurers’ perceived success from these arbitrary risk management practices, combined with a fortunate multi-decade pause in catastrophes, created a perfect storm of profit that lulled insurers into a false sense of security. It allowed them to grow to the point where they felt invulnerable to any large event that might come their way. They had been “successful” for decades. They were obviously doing something right, they thought. What could possibly go wrong?

Fast forward to late August 1992. The first of two pivotal events that would force a change in insurers’ attitude toward catastrophes was brewing in the Atlantic. Hurricane Andrew, a Category 5 storm with top wind speeds of 175 mph, would slam into southern Florida and cause, by far, the largest loss to date in the insurance industry’s history: $15 billion in insured losses. As a result, 11 previously stable insurers became insolvent. Those still standing either quickly left the state or drastically reduced their exposures.

The second influential event was the 1994 earthquake in Northridge, CA. It occurred on a previously unknown fault system and, though it measured only magnitude 6.7, generated incredibly powerful ground motion, collapsing highways and leveling buildings. Northridge, like Andrew, created approximately $15 billion in insured losses and drove insurers fearing additional losses to flee the California market altogether.

Andrew and Northridge were game changers. Across the country, insurers’ capacity for both wind and earthquake perils was severely reduced as a result of those events. Where capacity was in particularly short supply, substantial rate increases were sought. Insurers rethought their strategies and looked to reduce their catastrophic exposure in every way they could. In both California and Florida, quasi-state entities were formed to replace the capacity from which the private market was withdrawing. To this day, Citizens Property Insurance in Florida and the California Earthquake Authority, the so-called insurers of last resort, control substantial market shares in their respective states. For many property owners exposed to severe wind or earthquakes, adequate coverage remains out of financial reach, even 20 years removed from those two seminal events.

How was it possible that insurers could be so exposed? Didn’t they see the obvious possibility that southern Florida could have a large hurricane or that the Los Angeles area was prone to earthquakes?

What seems so obvious now was not so obvious then, because of a lack of data and understanding of the risks. Insurers were writing coverage for wind and earthquake hazards before they even understood the physics of those types of events. In hindsight, we recognize that the strategy was as imprudent as picking numbers from a hat.

What insurers needed was data: data about where catastrophic events were likely to occur, how often they would occur and what the impact would be when they did. The industry at the time simply lacked the data and experience so desperately needed to reasonably quantify exposures and manage catastrophic risk.

Ironically, well before Andrew and Northridge, right under property insurers’ noses, two innovative people on opposite sides of the U.S. had come to the same conclusion and had already begun answering the following questions (a toy sketch after the list illustrates the core idea):

  • Could we use computers to simulate millions of scientifically plausible catastrophic events against a portfolio of properties?
  • Would the output of that kind of simulation be adequate for property insurers to manage their businesses more accurately?
  • Could this data be incorporated into all their key insurance operations – underwriting, claims, marketing, finance and actuarial – to make better decisions?
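
To make those questions concrete, here is a deliberately tiny sketch: a two-event “catalog” run against a three-site “portfolio,” aggregated into an expected annual loss. Every value, rate and damage ratio is an invented assumption for illustration:

```python
# Portfolio: insured value and a crude vulnerability factor per site.
portfolio = [
    {"site": "Miami",   "value": 2_000_000, "vulnerability": 1.0},
    {"site": "Tampa",   "value": 1_500_000, "vulnerability": 0.8},
    {"site": "Orlando", "value": 1_000_000, "vulnerability": 0.5},
]

# Catalog: each event has an annual rate and a mean damage ratio per site.
catalog = [
    {"annual_rate": 0.020, "damage": {"Miami": 0.10, "Tampa": 0.02, "Orlando": 0.01}},
    {"annual_rate": 0.005, "damage": {"Miami": 0.45, "Tampa": 0.10, "Orlando": 0.05}},
]

# Expected annual loss: sum over events of (annual rate x portfolio loss).
eal = 0.0
for event in catalog:
    event_loss = sum(
        site["value"] * site["vulnerability"] * event["damage"][site["site"]]
        for site in portfolio
    )
    eal += event["annual_rate"] * event_loss
    print(f"event loss ${event_loss:,.0f} at annual rate {event['annual_rate']}")

print(f"expected annual loss: ${eal:,.0f}")
```

Scale the catalog to millions of events and the portfolio to millions of properties, and this simple loop becomes the computational core the two innovators were building.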

What emerged from that series of questions would come to revolutionize the insurance industry.