
The Challenges With Catastrophe Bonds

Catastrophe bonds are an increasingly important form of risk transfer for insurers. Cat bonds are a peculiarity of the U.S. reinsurance market, where about 125 to 200 natural disasters occur each year. They were first sold in the mid-1990s after Hurricane Andrew and the Northridge earthquake highlighted the need for a new form of risk transfer. The cat bond market has been growing steadily for the past 10 years, and more than $25 billion in catastrophe bonds and other insurance-linked securities are currently outstanding, according to Artemis.

Many insurers have moved away from managing their ceded reinsurance program with spreadsheets, which are time-consuming and error-prone, in light of current regulatory and internal demands. More carriers have installed — or are planning to install — a dedicated ceded reinsurance system that provides better controls and audit trails.


Besides enabling reinsurance managers to keep senior management informed, a system helps carriers comply with the recent Risk Management and Own Risk and Solvency Assessment Model Act (RMORSA). It also generates Schedule F and other statutory reporting, an otherwise onerous job. And technology prevents claims leakage (reinsurance recoveries that would otherwise fall through the cracks).

Cat bonds add a layer of complexity. The cat bond premium is a “coupon” the insurer pays to the bond buyer. There are many potential losses behind each bond, and potential recoveries can reach hundreds of millions of dollars for some insurers. Other complexities include a priority deductible, an hours clause, lines of business reinsured or excluded and attachment criteria to automatically identify subject catastrophe amounts. Without technology, tracking all this can be overwhelming.
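
To make those moving parts concrete, here is a minimal, illustrative sketch of how a system might aggregate subject losses under an hours clause and apply an attachment point and exhaustion limit. Every name, date and dollar figure below is hypothetical, and real bond terms are considerably more involved:

```python
from datetime import datetime, timedelta

# Hypothetical per-policy losses from one catastrophe: (timestamp, line of business, amount).
losses = [
    (datetime(2024, 9, 1, 6), "homeowners", 40_000_000),
    (datetime(2024, 9, 2, 18), "commercial property", 25_000_000),
    (datetime(2024, 9, 8, 3), "homeowners", 10_000_000),            # falls outside the hours clause
    (datetime(2024, 9, 1, 12), "auto physical damage", 5_000_000),  # excluded line of business
]

COVERED_LINES = {"homeowners", "commercial property"}  # lines of business reinsured
HOURS_CLAUSE = timedelta(hours=120)                    # only losses within this window count
ATTACHMENT = 50_000_000                                # bond attaches above this subject amount
EXHAUSTION = 150_000_000                               # bond limit is exhausted at this amount

def subject_amount(event_losses, event_start):
    """Sum losses that fall inside the hours clause and belong to covered lines."""
    window_end = event_start + HOURS_CLAUSE
    return sum(
        amount
        for when, line, amount in event_losses
        if line in COVERED_LINES and event_start <= when <= window_end
    )

def bond_recovery(subject):
    """Recovery is the part of the subject amount sitting between attachment and exhaustion."""
    return max(0, min(subject, EXHAUSTION) - ATTACHMENT)

event_start = datetime(2024, 9, 1, 0)
subject = subject_amount(losses, event_start)
print(f"Subject amount: {subject:,}")                  # 65,000,000
print(f"Bond recovery:  {bond_recovery(subject):,}")   # 15,000,000
```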

The ceded reinsurance system can also be used to manage cat bond premiums. From a system perspective, it’s not terribly different. The same analytical split (per line of business and per insurance company in the group) applies to bonds just as it does to reinsurance treaties. With a little tweaking, a solid ceded reinsurance system should be able to handle cat treaties and bonds equally well.
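
As a rough illustration of that shared analytical split, the sketch below allocates a bond coupon across group companies and lines of business exactly as a treaty premium would be allocated. The carriers, lines and percentages are invented for the example:

```python
# Hypothetical allocation keys: share of subject premium by (group company, line of business).
ALLOCATION = {
    ("Carrier A", "homeowners"): 0.45,
    ("Carrier A", "commercial property"): 0.20,
    ("Carrier B", "homeowners"): 0.25,
    ("Carrier B", "commercial property"): 0.10,
}

def allocate(amount):
    """Split a cat bond coupon (or a treaty premium) using the same analytical keys."""
    assert abs(sum(ALLOCATION.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {key: round(amount * share, 2) for key, share in ALLOCATION.items()}

# The same function serves a treaty premium or a cat bond coupon, e.g. a $12M annual coupon.
print(allocate(12_000_000))
```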

While ceded premium management for cat bonds shouldn’t be difficult, claims present bigger challenges, especially when trying to automatically calculate the ultimate net loss (UNL) because additional factors and rules are often used to determine it.

For instance, it may be necessary to apply a growth-allowance factor, determine the number of policies in force when the catastrophe occurs and calculate growth-limitation factors. These inputs feed the calculation of ceded recoveries when a catastrophe occurs. Additionally, the UNL calculation may be specific to each cat bond, and even to each covered peril.
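
As a purely illustrative example of the kind of rule set involved, the sketch below adjusts a gross subject loss for portfolio growth before recoveries are computed. The factors, thresholds and formula are hypothetical, not taken from any actual bond:

```python
# Hypothetical terms for one bond and one peril; real bonds define their own factors and formulas.
POLICIES_AT_ISSUANCE = 800_000  # policies in force when the bond was issued
GROWTH_LIMITATION = 1.10        # recoveries recognize portfolio growth only up to 10%

def growth_allowance(policies_in_force_at_event):
    """Allowed growth factor, capped by the growth-limitation factor."""
    actual_growth = policies_in_force_at_event / POLICIES_AT_ISSUANCE
    return min(actual_growth, GROWTH_LIMITATION)

def ultimate_net_loss(gross_subject_loss, policies_in_force_at_event):
    """Scale the gross subject loss so that growth beyond the allowance doesn't inflate recoveries."""
    actual_growth = policies_in_force_at_event / POLICIES_AT_ISSUANCE
    allowed = growth_allowance(policies_in_force_at_event)
    return gross_subject_loss * (allowed / actual_growth)

# Example: the book grew 25%, but only 10% growth is recognized, so the UNL is scaled down.
print(f"UNL: {ultimate_net_loss(100_000_000, 1_000_000):,.0f}")  # UNL: 88,000,000
```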


Full automation isn’t necessary, because few events trigger those complexities. Once a manual workaround capturing the audit trail and the justification of the subject amounts is in place, the reinsurance system can handle the remaining calculations. Even without automating every step of the UNL calculation, it is still better to manage the whole process in an integrated information system than in multiple spreadsheets, which are unwieldy and labor-intensive.

Without the right technology, managing cat bonds is daunting. With automation, they can be managed far more effectively.

6 Lessons From Katrina, 10 Years On

In December 2005, just three months after Katrina savaged the Gulf Coast, we edited On Risk and Disaster, a book on the key lessons that the storm so painfully taught. The book was very different from most of the post-mortems, which focused on the country’s lack of preparedness for the storm’s onslaught. It focused sharply on how to reduce the risk of future disasters and on how to help those who suffer most from them.

One of the most important findings highlighted by the book’s 19 expert contributors was that the storm affected lower-income residents far more than others. Reducing the exposure to potential damage before disasters occur, especially in the most hazard-prone areas, is one of the most important steps we can take. To achieve this objective in low-income areas, residents often need help to invest in measures to reduce their losses. Failing to learn these lessons will surely lead to a repeat of the storm’s awful consequences.

Now, 10 years after Katrina struck, six lessons from the book loom even larger.

1. Disasters classified as low-probability, high-consequence events have been increasing in likelihood and severity.

From 1958 to 1972, the number of annual presidential disaster declarations ranged between eight and 20. From 1997 through 2010, they ranged from 50 to 80. The National Oceanic and Atmospheric Administration reported that the number of severe weather events—those that cause $1 billion in damage or more—has increased dramatically, from just two per year in the 1980s to more than 10 per year since 2010. That trend is likely to continue.

2. Most individuals do not purchase insurance until after suffering a severe loss from a disaster—and then often cancel their policies several years later.

Before the 1994 Northridge earthquake in California, relatively few residents had earthquake insurance. After the disaster, more than two-thirds of the homeowners in the area voluntarily purchased coverage. In the years afterward, however, most residents dropped their insurance. Only 10% of those in seismically active areas of California now have earthquake insurance, even though most people know that the likelihood of a severe quake in California today is even higher than it was 20 years ago. Moreover, most homeowners don’t keep their flood insurance policies. An analysis of the National Flood Insurance Program in the U.S. revealed that homeowners typically purchased flood insurance for two to four years but, on average, they owned their homes for about seven years. Of 841,000 new policies bought in 2001, only 73% were still in force one year later, and, after eight years, the number dropped to just 20%. The flood risk, of course, hadn’t changed; dropping the policies exposed homeowners to big losses if another storm hit.

3. Individuals aren’t very good at assessing their risk.

A study of flood risk perception among more than 1,000 homeowners in flood-prone areas of New York City examined how well residents assessed their likelihood of being flooded. Even allowing a 25% error margin around the experts’ estimates, most underestimated the risk of potential damage: a large majority of residents in this flood-prone area (63%) underestimated the average damage a flood would cause to their house. It is likely that “junk science,” including claims that climate change isn’t real, makes it harder for many citizens to assess the risks they face.

4. We need more public-private partnerships to reduce the costs of future disasters.

Many low-income families cannot afford risk-based disaster insurance and often struggle to recover from catastrophes like Katrina. One way to reduce future damage from disasters would be to assist those in hazard-prone areas with some type of means-tested voucher if they invest in loss-reduction measures, such as elevating their home or flood-proofing its foundation. The voucher would cover both a portion of their insurance premium and the annual payments on home-improvement loans taken out to reduce their risk. A program like this would reduce future losses, lower the cost of risk-based insurance and diminish the need for the public sector to provide financial disaster relief to low-income families.

5. Even if we build stronger public-private partnerships, individuals expect government help if they suffer severe damage.

Just before this spring’s torrential Texas rains, there was a huge battle in the Texas state legislature about whether local governments ought to be allowed to engage in advance planning to mitigate the risks from big disasters. Many of the forces trying to stop that effort were among the first to demand help when floodwaters devastated the central part of the state. Even the strongest believers in small government expect help to come quickly in times of trouble. We are a generous country, and we surely don’t want that to change. But jumping in after disasters strike is far more expensive than taking steps in advance to reduce risks. Everyone agrees that the cost curve for disaster relief is going up too fast and that we need to aggressively bend it back down.

6. Hurricanes tend to grab our attention, but other big risks are getting far less of it.

Hurricanes are surely important, but winter storms, floods and earthquakes are hugely damaging, too. Too often, we obsess over the last catastrophe and don’t see clearly the other big risks that threaten us. Moreover, when big disasters happen, it really doesn’t matter what caused the damage. Coast Guard Adm. Thad Allen, who led the recovery effort after Katrina, called the storm “a weapon of mass destruction without criminal intent.” The lesson is that we need to be prepared to help communities bounce back when disasters occur, whatever their cause; to help them reduce the risk of future disasters; and to be alert to those who suffer more than others.

The unrest that rocked Baltimore following Freddie Gray’s death reminds us that Adm. Allen’s lesson reaches broadly. The riots severely damaged some of the city’s poorest neighborhoods and undermined the local economy, with an impact just as serious as if the area had been flooded by a hurricane. Many of the same factors that bring in the government after natural disasters occurred here as well: a disproportionate impact on low-income residents, most of whom played no part in causing the damage; the inability to forecast when a random act, whether a storm surge or a police action, could push a community into a downward spiral; and the inability of residents to take steps before disasters happen to reduce the damage they suffer.

Conclusion

Big risks command a governmental response. Responses after disasters, whatever their cause, cost more than reducing risks in advance. Often, the poor suffer the most. These issues loom even larger in the post-Katrina years.

Natural disasters have become more frequent and more costly. We need to develop a much better strategy for making communities more resilient, especially by investing—in advance—in strategies to reduce losses. We need to pay much more attention to who bears the biggest losses when disasters strike, whatever their cause. We need to think about how to weave integrated partnerships involving both government and the private and nonprofit sectors. And we need to understand that natural disasters aren’t the only ones our communities face.

Sensible strategies will require a team effort involving insurance companies, real estate agents, developers, banks and financial institutions, residents in hazard-prone areas and governments at the local, state and federal levels. Insurance premiums that reflect actual risks, coupled with strong economic incentives to reduce those risks in advance, can surely help. So, too, can stronger building codes and land-use regulations that reduce the exposure to natural disasters.

If we’ve learned anything in the decade since Katrina, it’s that we need to work much harder to understand the risks we face, on all fronts. We need to think about how to reduce those risks and to make sure that the least privileged among us don’t suffer the most. Thinking through these issues after the fact only ensures that we struggle more, pay more and sow the seeds for even more costly efforts in the future.

This article was first published on GovEx and was written with Donald Kettl and Ronald J. Daniels. Kettl is professor of public policy at the University of Maryland and a nonresident senior fellow at the Brookings Institution and the Volcker Alliance. Daniels is the president of Johns Hopkins University.

Riding Out the Storm: the New Models

In our last article, When Nature Calls, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. Those massive losses were a direct result of an industry overconfident in its ability to gauge the frequency and severity of catastrophic events. Insurers were using only history and their limited experience as their guide, resulting in a tragic loss of years’ worth of policyholder surplus.

The turmoil of this period cannot be overstated. Many insurers went insolvent, and those that survived needed substantial capital infusions to continue functioning. Property owners in many states were left with no affordable options for adequate coverage and, in many cases, were forced to go without any coverage at all. The property markets seized up. Without the ability to properly estimate how catastrophic events would affect insured properties, it looked as though the market would remain broken indefinitely.

Luckily, in the mid-1980s, two people on different sides of the country were already working on solutions to this daunting problem. Both had asked themselves: If the problem is a lack of data because of the rarity of recorded historical catastrophic events, could we plug the historical data we do have, along with the mechanics of how catastrophic events behave, into a computer and extrapolate the fuller historical picture we need? Could we then take that data and create a catalog of millions of simulated events occurring over thousands of years and use it to tell us where and how often we can expect events to occur, as well as how severe they could be? The answer was unequivocally yes, but with caveats.

In 1987, Karen Clark, a former insurance executive out of Boston, formed Applied Insurance Research (now AIR Worldwide). She spent much of the 1980s with a team of researchers and programmers designing a system that could estimate where hurricanes would strike the coastal U.S., how often they would strike and ultimately, based on input insurance policy terms and conditions, how much loss an insurer could expect from those events. Simultaneously, on the West Coast at Stanford University, Hemant Shah was completing his graduate degree in engineering and attempting to answer those same questions, only he was focusing on the effects of earthquakes occurring around Los Angeles and San Francisco.

In 1988, Clark released the first commercially available catastrophe model for U.S. hurricanes. Shah released his earthquake model a year later through his company, Risk Management Solutions (RMS). Their models were incredibly slow, limited and, according to many insurers, unnecessary. However, for the first time, loss estimates were being calculated based on actual scientific data of the day along with extrapolated probability and statistics in place of the extremely limited historical data previously used. These new “modeled” loss estimates were not in line with what insurers were used to seeing and certainly could not be justified based on historical record.

Clark’s model generated hurricane storm losses in the tens of billions of dollars while, up until that point, the largest insured loss ever recorded did not even reach $1 billion! Insurers scoffed at the comparison. But all of that quickly changed in August 1992, when Hurricane Andrew struck southern Florida.

Using her hurricane model, Clark estimated that insured losses from Andrew might exceed $13 billion. Even in the face of heavy industry doubt, Clark published her prediction. She was immediately derided and questioned by her peers, the press and virtually everyone around. They said her estimates were unprecedented and far too high. In the end, though, when it turned out that actual losses, as recorded by Property Claims Services, exceeded $15 billion, a virtual catastrophe model feeding frenzy began. Insurers quickly changed their tune and began asking AIR and RMS for model demonstrations. The property insurance market would never be the same.

So what exactly are these revolutionary models, which are now affectionately referred to as “cat models”?

Regardless of the model vendor, every cat model uses the same three components (a simplified sketch of how they fit together appears after this list):

  1. Event Catalog – A catalog of hypothetical stochastic (randomized) events, which informs the modeler about the frequency and severity of catastrophic events. The events in the catalog are based on computerized simulations spanning thousands of years of activity, grounded in recorded historical data, scientific estimation and the physics of how these types of events form and behave. Additionally, for each of these events, associated hazard and local intensity data is available, answering the questions: Where? How big? And how often?
  2. Damage Estimation – The models employ damage functions, which describe mathematically how much damage a given local intensity causes to buildings, including their structural and nonstructural components as well as their contents. The damage functions have been developed by experts in wind and structural engineering and are based on published engineering research and engineering analyses. They have also been validated against the results of extensive damage surveys undertaken in the aftermath of catastrophic events and against billions of dollars of actual industry claims data.
  3. Financial Loss – The financial module calculates the final losses after applying all limits and deductibles on a damaged structure. These losses can be linked back to events with specific probabilities of occurrence. Now an insurer not only knows what it is exposed to, but also what its worst-case scenarios are and how frequently those may occur.
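
To show how the three components fit together, here is a deliberately tiny sketch that runs a single insured building through a toy catalog, a toy damage function and the financial module. The events, the damage curve and the policy terms are all made up for illustration and bear no resemblance to any vendor’s model:

```python
# 1. Event catalog: hypothetical stochastic events with annual rates and a local intensity
#    (e.g. peak gust in mph) at the insured location.
catalog = [
    {"event_id": 1, "annual_rate": 0.020, "intensity": 90},
    {"event_id": 2, "annual_rate": 0.005, "intensity": 120},
    {"event_id": 3, "annual_rate": 0.001, "intensity": 150},
]

# Exposure: one insured building (values and terms are invented).
building_value = 500_000
deductible = 10_000
limit = 400_000

# 2. Damage estimation: a toy damage function mapping local intensity to a mean damage ratio.
def damage_ratio(intensity):
    """Fraction of building value damaged at a given intensity (illustrative curve only)."""
    return min(1.0, max(0.0, (intensity - 70) / 100))

# 3. Financial module: apply deductible and limit to the ground-up loss.
def insured_loss(ground_up):
    return min(max(ground_up - deductible, 0), limit)

event_losses = []
for event in catalog:
    ground_up = building_value * damage_ratio(event["intensity"])
    net = insured_loss(ground_up)
    event_losses.append((event["annual_rate"], net))
    print(f"Event {event['event_id']}: ground-up {ground_up:,.0f}, insured {net:,.0f}")

# Average annual loss = sum of (annual rate x insured loss) over the catalog.
aal = sum(rate * loss for rate, loss in event_losses)
print(f"Average annual loss: {aal:,.0f}")
```

Real models run this loop over millions of events and locations and summarize the results as average annual losses and exceedance-probability curves, which is what lets an insurer attach a frequency to its worst-case scenarios.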

When cat models first became commercially available, industry adoption was slow. It took Hurricane Andrew in 1992 followed by the Northridge earthquake in 1994 to literally and figuratively shake the industry out of its overconfidence. Reinsurers and large insurers were the first to use the models, mostly due to their vast exposure to loss and their ability to afford the high license fees. Over time, however, much of the industry followed suit. Insurers that were unable to afford the models (or who were skeptical of them) could get access to all the available major models via reinsurance brokers that, at that time, also began rolling out suites of analytic solutions around catastrophe model results.

Today, the models are ubiquitous in the industry. Rating agencies require model output based on prescribed model parameters in their supplementary rating questionnaires to understand whether insurers can economically withstand certain levels of catastrophic loss. Reinsurers expect insurers to provide modeled loss output on their submissions when applying for reinsurance. The state of Florida has even set up a commission, the Florida Commission on Hurricane Loss Projection Methodology, which is “an independent body of experts created by the Florida Legislature in 1995 for the purpose of developing standards and reviewing hurricane loss models used in the development of residential property insurance rates and the calculation of probable maximum loss levels.”

Models are available for tropical cyclones, extratropical cyclones, earthquakes, tornados, hail, coastal and inland flooding, tsunamis and even for pandemics and certain types of terrorist attacks. The first models simulated catastrophes for U.S.-based perils, but models now exist for Europe, Australia, Japan, China, South America and elsewhere.

In an effort to get ahead of the potential impact of climate change, all leading model vendors even provide U.S. hurricane event catalogs, which simulate potential catastrophic scenarios under the assumption that the Atlantic Ocean sea-surface temperatures will be warmer on average. And with advancing technologies, open-source platforms are being developed, which will help scores of researchers working globally on catastrophes to become entrepreneurs by allowing “plug and play” use of their models. This is the virtual equivalent of a cat modeling app store.

Catastrophe models have provided the insurance industry with an innovative solution to a major problem. Ironically, the solution itself is now an industry in its own right, as estimated revenues from model licenses exceed $500 million annually (based on conversations with industry experts).

But how have the models performed over time? Have they made a difference in the industry’s ability to manage catastrophic loss? Those are not easy questions to answer, but we believe they have. All the chaos from Hurricane Andrew and the Northridge earthquake taught the industry some invaluable lessons. After the horrific 2004 and 2005 hurricane seasons, in which four hurricanes ravaged Florida in a single year, followed by a year in which two major hurricanes struck the Gulf Coast, one of them Hurricane Katrina, the costliest natural disaster in insurance history, there were no ensuing major insurance company insolvencies. This was a profound success.

The industry withstood a two-year period of major catastrophic losses. Clearly, something had changed. Cat models played a significant role in this transformation. The hurricane losses from 2004 and 2005 were large and painful but did not come as a surprise. Using model results, the industry now had a framework for placing those losses in proper context. In fact, each model vendor has many simulated hurricane events in its catalog that resemble Hurricane Katrina. Insurers knew, from the models, that an event like Katrina could happen and were therefore prepared for that possible, albeit unlikely, outcome.

However, with the universal use of cat models in property insurance come other issues. Are we misusing these tools? Are we becoming overly dependent on them? Are models being treated as a panacea for vexing business and scientific questions instead of as a framework for understanding potential loss?

Next in this series, we will illustrate how modeling results are being used in the industry and how overconfidence in the models could, once again, lead to crisis.

When Nature Calls: the Need for New Models

The Earth is a living, breathing planet, rife with hazards that often hit without warning. Tropical cyclones, extratropical cyclones, earthquakes, tsunamis, tornados and ice storms: Severe elements are part of the planet’s progression. Fortunately, the vast majority of these events are not what we would categorize as “catastrophic.” However, when nature does call, these events can be incredibly destructive.

To help put things into perspective: Nearly 70% (and growing) of the entire world’s population currently lives within 100 miles of a coastline. When a tropical cyclone makes landfall, it’s likely to affect millions of people at one time and cause billions of dollars of damage. Though the physical impact of windstorms or earthquakes is regional, the risk associated with those types of events, including the economic aftermath, is not. Often, the economic repercussions are felt globally, both in the public and private sectors. We need only look back to Hurricane Katrina, Super Storm Sandy and the recent tsunamis in Japan and Indonesia to see what toll a single catastrophe can have on populations, economies and politics.

However, because actual catastrophes are so rare, property insurers are left incredibly under-informed when attempting to underwrite coverage and are vulnerable to catastrophic loss.

Currently, insurers’ standard actuarial practices are unhelpful and often dangerous because, with so little historical data, the likelihood of underpricing rises dramatically. If underwriting teams do not have the tools to know where large events will occur, how often they will occur or how severe they will be, then risk management teams must blindly cap their exposure. Insurers lacking the proper tools cannot fully understand the implications of thousands of claims from a single event, so they must place arbitrary capacity limits on geographic exposures, resulting in unavoidable misallocation of capital.

However, insurers’ perceived success from these arbitrary risk management practices, combined with a fortunate, decades-long pause in catastrophes, created a perfect storm of profit that lulled insurers into a false sense of security. It allowed them to grow to a point where they felt invulnerable to any large event that might come their way. They had been “successful” for decades. They were obviously doing something right, they thought. What could possibly go wrong?

Fast forward to late August 1992. The first of two pivotal events that forced a change in insurers’ attitude toward catastrophes was brewing in the Atlantic. Hurricane Andrew, a Category 5 storm with top wind speeds of 175 mph, slammed into southern Florida and caused, by far, the largest loss to date in the insurance industry’s history, totaling $15 billion in insured losses. As a result, 11 previously stable insurers became insolvent. Those still standing either quickly left the state or started drastically reducing their exposures.

The second influential event was the 1994 earthquake in Northridge, CA. That quake occurred on a previously unknown fault system and, even though it measured only magnitude 6.7, it generated incredibly powerful ground motion, collapsing highways and leveling buildings. Northridge, like Andrew, caused approximately $15 billion in insured losses, and insurers that feared additional losses fled the California market altogether.

Andrew and Northridge were game changers. Across the country, insurers’ capacity became severely reduced for both wind and earthquake perils as a result of those events. Where capacity was in particularly short supply, substantial rate increases were sought. Insurers rethought their strategies and, in all aspects, looked to reduce their catastrophic exposure. In both California and Florida, quasi-state entities were formed to replace the capacity from which the private market was withdrawing. To this day, Citizens Property Insurance in Florida and the California Earthquake Authority, so-called insurers of last resort, both control substantial market shares in their respective states. For many property owners exposed to severe winds or earthquakes, obtaining adequate coverage simply isn’t within financial reach, even 20 years removed from those two seminal events.

How was it possible that insurers could be so exposed? Didn’t they see the obvious possibility that southern Florida could have a large hurricane or that the Los Angeles area was prone to earthquakes?

What seems so obvious now was not so obvious then, because of a lack of data and understanding of the risks. Insurers were writing coverage for wind and earthquake hazards before they even understood the physics of those types of events. In hindsight, we recognize that the strategy was as imprudent as picking numbers from a hat.

What insurers needed was data: data about where catastrophic events are likely to occur, how often they will occur and what the impact will be when they do. The industry at that time simply didn’t have the data or experience so desperately needed to reasonably quantify its exposures and manage catastrophic risk.

Ironically, well before Andrew and Northridge, right under property insurers’ noses, two innovative people on opposite sides of the U.S. had come to the same conclusion and had already begun answering the following questions:

  • Could we use computers to simulate millions of scientifically plausible catastrophic events against a portfolio of properties?
  • Would the output of that kind of simulation be adequate for property insurers to manage their businesses more accurately?
  • Could this data be incorporated into all their key insurance operations – underwriting, claims, marketing, finance and actuarial – to make better decisions?

What emerged from that series of questions would come to revolutionize the insurance industry.