Tag Archives: James Rice

San Andreas — The Real Horror Story

For the past two weeks, the disaster movie “San Andreas” has topped the box office, taking in more than $200 million worldwide. The film stars Dwayne “The Rock” Johnson, who plays a helicopter rescue pilot who, after a series of cataclysmic earthquakes on the San Andreas fault in California, uses his piloting skills to save members of his family. It’s an action-packed plot sure to keep audiences on the edge of their seats.

As insurance professionals who specialize in quantifying catastrophic loss, we can’t help but think of the true disaster that awaits California and other regions in the U.S. when “the big one” actually does occur.

The real horror starts with the fact that 90% of California residents DO NOT maintain earthquake insurance. The “big one” is likely to produce economic losses in either the San Francisco or Los Angeles metropolitan areas in excess of $400 billion. With so little of this potential damage insured, thousands of families will become homeless, and countless businesses will be affected – many permanently. The cost burden for the cleanup, rescue, care and rebuilding will likely be borne by the U.S. taxpayer. The images of the carnage will make the human desperation we saw in both Hurricane Katrina and Superstorm Sandy pale by comparison.

The reasons given for such low take-up of earthquake insurance generally fall into two categories: (1) Earthquake risk is too volatile, too difficult to insure and, as a result, (2) is too expensive for most homeowners.

Is California earthquake risk too volatile to insure?

No.

The earthquake faults in California, including the Hayward, the Calaveras and the San Andreas, are among the most studied and best understood fault systems in the world. The U.S. Geological Survey (USGS) publishes updated frequency and severity estimates for the entire U.S. every six years. This means that potential earthquake losses, while not fully certain, can be estimated in much the same manner as losses from perils such as tornadoes and hurricanes. In fact, the catastrophe (CAT) models agree that, on a dollar-for-dollar exposure basis, losses from landfalling Florida hurricanes are likely more severe and more frequent over time than California earthquakes, yet nearly 100% of Florida homeowners maintain windstorm insurance. If hurricane risk in Florida isn't too volatile for insurers to cover, then earthquake risk in California should follow the same path.

Isn't earthquake coverage expensive?

Again, the answer is a resounding no.

The California Earthquake Authority (CEA), the largest writer of earthquake insurance in the U.S., has a premium calculator that quotes mobile homes, condos, renters and homeowners insurance. For example, a $500,000 single-family home in Orange County, CA, can be insured for about $800 a year, or roughly the same price as a traditional fire insurance policy. To protect a $500,000 home, an $800 investment is hardly considered expensive.

The real question should be: Are California homeowners getting good value? CEA policies carry very high deductibles — typically in the 10% to 15% range — and the price is “expensive” when the high deductibles are considered. As one actuary once explained it to us, “With that kind of deductible, I’ll likely never use the coverage, so like everyone else I’ll cross my fingers and hope the ‘big one’ doesn’t happen in my lifetime.”
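The value problem the actuary describes can be sketched numerically. Below is a minimal illustration of how a percentage-of-coverage deductible erodes a claim; the coverage amount, deductible rates and damage figure are hypothetical, not actual CEA policy terms.

```python
# Hypothetical sketch: how a percentage-of-coverage deductible affects payouts.
# All figures are illustrative assumptions, not actual CEA terms.

def claim_payout(damage: float, coverage: float, deductible_rate: float) -> float:
    """Amount the insurer pays after a percentage-of-coverage deductible."""
    deductible = coverage * deductible_rate
    return max(0.0, min(damage, coverage) - deductible)

coverage = 500_000
for rate in (0.10, 0.15):
    deductible = coverage * rate
    print(f"{rate:.0%} deductible: first ${deductible:,.0f} of damage is out of pocket")
    print(f"  payout on $60,000 of damage: ${claim_payout(60_000, coverage, rate):,.0f}")
```

With a 15% deductible, $60,000 of damage to a $500,000 home yields no payout at all, which is exactly why policyholders perceive little value.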

It’s this lack of value that’s the single biggest impediment preventing millions of California homeowners from purchasing earthquake insurance. It’s also an area that has much room for improvement.

How can we as an industry raise the value proposition of earthquake coverage? Consider the following:

  1. The industry can make better use of technology, especially the CAT models. California is earthquake country, but it's also a massive state. USGS hazard maps show that the high-risk areas mostly follow the San Andreas fault and its branches. There are many lower-risk areas in California, and the CAT models can distinguish the high risk from the low risk. Low-risk exposures should command lower premiums. Even high-risk exposures can be controlled by using the CAT models to manage aggregates and identify the low-risk exposure within the high-risk pools. We expect that CAT models will help the industry get back to Insurance 101: better understanding exposure to loss, segmenting risks, pricing correctly, managing aggregates and creating profitable pools of exposure.
  2. The industry can bundle earthquake risks with other risks to reduce volatility. Earthquake-only writers (and flood-only writers as well) are essentially "all in" on one type of risk, to borrow a poker term. Those writers' results will fluctuate year to year; there will be years with little or no losses, then years with substantial losses. That volatility affects retained losses and also drives up reinsurance prices. Having one source of premium means constantly conducting business on the edge of insolvency. Bundling earthquake risks geographically and with other perils reduces volatility. The Pacific Northwest, Alaska, Hawaii and even areas in the Midwest and the Carolinas are all known to be seismically active. In fact, Oklahoma and Texas are now the new hotbeds of earthquake activity. Demand in those areas exists, so why not package that risk? Reducing volatility will reduce prices and help stabilize the market. We estimate that in parts of California, volatility accounts for as much as 50% of the CEA premium.

Hollywood has produced yet another action-packed film. But to add a touch of realism, Hollywood screenwriters should consider making the leading actor, The Rock, a true hero – an “insurance super hero” who sells affordable earthquake insurance.

Flood Insurance at the Crossroads

News outlets around the country are broadcasting the horrible scenes from Northern Mexico, Texas and Oklahoma of devastating floods that have killed many. Once tallies are completed, property damage will likely be in the billions of dollars. Once again, a disaster raises interest not only in the insidious nature of catastrophic flooding, but in how the insurance industry, in concert with the federal government, more specifically the National Flood Insurance Program (NFIP), tackles – or sidesteps – the vexing problems associated with this peril.

Stories abound of the heart-breaking losses as a result of flooding; homes are whisked away downstream, people’s prized possessions are destroyed and, most importantly, lives are lost. Amid the recent rampant devastation brought on by the Texas floods, what struck us was one simple statement by a local news correspondent on the scene, who described the victims’ plight: “Some residents are lucky; they have flood insurance.” “Lucky” hardly describes the harsh reality these flood victims are experiencing.

Having flood insurance with the NFIP is akin to "jumbo shrimp," the oxymoron made famous by comedian George Carlin. To understand why, consider that property damage to a house comes in three varieties: (1) damage to the actual structure, (2) damage to the contents within the structure and (3) expenses associated with being unable to live in the structure as a direct result of a flood claim and having to live elsewhere. The standard HO3 policy form covers all three of those potential loss sources adequately. That raises the question: What does the NFIP flood policy cover?

Your Building

The maximum the NFIP will pay for the dwelling structure, referred to as Coverage A, is $250,000, even if the dwelling is worth more. There is no amount of additional premium one can pay to get more coverage under this policy. If the dwelling is worth more, the homeowner is forced to purchase another flood insurance policy to cover the amount above $250,000.

Your Contents

The maximum the NFIP will pay for losses to contents, referred to as Coverage C, is $100,000, again even if the homeowner owns more than that amount. The homeowner is still out of luck even if he acquires a second flood policy to cover excess losses to the dwelling, as those types of policies do not generally cover contents. To make matters worse, if the homeowner is "lucky" enough to have a flood insurance policy through the NFIP and suffers a flood loss to contents, the reimbursement will be depreciated. The homeowner will NOT be reimbursed for a new carpet when forced to rip up that damaged 20-year-old carpet; he will receive just enough funds from the claim to buy another 20-year-old carpet. In other words, the claim's valuation basis under the NFIP is the actual cash value (ACV) of the damaged item after applying the policy deductible, not the current replacement cost value (RCV).
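The ACV-versus-RCV distinction can be made concrete with a small sketch. The straight-line depreciation schedule and the 20-year useful life below are assumptions for illustration, not actual NFIP adjustment rules.

```python
# Hypothetical sketch of ACV vs. RCV claim valuation for a contents item.
# Straight-line depreciation and a 20-year useful life are illustrative
# assumptions, not NFIP adjustment practice.

def acv(replacement_cost: float, age_years: float, useful_life_years: float) -> float:
    """Actual cash value: replacement cost less straight-line depreciation."""
    depreciation = min(age_years / useful_life_years, 1.0)
    return replacement_cost * (1.0 - depreciation)

new_carpet_cost = 3_000  # RCV: cost to buy and install a new carpet
payment = acv(new_carpet_cost, age_years=20, useful_life_years=20)
print(f"RCV basis would pay ${new_carpet_cost:,.0f}; ACV basis pays ${payment:,.0f}")
```

Under this schedule, a fully depreciated 20-year-old carpet is reimbursed at nothing, before the deductible is even applied.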

Worse, the homeowner is forced to fill out mountains of paperwork to detail what was damaged and account for when the item was purchased and the cost. Then there are the contents in basements, which can represent a whole separate problem. Try filling out the paperwork a few hundred times over for all a household’s valuables, knowing that, regardless of whether those items are meticulously itemized, the homeowner STILL will not be paid the cost to replace them.

Loss of Use

Should a homeowner have a flood loss and need to live elsewhere while the damage is being repaired, expenses for Loss of Use, Coverage D, are entirely borne by the homeowner. It doesn't matter whether it's a small amount of damage requiring a one-day hotel stay or extensive damage requiring a new home; the homeowner is responsible for paying all living expenses out of pocket.

If the NFIP policyholder doesn't already feel "lucky" enough, there are also the lingering questions surrounding the NFIP's solvency. Both Hurricane Katrina and Superstorm Sandy left the NFIP with few funds to pay claims, and even the homeowner fortunate enough to have flood insurance through the NFIP may have to wait for payment, oftentimes for months.

By now, you get the point. Flood insurance through the NFIP really is not insurance; it’s something else altogether. For starters:

  1. The NFIP is not risk-based. Two homes with very dissimilar flood exposure could pay the exact same rate.
  2. The NFIP has done little to discourage risk-taking; it subsidizes low rates for homes that have had multiple claims payments.
  3. The policies do not meet homeowners' needs. The coverage gaps are large, and the headaches of getting paid are quasi-medieval – certainly not consumer-friendly.

The industry can and must do better. All the tools and resources needed to adequately price and manage risk are present. New models and maps stand ready to evaluate risk, estimate loss costs and aggregate exposure. Abundant excess capital is available, much of it standing on the sidelines looking to jump into the game. What better source of risk-based premium is there than the inland flood exposures now monopolized by the NFIP and, ultimately, the taxpayers? This is the opportunity for growth, innovation and commonsense risk management that the industry has been starving for, indeed praying for, over the past 30-plus years.

The industry must now ask itself: Does it want to sustain its legacy groupthink by maintaining the status quo, or does it want to remain relevant, now and in the future, and be a part of the solution?

Model Errors in Disaster Planning

“All models are wrong; some are useful.” – George Box

We have spent three articles (article 1, article 2, article 3) explaining how catastrophe models provide a tool for much-needed innovation in the global insurance industry. Catastrophe models have compensated for the industry's lack of loss experience and let insurers properly price and underwrite risks, manage portfolios, allocate capital and design risk management strategies. Yet for all the practical benefits CAT models have brought to the industry, product innovation has stalled.

The halt in progress is a function of what models are and how they work. In fairness to those who do not put as much stock in the models as a useful tool, it is important to speak of the models’ limitations and where the next wave of innovation needs to come from.

Model Design

Models are sets of simplified instructions used to explain phenomena and provide insight into future events (for CAT models, estimating future catastrophic losses). We humans start using models at very early ages. No one would confuse a model airplane with a real one; however, if a parent wanted to simplify the laws of physics to explain to a child how planes fly, a model airplane is a better tool than, say, a physics book or computer-aided design software. Conversely, if you are a college student studying engineering or aerodynamics, the reverse is true. In each case, we are using a tool, models of flight in this instance, to explain how things work and to lend insight into what could happen based on historical data, so that we can merge theory and practice into something useful. It is the constant iteration between theory and practice that allows an airplane manufacturer to build a new fighter jet, for instance. No manufacturer would foolishly build an airplane based on models alone, no matter how scientifically advanced those models are, but those models are incredibly useful in guiding designers toward experimental prototypes. We build models, test them, update them with new knowledge, test them again and repeat the process until we achieve the desired results.

The design and use of CAT models follow this exact pattern. The first CAT models estimated loss by first calculating total industry losses and then proportionally allocating losses to insurers based on assumptions about market share. That evolved into calculating loss estimates for specific locations and addresses. As technology advanced into the 1990s, model developers harnessed that computing power to develop simulation programs that could analyze more data, faster. The model vendors then added more models to cover more global peril regions. Today's CAT models can even estimate construction type, height and building age if an insurer does not readily have that information.

As catastrophic events occur, modelers routinely compare the actual event losses with the models and measure how well or how poorly the models performed. Using actual incurred loss data helps calibrate the models and also enables modelers to better understand the areas in which improvements must be made to make them more resilient.

However, for all the effort and resources put into improving the models (model vendors spend millions of dollars each year on model research, development, improvement and quality assurance), there is still much work to be done to make them even more useful to the industry. In fact, virtually every model component has its limitations. A CAT model’s hazard module is a good example.

The hazard module takes into account the frequency and severity of potential disasters. Following the calamitous 2004 and 2005 U.S. hurricane seasons, the chief model vendors felt pressure to amend their base catalogs to reflect what appeared to be a new high-risk era, that is, to account for higher-than-average sea surface temperatures. These model changes dramatically affected reinsurance purchase decisions and account pricing. And yet, little followed. What was assumed to be the new normal actually turned into one of the quietest hurricane periods on record.

Another example is the magnitude-9.0 2011 Great Tōhoku Earthquake in Japan. The models had no events even close to this monster earthquake in their catalogs. Every model clearly got it wrong, and, as a result, model vendors scrambled to fix this "error." Have the errors been corrected? Perhaps in this circumstance, but what other significant model errors exist that have yet to be corrected?

CAT model peer reviewers have also taken issue with the event catalogs used to quantify catastrophic loss. For example, a problem for insurers is answering questions such as: What is the probability of a Category 5 hurricane making landfall in New York City? Of course, no one can provide an answer with certainty. No one doubts the level of damage an event of that intensity would bring to New York City (Superstorm Sandy was not even a hurricane at landfall in 2012, yet it caused tens of billions of dollars in insured damages). The critical question for insurers is: Is this event rare enough that it can be ignored, or do we need to prepare for an event of that magnitude?

To place this in context, the Category 3 1938 Long Island Express event would probably cause more than $50 billion in insured losses today, and that event did not even strike New York City. If a Category 5 hurricane hitting New York City were estimated to cause $100 billion in insured losses, then knowing whether this was a 1-in-10,000-year possibility or a 1-in-100-year possibility could mean the difference between solvency and insolvency for many carriers. If that type of storm were closer to a 1-in-100-year probability, then insurers would have an obligation to manage their operations around that possibility; the consequences are too grave otherwise.
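The stakes in that return-period question can be made concrete with simple arithmetic. The figures below are the article's own hypotheticals, not modeled estimates.

```python
# Back-of-the-envelope expected annual loss (EAL) for the same $100 billion
# event under two return-period assumptions. Figures are illustrative.

def expected_annual_loss(event_loss: float, return_period_years: float) -> float:
    """Annualized loss contribution of a single event scenario."""
    return event_loss / return_period_years

loss = 100e9  # $100 billion hypothetical insured loss
for rp in (100, 10_000):
    print(f"1-in-{rp:,}-year event: EAL ${expected_annual_loss(loss, rp):,.0f}")
```

The same event is worth $1 billion a year of expected loss at a 1-in-100-year frequency but only $10 million a year at 1-in-10,000, a hundredfold difference in the capital and reinsurance it demands.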

Taking into account the various chances of a Category 5 directly striking New York City, what does that all mean? It means that adjustments in underwriting, pricing, accumulated capacity in that region and, of course, reinsurance design all need to be considered — or reconsidered, depending on an insurer’s present position relative to its risk appetite. Knowing the true probability is not possible at this time; we need more time and research to understand that. Unfortunately for insurers, rating agencies and regulators, we live in the present, and sole reliance on the models to provide “answers” is not enough.

Compounding this problem is that, regardless of the peril, errors exist in every model's event catalog. These errors cannot be avoided entirely, and the problem worsens where the paucity of historical records and scientific experiments limits the industry's ability to inch closer to certainty.

Earthquake models still lie beyond a comfortable reach of predictability. Some of the largest and most consequential earthquakes in U.S. history have occurred near New Madrid, MO. Scientists are still wrestling with the mechanics of that fault system. Thus, managing a portfolio of properties solely dependent on CAT model output is foolhardy at best. There is too much financial consequence from phenomena that scientists still do not understand.

Modelers also need to continuously assess property vulnerability across the various building stock types and current building codes. Assessing this with imperfect data and across differing codes and regulations is difficult. That is largely why so-called "vulnerability curves" are often revised after spates of significant events. Understandably, each event yields additional data points, which must be taken into account in future model versions. Damage surveys following Hurricane Ike, for example, showed that the models underestimated contents vulnerability within large high-rises because of water damage caused by wind-driven rain.

As previously described, a model is a set of simplified instructions that makes various assumptions based on the input provided. Models, therefore, are subject to the garbage-in, garbage-out problem. As insurers adopt new models, they often need to cajole their legacy IT systems into providing the required data to run them. For many insurers, this is an expensive and resource-intensive process, often taking years.

Data Quality’s Importance

Currently, the quality of industry data used in tools such as CAT models is generally considered poor. Many insurers feed unchecked data into the models. For example, it is not uncommon that building construction type, occupancy, height and age, not to mention a property's actual physical address, are unknown! For each property whose primary and secondary risk characteristics are missing, the models must make assumptions about those missing inputs, even about where the property is located. This increases model uncertainty, which can lead to an inaccurate assessment of an insurer's risk exposure.

CAT modeling results are largely ineffective without quality data collection. For insurers, the key risk is that poor data quality leads to a misunderstanding of their exposure to potential catastrophic events. This, in turn, affects portfolio management, possibly leading to unwanted exposure concentrations and unexpected losses that hit both insurers' and their reinsurers' balance sheets. If model results are skewed by poor data quality, the consequences can include incorrect assumptions, inadequate capitalization and the failure to purchase sufficient reinsurance. Model results based on complete and accurate data ensure greater output certainty and credibility.

The Future

Models are designed and built on information from the past. Using them is like trying to drive a car while looking only in the rearview mirror; nonetheless, catastrophes, whether natural or man-made, are inevitable, and having a robust means to quantify them is critical to the global insurance marketplace and lifecycle.

Or is it?

Models, and CAT models in particular, provide a credible industry tool to simulate the future based on the past. But is it possible to simulate the future based on perceived trends and worst-case scenarios? Every CAT model has its imperfections, which must be taken into account, especially when employing modeling best practices. All key stakeholders in the global insurance market, from retail and wholesale brokers to reinsurance intermediaries, from insurers to reinsurers, to the capital markets and beyond, must understand the extent of those imperfections, how error-sensitive the models can be and how those imperfections must be accounted for to gain the most accurate insight into individual risks or entire portfolios. A small difference in assumptions can mean a large difference in results.

The next wave of innovation in property insurance will come from going back to insurance basics: managing risk for the customer. Despite model limitations, creative and innovative entrepreneurs will use models to bundle complex packages of risks that are both profitable to the insurer and economical to the consumer. Consumers wanting to protect themselves from earthquake risks in California, hurricane risks in Florida and flood risks on the coast and inland will have more options. Insurers looking to deploy capital and find new avenues of growth will use CAT models to simulate millions of scenarios, custom-building portfolios that optimize their capacity and creating innovative product features that distinguish their products from competitors'. Intermediaries will use the models to educate clients and craft effective risk management programs that maximize their clients' profitability.

For all the benefit CAT models have provided the industry over the past 25 years, we are only driving the benefit down to the consumer in marginal ways. The successful property insurers of the future will be the ones who close the circle and use the models to create products that make the transfer of earthquake, hurricane and other catastrophic risks available and affordable.

In our next article, we will examine how we can use CAT models to solve some of the critical insurance problems we face.

How CAT Models Lead to Soft Prices

In our first article in this series, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. In the second article, we looked at how, beginning in the mid-1980s, people began developing models that could prevent recurrences of those staggering losses. In this article, we look at how modeling results are being used in the industry.

 

Insurance is a unique business. In most other businesses, the costs of operation are either known or can be fairly estimated. The insurance industry, however, needs to estimate expenses for things that are extremely rare or have never happened before: the damage to a bridge in New York City from a flood, the theft of a precious heirloom from your home, a fire at a factory, or even Jennifer Lopez injuring her backside. No other industry has to make so many critical business decisions as blindly as the insurance industry. Even when an insurer can accurately estimate a loss to a single policyholder, without the ability to estimate multiple losses all occurring simultaneously, which is what happens during natural catastrophes, the insurer is still operating blindly. Fortunately, the introduction of CAT models greatly enhances the insurer's ability to estimate both the losses associated with a single policyholder and concurrent claims from a single occurrence.

When making decisions about which risks to insure, how much to insure them for and how much premium is required to accept the risk profitably, there are essentially two metrics that provide the needed clarity. Whether you are a portfolio manager managing the cumulative risk of a large line of business, an underwriter receiving a submission from a broker to insure a factory or an actuary responsible for pricing exposure, what these stakeholders minimally need to know is:

  1. On average, what will potential future losses look like?
  2. On average, what are the reasonable worst-case loss scenarios, or the probable maximum loss (PML)?
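The two metrics above can be sketched with a toy Monte Carlo simulation. The event frequencies and severities below are invented purely for illustration and bear no relation to any real portfolio.

```python
# Toy Monte Carlo sketch of the two key metrics: average annual loss (AAL)
# is the mean of simulated annual losses; the probable maximum loss (PML)
# is a high percentile (here the 1-in-250-year level, i.e. the 99.6th).
# Frequencies and severities are invented for illustration.
import random

random.seed(42)

def simulate_annual_loss() -> float:
    """One simulated year: binomial event count (mean one event per year),
    lognormal event severities."""
    n_events = sum(random.random() < 0.2 for _ in range(5))
    return sum(random.lognormvariate(16, 1.5) for _ in range(n_events))

years = sorted(simulate_annual_loss() for _ in range(100_000))
aal = sum(years) / len(years)
pml_250 = years[int(len(years) * (1 - 1 / 250))]
print(f"AAL: ${aal:,.0f}   1-in-250-year PML: ${pml_250:,.0f}")
```

The same simulated year set answers both questions at once: the mean drives pricing, while the tail percentile drives capacity and reinsurance decisions.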

Those two metrics alone supply enough information for an insurer to make critical business decisions in these key areas:

  • Risk selection
  • Risk-based pricing
  • Capacity allocation
  • Reinsurance program design

Risk Selection

Risk selection includes an underwriter’s determination of the class (such as preferred, standard or substandard) to which a particular risk is deemed to belong, its acceptance or rejection and (if accepted) the premium.

Consider two homes: a $1 million wood frame home and a $1 million brick home, both located in Los Angeles. Which home is riskier to the insurer? Before the advent of catastrophe models, the determination was based on historical data and, essentially, opinion. Insurers could have hired engineers who would have told them that brick homes are much more susceptible to damage than wood frame homes under earthquake stresses. But it was not until the introduction of the models that insurers could finally quantify how much financial risk they were exposed to. They discovered, to their shock, that on average brick homes are four times riskier than wood frame homes and twice as likely to sustain a complete loss (full collapse). This was not well-known among insurers.

Knowing how two or more different risks (or groups of risks) behave at an absolute and relational level provides a foundation to insurers to intelligently set underwriting guidelines, which work toward their strengths and excludes risks they do not or cannot absorb, based on their risk appetite.

Risk-Based Pricing

Insurance is rapidly becoming more of a commodity, with customers often choosing their insurer purely on the basis of price. As a result, accurate ratemaking has become more important than ever. In fact, a Towers Perrin survey found that 96% of insurers consider sophisticated rating and pricing to be either essential or very important.

Multiple factors go into determining premium rates, and, as competition increases, insurers are introducing innovative rate structures. The critical question in ratemaking is: What risk factors or variables are important for predicting the likelihood, frequency and severity of a loss? Although there are many obvious risk factors that affect rates, subtle and non-intuitive relationships can exist among variables that are difficult, if not impossible, to identify without applying more sophisticated analyses.

Returning to our example of the two homes in Los Angeles, catastrophe models tell us two very important things: roughly what the premium to cover earthquake loss should be, and that the premium for masonry homes should be approximately four times that for wood frame homes.
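That combination of an absolute price level and a relational adjustment can be expressed in a few lines. The base rate and the 4.0 relativity below are illustrative assumptions, not actual modeled values.

```python
# Sketch of absolute plus relational pricing: a base rate sets the price
# level, and a construction-class relativity scales it. The base rate and
# the 4.0 masonry relativity are assumptions for illustration.

def earthquake_premium(home_value: float, base_rate_per_1000: float,
                       relativity: float = 1.0) -> float:
    """Premium = value x rate per $1,000, scaled by a class relativity."""
    return home_value / 1000 * base_rate_per_1000 * relativity

value = 1_000_000
wood = earthquake_premium(value, base_rate_per_1000=1.6)                 # wood frame
brick = earthquake_premium(value, base_rate_per_1000=1.6, relativity=4.0)  # masonry
print(f"wood frame: ${wood:,.0f}   brick: ${brick:,.0f}")
```

The point is structural: the model supplies both the level (the base rate) and the relativity, so the two homes are priced consistently with the risk they actually carry.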

The concept of absolute and relational pricing using catastrophe models is revolutionary. Many in the industry may balk at our term “revolutionary,” but insurers using the models to establish appropriate price levels for property exposures have a massive advantage over public entities such as the California Earthquake Authority (CEA) and the National Flood Insurance Program (NFIP) that do not adhere to risk-based pricing.

The NFIP and CEA, like most quasi-government insurance entities, differ in their pricing from private insurers along multiple dimensions, mostly because of constraints imposed by law. Innovative insurers recognize that there are literally billions of valuable premium dollars at stake for risks for which the CEA, the NFIP and similar programs significantly overcharge – again, because of constraints that forbid them from being competitive.

Thus, using average and extreme modeled loss estimates not only ensures that insurers are managing their portfolios effectively, but enables insurers, especially those that tend to have more robust risk appetites, to identify underserved markets and seize valuable market share. From a risk perspective, a return on investment can be calculated via catastrophe models.

It is incumbent upon insurers to identify the risks they don't wish to underwrite and to answer questions such as: Are wood frame houses less expensive to insure than homes made of joisted masonry? What is the relationship between claims severity and a particular home's loss history? Traditional univariate pricing methodologies are outdated; insurers have turned to multivariate statistical pricing techniques to better understand the relationships among multiple risk variables. With that in mind, insurers also need to consider other factors, such as marketing costs, conversion rates and customer buying behavior, to name a few, to price risks accurately. Gone are the days when unsophisticated pricing and risk selection methodologies could be employed. Innovative insurers today cross industry lines, paying more and more attention to how others manage data and assign value to risk.

Capacity Allocation

In the (re)insurance industry, (re)insurers accept risks only if those risks are within the capacity limits they have established based on their risk appetites. "Capacity" means the maximum limit of liability offered by an insurer during a defined period. Oftentimes, especially when it comes to natural catastrophes, some risks have much greater accumulation potential, typically as a result of dependencies between individual risks.

Take houses and automobiles. A high concentration of those exposure types may very well be affected by the same catastrophic event, whether a hurricane, severe thunderstorm, earthquake or something else. That concentration could put a reinsurer (or insurer) in the unenviable position of being overly exposed to a single catastrophic occurrence. Having a means to adequately control exposure to accumulation is critical in the risk management process. Capacity allocation enables companies to allocate valuable risk capacity to specific perils within specific markets and accumulation zones to minimize their exposure, and CAT models allow insurers to measure how capacity is being used and how efficiently it is being deployed.

Reinsurance Program Design

With the advent of CAT models, insurers can simulate different combinations of treaties and programs to find the right fit, optimizing their risk and return. Before CAT models, estimating the probability of attachment of one layer over another, or the average annual losses of a per-risk treaty covering millions of exposures, came down to gut instinct. The models estimate the risk and can calculate millions of potential claims transactions, which would be nearly impossible without computers and simulation.
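The two quantities named above, probability of attachment and average annual loss, fall straight out of a simulated loss set. The sketch below stands a heavy-tailed random draw in for a CAT model’s year loss table; the layer terms and distribution parameters are invented for illustration.

```python
# Sketch: pricing an excess-of-loss layer from simulated annual losses.
# The Pareto draw is a stand-in for real CAT model output; all figures
# are hypothetical.
import random

random.seed(42)

# Simulated annual gross losses (in $M) for 10,000 "years."
annual_losses = [random.paretovariate(1.5) * 5 for _ in range(10_000)]

ATTACH, LIMIT = 50.0, 100.0  # a "100 xs 50" layer, in $M

def layer_recovery(loss, attach=ATTACH, limit=LIMIT):
    """Reinsurer's payout for one year under the excess-of-loss layer."""
    return min(max(loss - attach, 0.0), limit)

recoveries = [layer_recovery(x) for x in annual_losses]
prob_attach = sum(r > 0 for r in recoveries) / len(recoveries)
layer_aal = sum(recoveries) / len(recoveries)

print(f"P(attachment) ~ {prob_attach:.2%}, layer AAL ~ {layer_aal:.2f}M")
```

Re-running the same loop with different attachment points and limits is exactly the what-if exercise that lets an insurer compare candidate program structures before negotiating terms.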

It is now well-known how soft the current reinsurance market is. Alternative capital has been a major driving force, but we consider the maturation of CAT models as having an equally important role in this trend.

First, insurers using CAT models to underwrite, price and manage risk can now intelligently present their exposure and effectively defend their position on terms and conditions. Gone are the days when reinsurers would have the upper hand in negotiations; CAT models have leveled the playing field for insurers.

Second, alternative capital could not have the impact it is currently having without the language of finance, and CAT models speak that language. The models provide the statistics financial firms need to allocate capital in this area. Risk transfer becomes far more fungible once transferor and transferee share a common view of the probability of loss. No CAT models, no loss estimates. No loss estimates, no alternative capital. No alternative capital, no soft market.

A Needed Balance

By now, and for good reason, the industry has placed much of its trust in CAT models to selectively manage portfolios to minimize probable maximum loss (PML) potential. Insurers and reinsurers alike need the ability to quantify and identify peak exposure areas, and the models stand ready to help understand and manage portfolios as part of a carrier’s risk management process. However, a balance between the need to bear risk and the need to preserve a carrier’s financial integrity in the face of potential catastrophic loss is essential. The idea is to pursue a blend of internal and external solutions to ensure two key capabilities:

  1. The ability to identify, quantify and estimate the chances of an event occurring and the extent of likely losses, and
  2. The ability to set adequate rates.

Once companies understand their catastrophe potential, they can formulate underwriting guidelines that act as control valves on catastrophe loss potential and, most importantly, identify exposures, even in high-risk regions, that still meet underwriting criteria for a given risk appetite. Underwriting criteria for catastrophe-prone exposure should serve as a set of benchmarks, not as a blind gatekeeper.

In our next article, we examine two factors that could derail the progress made by CAT models in the insurance industry. Model uncertainty and poor data quality threaten to raise skepticism about the accuracy of the models, and that skepticism could inhibit further progress in model development.

Riding Out the Storm: the New Models

In our last article, When Nature Calls, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. Those massive losses were a direct result of an industry overconfident in its ability to gauge the frequency and severity of catastrophic events. Insurers were using only history and their limited experience as their guide, resulting in a tragic loss of years’ worth of policyholder surplus.

The turmoil of this period cannot be overstated. Many insurers became insolvent, and those that survived needed substantial capital infusions to continue functioning. Property owners in many states were left with no affordable options for adequate coverage and, in many cases, were forced to go without any coverage at all. The property markets seized up. Without the ability to properly estimate how catastrophic events would affect insured properties, it looked as though the market would remain broken indefinitely.

Luckily, in the mid-1980s, two people on different sides of the country were already working on solutions to this daunting problem. Both had asked themselves: If the problem is a lack of data because of the rarity of recorded historical catastrophic events, could we plug the historical data available now, along with mechanisms for how catastrophic events behave, into a computer and extrapolate the full picture of the historical data needed? Could we then take that data and create a catalog of millions of simulated events occurring over thousands of years and use it to tell us where and how often we can expect events to occur, as well as how severe they could be? The answer was unequivocally yes, but with caveats.

In 1987, Karen Clark, a former insurance executive out of Boston, formed Applied Insurance Research (now AIR Worldwide). She spent much of the 1980s with a team of researchers and programmers designing a system that could estimate where hurricanes would strike the coastal U.S., how often they would strike and ultimately, based on input insurance policy terms and conditions, how much loss an insurer could expect from those events. Simultaneously, on the West Coast at Stanford University, Hemant Shah was completing his graduate degree in engineering and attempting to answer those same questions, only he was focusing on the effects of earthquakes occurring around Los Angeles and San Francisco.

In 1988, Clark released the first commercially available catastrophe model for U.S. hurricanes. Shah released his earthquake model a year later through his company, Risk Management Solutions (RMS). Their models were incredibly slow, limited and, according to many insurers, unnecessary. However, for the first time, loss estimates were being calculated based on actual scientific data of the day along with extrapolated probability and statistics in place of the extremely limited historical data previously used. These new “modeled” loss estimates were not in line with what insurers were used to seeing and certainly could not be justified based on historical record.

Clark’s model generated hurricane storm losses in the tens of billions of dollars while, up until that point, the largest insured loss ever recorded did not even reach $1 billion! Insurers scoffed at the comparison. But all of that quickly changed in August 1992, when Hurricane Andrew struck southern Florida.

Using her hurricane model, Clark estimated that insured losses from Andrew might exceed $13 billion. Even in the face of heavy industry doubt, Clark published her prediction. She was immediately derided and questioned by her peers, the press and virtually everyone else. They said her estimates were unprecedented and far too high. In the end, though, when actual losses, as recorded by Property Claim Services, exceeded $15 billion, a virtual catastrophe model feeding frenzy began. Insurers quickly changed their tune and began asking AIR and RMS for model demonstrations. The property insurance market would never be the same.

So what exactly are these revolutionary models, now affectionately referred to as “cat models”?

Regardless of the model vendor, every cat model uses the same three components:

  1. Event Catalog – A catalog of hypothetical stochastic (randomized) events, which informs the modeler about the frequency and severity of catastrophic events. The events contained in the catalog are based on millions of years of computerized simulations using recorded historical data, scientific estimation and the physics of how these types of events are formed and behave. Additionally, for each of these events, associated hazard and local intensity data is available, which answers the questions: Where? How big? And how often?
  2. Damage Estimation – The models employ damage functions that describe the mathematical relationship between the local intensity of an event and the resulting damage to exposed buildings, including their structural and nonstructural components as well as their contents. The damage functions have been developed by experts in wind and structural engineering and are based on published engineering research and analyses. They have also been validated against extensive damage surveys undertaken in the aftermath of catastrophic events and against billions of dollars of actual industry claims data.
  3. Financial Loss – The financial module calculates the final losses after applying all limits and deductibles on a damaged structure. These losses can be linked back to events with specific probabilities of occurrence. Now an insurer not only knows what it is exposed to, but also what its worst-case scenarios are and how frequently those may occur.
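The financial module’s core step can be sketched directly: apply policy terms to each event’s modeled ground-up damage, then weight each event’s insured loss by its annual rate to get an average annual loss. Every figure below (event IDs, rates, damages, terms) is invented for illustration, not real model output.

```python
# Sketch of a financial module step: ground-up damage -> insured loss,
# then rate-weighted average annual loss (AAL). All values hypothetical.

def insured_loss(ground_up, deductible, limit):
    """Net loss to the insurer after deductible and limit are applied."""
    return min(max(ground_up - deductible, 0.0), limit)

# (event_id, annual rate of occurrence, modeled ground-up damage in $)
events = [
    ("EQ-0001", 0.002, 950_000),
    ("HU-0042", 0.010, 300_000),
    ("HU-0107", 0.050, 40_000),
]

DEDUCTIBLE, LIMIT = 25_000, 500_000

losses = [(eid, rate, insured_loss(dmg, DEDUCTIBLE, LIMIT))
          for eid, rate, dmg in events]

# AAL: each event's insured loss weighted by how often it occurs per year.
aal = sum(rate * loss for _, rate, loss in losses)
print(losses)
print(f"AAL = ${aal:,.0f}")
```

Because each loss stays linked to an event with a known annual rate, the same table also yields exceedance probabilities: sort losses descending and accumulate the rates to see how often a loss of a given size or worse should occur.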


When cat models first became commercially available, industry adoption was slow. It took Hurricane Andrew in 1992, followed by the Northridge earthquake in 1994, to literally and figuratively shake the industry out of its overconfidence. Reinsurers and large insurers were the first to use the models, mostly because of their vast exposure to loss and their ability to afford the high license fees. Over time, however, much of the industry followed suit. Insurers that were unable to afford the models (or that were skeptical of them) could get access to all the major models via reinsurance brokers, which at that time also began rolling out suites of analytic solutions built around catastrophe model results.

Today, the models are ubiquitous in the industry. Rating agencies require model output based on prescribed model parameters in their supplementary rating questionnaires to understand whether insurers can economically withstand certain levels of catastrophic loss. Reinsurers expect insurers to provide modeled loss output on their submissions when applying for reinsurance. The state of Florida has even set up the Florida Commission on Hurricane Loss Projection Methodology, “an independent body of experts created by the Florida Legislature in 1995 for the purpose of developing standards and reviewing hurricane loss models used in the development of residential property insurance rates and the calculation of probable maximum loss levels.”

Models are available for tropical cyclones, extratropical cyclones, earthquakes, tornados, hail, coastal and inland flooding, tsunamis and even for pandemics and certain types of terrorist attacks. The first models simulated catastrophes for U.S.-based perils, but models now exist globally for countries in Europe, Australia, Japan, China and South America.

In an effort to get ahead of the potential impact of climate change, all leading model vendors even provide U.S. hurricane event catalogs, which simulate potential catastrophic scenarios under the assumption that the Atlantic Ocean sea-surface temperatures will be warmer on average. And with advancing technologies, open-source platforms are being developed, which will help scores of researchers working globally on catastrophes to become entrepreneurs by allowing “plug and play” use of their models. This is the virtual equivalent of a cat modeling app store.

Catastrophe models have provided the insurance industry with an innovative solution to a major problem. Ironically, the solution itself is now an industry in its own right, as estimated revenues from model licenses now annually exceed $500 million (based on conversations with industry experts).

But how have the models performed over time? Have they made a difference in the industry’s ability to manage catastrophic loss? Those are not easy questions to answer, but we believe they have. The chaos of Hurricane Andrew and the Northridge earthquake taught the industry invaluable lessons. After the horrific 2004 and 2005 hurricane seasons, in which four major hurricanes ravaged Florida in a single year, followed by a year in which two major hurricanes struck the Gulf Coast, one of them Hurricane Katrina, the costliest natural disaster in U.S. history, there were no ensuing major insurance company insolvencies. This was a profound success.

The industry withstood a two-year period of major catastrophic losses. Clearly, something had changed. Cat models played a significant role in this transformation. The hurricane losses from 2004 and 2005 were large and painful, but they did not come as a surprise. Using model results, the industry now had a framework to place those losses in proper context. In fact, each model vendor has many simulated hurricane events in its catalog that resemble Hurricane Katrina. Insurers knew, from the models, that a Katrina could happen and were therefore prepared for that possible, albeit unlikely, outcome.

However, with the universal use of cat models in property insurance comes other issues. Are we misusing these tools? Are we becoming overly dependent on them? Are models being treated as a panacea to vexing business and scientific questions instead of as the simple framework for understanding potential loss?

Next in this series, we will illustrate how modeling results are being used in the industry and how overconfidence in the models could, once again, lead to crisis.