Tag Archives: Catastrophe models

Heading Toward a Data Disaster

On July 6, 1988, the Piper Alpha oil platform exploded, killing 167 people. Much of the insurance sat within what became known as the London Market Excess of Loss (LMX) Spiral, a tightly knit and badly managed web of excess-of-loss reinsurance policies. Losses cascaded up and around the market, and the same insurers were hit again and again. It took 14 years for all claims to be settled. The cost exceeded $16 billion, more than 10 times the initial estimate.

The late 1980s were a bad time to be in insurance. Piper Alpha added to losses already hitting the market from asbestos, storms in Europe and an earthquake in San Francisco. During this period, more than 34,000 underwriters and Lloyd's Names paid out between £100,000 and £5 million. Many were ruined.

Never the same again

In the last 30 years, regulation has tightened, and analytics have improved significantly. Since 1970, 19 of the largest 20 catastrophes were caused by natural hazards. Only one, the World Trade Center attack in 2001, was man-made. No insurance companies failed as a result of any of these events. Earnings may have been depressed and capital taken a hit, but reinsurance protections behaved as expected.

But this recent ability to absorb the losses from physically destructive events doesn’t mean that catastrophes will never again be potentially fatal for insurers. New threats are emerging. The modeling tools of the last couple of decades are no longer sufficient.

Lumpy losses

Insurance losses are not evenly distributed across the market. Every year, one or more companies still suffer losses out of all proportion to their market share. They experience a “private catastrophe.” The company may survive, but the leaders of the business frequently experience unexpected and unwanted career changes.

See also: Data Prefill: Now You See It, Now You Don’t  

In the 1980s, companies suffered massive losses because the insurance market failed to appreciate the increasing connectivity of its own exposures and lacked the data and the tools to track this growing risk. Today, all companies have the ability to control their exposures to loss from the physical assets they insure. Managing the impact of losses to intangible assets is much harder.

A new class of modelers

The ability to analyze and manage natural catastrophe risk led to the emergence of a handful of successful natural catastrophe modeling companies over the last 20 years. A similar opportunity now exists for a new class of companies to emerge that can build the models to assess the new “man-made” risks.

Risk exposure is increasingly moving toward intangible assets. According to CB Insights, only 20% of the value of S&P 500 companies today is made up of physical assets; 40 years ago, it was 80%. These non-physical assets, such as reputation, supply networks, intellectual property and cyber exposure, are more ephemeral.

Major improvements in safety procedures, risk assessment and awareness of the destructive potential of insurance spirals make a repeat of the type of loss seen after Piper Alpha extremely unlikely. The next major catastrophic losses for the insurance market are unlikely to be physical. They will occur because of a lack of understanding of the full reach, and contagion, of intangible losses.

The most successful new analytic companies of the next two decades will include those that are key to helping insurers measure and manage their own exposures to these new classes of risk.

The big data deception

Vast amounts of data are becoming available to insurers, both free open data and tightly held transactional data. Smart use of this data is expected to radically change how insurers operate and to create opportunities for new entrants into the market. Thousands of companies have emerged in the last few years offering products to help insurers make better decisions about risk selection, price more accurately, serve clients better, settle claims faster and reduce fraud.

But too much data, poorly managed, blurs critical signals. It increases the risk of loss. In less than 20 years, the industry has moved from being blinded by lack of data to being dazzled by the glare of too much.

Data governance processes and compliance officers became widespread in banks after the 2008 credit crunch. Most major insurance companies have risk committees, and all are required to maintain a risk register. Yet ensuring that data management processes are of the highest quality is not always a board-level priority.

Looking at the new companies attracting attention and funding, very few appear to be offering solutions to help insurers solve this problem. Some, such as CyberCube, offer specific solutions to manage exposure to cyber risk across a portfolio. Others, such as Atticus DQPro, are quietly deploying tools across London and the U.S. to help insurers keep on top of their own evolving risks. Providing excellent data compliance and management solutions may not be as attention-grabbing as artificial intelligence or blockchain, but it offers a higher probability of success in an otherwise crowded innovation space.

Past performance is no guide to the future, but, as Mark Twain reputedly noted, even if history doesn't repeat itself, it often rhymes. Piper Alpha wasn't the only nasty surprise of the last 30 years. Many events had a disproportionate impact on one or more companies. The signs of impending disaster may have been blurred, but they were not invisible. Some companies suffered more than others. Jobs were lost. Each event spawned new regulation. But these events also created opportunities to build companies and products to prevent a future repeat. Looking for a problem to solve? Read on.

1. Enron Collapse (2001)

Enron, one of the largest and most powerful companies in the world, collapsed once shareholders realized the company's success had been dramatically (and fraudulently) overstated. Insurers lost $3.5 billion from collapsed securities and insurance claims. Chubb and Swiss Re each reported losses of over $700 million. Jeff Skilling, the CEO, was sentenced to 14 years in prison. One of the reasons for the poor internal controls was that bonuses for the risk management team were influenced by appraisals from the very people they were meant to be policing.

2. Hurricane Katrina and the Floating Casinos (2005)

At $83 billion, Hurricane Katrina is still the largest insured loss ever. No one anticipated the scale of the storm surge, the failure of the levees and the subsequent flooding. There were a lot of surprises. One of the largest contributors to the loss, from property damage and business interruption, was the floating casinos, ripped from their moorings and torn apart. Many underwriters had assumed the casinos were land-based, unaware that Mississippi's 1990 law legalizing casinos had required all gambling to take place offshore.

3. Thai Flood Losses (2011)

After heavy rainfall lasting from June to October 2011, seven major industrial zones in Thailand were flooded to depths of up to 3 meters. The resulting insurance loss is the 13th-largest global insured loss ever ($16 billion in today’s value). Before 2011, many insurers didn’t record exposures in Thailand because the country was never considered a catastrophe-prone area. Data on the location and value of the large facilities of global manufacturers wasn’t offered or requested. The first time insurers realized that so many of their clients had facilities so close together was when the claims started coming in. French reinsurer CCR, set up primarily to reinsure French insurers, was hit with 10% of the total losses. Munich Re, along with Swiss Re, paid claims in excess of $500 million and called the floods a “wake-up call.”

See also: The Problems With Blockchain, Big Data  

4. Tianjin Explosion (2015)

With an insured loss of $3.5 billion, the explosions at the Tianjin port in China are the largest man-made insurance loss in Asia. The property, infrastructure, marine, motor vehicle and injury claims hit many insurers. Zurich alone suffered close to $300 million in losses, well in excess of its market share. The company admitted later that the accumulation was not detected because different information systems did not pick up exposures that crossed multiple lines of business. Martin Senn, the CEO, left shortly afterward.

5. Financial Conduct Authority Fines (2017 and onward)

Insurers now also face the risk of being fined by regulators, and not just for GDPR-related issues. The FCA, the U.K. regulator, levied fines totaling £230 million in 2017. Liberty Mutual Insurance was fined £5 million (for failures in claims handling by a third party) and broker Blue Fin £4 million (for not reporting a conflict of interest). Deutsche Bank received the largest fine, £163 million, for failing to impose adequate anti-money-laundering processes in the U.K., topped up later by a further $425 million fine from the New York Department of Financial Services.

Looking ahead

“We’re more fooled by noise than ever before,” Nassim Nicholas Taleb writes in his book Antifragile.

We will see more data disasters and career-limiting catastrophes in the next 20 years. Figuring out how to keep insurers one step ahead looks like a great opportunity for anyone looking to stand out from the crowd in 2019.

How to Avoid Failed Catastrophe Models

Since commercial catastrophe models were introduced in the 1980s, they have become an integral part of the global (re)insurance industry. Underwriters depend on them to price risk, management uses them to set business strategies, and rating agencies and regulators consider them in their analyses. Yet new scientific discoveries and claims insights regularly reshape our view of risk, and a customized model that is fit for purpose one day can quickly become obsolete if it is not updated for changing business practices and advances in our understanding of natural and man-made events.

Despite the sophisticated nature of each new generation of models, new events sometimes expose previously hidden attributes of a particular peril or region. In 2005, Hurricane Katrina caused economic and insured losses in New Orleans far greater than expected because models did not consider the possibility of the city’s levees failing. In 2011, the existence of a previously unknown fault beneath Christchurch and the fact the city sits on an alluvial plain of damp soil created unexpected liquefaction in the New Zealand earthquake. And in 2012, Superstorm Sandy exposed the vulnerability of underground garages and electrical infrastructure in New York City to storm surge, a secondary peril in wind models that did not consider the placement of these risks in pre-Sandy event sets.

Such surprises affect the bottom lines of (re)insurers, who price risk largely based on the losses and volatility suggested by the thousands of simulated events analyzed by a model. However, there is a silver lining for (re)insurers. These events advance modeling capabilities by improving our understanding of the peril’s physics and damage potential. Users can then often incorporate such advances themselves, along with new technologies and best practices for model management, to keep their company’s view of risk current – even if the vendor has not yet released its own updated version – and validate enterprise risk management decisions to important stakeholders.

See also: Catastrophe Models Allow Breakthroughs  

When creating a resilient internal modeling strategy, (re)insurers must weigh cost, data security, ease of use and dependability. Complementing a core commercial model with in-house data and analytics and standard formulas from regulators, and reconciling any material differences in hazard assumptions or modeled losses, can help companies of all sizes manage resources. Additionally, the work protects sensitive information, allows access to the latest technology and support networks and mitigates the impact of a crisis to vital assets – all while developing a unique risk profile.

To the extent resources allow, (re)insurers should analyze several macro- and micro-level considerations when evaluating the merits of a given platform. On the macro level, unless a company's own underwriting and claims data dominated the vendor's development methodology, customization is almost always desirable, especially at the bottom of the loss curve, where there is more claims data. If a large insurer with robust exposure and claims data was heavily involved in the vendor's product development, the model's vulnerability assumptions and loss payout and development patterns will likely mirror those of the company itself, so less customization is necessary. Either way, users should validate modeled losses against historical claims from both their own company's perspective and an industry perspective, taking care to adjust for inflation, exposure changes and non-modeled perils, to confirm the reasonability of return periods in portfolio and industry occurrence and aggregate exceedance-probability curves. Without this important step, insurers may find that their modeled loss curves differ materially from observed historical results, as illustrated below.
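
To make this validation step concrete, here is a minimal sketch in Python using entirely hypothetical loss figures; the trended historical losses, the modeled return-period losses and the "investigate" threshold are all assumptions for illustration, not a prescribed validation standard.

# Hypothetical validation sketch: compare trended historical annual losses
# against modeled occurrence exceedance-probability (OEP) losses at a few
# return periods. All numbers are illustrative, not real portfolio data.

historical_annual_losses = [2.1, 0.4, 7.8, 1.2, 0.0, 15.5, 3.3, 0.9, 5.0, 0.2]  # $m, trended for inflation and exposure

modeled_oep = {10: 6.0, 25: 12.0, 50: 18.0}  # return period (years) -> modeled loss ($m), assumed model output

def empirical_return_period_loss(losses, return_period):
    """Loss exceeded roughly once every `return_period` years in the observed history."""
    ranked = sorted(losses, reverse=True)
    k = max(1, round(len(losses) / return_period))  # expected exceedance count in the history
    return ranked[k - 1]

for rp, modeled in modeled_oep.items():
    if rp > len(historical_annual_losses):
        print(f"{rp}-year: history too short to validate directly")
        continue
    observed = empirical_return_period_loss(historical_annual_losses, rp)
    ratio = modeled / observed if observed else float("inf")
    flag = "investigate" if ratio > 1.5 or ratio < 0.67 else "ok"
    print(f"{rp}-year: modeled {modeled:.1f}m vs observed {observed:.1f}m -> {flag}")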

A micro-level review of model assumptions and shortcomings can further narrow the odds of a “shock” loss. As such, it is critical to precisely identify risks’ physical locations and characteristics, as loss estimates may vary widely within a short distance – especially for flood, where elevation is an important factor. When a model’s geocoding engine or a national address database cannot assign location, there are several disaggregation methodologies available, but each produces different loss estimates. European companies will need to be particularly careful regarding data quality and integrity as the new General Data Protection Regulation, which may mean less specific location data is collected, takes effect.

Just as important as location are a risk's physical characteristics; without this information, a model will estimate a range of possibilities. If the assumption regarding year of construction, for example, differs materially from the insurer's actual distribution, modeled losses for risks with unknown construction years may be under- or overestimated. The exhibit below illustrates the difference between an insurer's actual data and a model's assumed year-of-construction distribution based on regional census data in Portugal. In this case, the model assumes an older distribution than the actual data shows, so losses on risks with unknown construction years may be overstated.
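
As a rough illustration of this kind of check, the short Python sketch below compares a book's actual construction-year mix with a model's assumed default distribution; the bands and percentages are invented for the example and do not come from any vendor model or census.

# Illustrative comparison of an insurer's actual year-of-construction mix
# against a model's assumed (census-based) distribution for risks coded
# with an unknown construction year. All shares are made up.

actual = {"pre-1960": 0.10, "1960-1990": 0.40, "post-1990": 0.50}
model_assumed = {"pre-1960": 0.30, "1960-1990": 0.45, "post-1990": 0.25}

for band in actual:
    gap = model_assumed[band] - actual[band]
    print(f"{band}: actual {actual[band]:.0%}, model {model_assumed[band]:.0%}, gap {gap:+.0%}")

# A model that assumes a materially older (more vulnerable) stock than the
# book actually contains will tend to overstate losses on risks with
# unknown construction years, as in the Portugal example described above.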

There is also no database of agreed property, contents or business interruption valuations, so if a model’s assumed valuations are under- or overstated, the damage function may be inflated or diminished to balance to historical industry losses.

See also: How to Vastly Improve Catastrophe Modeling  

Finally, companies must also adjust “off-the-shelf” models for missing components. Examples include overlooked exposures like a detached garage; new underwriting guidelines, policy wordings or regulations; or the treatment of sub-perils, such as a tsunami resulting from an earthquake. Loss adjustment difficulties are also not always adequately addressed in models. Loss leakage – such as when adjusters cannot separate covered wind loss from excluded storm surge loss – can inflate results, and complex events can drive higher labor and material costs or unusual delays. Users must also consider the cascading impact of failed risk mitigation measures, such as the malfunction of cooling generators in the Fukushima nuclear power plant after the Tohoku earthquake.

If an insurer performs regular, macro-level analyses of its model, validating estimated losses against historical experience and new views of risk, while also supplementing missing or inadequate micro-level components appropriately, it can construct a more resilient modeling strategy that minimizes the possibility of model failure and maximizes opportunities for profitable growth.

The views expressed herein are solely those of the author and do not reflect the views of Guy Carpenter & Company, LLC, its officers, managers, or employees.

The article was originally published on Brink.

How CAT Models Lead to Soft Prices

In our first article in this series, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. In the second article, we looked at how, beginning in the mid-1980s, people began developing models that could help prevent a recurrence of those staggering losses. In this article, we look at how modeling results are being used in the industry.


Insurance is a unique business. In most other businesses, the costs of operation are either known or can be fairly estimated. The insurance industry, however, needs to estimate expenses for things that are extremely rare or have never happened before: the damage to a bridge in New York City from a flood, the theft of a precious heirloom from your home, a fire at a factory or even Jennifer Lopez injuring her hind side. No other industry has to make so many critical business decisions as blindly as the insurance industry. Even when an insurer can accurately estimate a loss to a single policyholder, without the ability to accurately estimate multiple losses all occurring simultaneously, which is what happens during natural catastrophes, the insurer is still operating blindly. Fortunately, the introduction of CAT models greatly enhances the insurer's ability to estimate both the expenses (losses) associated with a single policyholder and the concurrent claims arising from a single occurrence.

When making decisions about which risks to insure, how much to insure them for and how much premium is required to profitably accept the risk, there are essentially two metrics that can provide the clarity needed to do the job. Whether you are a portfolio manager managing the cumulative risk for a large line of business, an underwriter receiving a submission from a broker to insure a factory or an actuary responsible for pricing exposure, what these stakeholders minimally need to know is (a simple numerical sketch follows this list):

  1. On average, what will potential future losses look like?
  2. On average, what are the reasonable worst case loss scenarios, or the probable maximum loss (PML)?
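
A minimal numerical sketch of these two metrics, written in Python against a purely hypothetical simulated year-loss table standing in for model output, might look like the following; the 1-in-100 and 1-in-250 return periods are chosen only as examples.

# Minimal sketch: average annual loss (the "on average" view) and a probable
# maximum loss (PML) at example return periods, computed from a hypothetical
# set of simulated annual portfolio losses ($m). Not real model output.

import random

random.seed(42)
simulated_annual_losses = [(random.paretovariate(1.8) - 1.0) * 5.0 for _ in range(10_000)]  # stand-in year-loss table

aal = sum(simulated_annual_losses) / len(simulated_annual_losses)  # metric 1: average annual loss

def pml(losses, return_period):
    """Loss exceeded, on average, once every `return_period` simulated years."""
    ranked = sorted(losses, reverse=True)
    k = max(1, int(len(losses) / return_period))
    return ranked[k - 1]

print(f"Average annual loss: {aal:.1f}m")
print(f"1-in-100 PML:        {pml(simulated_annual_losses, 100):.1f}m")   # metric 2: reasonable worst cases
print(f"1-in-250 PML:        {pml(simulated_annual_losses, 250):.1f}m")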

Those two metrics alone supply enough information for an insurer to make critical business decisions in these key areas:

  • Risk selection
  • Risk-based pricing
  • Capacity allocation
  • Reinsurance program design

Risk Selection

Risk selection includes an underwriter’s determination of the class (such as preferred, standard or substandard) to which a particular risk is deemed to belong, its acceptance or rejection and (if accepted) the premium.

Consider two homes: a $1 million wood frame home and a $1 million brick home, both located in Los Angeles. Which home is riskier to the insurer? Before the advent of catastrophe models, the determination was based on historical data and, essentially, opinion. Insurers could have hired engineers who would have told them that brick homes are much more susceptible to damage than wood frame homes under earthquake stresses. But it was not until the introduction of the models that insurers could finally quantify how much financial risk they were exposed to. They discovered, to their surprise, that on average brick homes are four times riskier than wood frame homes and twice as likely to sustain a complete loss (full collapse). That data had not been well known across the industry.

Knowing how two or more different risks (or groups of risks) behave, both in absolute terms and relative to each other, gives insurers a foundation to intelligently set underwriting guidelines that play to their strengths and exclude risks they cannot, or do not wish to, absorb, based on their risk appetite.

Risk-Based Pricing

Insurance is rapidly becoming more of a commodity, with customers often choosing their insurer purely on the basis of price. As a result, accurate ratemaking has become more important than ever. In fact, a Towers Perrin survey found that 96% of insurers consider sophisticated rating and pricing to be either essential or very important.

Multiple factors go into determining premium rates, and, as competition increases, insurers are introducing innovative rate structures. The critical question in ratemaking is: What risk factors or variables are important for predicting the likelihood, frequency and severity of a loss? Although there are many obvious risk factors that affect rates, subtle and non-intuitive relationships can exist among variables that are difficult, if not impossible, to identify without applying more sophisticated analyses.

Returning to our example of the two homes in Los Angeles, catastrophe models tell us two very important things: roughly what the premium to cover earthquake loss should be, and that the premium for masonry homes should be approximately four times that for wood frame homes.
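
To illustrate the idea in numbers, here is a small, hypothetical Python sketch of absolute and relational pricing; the modeled average annual losses, risk load and expense ratio are assumptions for the example only, not actual model output or rating factors.

# Hypothetical illustration of risk-based pricing from modeled losses.
# AALs and loadings are invented; the point is the absolute premium level
# and the roughly 4x masonry-to-wood-frame relativity.

modeled_aal = {"wood_frame": 800.0, "masonry": 3200.0}  # assumed modeled AAL in $/year per home
risk_load = 0.40       # assumed margin for volatility / cost of capital
expense_ratio = 0.25   # assumed expenses as a share of gross premium

def indicated_premium(aal):
    return aal * (1 + risk_load) / (1 - expense_ratio)

for construction, aal in modeled_aal.items():
    print(f"{construction}: indicated earthquake premium ~ ${indicated_premium(aal):,.0f}/year")

ratio = indicated_premium(modeled_aal["masonry"]) / indicated_premium(modeled_aal["wood_frame"])
print(f"masonry / wood frame premium relativity: {ratio:.1f}x")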

The concept of absolute and relational pricing using catastrophe models is revolutionary. Many in the industry may balk at our term “revolutionary,” but insurers using the models to establish appropriate price levels for property exposures have a massive advantage over public entities such as the California Earthquake Authority (CEA) and the National Flood Insurance Program (NFIP) that do not adhere to risk-based pricing.

The NFIP and CEA, like most quasi-government insurance entities, differ in their pricing from private insurers along multiple dimensions, mostly because of constraints imposed by law. Innovative insurers recognize that there are literally billions of valuable premium dollars at stake for risks for which the CEA, the NFIP and similar programs significantly overcharge – again, because of constraints that forbid them from being competitive.

Thus, using average and extreme modeled loss estimates not only ensures that insurers are managing their portfolios effectively, but enables insurers, especially those that tend to have more robust risk appetites, to identify underserved markets and seize valuable market share. From a risk perspective, a return on investment can be calculated via catastrophe models.

It is incumbent upon insurers to identify the risks they don’t wish to underwrite as well as answer such questions as: Are wood frame houses less expensive to insure than homes made of joisted masonry? and, What is the relationship between claims severity and a particular home’s loss history? Traditional univariate pricing analysis methodologies are outdated; insurers have turned to multivariate statistical pricing techniques and methodologies to best understand the relationships between multiple risk variables. With that in mind, insurers need to consider other factors, too, such as marketing costs, conversion rates and customer buying behavior, just to name a few, to accurately price risks. Gone are the days when unsophisticated pricing and risk selection methodologies were employed. Innovative insurers today cross industry lines by paying more and more attention to how others manage data and assign value to risk.

Capacity Allocation

In the (re)insurance industry, (re)insurers only accept risks if those risks are within the capacity limits they have established based on their risk appetites. “Capacity” means the maximum limit of liability offered by an insurer during a defined period. Oftentimes, especially when it comes to natural catastrophe, some risks have a much greater accumulation potential, and that accumulation potential is typically a result of dependencies between individual risks.

Take houses and automobiles. A high concentration of those exposure types may very well be affected by the same catastrophic event – whether a hurricane, severe thunderstorm, earthquake, etc. That risk concentration could potentially put a reinsurer (or insurer) in the unenviable position of being overly exposed to a catastrophic single-loss occurrence.  Having a means to adequately control exposure-to-accumulation is critical in the risk management process. Capacity allocation enables companies to allocate valuable risk capacity to specific perils within specific markets and accumulation zones to minimize their exposure, and CAT models allow insurers to measure how capacity is being used and how efficiently it is being deployed.
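
As a simple illustration of how that control might work in practice, the sketch below tallies hypothetical total sums insured by accumulation zone against assumed capacity limits; the zones, exposures and limits are invented for the example.

# Illustrative accumulation check: total sum insured (TSI) written per
# accumulation zone compared against the capacity allocated to that zone.

capacity_limit = {"Miami-Dade": 500.0, "Los Angeles": 400.0, "Tokyo": 300.0}  # $m allocated per zone (assumed)

bound_risks = [  # (zone, TSI in $m) - hypothetical bound portfolio
    ("Miami-Dade", 120.0), ("Miami-Dade", 310.0), ("Los Angeles", 150.0),
    ("Tokyo", 90.0), ("Tokyo", 240.0),
]

used = {zone: 0.0 for zone in capacity_limit}
for zone, tsi in bound_risks:
    used[zone] += tsi

for zone, limit in capacity_limit.items():
    utilization = used[zone] / limit
    status = "STOP - capacity exhausted" if utilization >= 1.0 else "open"
    print(f"{zone}: {used[zone]:.0f}m of {limit:.0f}m used ({utilization:.0%}) -> {status}")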

Reinsurance Program Design

With the advent of CAT models, insurers now have the ability to simulate different combinations of treaties and programs to find the right fit, optimizing their risk and return. Before CAT models, it took gut instinct to estimate the probability of attachment of one layer over another or to estimate the average annual losses for a per-risk treaty covering millions of exposures. The models estimate the risk and can calculate the millions of potential claims transactions, which would be nearly impossible to do without computers and simulation.
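
A minimal sketch of that kind of program test, using a hypothetical excess-of-loss layer and a stand-in sample of modeled annual losses, is shown below; the layer terms and the loss distribution are assumptions for illustration only.

# Sketch of testing a candidate excess-of-loss layer against simulated
# losses of the kind a cat model produces. Everything here is invented.

import random

random.seed(7)
years = 10_000
largest_event_loss_per_year = [(random.paretovariate(1.5) - 1.0) * 10.0 for _ in range(years)]  # $m, stand-in

attachment, limit = 50.0, 100.0  # assumed layer: 100m xs 50m

recoveries = [min(max(loss - attachment, 0.0), limit) for loss in largest_event_loss_per_year]

prob_attach = sum(1 for r in recoveries if r > 0) / years   # probability of attachment
expected_recovery = sum(recoveries) / years                 # expected annual recovery to the layer

print(f"Probability of attachment in a year: {prob_attach:.1%}")
print(f"Expected annual recovery:            {expected_recovery:.1f}m")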

It is now well known how soft the current reinsurance market is. Alternative capital has been a major driving force, but we consider the maturation of CAT models to have played an equally important role in this trend.

First, insurers using CAT models to underwrite, price and manage risk can now intelligently present their exposure and effectively defend their position on terms and conditions. Gone are the days when reinsurers would have the upper hand in negotiations; CAT models have leveled the playing field for insurers.

Second, alternative capital could not have the impact it is currently having without the language of finance. CAT models speak that language. The models provide the necessary statistics for financial firms looking to allocate capital in this area. Risk transfer becomes much more fungible once there is common recognition of the probability of loss between transferor and transferee. No CAT models, no loss estimates. No loss estimates, no alternative capital. No alternative capital, no soft market.

A Needed Balance

By now, and for good reason, the industry has placed much of its trust in CAT models to selectively manage portfolios to minimize PML potential. Insurers and reinsurers alike need the ability to quantify and identify peak exposure areas, and the models stand ready to help understand and manage portfolios as part of a carrier’s risk management process. However, a balance between the need to bear risk and the need to preserve a carrier’s financial integrity in the face of potential catastrophic loss is essential. The idea is to pursue a blend of internal and external solutions to ensure two key factors:

  1. The ability to identify, quantify and estimate the chances of an event occurring and the extent of likely losses, and
  2. The ability to set adequate rates.

Once companies understand their catastrophe potential, they can effectively formulate underwriting guidelines to act as control valves on their catastrophe loss potential and, most importantly, identify those exposures, even in high-risk regions, that can still meet underwriting criteria under a given risk appetite. Underwriting criteria for writing catastrophe-prone exposure must be used as a set of benchmarks, not simply as a blind gatekeeper.

In our next article, we examine two factors that could derail the progress made by CAT models in the insurance industry. Model uncertainty and poor data quality threaten to raise skepticism about the accuracy of the models, and that skepticism could inhibit further progress in model development.

Riding Out the Storm: the New Models

In our last article, When Nature Calls, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. Those massive losses were a direct result of an industry overconfident in its ability to gauge the frequency and severity of catastrophic events. Insurers were using only history and their limited experience as their guide, resulting in a tragic loss of years’ worth of policyholder surplus.

The turmoil of this period cannot be overstated. Many insurers went insolvent, and those that survived needed substantial capital infusions to continue functioning. Property owners in many states were left with no affordable options for adequate coverage and, in many cases, were forced to go without any coverage at all. The property markets seized up. Without the ability to properly estimate how catastrophic events would affect insured properties, it looked as though the market would remain broken indefinitely.

Luckily, in the mid 1980s, two people on different sides of the country were already working on solutions to this daunting problem. Both had asked themselves: If the problem is lack of data because of the rarity of recorded historical catastrophic events, then could we plug the historical data available now, along with mechanisms for how catastrophic events behave, into a computer and then extrapolate the full picture of the historical data needed? Could we then take that data and create a catalog of millions of simulated events occurring over thousands of years and use it to tell us where and how often we can expect events to occur, as well as how severe they could be? The answer was unequivocally yes, but with caveats.

In 1987, Karen Clark, a former insurance executive out of Boston, formed Applied Insurance Research (now AIR Worldwide). She spent much of the 1980s with a team of researchers and programmers designing a system that could estimate where hurricanes would strike the coastal U.S., how often they would strike and ultimately, based on input insurance policy terms and conditions, how much loss an insurer could expect from those events. Simultaneously, on the West Coast at Stanford University, Hemant Shah was completing his graduate degree in engineering and attempting to answer those same questions, only he was focusing on the effects of earthquakes occurring around Los Angeles and San Francisco.

In 1988, Clark released the first commercially available catastrophe model for U.S. hurricanes. Shah released his earthquake model a year later through his company, Risk Management Solutions (RMS). Their models were incredibly slow, limited and, according to many insurers, unnecessary. However, for the first time, loss estimates were being calculated based on actual scientific data of the day along with extrapolated probability and statistics in place of the extremely limited historical data previously used. These new “modeled” loss estimates were not in line with what insurers were used to seeing and certainly could not be justified based on historical record.

Clark’s model generated hurricane storm losses in the tens of billions of dollars while, up until that point, the largest insured loss ever recorded did not even reach $1 billion! Insurers scoffed at the comparison. But all of that quickly changed in August 1992, when Hurricane Andrew struck southern Florida.

Using her hurricane model, Clark estimated that insured losses from Andrew might exceed $13 billion. Even in the face of heavy industry doubt, Clark published her prediction. She was immediately derided and questioned by her peers, the press and virtually everyone around. They said her estimates were unprecedented and far too high. In the end, though, when it turned out that actual losses, as recorded by Property Claims Services, exceeded $15 billion, a virtual catastrophe model feeding frenzy began. Insurers quickly changed their tune and began asking AIR and RMS for model demonstrations. The property insurance market would never be the same.

So what exactly are these revolutionary models, which are now affectionately referred to as “cat models”?

Regardless of the model vendor, every cat model uses the same three components, illustrated in the simplified sketch that follows this list:

  1. Event Catalog – A catalog of hypothetical stochastic (randomized) events, which informs the modeler about the frequency and severity of catastrophic events. The events contained in the catalog are based on millions of years of computerized simulations using recorded historical data, scientific estimation and the physics of how these types of events are formed and behave. Additionally, for each of these events, associated hazard and local intensity data is available, which answers the questions: Where? How big? And how often?
  2. Damage Estimation – The models employ damage functions, which describe the mathematical relationship between an event's local intensity and the damage to a building's structural and nonstructural components, as well as its contents. The damage functions have been developed by experts in wind and structural engineering and are based on published engineering research and analyses. They have also been validated against the results of extensive damage surveys undertaken in the aftermath of catastrophic events and against billions of dollars of actual industry claims data.
  3. Financial Loss – The financial module calculates the final losses after applying all limits and deductibles on a damaged structure. These losses can be linked back to events with specific probabilities of occurrence. Now an insurer not only knows what it is exposed to, but also what its worst-case scenarios are and how frequently those may occur.
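
Below is a deliberately oversimplified Python sketch of how those three components fit together for a single insured location; the event rates, damage function and policy terms are invented for illustration and bear no relation to any vendor's actual model.

# Toy walk-through of the three components: event catalog -> damage
# estimation -> financial loss. All inputs are hypothetical.

event_catalog = [
    # (annual rate of occurrence, peak gust in mph at the insured location)
    (0.020, 100), (0.010, 120), (0.004, 150), (0.001, 180),
]

def damage_ratio(gust_mph):
    """Toy damage function: share of building value destroyed."""
    return min(1.0, max(0.0, (gust_mph - 80) / 100.0))

building_value = 1_000_000
deductible, policy_limit = 50_000, 750_000  # financial module inputs (assumed policy terms)

expected_annual_loss = 0.0
for rate, gust in event_catalog:
    ground_up = damage_ratio(gust) * building_value                # damage estimation
    insured = min(max(ground_up - deductible, 0.0), policy_limit)  # financial loss after deductible and limit
    expected_annual_loss += rate * insured
    print(f"gust {gust} mph: ground-up {ground_up:,.0f}, insured {insured:,.0f}")

print(f"Expected annual insured loss for this risk: {expected_annual_loss:,.0f}")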


When cat models first became commercially available, industry adoption was slow. It took Hurricane Andrew in 1992 followed by the Northridge earthquake in 1994 to literally and figuratively shake the industry out of its overconfidence. Reinsurers and large insurers were the first to use the models, mostly due to their vast exposure to loss and their ability to afford the high license fees. Over time, however, much of the industry followed suit. Insurers that were unable to afford the models (or who were skeptical of them) could get access to all the available major models via reinsurance brokers that, at that time, also began rolling out suites of analytic solutions around catastrophe model results.

Today, the models are ubiquitous in the industry. Rating agencies require model output based on prescribed model parameters in their supplementary rating questionnaires to understand whether or not insurers can economically withstand certain levels of catastrophic loss. Reinsurers expect insurers to provide modeled loss output on their submissions when applying for reinsurance. The state of Florida has even set up the Florida Commission on Hurricane Loss Projection Methodology, described as “an independent body of experts created by the Florida Legislature in 1995 for the purpose of developing standards and reviewing hurricane loss models used in the development of residential property insurance rates and the calculation of probable maximum loss levels.”

Models are available for tropical cyclones, extratropical cyclones, earthquakes, tornadoes, hail, coastal and inland flooding, tsunamis and even for pandemics and certain types of terrorist attacks. The first models simulated catastrophes for U.S. perils, but models now exist for Europe, Australia, Japan, China and South America.

In an effort to get ahead of the potential impact of climate change, all leading model vendors even provide U.S. hurricane event catalogs, which simulate potential catastrophic scenarios under the assumption that the Atlantic Ocean sea-surface temperatures will be warmer on average. And with advancing technologies, open-source platforms are being developed, which will help scores of researchers working globally on catastrophes to become entrepreneurs by allowing “plug and play” use of their models. This is the virtual equivalent of a cat modeling app store.

Catastrophe models have provided the insurance industry with an innovative solution to a major problem. Ironically, the solution itself is now an industry in its own right, as estimated revenues from model licenses now annually exceed $500 million (based on conversations with industry experts).

But how have the models performed over time? Have they made a difference in the industry’s ability to help manage catastrophic loss? Those are not easy questions to answer, but we believe they have. All the chaos from Hurricane Andrew and the Northridge earthquake taught the industry some invaluable lessons. After the horrific 2004 and 2005 hurricane seasons, which ravaged Florida with four major hurricanes in a single year, followed by a year that saw two major hurricanes striking the Gulf Coast – one of them being Hurricane Katrina, the single most costly natural disaster in history – there were no ensuing major insurance company insolvencies. This was a profound success.

The industry withstood a two-year period of major catastrophic losses. Clearly, something had changed. Cat models played a significant role in this transformation. The hurricane losses from 2004 and 2005 were large and painful, but they did not come as a surprise. Using model results, the industry now had a framework to place those losses in proper context. In fact, each model vendor has many simulated hurricane events in its catalog that resemble Hurricane Katrina. Insurers knew, from the models, that Katrina could happen and were therefore prepared for that possible, albeit unlikely, outcome.

However, with the universal use of cat models in property insurance comes other issues. Are we misusing these tools? Are we becoming overly dependent on them? Are models being treated as a panacea to vexing business and scientific questions instead of as the simple framework for understanding potential loss?

Next in this series, we will illustrate how modeling results are being used in the industry and how overconfidence in the models could, once again, lead to crisis.

The Traps Hiding in Catastrophe Models

Catastrophe models from third-party vendors have established themselves as essential tools in the armory of risk managers and other practitioners wanting to understand insurance risk relating to natural catastrophes. This is a welcome trend. Catastrophe models are perhaps the best way of understanding the risks posed by natural perils: they use a huge amount of information to link extreme or systemic external events to an economic loss and, in turn, to an insured (or reinsured) loss. But no model is perfect, and a certain kind of overreliance on the output from catastrophe models can have damaging consequences.

This article provides a brief overview of the kinds of traps and pitfalls associated with catastrophe modeling. We expect that this list is already familiar to most catastrophe modelers. It is by no means intended to be exhaustive. The pitfalls could be categorized in many different ways, but this list might trigger internal lines of inquiry that lead to improved risk processes. In the brave new world of enterprise risk management, and ever-increasing scrutiny from stakeholders, that can only be a good thing.

1. Understand what the model is modeling…and what it is not modeling!

This is probably not a surprising “No. 1” issue. In recent years, the number and variety of loss-generating natural catastrophes around the world has reminded companies and their risk committees that catastrophe models do not, and probably never will, capture the entire universe of natural perils; far from it. This is no criticism of modeling companies, simply a statement of fact that needs to remain at the front of every risk-taker’s mind.

The usual suspects—such as U.S. wind, European wind and Japanese earthquake—are “bread and butter” peril/territory combinations. However, other combinations are either modeled to a far more limited extent, or not at all. European flood models, for example, remain limited in territorial scope (although certain imminent releases from third-party vendors may well rectify this). Tsunami risk, too, may not be modeled even though it tends to go hand-in-hand with earthquake risk (as evidenced by the devastating 2011 Tohoku earthquake and tsunami in Japan).

Underwriters often refer to natural peril “hot” and “cold” spots, where a hot spot means a type of natural catastrophe that is particularly severe in terms of insurance loss and is (relatively) frequent. This focus of modeling companies on the hot spots is right and proper but means that cold spots are potentially somewhat overlooked. Indeed, the worldwide experience in 2011 and 2012 (including, among other events, the Thailand floods, the Australian floods and the New Zealand earthquakes) reminded companies that so-called cold spots are very capable of aggregating up to significant levels of insured loss. The severity of the recurrent earthquakes in Christchurch, and the associated insurance losses, demonstrates the uncertainty and subjectivity associated with the cold spot/hot spot distinction.

There are all sorts of alternative ways of managing the natural focus of catastrophe models on hot spots (exclusions, named perils within policy wordings, maximum total exposure, etc.) but so-called cold spots do need to remain on insurance companies’ risk radars, and insurers also need to remain aware of the possibility, and possible impact, of other, non-modeled risks.

2. Remember that the model is only a fuzzy version of the truth.

It is human nature to take the path of least resistance; that is, to rely on model output and assume that the model is getting you pretty close to the right answer. After all, we have the best people and modelers in the business! But even were that to be true, there can be a kind of vicious circle in which model output is treated with most suspicion by the modeler, with rather less concern by the next layer of management and so on, until summarized output reaches the board and is deemed absolute truth.

We are all very aware that data is never complete, and there can be surprising variations of data completeness across territories. For example, there may not be a defined post or zip code system for identifying locations, or original insured values may not be captured within the data. The building codes assigned to a particular risk may also be quite subjective, and there can be a number of “heroic” assumptions made during the modeling process in classifying and preparing the modeling data set. At the very least, these assumptions should be articulated and challenged. There can also be a “key person” risk, where data preparation has traditionally resided with one critical data processor, or a small team.  If knowledge is not shared, then there is clear vulnerability to that person or team leaving. But there is also a risk of undue and unquestioning reliance being placed upon that individual or team, reliance that might be due more to their unique position than to any proven expertise.

What kind of model has been run? A detailed, risk-by-risk model or an aggregate model? Certain people in the decision-making chain may not even understand that this could be an issue and simply consider that “a model is a model.”

It is worth highlighting how this fuzzy version of the truth has emerged both retrospectively and prospectively. Retrospectively, actual loss levels have on occasion far exceeded modeled loss levels: the breaching of the levees protecting New Orleans during Hurricane Katrina in 2005, for example. Prospectively, new releases or revisions of catastrophe models have caused modeled results to move, sometimes materially, even when there is no change to the actual underlying insurance portfolio.

3. Employ additional risk monitoring tools beyond the catastrophe model(s). 

Catastrophe models are a great tool, but it is dangerous to rely on them as the only source of risk management information, even when an insurer has access to more than one proprietary modeling package.

Other risk management tools and techniques available include:

  • Monitoring total sum insured (TSI) by peril and territory
  • Stress and scenario testing
  • Simple internal validation models
  • Experience analysis

Stress and scenario testing, in particular, can be very instructive because a scenario yields intuitive and understandable insight into how a given portfolio might respond to a specific event (or small group of events). It enjoys, therefore, a natural complementarity with the hundreds of thousands of events underlying a catastrophe model. Furthermore, it is possible to construct scenarios to investigate areas where the catastrophe model may be especially weak, such as consideration of cross-class clash risk.
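
A deterministic scenario test can be as simple as the hypothetical sketch below, which applies assumed damage ratios by zone to a portfolio's total sums insured; the zones, damage ratios and reinsurance retention are invented for illustration.

# Simple scenario test: apply an assumed event footprint (damage ratio by
# zone) to the portfolio's total sums insured. Figures are illustrative.

portfolio_tsi = {"coastal": 800.0, "inland": 1200.0, "other": 2500.0}    # $m of TSI by zone (assumed)
scenario_damage_ratio = {"coastal": 0.15, "inland": 0.03, "other": 0.0}  # assumed "landfalling storm" footprint

gross_loss = sum(portfolio_tsi[z] * scenario_damage_ratio[z] for z in portfolio_tsi)

retention = 75.0  # $m net retention under an assumed reinsurance program
net_loss = min(gross_loss, retention)  # assumes the program fully responds above the retention

print(f"Scenario gross loss: {gross_loss:.0f}m; net of reinsurance: {net_loss:.0f}m")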

Experience analysis might, at first glance, appear to be an inferior tool for assessing catastrophe loss. Indeed, at the most extreme end of the scale, it will normally provide only limited insight. But catastrophe models are themselves built and parameterized from historical data and historical events. This means that a quick assessment of how a portfolio has performed against the usual suspects, such as, for U.S. exposures, hurricanes Ivan (2004), Katrina (2005), Rita (2005), Wilma (2005), Ike (2008) and Sandy (2012), can provide some very interesting independent views on the shape of the modeled distribution. In this regard, it is essential to tap into the underwriting expertise and qualitative insight that property underwriters can bring to risk assessment.

4. Communicate the modeling uncertainty.

In light of the inherent uncertainties that exist around modeled risk, it is always worth discussing how to load explicitly for model and parameter risk when reporting return-period exposures, and their movements, to senior management. Pointing out the need for model risk buffers, and highlighting that they are material, can trigger helpful discussions in the relevant decision-making forums. Indeed, finding the most effective way of communicating the weaknesses of catastrophe modeling, without losing the headline messages in the detail and complexity of the modeling steps, and without senior management dismissing the models as too flawed to be of any use, is sometimes as important for the business as the original modeling process.

The decisions that emerge from these internal debates should ultimately protect the risk carrier from surprise or outsize losses. When they happen, such surprises tend to cause a rapid loss of credibility with outside analysts, rating agencies and capital providers.