
How to Avoid Failed Catastrophe Models

Since commercial catastrophe models were introduced in the 1980s, they have become an integral part of the global (re)insurance industry. Underwriters depend on them to price risk, management uses them to set business strategy, and rating agencies and regulators consider them in their analyses. Yet new scientific discoveries and claims insights regularly reshape our view of risk, and a customized model that is fit-for-purpose one day can quickly become obsolete if it is not updated in a timely manner for changing business practices and for advances in our understanding of natural and man-made events.

Despite the sophisticated nature of each new generation of models, new events sometimes expose previously hidden attributes of a particular peril or region. In 2005, Hurricane Katrina caused economic and insured losses in New Orleans far greater than expected because models did not consider the possibility of the city’s levees failing. In 2011, a previously unknown fault beneath Christchurch, together with the fact that the city sits on an alluvial plain of water-saturated soil, caused unexpected liquefaction in the New Zealand earthquake. And in 2012, Superstorm Sandy exposed the vulnerability of underground garages and electrical infrastructure in New York City to storm surge, a secondary peril in wind models whose pre-Sandy event sets did not consider the placement of these risks.

Such surprises affect the bottom lines of (re)insurers, who price risk largely based on the losses and volatility suggested by the thousands of simulated events analyzed by a model. However, there is a silver lining: these events advance modeling capabilities by improving our understanding of a peril’s physics and damage potential. Users can then often incorporate such advances themselves, along with new technologies and best practices for model management, to keep their company’s view of risk current, even if the vendor has not yet released an updated version, and to validate enterprise risk management decisions to important stakeholders.

See also: Catastrophe Models Allow Breakthroughs  

When creating a resilient internal modeling strategy, (re)insurers must weigh cost, data security, ease of use and dependability. Complementing a core commercial model with in-house data and analytics and with standard formulas from regulators, and reconciling any material differences in hazard assumptions or modeled losses, can help companies of all sizes manage resources. Additionally, this work protects sensitive information, allows access to the latest technology and support networks and mitigates the impact of a crisis on vital assets, all while developing a unique risk profile.

To the extent resources allow, (re)insurers should analyze several macro- and micro-level considerations when evaluating the merits of a given platform. On the macro level, unless a company’s underwriting and claims data dominated the vendor’s development methodology, customization is almost always desirable, especially at the bottom of the loss curve, where there is more claims data. If a large insurer with robust exposure and claims data is heavily involved in the vendor’s product development, the model’s vulnerability assumptions and loss payout and development patterns will likely mirror those of the company itself, so less customization is necessary. Either way, users should validate modeled losses against historical claims from both their own company and industry perspectives, taking care to adjust for inflation, exposure changes and non-modeled perils, to confirm the reasonability of return periods in portfolio and industry occurrence and aggregate exceedance-probability curves. Without this important step, insurers may find their modeled loss curves differ materially from observed historical results, as illustrated below.
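
By way of illustration, the following is a minimal sketch of that validation step, assuming annual historical losses have already been trended for inflation and exposure change. The loss figures, the plotting-position method and the modeled curve are placeholders rather than output from any particular vendor model.

```python
# Minimal sketch: compare an empirical exceedance curve built from trended
# historical annual losses against modeled aggregate exceedance-probability
# (AEP) points. All figures are illustrative placeholders.
import numpy as np

# Historical annual catastrophe losses, already adjusted ("as-if") for
# inflation, exposure growth and non-modeled perils ($m).
historical_annual_losses = np.array([12.0, 3.5, 48.0, 7.2, 0.9, 95.0,
                                     22.0, 5.1, 15.3, 61.0])

# Empirical return periods via plotting positions: rank years from largest
# to smallest loss and assign RP = (n + 1) / rank.
losses_sorted = np.sort(historical_annual_losses)[::-1]
n = len(losses_sorted)
empirical_rp = (n + 1) / np.arange(1, n + 1)

# Modeled AEP curve sampled at a few return periods (placeholder values).
modeled_rp = np.array([2, 5, 10, 25])
modeled_loss = np.array([10.0, 35.0, 70.0, 140.0])

# Interpolate the modeled curve at the empirical return periods and flag
# points where model and experience diverge materially.
modeled_at_empirical = np.interp(empirical_rp, modeled_rp, modeled_loss)
for rp, obs, mod in zip(empirical_rp, losses_sorted, modeled_at_empirical):
    flag = " <-- review" if abs(obs / mod - 1) > 0.5 else ""
    print(f"RP {rp:5.1f}y: observed {obs:6.1f}m vs modeled {mod:6.1f}m{flag}")
```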

A micro-level review of model assumptions and shortcomings can further reduce the odds of a “shock” loss. It is therefore critical to precisely identify risks’ physical locations and characteristics, as loss estimates may vary widely over a short distance, especially for flood, where elevation is an important factor. When a model’s geocoding engine or a national address database cannot assign a location, several disaggregation methodologies are available, but each produces different loss estimates. European companies will need to be particularly careful about data quality and integrity as the new General Data Protection Regulation takes effect, since it may mean that less specific location data is collected.
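
As a hedged illustration of the disaggregation point, the sketch below spreads a coarsely geocoded sum insured across finer zones in proportion to an assumed weight; the zones, weights and function name are hypothetical.

```python
# Minimal sketch of one disaggregation approach: when a risk only geocodes
# to a coarse area (e.g. a postal district), spread its sum insured across
# finer-resolution zones in proportion to an assumed weight such as building
# count. Zone names and weights are illustrative.
def disaggregate(total_sum_insured, zone_weights):
    """Split an exposure across zones proportionally to the given weights."""
    total_weight = sum(zone_weights.values())
    return {zone: total_sum_insured * w / total_weight
            for zone, w in zone_weights.items()}

# A 10m risk that only resolves to a district containing three zones.
district_zones = {"zone_A": 1200, "zone_B": 450, "zone_C": 350}  # building counts
allocation = disaggregate(10_000_000, district_zones)
print(allocation)
# Different weighting schemes (population, land area, existing portfolio
# footprint) produce different allocations, and therefore different modeled
# losses, especially for flood where hazard varies street by street.
```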

Equally important as location are a risk’s physical characteristics, because a model will estimate a range of possibilities without this information. If the assumption regarding year of construction, for example, differs materially from the insurer’s actual distribution, modeled losses for risks with unknown construction years may be under- or overestimated. The exhibit below illustrates the difference between an insurer’s actual data and a model’s assumed year-of-construction distribution based on regional census data in Portugal. In this case, the model assumes an older distribution than the actual data shows, so losses on risks with unknown construction years may be overstated.
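
The following minimal sketch shows the kind of comparison behind such an exhibit, using illustrative percentages rather than the Portuguese census figures or any insurer’s actual data.

```python
# Minimal sketch: compare the insurer's actual year-of-construction mix with
# the distribution a model assumes for risks with unknown construction year.
# The percentages are illustrative only.
actual = {"pre-1960": 0.10, "1960-1990": 0.35, "post-1990": 0.55}
model_assumed = {"pre-1960": 0.30, "1960-1990": 0.45, "post-1990": 0.25}

for band in actual:
    diff = model_assumed[band] - actual[band]
    print(f"{band:>10}: actual {actual[band]:.0%}, "
          f"model {model_assumed[band]:.0%}, gap {diff:+.0%}")
# If the model assumes an older building stock than the portfolio actually
# holds, losses on risks coded with unknown construction year will tend to
# be overstated, since older buildings usually attract higher vulnerability.
```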

There is also no database of agreed property, contents or business interruption valuations, so if a model’s assumed valuations are under- or overstated, the damage function may be inflated or diminished to balance to historical industry losses.
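
A small, hypothetical calculation illustrates the trade-off: if assumed values are understated, calibrating to the same industry loss forces the implied damage ratio upward.

```python
# Minimal illustration of the valuation/damage-function trade-off. If the
# values a model assumes are understated, the damage ratio implied by
# calibration to the same historical industry loss must be inflated to
# compensate. All figures are illustrative.
industry_loss = 2.0e9            # observed industry loss for an event
true_exposed_value = 40.0e9      # actual total insured value in the footprint
assumed_exposed_value = 32.0e9   # model's (understated) valuation assumption

true_damage_ratio = industry_loss / true_exposed_value          # 5.00%
calibrated_damage_ratio = industry_loss / assumed_exposed_value  # 6.25%
print(f"True mean damage ratio:      {true_damage_ratio:.2%}")
print(f"Calibrated (inflated) ratio: {calibrated_damage_ratio:.2%}")
# A portfolio whose submitted values are close to the truth will then be
# over-penalized by the inflated damage function, which is one reason to
# compare assumed valuations against the company's own data.
```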

See also: How to Vastly Improve Catastrophe Modeling  

Finally, companies must adjust “off-the-shelf” models for missing components. Examples include overlooked exposures, such as a detached garage; new underwriting guidelines, policy wordings or regulations; and the treatment of sub-perils, such as a tsunami following an earthquake. Loss adjustment difficulties are also not always adequately addressed in models. Loss leakage, such as when adjusters cannot separate covered wind loss from excluded storm surge loss, can inflate results, and complex events can drive higher labor and material costs or unusual delays. Users must also consider the cascading impact of failed risk mitigation measures, such as the failure of the backup generators powering the cooling systems at the Fukushima Daiichi nuclear power plant after the Tohoku earthquake.
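
One simple way to make such adjustments is to apply explicit loading factors on top of the modeled event losses. The sketch below does this for a hypothetical event table; the loading factors are illustrative judgment calls, not vendor parameters.

```python
# Minimal sketch: apply simple multiplicative loadings to a modeled event
# loss table to allow for non-modeled sub-perils (e.g. tsunami following
# earthquake) and loss adjustment "leakage". All factors are illustrative.
event_losses = {  # event_id -> (peril, modeled ground-up loss)
    101: ("earthquake", 25.0e6),
    102: ("wind",        8.0e6),
    103: ("earthquake",  3.0e6),
}
sub_peril_loading = {"earthquake": 1.10, "wind": 1.00}  # e.g. tsunami uplift
loss_leakage_loading = 1.05  # adjusters unable to fully separate excluded causes

adjusted = {
    eid: loss * sub_peril_loading[peril] * loss_leakage_loading
    for eid, (peril, loss) in event_losses.items()
}
for eid, loss in adjusted.items():
    print(f"event {eid}: adjusted loss {loss / 1e6:.1f}m")
```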

If an insurer performs regular, macro-level analyses of its model, validating estimated losses against historical experience and new views of risk, while also supplementing missing or inadequate micro-level components appropriately, it can construct a more resilient modeling strategy that minimizes the possibility of model failure and maximizes opportunities for profitable growth.

The views expressed herein are solely those of the author and do not reflect the views of Guy Carpenter & Company, LLC, its officers, managers, or employees.

This article was originally published on Brink.

The Traps Hiding in Catastrophe Models

Catastrophe models from third-party vendors have established themselves as essential tools in the armory of risk managers and other practitioners wanting to understand insurance risk relating to natural catastrophes. This is a welcome trend. Catastrophe models are perhaps the best way of understanding the risks posed by natural perils: they use a huge amount of information to link extreme or systemic external events to an economic loss and, in turn, to an insured (or reinsured) loss. But no model is perfect, and a certain kind of overreliance on the output from catastrophe models can have damaging consequences.

This article provides a brief overview of the kinds of traps and pitfalls associated with catastrophe modeling. We expect that this list is already familiar to most catastrophe modelers. It is by no means intended to be exhaustive. The pitfalls could be categorized in many different ways, but this list might trigger internal lines of inquiry that lead to improved risk processes. In the brave new world of enterprise risk management, and ever-increasing scrutiny from stakeholders, that can only be a good thing.

1. Understand what the model is modeling…and what it is not modeling!

This is probably not a surprising “No. 1” issue. In recent years, the number and variety of loss-generating natural catastrophes around the world have reminded companies and their risk committees that catastrophe models do not, and probably never will, capture the entire universe of natural perils; far from it. This is no criticism of modeling companies, simply a statement of fact that needs to remain at the front of every risk-taker’s mind.

The usual suspects—such as U.S. wind, European wind and Japanese earthquake—are “bread and butter” peril/territory combinations. However, other combinations are either modeled to a far more limited extent, or not at all. European flood models, for example, remain limited in territorial scope (although certain imminent releases from third-party vendors may well rectify this). Tsunami risk, too, may not be modeled even though it tends to go hand-in-hand with earthquake risk (as evidenced by the devastating 2011 Tohoku earthquake and tsunami in Japan).

Underwriters often refer to natural peril “hot” and “cold” spots, where a hot spot means a type of natural catastrophe that is particularly severe in terms of insurance loss and (relatively) frequent. Modeling companies’ focus on the hot spots is right and proper but means that cold spots can be somewhat overlooked. Indeed, the worldwide experience in 2011 and 2012 (including, among other events, the Thailand floods, the Australian floods and the New Zealand earthquakes) reminded companies that so-called cold spots are very capable of aggregating up to significant levels of insured loss. The severity of the recurrent earthquakes in Christchurch, and the associated insurance losses, demonstrate the uncertainty and subjectivity of the cold spot/hot spot distinction.

There are all sorts of alternative ways of managing the natural focus of catastrophe models on hot spots (exclusions, named perils within policy wordings, maximum total exposure, etc.) but so-called cold spots do need to remain on insurance companies’ risk radars, and insurers also need to remain aware of the possibility, and possible impact, of other, non-modeled risks.

2. Remember that the model is only a fuzzy version of the truth.

It is human nature to take the path of least resistance; that is, to rely on model output and assume that the model is getting you pretty close to the right answer. After all, we have the best people and modelers in the business! But even were that to be true, there can be a kind of vicious circle in which model output is treated with most suspicion by the modeler, with rather less concern by the next layer of management and so on, until summarized output reaches the board and is deemed absolute truth.

We are all very aware that data is never complete, and there can be surprising variations in data completeness across territories. For example, there may not be a defined post or zip code system for identifying locations, or original insured values may not be captured within the data. The building codes assigned to a particular risk may also be quite subjective, and there can be a number of “heroic” assumptions made during the modeling process in classifying and preparing the modeling data set. At the very least, these assumptions should be articulated and challenged. There can also be a “key person” risk, where data preparation has traditionally resided with one critical data processor, or a small team. If knowledge is not shared, then there is clear vulnerability to that person or team leaving. But there is also a risk of undue and unquestioning reliance being placed upon that individual or team, reliance that might be due more to their unique position than to any proven expertise.
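
A basic data-completeness report makes these assumptions visible before modeling begins. The sketch below is illustrative, with hypothetical records and field names.

```python
# Minimal sketch of a data-completeness report: measure what share of total
# sum insured relies on missing or defaulted fields before the portfolio is
# modeled. The records and field names are illustrative.
records = [  # (sum insured, postcode, construction class)
    (5.0e6, "1000-001", "masonry"),
    (2.5e6, None,       "unknown"),
    (8.0e6, "4000-123", "unknown"),
    (1.0e6, None,       "steel"),
]

total_tsi = sum(si for si, _, _ in records)
missing_postcode = sum(si for si, pc, _ in records if pc is None)
unknown_construction = sum(si for si, _, cc in records if cc == "unknown")

print(f"TSI with no postcode:          {missing_postcode / total_tsi:.0%}")
print(f"TSI with unknown construction: {unknown_construction / total_tsi:.0%}")
# Large percentages here mean the modeled result leans heavily on the
# modeler's assumptions, exactly the "heroic" assumptions that should be
# articulated and challenged.
```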

What kind of model has been run? A detailed, risk-by-risk model or an aggregate model? Certain people in the decision-making chain may not even understand that this could be an issue and simply consider that “a model is a model.”

It is worth highlighting how this fuzzy version of the truth has emerged both retrospectively and prospectively. Retrospectively, actual loss levels have on occasion far exceeded modeled loss levels: the breaching of the levees protecting New Orleans, for example, during Hurricane Katrina in 2005. Prospectively, new releases or revisions of catastrophe models have caused modeled results to move, sometimes materially, even when there is no change to the actual underlying insurance portfolio.

3. Employ additional risk monitoring tools beyond the catastrophe model(s). 

Catastrophe models are a great tool, but it is dangerous to rely on them as the only source of risk management information, even when an insurer has access to more than one proprietary modeling package.

Other risk management tools and techniques available include:

  • Monitoring total sum insured (TSI) by peril and territory (see the sketch after this list)
  • Stress and scenario testing
  • Simple internal validation models
  • Experience analysis
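
As a minimal sketch of the first tool, the snippet below accumulates total sum insured by peril and territory from a hypothetical exposure table; the records and territories are illustrative.

```python
# Minimal sketch: track total sum insured (TSI) by peril and territory as a
# model-independent accumulation check. The exposure records are illustrative.
from collections import defaultdict

exposures = [  # (territory, peril, sum insured)
    ("JP", "earthquake", 120.0e6),
    ("JP", "typhoon",     95.0e6),
    ("NZ", "earthquake",  40.0e6),
    ("TH", "flood",       15.0e6),
    ("TH", "flood",       22.0e6),
]

tsi = defaultdict(float)
for territory, peril, sum_insured in exposures:
    tsi[(territory, peril)] += sum_insured

for (territory, peril), total in sorted(tsi.items()):
    print(f"{territory} / {peril:>10}: TSI {total / 1e6:7.1f}m")
# Movements in these totals can be monitored against appetite limits even
# where no catastrophe model exists for the peril/territory combination.
```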

Stress and scenario testing, in particular, can be very instructive because a scenario yields intuitive and understandable insight into how a given portfolio might respond to a specific event (or small group of events). It enjoys, therefore, a natural complementarity with the hundreds of thousands of events underlying a catastrophe model. Furthermore, it is possible to construct scenarios to investigate areas where the catastrophe model may be especially weak, such as consideration of cross-class clash risk.
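
A deterministic scenario can be as simple as applying an assumed damage-ratio footprint to the portfolio’s sums insured, as in the hypothetical sketch below; the ratios and tolerance are assumptions, not calibrated model output.

```python
# Minimal sketch of a deterministic scenario test: apply assumed damage
# ratios (the scenario "footprint") to TSI by area and compare the implied
# gross loss to a risk tolerance. All figures are illustrative.
portfolio_tsi = {"Tokyo": 80.0e6, "Osaka": 40.0e6, "Nagoya": 25.0e6}
scenario_damage_ratio = {"Tokyo": 0.08, "Osaka": 0.03, "Nagoya": 0.01}
risk_tolerance = 7.5e6

scenario_loss = sum(portfolio_tsi[area] * scenario_damage_ratio[area]
                    for area in portfolio_tsi)
print(f"Scenario gross loss: {scenario_loss / 1e6:.1f}m "
      f"({'within' if scenario_loss <= risk_tolerance else 'exceeds'} tolerance)")
```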

Experience analysis might, at first glance, appear to be an inferior tool for assessing catastrophe loss. Indeed, at the most extreme end of the scale, it will normally provide only limited insight. But catastrophe models are themselves built and parameterized using historical data and historical events. This means that a quick assessment of how a portfolio has performed against the usual suspects, such as, for U.S. exposures, hurricanes Ivan (2004), Katrina (2005), Rita (2005), Wilma (2005), Ike (2008) and Sandy (2012), can provide some very interesting independent views on the shape of the modeled distribution. In this regard, it is essential to tap into the underwriting expertise and qualitative insight that property underwriters can bring to risk assessment.
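
One way to make that comparison concrete is to place the portfolio’s as-if event losses onto the modeled occurrence curve and read off the implied return periods; both the losses and the curve below are illustrative placeholders.

```python
# Minimal sketch: place as-if losses from well-known historical hurricanes
# onto a modeled occurrence exceedance-probability curve to see what return
# period the model assigns to each. All figures are illustrative.
import numpy as np

as_if_losses = {"Katrina (2005)": 55.0, "Ike (2008)": 18.0,
                "Sandy (2012)": 30.0}  # $m, trended to today's portfolio

# Modeled occurrence EP curve: return period (years) vs loss ($m).
modeled_rp = np.array([5, 10, 25, 50, 100, 250])
modeled_loss = np.array([12.0, 22.0, 40.0, 60.0, 85.0, 130.0])

for event, loss in as_if_losses.items():
    implied_rp = np.interp(loss, modeled_loss, modeled_rp)
    print(f"{event:>15}: as-if loss {loss:5.1f}m ~ modeled {implied_rp:.0f}-yr RP")
# If several events that occurred within a decade map to implausibly long
# modeled return periods, the modeled curve may be too thin-tailed for this
# portfolio, which is a prompt to involve the property underwriters.
```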

4. Communicate the modeling uncertainty.

In light of the inherent uncertainties that exist around modeled risk, it is always worth discussing how to load explicitly for model and parameter risk when reporting return-period exposures, and their movements, to senior management. Pointing out the need for model risk buffers, and highlighting that they are material, can trigger helpful discussions in the relevant decision-making forums. Indeed, finding the most effective way of communicating the weaknesses of catastrophe modeling, without losing the headline messages in the detail and complexity of the modeling steps, and without senior management dismissing the models as too flawed to be of any use, is sometimes as important for the business as the original modeling process.
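
One simple presentation is to show return-period figures both with and without an explicit model-risk buffer; the 15% loading in the sketch below is a placeholder for whatever the business agrees.

```python
# Minimal sketch: report return-period exposures both unloaded and with an
# explicit, clearly labelled buffer for model and parameter risk. The 15%
# loading is an illustrative placeholder, not a recommended value.
model_parameter_loading = 0.15

modeled_exposures = {"1-in-100": 85.0e6, "1-in-200": 120.0e6}  # net of reinsurance
for rp, loss in modeled_exposures.items():
    loaded = loss * (1 + model_parameter_loading)
    print(f"{rp}: modeled {loss / 1e6:.0f}m, "
          f"with model-risk buffer {loaded / 1e6:.0f}m")
# Showing both figures side by side keeps the uncertainty visible to the
# board without burying the headline numbers in modeling detail.
```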

The decisions that emerge from these internal debates should ultimately protect the risk carrier from surprise or outsize losses. When they do happen, such surprises tend to cause a rapid loss of credibility with outside analysts, rating agencies and capital providers.