
How CAT Models Are Extending to Cyber

The insurance industry relies heavily on catastrophe modeling to assess capital adequacy, adhere to and respond to evolving regulatory requirements and stress test portfolios. The same is now increasingly true in the cyber catastrophe sphere, where key areas of focus include how models can help with capital allocation, stress testing and informing the development of underwriting guidelines and insurance products. Parallels can be drawn between the natural catastrophe and cyber catastrophe risk management sectors when modeling these risks.

The introduction of models provided critical insight into the potential for catastrophic claims under all-risk policies or policies without clear exclusionary language. Historical events such as the April 1906 San Francisco earthquake (which led to unanticipated claims on fire policies), the 2005 Hurricane Katrina flooding (which resulted in unanticipated claims on homeowners wind policies) and the 9/11 U.S. terrorist attacks (which raised unanticipated questions about war-exclusion interpretation and the definition of a single event), along with the current unfolding of the coronavirus pandemic, highlight how critical it is to understand the triggers and correlation of potential loss from a single event.

In many cases, insurers paid losses to avert “reputational risk” and have since used models to provide insight into realistic structuring of policy, reinsurance and other risk transfer vehicles. Clear exclusionary language, endorsements and coverage-specific terms evolved over the decades in concert with evolving scientific knowledge of the risks and modeled loss potential. 

Today, we are seeing the same evolution with respect to insuring cyber risk, but over a highly compressed period and without decades of experience with systemic insured loss events. Many cyber catastrophe risk managers attempt to apply the same expectations they hold for current natural catastrophe models regarding data resolution, data quality, knowledge of catastrophic events and model validation. But by embracing the lessons learned from the evolution of the property catastrophe insurance market, we can prepare for an event that is a case of not “if” but “when.”

The role of data in models

A first common theme is recognizing that, for a rapidly evolving risk where understanding and information are limited, there is value in aggregate data in the absence of detailed data. This has been and is still the case for property catastrophe models and is also the case for cyber catastrophe risk models. Confidentiality obligations around portfolio data, as well as the lack of high-quality data, are issues for all models. However, new sources of data and sophisticated data science and artificial intelligence analytics are being incorporated into models, providing increased confidence in assessing the potential risk to an individual company or entity.

See also: Coronavirus Boosts Cyber Risk

A second, related common theme is the ability of catastrophe risk models to compensate for the lack of risk-specific data captured at the time of underwriting. This is where all catastrophe risk models add significant value, providing context for both what should be captured and what can be captured. In the case of cyber, this can include access to both inside-out (behind the firewall) and outside-in (outside the firewall) data. Inside-out data refers to aggregate data for segments of the economy, measuring anonymized trends in security behaviors (such as the frequency of software patching). Outside-in data is made up of specific signals that can be identified from outside an organization and that give indications of overall cybersecurity maturity (such as the use of unsupported end-of-life products).
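To make the two data categories concrete, here is a minimal sketch of how such signals might be combined into a simple security-posture indicator. The field names and weights are hypothetical, not any vendor's actual schema or scoring method.

```python
# A minimal sketch (hypothetical fields and weights) of combining inside-out and
# outside-in signals into a simple security-posture score.
from dataclasses import dataclass

@dataclass
class InsideOutSignals:          # aggregate, behind-the-firewall trends
    patch_frequency_days: float  # average days between software patches
    segment: str                 # economic segment the aggregate describes

@dataclass
class OutsideInSignals:          # externally observable indicators
    end_of_life_products: int    # count of unsupported products detected
    open_critical_ports: int     # externally visible high-risk services

def posture_score(inside: InsideOutSignals, outside: OutsideInSignals) -> float:
    """Toy 0-100 score: higher means better cyber hygiene (illustrative only)."""
    score = 100.0
    score -= min(inside.patch_frequency_days, 90) * 0.5   # slow patching penalized
    score -= outside.end_of_life_products * 5.0
    score -= outside.open_critical_ports * 3.0
    return max(score, 0.0)

print(posture_score(InsideOutSignals(30, "manufacturing"),
                    OutsideInSignals(2, 1)))   # -> 72.0
```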

A third commonality is the challenge of extrapolating the impact of past events into the future, given evolving data on the changing drivers of frequency and severity of cyber events. The property catastrophe arena is grappling with very similar issues relative to the rapid and uncertain evolution of climate models. For cyber risks, history is not a predictor of the future in terms of modeling threat actors, the methods they deploy and the vulnerabilities they exploit. However, it is possible to examine historic data and the types of cyber incidents that have occurred while addressing the challenges in the way that information is collected, curated and used. This historic data is then set against the backdrop of near-term threat-actor and technological trends to understand potential future systemic losses due to large-scale attacks on bigger and more interconnected entities.

The role of probabilistic models

At the enterprise level, the market is struggling with how to assess potential aggregations within and across business lines. Event clash, where a single event triggers losses across multiple policies and reinsurance treaties, is a key concern across all lines of business. The use of common cyber and other catastrophe risk loss metrics that can be combined across perils and lines of business is being explored. In addition, regulatory groups are considering requirements, similar to those for property catastrophe risk, to address solvency relative to cyber risk.

In this environment, consistent and structured definitions of risk measures are critical for assessing and communicating potential systemic catastrophic loss. Both deterministic cyber scenario analyses and probabilistic stochastic cyber event analyses are required. Given this context, cyber catastrophe risk models that can withstand validation scrutiny comparable to that applied to property catastrophe risk models require the same rigorous attention to transparency in communicating model methodology.
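To illustrate how the two analysis types relate, here is a minimal sketch with invented parameters (not any model's actual event set): occurrence and aggregate exceedance-probability metrics are built from a stochastic event set, and a single deterministic scenario loss is placed against them.

```python
# A minimal sketch (illustrative numbers only): occurrence (OEP) and aggregate
# (AEP) exceedance metrics from a stochastic cyber event set, plus one
# deterministic scenario loss for context.
import numpy as np

rng = np.random.default_rng(0)
n_years = 50_000

# Stochastic cyber event set: Poisson event counts, lognormal severities ($m).
counts = rng.poisson(0.8, n_years)
max_loss = np.zeros(n_years)      # largest single event per year -> OEP
agg_loss = np.zeros(n_years)      # sum of events per year        -> AEP
for year, k in enumerate(counts):
    if k:
        losses = rng.lognormal(mean=2.5, sigma=1.6, size=k)
        max_loss[year] = losses.max()
        agg_loss[year] = losses.sum()

def return_period_loss(annual_values, years):
    """Loss exceeded on average once every `years` years."""
    return np.quantile(annual_values, 1 - 1 / years)

print("1-in-200 OEP:", round(return_period_loss(max_loss, 200), 1), "$m")
print("1-in-200 AEP:", round(return_period_loss(agg_loss, 200), 1), "$m")

# Deterministic scenario (e.g. a hypothetical cloud-outage footprint) for context.
scenario_loss = 400.0
print("Scenario exceedance prob:", (max_loss >= scenario_loss).mean())
```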

Similarities… but some differences

There are some key differences between the systemic risks of natural disasters and cyber events. One material contrast is that cyber perils manifest with active adversaries seeking to cause malicious damage to individuals and companies globally. The factors affecting modeling include the changing nature of geopolitical threats, the dramatic increase in the use of digital means for criminal enterprises, the hyperconnectivity of developed economies and an ever-increasing reliance on networked technologies. Cyber event scenarios are developed to represent a range of potential systemic events in which technological dependencies affect individual insured companies, due to a common vulnerability or a “single point of failure.” Examples include common cloud service providers, payment systems, mobile phone networks, operating systems and other connected technologies. 

See also: Risks, Opportunities in the Next Wave  

There are limitations in any model relating to cyber risk, given the inherent uncertainties. Nevertheless, these models provide valuable insights to better decision-making relating to capital planning, reinsurance and addressing regulatory issues. By learning from previous insurance shocks, we can support a more stable and resilient cyber risk insurance market.

How to Avoid Failed Catastrophe Models

Since commercial catastrophe models were introduced in the 1980s, they have become an integral part of the global (re)insurance industry. Underwriters depend on them to price risk, management uses them to set business strategies and rating agencies and regulators consider them in their analyses. Yet new scientific discoveries and claims insights regularly reshape our view of risk, and a customized model that is fit-for-purpose one day might quickly become obsolete if it is not updated for changing business practices and advances in our understanding of natural and man-made events in a timely manner.

Despite the sophisticated nature of each new generation of models, new events sometimes expose previously hidden attributes of a particular peril or region. In 2005, Hurricane Katrina caused economic and insured losses in New Orleans far greater than expected because models did not consider the possibility of the city’s levees failing. In 2011, the existence of a previously unknown fault beneath Christchurch and the fact that the city sits on an alluvial plain of damp soil led to unexpected liquefaction in the New Zealand earthquake. And in 2012, Superstorm Sandy exposed the vulnerability of underground garages and electrical infrastructure in New York City to storm surge, a secondary peril in wind models that did not consider the placement of these risks in pre-Sandy event sets.

Such surprises affect the bottom lines of (re)insurers, who price risk largely based on the losses and volatility suggested by the thousands of simulated events analyzed by a model. However, there is a silver lining for (re)insurers. These events advance modeling capabilities by improving our understanding of the peril’s physics and damage potential. Users can then often incorporate such advances themselves, along with new technologies and best practices for model management, to keep their company’s view of risk current – even if the vendor has not yet released its own updated version – and validate enterprise risk management decisions to important stakeholders.

See also: Catastrophe Models Allow Breakthroughs  

When creating a resilient internal modeling strategy, (re)insurers must weigh cost, data security, ease of use and dependability. Complementing a core commercial model with in-house data and analytics and standard formulas from regulators, and reconciling any material differences in hazard assumptions or modeled losses, can help companies of all sizes manage resources. Additionally, the work protects sensitive information, allows access to the latest technology and support networks and mitigates the impact of a crisis on vital assets – all while developing a unique risk profile.

To the extent resources allow, (re)insurers should analyze several macro- and micro-level considerations when evaluating the merits of a given platform. On the macro level, unless a company’s underwriting and claims data dominated the vendor’s development methodology, customization is almost always desirable, especially at the bottom of the loss curve where there is more claims data. If a large insurer with robust exposure and claims data is heavily involved in the vendor’s product development, the model’s vulnerability assumptions and loss payout and development patterns will likely mirror those of the company itself, so less customization is necessary. Either way, users should validate modeled losses against historical claims from both their own company and industry perspectives, taking care to adjust for inflation, exposure changes or non-modeled perils, to confirm the reasonability of return periods in portfolio and industry occurrence and aggregate exceedance-probability curves. Without this important step, insurers may find their modeled loss curves differ materially from observed historical results, as illustrated below.
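A minimal sketch of that validation step follows; the loss history, inflation and exposure-growth figures are invented for illustration. Historical losses are trended to current terms, and their implied return periods can then be compared with the modeled occurrence exceedance-probability curve.

```python
# A minimal sketch (hypothetical figures): trend historical catastrophe losses
# to current terms, then compare their implied return periods with the modeled
# occurrence exceedance-probability curve at the same return periods.

# (year, reported loss $m) for a hypothetical company
history = [(1999, 12.0), (2004, 45.0), (2011, 30.0), (2017, 80.0)]
inflation = 0.025          # assumed annual claims inflation
exposure_growth = 0.04     # assumed annual growth in insured exposure
years_observed = 25

trended = sorted(
    (loss * (1 + inflation + exposure_growth) ** (2024 - yr) for yr, loss in history),
    reverse=True,
)
# Empirical return period of the k-th largest observed loss ~ years / k.
for rank, loss in enumerate(trended, start=1):
    print(f"~1-in-{years_observed / rank:.0f}-year observed loss: {loss:.0f} $m")

# The modeled OEP losses at these return periods would be compared here;
# material gaps prompt customization of the vendor curve.
```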

A micro-level review of model assumptions and shortcomings can further narrow the odds of a “shock” loss. As such, it is critical to precisely identify risks’ physical locations and characteristics, as loss estimates may vary widely within a short distance – especially for flood, where elevation is an important factor. When a model’s geocoding engine or a national address database cannot assign location, there are several disaggregation methodologies available, but each produces different loss estimates. European companies will need to be particularly careful regarding data quality and integrity as the new General Data Protection Regulation, which may mean less specific location data is collected, takes effect.

A risk’s physical characteristics are as important as its location, because a model will estimate a range of possibilities without this information. If the assumption regarding year of construction, for example, differs materially from the insurer’s actual distribution, modeled losses for risks with unknown construction years may be under- or overestimated. The exhibit below illustrates the difference between an insurer’s actual data and a model’s assumed year of construction distribution based on regional census data in Portugal. In this case, the model assumes an older distribution than the actual data shows, so losses on risks with unknown construction years may be overstated.

There is also no database of agreed property, contents or business interruption valuations, so if a model’s assumed valuations are under- or overstated, the damage function may be inflated or diminished to balance to historical industry losses.

See also: How to Vastly Improve Catastrophe Modeling  

Finally, companies must also adjust “off-the-shelf” models for missing components. Examples include overlooked exposures like a detached garage; new underwriting guidelines, policy wordings or regulations; or the treatment of sub-perils, such as a tsunami resulting from an earthquake. Loss adjustment difficulties are also not always adequately addressed in models. Loss leakage – such as when adjusters cannot separate covered wind loss from excluded storm surge loss – can inflate results, and complex events can drive higher labor and material costs or unusual delays. Users must also consider the cascading impact of failed risk mitigation measures, such as the malfunction of cooling generators in the Fukushima nuclear power plant after the Tohoku earthquake.

If an insurer performs regular, macro-level analyses of its model, validating estimated losses against historical experience and new views of risk, while also supplementing missing or inadequate micro-level components appropriately, it can construct a more resilient modeling strategy that minimizes the possibility of model failure and maximizes opportunities for profitable growth.

The views expressed herein are solely those of the author and do not reflect the views of Guy Carpenter & Company, LLC, its officers, managers, or employees.

You can find the article originally published on Brink.

San Andreas — The Real Horror Story

For the past two weeks, the disaster movie “San Andreas” has topped the box office, taking in more than $200 million worldwide. The film stars Dwayne “The Rock” Johnson, who plays a helicopter rescue pilot who, after a series of cataclysmic earthquakes on the San Andreas fault in California, uses his piloting skills to save members of his family. It’s an action-packed plot sure to keep audiences on the edge of their seats.

As insurance professionals who specialize in quantifying catastrophic loss, we can’t help but think of the true disaster that awaits California and other regions in the U.S. when “the big one” actually does occur.

The real horror starts with the fact that 90% of California residents DO NOT maintain earthquake insurance. The “big one” is likely to produce economic losses in either the San Francisco or Los Angeles metropolitan areas in excess of $400 billion. With so little of this potential damage insured, thousands of families will become homeless, and countless businesses will be affected – many permanently. The cost burden for the cleanup, rescue, care and rebuilding will likely be borne by the U.S. taxpayer. The images of the carnage will make the human desperation we saw in both Hurricane Katrina and Superstorm Sandy pale by comparison.

The reasons given for such low take-up of earthquake insurance generally fall into two categories: (1) Earthquake risk is too volatile, too difficult to insure and, as a result, (2) is too expensive for most homeowners.

Is California earthquake risk too volatile to insure?

No.

The earthquake faults in California, including the Hayward, the Calaveras and the San Andreas faults, are the most studied and understood fault systems in the world. The U.S. Geological Survey (USGS) publishes updated frequency and severity likelihoods every six years for the entire U.S. This means that estimation of potential earthquake losses, while not fully certain, can be reasonably achieved in the same manner that we currently estimate potential losses from perils such as tornados and hurricanes. In fact, the catastrophe (CAT) models agree that, on a dollar-for-dollar exposure basis, losses from Florida hurricanes making landfall are likely more severe and more frequent over time than California earthquakes, yet nearly 100% of Florida homeowners actually maintain windstorm insurance. If hurricane risk in Florida isn’t too volatile for insurers to cover, then earthquake risk in California should follow that same path.

Isn’t earthquake coverage expensive?

Again, the answer is a resounding no.

The California Earthquake Authority (CEA), the largest writer of earthquake insurance in the U.S., has a premium calculator that quotes mobile homes, condos, renters and homeowners insurance. For example, a $500,000 single-family home in Orange County, CA, can be insured for about $800 a year, or roughly the same price as a traditional fire insurance policy. To protect a $500,000 home, an $800 investment is hardly considered expensive.

The real question should be: Are California homeowners getting good value? CEA policies carry very high deductibles — typically in the 10% to 15% range — and the price is “expensive” when the high deductibles are considered. As one actuary once explained it to us, “With that kind of deductible, I’ll likely never use the coverage, so like everyone else I’ll cross my fingers and hope the ‘big one’ doesn’t happen in my lifetime.”
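For a sense of the numbers involved, here is a back-of-the-envelope sketch using the article's round figures (the CEA's actual rating plan is more involved): the rate per $1,000 of coverage, and how much damage a policyholder would absorb before a 10% or 15% deductible is met.

```python
# A minimal sketch of the value arithmetic above, using the article's round
# figures rather than actual CEA rating factors.
home_value = 500_000
annual_premium = 800

rate_per_1000 = annual_premium / (home_value / 1000)
print(f"Rate: ${rate_per_1000:.2f} per $1,000 of coverage")   # $1.60

for deductible_pct in (0.10, 0.15):
    out_of_pocket = home_value * deductible_pct
    print(f"{deductible_pct:.0%} deductible -> first ${out_of_pocket:,.0f} "
          "of damage is uninsured")
```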

It’s this lack of value that’s the single biggest impediment preventing millions of California homeowners from purchasing earthquake insurance. It’s also an area that has much room for improvement.

How can we as an industry raise the value proposition of earthquake coverage? Consider the following:

  1. The industry can make better use of technology, especially the CAT models. California is earthquake country, but it’s also a massive state. This map shows that the high-risk areas mostly follow the San Andreas fault and the branches off that fault. There are many lower-risk areas in California, and the CAT models can be used to distinguish the high risk from the low risk. Low risk exposures should demand lower premiums. Even high-risk exposures can be controlled by using the CAT models to manage aggregates and identify the low-risk exposure within the high-risk pools. We expect that CAT models will help us get back to Insurance 101 by helping the industry to better understand exposure to loss, segment risks, correct pricing, manage aggregates and create profitable pools of exposure.
  2. The industry can bundle earthquake risks with other risks to reduce volatility. Earthquake-only writers (and flood as well) are essentially “all in” on one type of risk, to steal a common poker term. Those writers will fluctuate year to year; there will be years with little or no losses, then years with substantial losses. That volatility affects retained losses and also affects reinsurance prices. Having one source of premium means constantly conducting business on the edge of insolvency. Bundling earthquake risks geographically and with other perils reduces volatility. The Pacific Northwest, Alaska, Hawaii and even areas in the Midwest and the Carolinas are all known to be seismically active. In fact, Oklahoma and Texas are now the new hotbed regions of earthquake activity. Demand in those areas exists, so why not package that risk? Reducing volatility will reduce prices and help stabilize the market. We estimate that in parts of California, volatility is the cause of as much as 50% of the CEA premium.

Hollywood has produced yet another action-packed film. But to add a touch of realism, Hollywood screenwriters should consider making the leading actor, The Rock, a true hero – an “insurance super hero” who sells affordable earthquake insurance.

Model Errors in Disaster Planning

“All models are wrong; some are useful.” – George Box

We have spent three articles (article 1, article 2, article 3) explaining how catastrophe models provide a tool for much-needed innovation in the global insurance industry. Catastrophe models have compensated for the industry's limited loss experience and let insurers properly price and underwrite risks, manage portfolios, allocate capital and design risk management strategies. Yet for all the practical benefits CAT models have infused into the industry, product innovation has stalled.

The halt in progress is a function of what models are and how they work. In fairness to those who do not put as much stock in the models as a useful tool, it is important to speak of the models’ limitations and where the next wave of innovation needs to come from.

Model Design

Models are sets of simplified instructions that are used to explain phenomena and provide relevant insight into future events (for CAT models – estimating future catastrophic losses). We humans start using models at very early ages. No one would confuse a model airplane with a real one; however, if a parent wanted to simplify the laws of physics to explain to a child how planes fly, then a model airplane is a better tool than, say, a physics book or computer-aided design software. Conversely, if you are a college student studying engineering or aerodynamics, the reverse is true. In each case, we are attempting to use a tool – models of flight, in this instance – to explain how things work and to lend insight into what could happen based on historical data so that we can merge theory and practice into something useful. It is the constant iteration between theory and practice that allows an airplane manufacturer to build a new fighter jet, for instance. No manufacturer would foolishly build an airplane based on models alone, no matter how scientifically advanced those models are, but those models would be incredibly useful in guiding the designers to experimental prototypes. We build models, test them, update them with new knowledge, test them again and repeat the process until we achieve desired results.

The design and use of CAT models follow this exact pattern. The first CAT models estimated loss by first calculating total industry losses and then proportionally allocating losses to insurers based on assumed market share. That evolved into calculating loss estimates for specific locations at specific addresses. As technology advanced into the 1990s, model developers harnessed that computing power and were able to develop simulation programs to analyze more data, faster. The model vendors then added more models to cover more global peril regions. Today’s CAT models can even estimate construction type, height and building age if an insurer does not readily have that information.

As catastrophic events occur, modelers routinely compare the actual event losses with the models and measure how well or how poorly the models performed. Using actual incurred loss data helps calibrate the models and also enables modelers to better understand the areas in which improvements must be made to make them more resilient.

However, for all the effort and resources put into improving the models (model vendors spend millions of dollars each year on model research, development, improvement and quality assurance), there is still much work to be done to make them even more useful to the industry. In fact, virtually every model component has its limitations. A CAT model’s hazard module is a good example.

The hazard module takes into account the frequency and severity of potential disasters. Following the calamitous 2004 and 2005 U.S. hurricane seasons, the chief model vendors felt pressure to amend their base catalogs to reflect the new high-risk era we appeared to be in, that is, to take into account higher-than-average sea surface temperatures. These model changes dramatically affected reinsurance purchase decisions and account pricing. And yet, little followed. What was assumed to be a new normal of elevated risk actually turned into one of the quietest periods on record.
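To show how directly the hazard module's assumptions drive results, here is a minimal sketch with toy frequency and severity parameters (not any vendor's catalog): bumping the assumed annual event rate, much as the post-2005 elevated-frequency catalogs did, shifts both the average annual loss and the tail.

```python
# A minimal sketch (toy parameters) of a hazard module's two ingredients, annual
# event frequency and severity, and how a frequency adjustment flows straight
# through to loss estimates.
import numpy as np

rng = np.random.default_rng(1)
n_years = 100_000

def simulate_annual_losses(annual_rate):
    counts = rng.poisson(annual_rate, n_years)             # frequency assumption
    totals = np.zeros(n_years)
    for year, k in enumerate(counts):
        if k:
            totals[year] = rng.pareto(2.5, k).sum() * 50   # severity assumption ($m)
    return totals

baseline = simulate_annual_losses(annual_rate=0.6)
warm_phase = simulate_annual_losses(annual_rate=0.8)        # elevated-frequency catalog

print("Average annual loss, baseline :", round(baseline.mean(), 1), "$m")
print("Average annual loss, warm     :", round(warm_phase.mean(), 1), "$m")
print("1-in-100 loss, baseline       :", round(np.quantile(baseline, 0.99), 1), "$m")
print("1-in-100 loss, warm           :", round(np.quantile(warm_phase, 0.99), 1), "$m")
```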

Another example was the magnitude-9.0 2011 Great Tōhoku Earthquake in Japan. The models had no events even close to this monster earthquake in their event catalogs. Every model clearly got it wrong, and, as a result, model vendors scrambled to fix this “error” in the model. Have the errors been corrected? Perhaps in these circumstances, but what other significant model errors exist that have yet to be corrected?

CAT model peer reviewers have also taken issue with the event catalogs used in the modeling process to quantify catastrophic loss. For example, a problem for insurers is answering questions such as: What is the probability of a Category 5 hurricane making landfall in New York City? Of course, no one can provide an answer with certainty. However, while no one can doubt the significance of the damage an event of that intensity would bring to New York City (Superstorm Sandy was not even a hurricane at landfall in 2012 and yet caused tens of billions of dollars in insured damages), the critical question for insurers is: Is this event rare enough that it can be ignored, or do we need to prepare for an event of that magnitude?

To place this into context, the Category 3, 1938 Long Island Express event would probably cause more than $50 billion in insured losses today, and that event did not even strike New York City. If a Category 5 hurricane hitting New York City was estimated to cause $100 billion in insured losses, then knowing whether this was a 1-in-10,000-year possibility or a 1-in-100-year possibility could mean the difference between solvency and insolvency for many carriers. If that type of storm was closer to a 1-in-100-year probability, then insurers have the obligation to manage their operations around this possibility; the consequences are too grave, otherwise.
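The arithmetic behind that solvency distinction is worth making explicit. The sketch below uses the article's $100 billion scenario to show how the annual exceedance probability and the implied expected annual cost differ between the two return periods.

```python
# A minimal sketch of why the return-period question matters: the same
# $100 billion scenario carries a very different annual probability and
# expected annual cost at 1-in-100 than at 1-in-10,000.
scenario_loss = 100e9     # insured loss if a Category 5 strikes New York City

for return_period in (100, 10_000):
    annual_prob = 1 / return_period
    expected_annual_cost = scenario_loss * annual_prob
    print(f"1-in-{return_period:,}: annual probability {annual_prob:.2%}, "
          f"expected annual cost ${expected_annual_cost / 1e6:,.0f}m")
```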

Taking into account the various chances of a Category 5 directly striking New York City, what does that all mean? It means that adjustments in underwriting, pricing, accumulated capacity in that region and, of course, reinsurance design all need to be considered — or reconsidered, depending on an insurer’s present position relative to its risk appetite. Knowing the true probability is not possible at this time; we need more time and research to understand that. Unfortunately for insurers, rating agencies and regulators, we live in the present, and sole reliance on the models to provide “answers” is not enough.

Compounding this problem is that, regardless of the peril, errors exist in every model’s event catalog. These errors cannot be avoided entirely, and the problem escalates where the paucity of historical records and scientific experiments limits the industry’s ability to inch closer and closer to greater certainty.

Earthquakes still lie beyond the comfortable reach of predictability. Some of the largest and most consequential earthquakes in U.S. history have occurred near New Madrid, MO, and scientists are still wrestling with the mechanics of that fault system. Thus, managing a portfolio of properties solely on the basis of CAT model output is foolhardy at best. There is too much financial consequence from phenomena that scientists still do not understand.

Modelers also need to continuously reassess property vulnerability across the various building stock types and current building codes. Doing so with imperfect data and across differing building codes and regulations is difficult. That is largely the reason that so-called “vulnerability curves” oftentimes are revised after spates of significant events. Understandably, each event yields additional data points, which must be taken into account in future model versions. Damage surveys following Hurricane Ike, for example, showed that the models underestimated contents vulnerability within large high-rises because of water damage caused by wind-driven rain.

As previously described, a model is a set of simplified instructions, which can be programmed to make various assumptions based on the input provided. Models, therefore, are subject to the “garbage in, garbage out” problem. As insurers adopt these new models, they often need to cajole their legacy IT systems into providing the required data to run them. For many insurers, this is an expensive and resource-intensive process, often taking years.

Data Quality’s Importance

Currently, the quality of industry data to be used in such tools as CAT models is generally considered poor. Many insurers are inputting unchecked data into the models. For example, it is not uncommon that building construction type, occupancy, height and age, not to mention a property’s actual physical address, are unknown! For each  property whose primary and secondary risk characteristics are missing, the models must make assumptions regarding those precious missing inputs – even regarding where the property is located. This increases model uncertainty, which can lead to inaccurate assessment of an insurer’s risk exposure.

CAT modeling results are largely ineffective without quality data collection. For insurers, the key risk is that poor data quality could lead to a misunderstanding of their exposure to potential catastrophic events. This, in turn, will affect portfolio management, possibly leading to an unwanted exposure distribution and unexpected losses, which will affect both insurers’ and their reinsurers’ balance sheets. If model results are skewed as a result of poor data quality, this can lead insurers to incorrect assumptions, inadequate capitalization and the failure to purchase sufficient reinsurance. Model results based on complete and accurate data ensure greater model output certainty and credibility.

The Future

Models are designed and built based on information from the past. Using them is like trying to drive a car by looking only in the rearview mirror; nonetheless, catastrophes, whether natural or man-made, are inevitable, and having a robust means to quantify them is critical to the global insurance marketplace and lifecycle.

Or is it?

Models, and CAT models in particular, provide a credible industry tool to simulate the future based on the past, but is it possible to simulate the future based on perceived trends and worst-case scenarios? Every CAT model has its imperfections, which must be taken into account, especially when employing modeling best practices. All key stakeholders in the global insurance market, from retail and wholesale brokers to reinsurance intermediaries, from insurers to reinsurers and to the capital markets and beyond, must understand the extent of those imperfections, how error-sensitive the models can be and how those imperfections must be accounted for to gain the most accurate insight into individual risks or entire risk portfolios. Small differences can mean a lot.

The next wave of innovation in property insurance will come from going back to insurance basics: managing risk for the customer. Despite model limitations, creative and innovative entrepreneurs will use models to bundle complex packages of risks that will be both profitable to the insurer and economical to the consumer. Consumers desiring to protect themselves from earthquake risks in California, hurricane risks in Florida and flood risks on the coast and inland will have more options. Insurers looking to deploy capital and find new avenues of growth will use CAT models to simulate millions of scenarios to custom-create portfolios optimizing their capacity and to create innovative product features that distinguish their products from competitors’. Intermediaries will use the models to educate and craft effective risk management programs to maximize their clients’ profitability.

For all the benefit CAT models have provided the industry over the past 25 years, we are only driving the benefit down to the consumer in marginal ways. The successful property insurers of the future will be the ones who close the circle and use the models to create products that make the transfer of earthquake, hurricane and other catastrophic risks available and affordable.

In our next article, we will examine how we can use CAT models to solve some of the critical insurance problems we face.

How CAT Models Lead to Soft Prices

In our first article in this series, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. In the second article, we looked at how, beginning in the mid-1980s, people began developing models that could help prevent a recurrence of such staggering, unanticipated losses. In this article, we look at how modeling results are being used in the industry.

 

Insurance is a unique business. In most other businesses, expenses associated with costs of operation are either known or can be fairly estimated. The insurance industry, however, needs to estimate expenses for things that are extremely rare or have never happened before. Things such as the damage to a bridge in New York City from a flood or the theft of a precious heirloom from your home or the fire at a factory, or even Jennifer Lopez injuring her hind side. No other industry has to make so many critical business decisions as blindly as the insurance industry. Even in circumstances in which an insurer can accurately estimate a loss to a single policyholder, without the ability to accurately estimate multiple losses all occurring simultaneously, which is what happens during natural catastrophes, the insurer is still operating blindly. Fortunately, the introduction of CAT models greatly enhances both the insurer’s ability to estimate the expenses (losses) associated with a single policyholder and concurrent claims from a single occurrence.

When making decisions about which risks to insure, how much to insure them for and how much premium is required to profitably accept the risk, there are essentially two metrics that can provide the clarity needed to do the job. Whether you are a portfolio manager managing the cumulative risk for a large line of business, an underwriter receiving a submission from a broker to insure a factory or an actuary responsible for pricing exposure, what you minimally need to know is (a short worked sketch follows the list):

  1. On average, what will potential future losses look like?
  2. On average, what are the reasonable worst case loss scenarios, or the probable maximum loss (PML)?
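Here is a minimal sketch of those two metrics computed from simulated annual losses. The loss distribution is synthetic and purely illustrative, and the PML is taken at the 1-in-100 and 1-in-250 year levels, though carriers choose their own return periods.

```python
# A minimal sketch (synthetic losses): average annual loss (item 1) and probable
# maximum loss, here read off at two return periods (item 2).
import numpy as np

rng = np.random.default_rng(7)
annual_losses = rng.lognormal(mean=15.5, sigma=1.2, size=100_000)  # simulated years, $

aal = annual_losses.mean()
pml_100 = np.quantile(annual_losses, 1 - 1 / 100)
pml_250 = np.quantile(annual_losses, 1 - 1 / 250)

print(f"Average annual loss : ${aal:,.0f}")
print(f"1-in-100 PML        : ${pml_100:,.0f}")
print(f"1-in-250 PML        : ${pml_250:,.0f}")
```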

Those two metrics alone supply enough information for an insurer to make critical business decisions in these key areas:

  • Risk selection
  • Risk-based pricing
  • Capacity allocation
  • Reinsurance program design

Risk Selection

Risk selection includes an underwriter’s determination of the class (such as preferred, standard or substandard) to which a particular risk is deemed to belong, its acceptance or rejection and (if accepted) the premium.

Consider two homes: a $1 million wood frame home and a $1 million brick home, both located in Los Angeles. Which home is riskier to the insurer? Before the advent of catastrophe models, the determination was based on historical data and, essentially, opinion. Insurers could have hired engineers who would have informed them that brick homes are much more susceptible to damage than wood frame homes under earthquake stresses. But it was not until the introduction of the models that insurers could finally quantify how much financial risk they were exposed to. They shockingly discovered that, on average, brick homes are four times riskier than wood frame homes and are twice as likely to sustain a complete loss (full collapse). This was information insurers had not previously known.

Knowing how two or more different risks (or groups of risks) behave at an absolute and relational level gives insurers a foundation to intelligently set underwriting guidelines that play to their strengths and exclude risks they cannot or do not wish to absorb, based on their risk appetite.

Risk-Based Pricing

Insurance is rapidly becoming more of a commodity, with customers often choosing their insurer purely on the basis of price. As a result, accurate ratemaking has become more important than ever. In fact, a Towers Perrin survey found that 96% of insurers consider sophisticated rating and pricing to be either essential or very important.

Multiple factors go into determining premium rates, and, as competition increases, insurers are introducing innovative rate structures. The critical question in ratemaking is: What risk factors or variables are important for predicting the likelihood, frequency and severity of a loss? Although there are many obvious risk factors that affect rates, subtle and non-intuitive relationships can exist among variables that are difficult, if not impossible, to identify without applying more sophisticated analyses.

Returning to our example of the two homes situated in Los Angeles, catastrophe models tell us two very important things: roughly what the premium to cover earthquake loss should be and that the premium for masonry homes should be approximately four times that for wood frame homes.
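A minimal sketch of that absolute and relational pricing follows; the modeled average annual losses and loading factors are illustrative figures, not actual model output, but they show how a roughly fourfold vulnerability difference carries straight through to the rate.

```python
# A minimal sketch (illustrative figures): modeled average annual loss drives the
# technical premium, and the relative vulnerability carries through to the rate.
def technical_premium(modeled_aal, expense_ratio=0.30, profit_margin=0.10):
    """Pure premium grossed up for expenses and target profit (toy loading)."""
    return modeled_aal / (1 - expense_ratio - profit_margin)

modeled_aal = {"wood frame": 450.0, "masonry": 1800.0}   # $ per year, $1m home

for construction, aal in modeled_aal.items():
    print(f"{construction:11s}: AAL ${aal:,.0f} -> premium "
          f"${technical_premium(aal):,.0f}")
```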

The concept of absolute and relational pricing using catastrophe models is revolutionary. Many in the industry may balk at our term “revolutionary,” but insurers using the models to establish appropriate price levels for property exposures have a massive advantage over public entities such as the California Earthquake Authority (CEA) and the National Flood Insurance Program (NFIP) that do not adhere to risk-based pricing.

The NFIP and CEA, like most quasi-government insurance entities, differ in their pricing from private insurers along multiple dimensions, mostly because of constraints imposed by law. Innovative insurers recognize that there are literally billions of valuable premium dollars at stake for risks for which the CEA, the NFIP and similar programs significantly overcharge – again, because of constraints that forbid them from being competitive.

Thus, using average and extreme modeled loss estimates not only ensures that insurers are managing their portfolios effectively, but enables insurers, especially those that tend to have more robust risk appetites, to identify underserved markets and seize valuable market share. From a risk perspective, a return on investment can be calculated via catastrophe models.

It is incumbent upon insurers to identify the risks they don’t wish to underwrite as well as answer such questions as: Are wood frame houses less expensive to insure than homes made of joisted masonry? and, What is the relationship between claims severity and a particular home’s loss history? Traditional univariate pricing analysis methodologies are outdated; insurers have turned to multivariate statistical pricing techniques and methodologies to best understand the relationships between multiple risk variables. With that in mind, insurers need to consider other factors, too, such as marketing costs, conversion rates and customer buying behavior, just to name a few, to accurately price risks. Gone are the days when unsophisticated pricing and risk selection methodologies were employed. Innovative insurers today cross industry lines by paying more and more attention to how others manage data and assign value to risk.

Capacity Allocation

In the (re)insurance industry, (re)insurers only accept risks if those risks are within the capacity limits they have established based on their risk appetites. “Capacity” means the maximum limit of liability offered by an insurer during a defined period. Oftentimes, especially when it comes to natural catastrophe, some risks have a much greater accumulation potential, and that accumulation potential is typically a result of dependencies between individual risks.

Take houses and automobiles. A high concentration of those exposure types may very well be affected by the same catastrophic event – whether a hurricane, severe thunderstorm, earthquake, etc. That risk concentration could potentially put a reinsurer (or insurer) in the unenviable position of being overly exposed to a catastrophic single-loss occurrence.  Having a means to adequately control exposure-to-accumulation is critical in the risk management process. Capacity allocation enables companies to allocate valuable risk capacity to specific perils within specific markets and accumulation zones to minimize their exposure, and CAT models allow insurers to measure how capacity is being used and how efficiently it is being deployed.
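A minimal sketch of that accumulation control follows; the zone names, insured values and capacity limits are invented. Exposure is rolled up by peril accumulation zone and compared with the capacity the carrier has allocated to that zone.

```python
# A minimal sketch (hypothetical zones and limits) of exposure-to-accumulation
# control: roll up insured values by accumulation zone and flag breaches.
from collections import defaultdict

# (accumulation zone, total insured value $m) for individual risks
portfolio = [
    ("Miami-Dade wind", 120), ("Miami-Dade wind", 95), ("Miami-Dade wind", 60),
    ("LA earthquake", 80), ("LA earthquake", 150),
]
capacity_limits = {"Miami-Dade wind": 250, "LA earthquake": 300}   # $m allocated

accumulation = defaultdict(float)
for zone, tiv in portfolio:
    accumulation[zone] += tiv

for zone, total in accumulation.items():
    limit = capacity_limits[zone]
    status = "OVER capacity" if total > limit else "within capacity"
    print(f"{zone}: ${total:.0f}m of ${limit:.0f}m allocated -> {status}")
```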

Reinsurance Program Design

With the advent of CAT models, insurers now have the ability to simulate different combinations of treaties and programs to find the right fit and balance risk and return. Before CAT models, it took gut instinct to estimate the probability of attachment of one layer over another or to estimate the average annual losses for a per-risk treaty covering millions of exposures. The models estimate the risk and can calculate the millions of potential claims transactions, which would be nearly impossible to do without computers and simulation.
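As a minimal sketch of those treaty metrics (the loss distribution and layer terms are invented for illustration), probability of attachment and expected ceded loss for a catastrophe excess-of-loss layer can be read straight off a set of simulated occurrence losses.

```python
# A minimal sketch (synthetic losses): probability of attachment and expected
# ceded loss for a catastrophe excess-of-loss layer, from simulated annual
# occurrence losses.
import numpy as np

rng = np.random.default_rng(42)
occurrence_losses = rng.lognormal(mean=16.0, sigma=1.5, size=100_000)  # largest event per year, $

attachment, limit = 50e6, 100e6          # layer: $100m xs $50m

ceded = np.clip(occurrence_losses - attachment, 0, limit)
prob_attach = (occurrence_losses > attachment).mean()
expected_ceded = ceded.mean()

print(f"Probability of attachment : {prob_attach:.2%}")
print(f"Expected ceded loss       : ${expected_ceded / 1e6:.1f}m per year")
print(f"Rate on line (pure)       : {expected_ceded / limit:.2%}")
```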

It is now well known how soft the current reinsurance market is. Alternative capital has been a major driving force, but we consider the maturation of CAT models to have played an equally important role in this trend.

First, insurers using CAT models to underwrite, price and manage risk can now intelligently present their exposure and effectively defend their position on terms and conditions. Gone are the days when reinsurers would have the upper hand in negotiations; CAT models have leveled the playing field for insurers.

Secondly, alternative capital could not have the impact that it is currently having without the language of finance. CAT models speak that language. The models provide necessary statistics for financial firms looking to allocate capital in this area. Risk transfer becomes so much more fungible once there is common recognition of the probability of loss between transferor and transferee. No CAT models, no loss estimates. No loss estimates, no alternative capital. No alternative capital, no soft market.

A Needed Balance

By now, and for good reason, the industry has placed much of its trust in CAT models to selectively manage portfolios to minimize PML potential. Insurers and reinsurers alike need the ability to quantify and identify peak exposure areas, and the models stand ready to help understand and manage portfolios as part of a carrier’s risk management process. However, a balance between the need to bear risk and the need to preserve a carrier’s financial integrity in the face of potential catastrophic loss is essential. The idea is to pursue a blend of internal and external solutions that ensures two key capabilities:

  1. The ability to identify, quantify and estimate the chances of an event occurring and the extent of likely losses, and
  2. The ability to set adequate rates.

Once companies have an understanding of their catastrophe potential, they can effectively formulate underwriting guidelines to act as control valves on their catastrophe loss potential but, most importantly, even in high-risk regions, identify those exposures that still can meet underwriting criteria based on any given risk appetite. Underwriting criteria relative to writing catastrophe-prone exposure must be used as a set of benchmarks, not simply as a blind gatekeeper.

In our next article, we examine two factors that could derail the progress made by CAT models in the insurance industry. Model uncertainty and poor data quality threaten to raise skepticism about the accuracy of the models, and that skepticism could inhibit further progress in model development.