
Hurricane Season: More Trouble Ahead?

With the official start of the 2020 Atlantic hurricane season just one month away, this may be the most important season the insurance industry has faced. That is not only because most of the early-season guidance points to an above-average hurricane season, which raises the chances of hurricane landfall along the U.S. coastline, but because of COVID-19: if a hurricane makes landfall, it will add to the pressures on the people affected and the stress on the insurance industry. With insurers already strained by COVID-19, additional losses from a named storm could disrupt the industry.

By their very nature, hurricanes force people to gather close together in shelters and to travel away from their places of residence during evacuations. This runs directly counter to what the Centers for Disease Control and Prevention (CDC) recommends for containing a COVID-19 outbreak. Think about when Hurricane Katrina hit New Orleans in 2005: Around 20,000 people took refuge in the Superdome. One building with 20,000 people can’t happen in the current environment. Additionally, we have all read that the elderly population is more susceptible to COVID-19 and that CDC guidelines should be strictly followed for this demographic, but the elderly are even more at risk during a hurricane because COVID-19 complicates evacuation procedures that are already difficult for them.

COVID-19 is already placing unprecedented strain on disaster management, health and other systems; a hurricane would exacerbate that strain. With outbreaks across the entire nation, an area hit by a hurricane is less likely to get aid from other states or regions. Will power crews travel hundreds of miles to help restore power? A slow response can create further problems, such as mold growth, if power is not restored quickly. After a storm, as many as 10,000 volunteers sometimes come from all over to help with the recovery, but it might be hard to find volunteers amid the pandemic. How about claims adjusters and other insurance personnel who need to inspect property damage? How about contractors getting into an area to put tarps on roofs and prevent further damage? The list of ways uncertainty increases is almost endless.

This is why all eyes will be on any little disturbance that develops anywhere in the Atlantic Ocean this season.  

Atlantic Hurricane Forecasts Are a Dime a Dozen

Did you know there are at least 26 different entities that forecast various aspects of the Atlantic hurricane season? You can track the majority of the early-season predictions here: http://seasonalhurricanepredictions.org.

One of the lessons taught in meteorology forecasting classes is that a consensus forecast is hard to beat. Although early-April forecasts of hurricane season activity are the least reliable overall, when a great number of forecasts agree that activity will be above normal, that should get the attention of the insurance industry.

See also: The Best Tools for Disaster Preparation  

All the forecasts weigh the same general climate factors that are leading indicators of an active hurricane season. One is the El Niño-Southern Oscillation (ENSO). Currently, the majority of the global ENSO forecast models call for ENSO-neutral conditions during the peak of the Atlantic hurricane season, August-October. When the tropical Pacific is warmer than normal, the state is called El Niño, and it typically reduces Atlantic hurricane activity via increased upper-level westerly winds in the Caribbean, extending into the tropical Atlantic, that shear apart storms as they try to form. This is not forecast to occur this year. There is some indication that late in the summer a La Niña might develop, which would bring even less wind shear to the Caribbean and might lead to above-normal activity. From 1995-2019, the non-El Niño seasonal mean Accumulated Cyclone Energy (ACE) index across the Atlantic Basin was 160 (104 is the 1981-2010 average), with those non-El Niño years averaging 16 named storms, eight hurricanes and four major hurricanes across the basin. So this is a good place to start if one wants to make the argument for more favorable conditions across the Atlantic basin.
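The ACE index mentioned above is a simple, computable measure: it sums the squares of a storm's 6-hourly maximum sustained winds (in knots), scaled by 10^-4, over all periods the system is at tropical-storm strength (35 kt or more). A minimal sketch, using invented wind records rather than real storm data:

```python
def storm_ace(six_hourly_winds_kt):
    """ACE contribution of one storm from its 6-hourly max winds (knots).

    Only periods at tropical-storm strength (>= 35 kt) count.
    """
    return sum(w ** 2 for w in six_hourly_winds_kt if w >= 35) / 10_000

def season_ace(storms):
    """Seasonal ACE: the sum of every storm's contribution."""
    return sum(storm_ace(winds) for winds in storms)

# Hypothetical season with two storms (illustrative numbers only)
season = [
    [30, 40, 55, 70, 65, 45, 30],   # a storm that reaches hurricane strength
    [35, 45, 50, 40],               # a short-lived tropical storm
]
print(round(season_ace(season), 1))  # → 2.3
```

A full season of long-lived hurricanes accumulates ACE quickly, which is why the non-El Niño mean of 160 is so far above the 1981-2010 average of 104.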

Figure: Subsurface SST and ENSO.
Currently, the Pacific remains in a warm-neutral state following a weak Modoki El Niño event in early 2019. Indications are showing cooling waters below the surface and conducive low-level winds at the surface, suggesting a La Niña event will slowly take shape over the next 3-5 months. This favors a busy season, particularly from September onward.

The other major climate forcer behind the above-normal forecasts is Atlantic sea surface temperature (SST), which is unusually warm at this time. Early-season warmth in the Main Development Region (MDR) has a strong relationship to an active hurricane season, because warm water fuels the tropical waves that move off Africa.

While these warm SSTs could change before the summer, the fact that the air temperatures over the area will only get warmer will likely limit any cooling of the current SST. Together with the increased probability of La Niña, the Atlantic SST signals elevated chances of a busy Atlantic hurricane season.

ECMWF SST Forecast
Much of the Atlantic’s waters are already warmer than average as of the end of April. That the SST is already this warm and is forecast to stay above normal suggests a more active than normal named storm season. Above is the ECMWF August-September-October SST anomaly forecast.

If you haven’t noticed, there has been plenty of severe weather in the Southeast U.S. over the last month or so. Part of the blame for these severe weather outbreaks can be put on the warmer-than-normal SST in the Gulf of Mexico, as mid-latitude low-pressure systems pull moisture from the Gulf, where water temperatures are one to three degrees above normal, into the Southeast. There is no relationship between April Gulf SST and annual hurricane activity, mainly because Gulf of Mexico conditions can change quickly over a given season with weather pattern shifts. However, if these warm anomalies persist into July, that would be deeply unsettling.

A wildcard for the season could be how much dry air rolls off the coast of Africa with the tropical waves. There have been seasons where all the major climate forcers looked to align, but named storms need a precise set of ingredients to come together to make for a super-active year, and too much dry air aloft can inhibit named storm development.

Landfall Analogs

Many of the seasonal hurricane forecasts shy away from the factor most important to the insurance industry: overall landfall activity. After all, the Atlantic basin can be very active, but if no hurricanes make landfall, the impact on the insurance industry is minimal. By looking at past seasons whose oceanic and atmospheric conditions resemble those observed now and those expected during the peak of the hurricane season, analog years can provide useful clues as to what type of landfall activity might occur in the 2020 Atlantic hurricane season. The analog years that emerge most often are 1960, 1995, 1998, 2007, 2010 and 2013.
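One simple way analog years can be selected is to rank past seasons by how closely their preseason climate indices match current conditions. The sketch below uses just two indices (an ENSO index and an MDR SST anomaly) and entirely invented values; real analog selection draws on many more fields, such as full SST maps, pressure patterns and wind shear:

```python
import math

# (ENSO index, Atlantic MDR SST anomaly in degC) per year -- all hypothetical
history = {
    1995: (-0.6, 0.4), 1998: (-1.2, 0.5), 2007: (-0.9, 0.2),
    2010: (-1.0, 0.6), 2013: (-0.2, 0.3), 2015: (1.8, 0.1),
}
current = (-0.4, 0.5)  # assumed preseason state, for illustration

def distance(a, b):
    """Euclidean distance between two tuples of climate indices."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Years sorted from most to least similar to current conditions
analogs = sorted(history, key=lambda yr: distance(history[yr], current))
print(analogs[:3])  # → [1995, 2013, 2007]
```

With these made-up numbers, the closest matches are La Niña-leaning years with a warm MDR, which is the kind of clustering the article's analog list reflects.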

The analog years suggest a clustering of a pattern that would point to named storm activity along the central Gulf Coast and the Outer Banks. There is also a cluster of activity north of Puerto Rico and in the western Caribbean. In general, the analogs point to years with named storm landfall activity, so landfalling named storms should be expected from Texas to Maine, with the most focus on the regions mentioned above.

See also: Flood Insurance: Are the Storm Clouds Lifting?  

Early Season Activity

By now, the insurance industry understands that tropical named storm activity often comes in waves, largely as a result of the passage of the Madden–Julian Oscillation (MJO) or large-scale Convectively Coupled Kelvin Waves (CCKW). As we approach the start of the Atlantic hurricane season, the industry can start to get a sense of when the basin might experience some activity. The latest forecast guidance suggests that the first such wave of activity might occur around May 11, with another coming shortly after the start of the season. Given that each of the last five hurricane seasons has produced at least one named storm before June 1, it wouldn’t surprise me if this year followed suit, given the warm SST in the Gulf of Mexico. Often, early-season development occurs off the Carolina coastline; these systems tend to meander or make landfall between North Carolina and northeast Florida. But another area ripe for early-season activity this year could be the Gulf of Mexico.

Summary

The season looks to be active with a higher probability of named storm landfalls along the Gulf Coast or Outer Banks, NC, raising many questions about the possible effects on the insurance industry.

As the BMS Property Practice pointed out, if there is a heightened risk along the U.S. coastline, would the authorities even allow non-permanent residents into the area to reach a second home if their primary residence is out of state?

Who is going to take steps to limit damage? How will the storm response hamper recovery efforts in terms of volunteers or field adjusters? If hotels are not operating, where do people go, and where do adjusters stay?

Maybe this is the year that insurtech solutions help the insurance industry respond to a natural disaster in new ways. No matter how you look at it, we are entering uncharted territory this hurricane season.

How Different Flood Types Affect Risk

For insurers to most effectively understand flood risk, they must have access to data that provides a full picture of the hazard, including the different flood types that might affect a property: fluvial, pluvial and storm surge. Although it may seem that flood is just flood, different types can produce various impacts on a property, causing different levels of damage.

Fluvial, pluvial and storm surge: Why it matters

Much of the U.S. is prone to both fluvial flooding (when rivers overtop their banks) and pluvial flooding (when water accumulates across the surface of the ground as a result of heavy rainfall). However, many coastal regions also experience storm surge flooding, which is a result of increased sea levels caused by weather events.

Storm surge flooding is extremely damaging due to the salinity of the water, while pluvial flooding is typically cleaner and quick to recede, likely resulting in lower-cost claims.

Without a view of these different drivers of flooding, insurers cannot understand the full exposure to their portfolios or fully engage with the private flood insurance market.

Use case: Jacksonville, Fla.

The need to understand all the drivers of flood can be illustrated using a residential property on 2nd Avenue, Jacksonville, Fla. Jacksonville is one of the five U.S. East Coast cities most vulnerable to hurricanes and is at high risk from flooding, having experienced widespread storm surge and flooding during hurricanes Irma and Matthew.

The residential property shown in Figure 1 originally fell into a FEMA Zone X (designated as minimal flood risk).

Figure 1: Contains data from the FEMA National Flood Hazard Layer.

However, when we look at its location on the JBA flood map, we can see some differences in the analysis. The JBA flood map identifies this location as at very severe risk of flooding (Figure 2, below), from both fluvial and storm surge flooding, whereas FEMA data alone would not account for either flood type or differentiate between fluvial and pluvial flood. Accessing data sources in addition to FEMA helps provide a more comprehensive understanding of the risk.

Figure 2

The complex interplay between flood types

The risk is particularly high for hurricane-prone areas like Jacksonville, where storm surges often coincide with inland flooding. It’s important to represent this complex interplay during the mapping process instead of tackling each flood type separately. JBA’s storm surge mapping has been developed in partnership with leading hurricane modelers Applied Research Associates, ensuring that hurricane activity is fully accounted for. Additionally, surge data has been used to modify JBA’s inland flood mapping process to reflect the fact that, during a hurricane, rivers can’t flow out to sea as they can in normal conditions. Flood waters then back up, exacerbating fluvial flooding. For insurers to obtain a complete understanding of the hazard, flood maps must fully represent this relationship.

Even with FEMA recently re-mapping the area as a FEMA A Zone, demonstrating that the area is at risk of flooding, the drivers of the flood are not clear. As such, underwriting against the FEMA map alone could misrepresent the insurance coverage required.

See also: FEMA Flood Maps Aren’t Good Enough  

It’s clear that having a view of the different drivers of flood risk is vital for effectively understanding and underwriting the risk, especially in areas where hurricanes can be a major source of flood-driven losses.

How to Predict Atlantic Hurricanes

The 2017 Atlantic hurricane season was remarkable, including five landfalls of Category 5 storms in the Caribbean Basin and three Category 4 strikes on the U.S. coastline. The 2017 landfalls cost hundreds of lives and produced record-breaking economic losses exceeding $250 billion. These losses are sobering reminders of hurricane vulnerability and of the importance of hurricane prediction for public safety and the management of insurance and other economic risks.

Hurricane forecasts have continued to improve in recent years, but they are not yet as good as they could be. Continued advances in the weather and climate computer models used to make forecasts, along with improved observations from satellites and aircraft, are driving these improvements. Advances in the understanding of weather and climate dynamics are also essential to progress.

Short-term track and wind intensity forecasts

The National Hurricane Center (NHC) provides five-day forecasts of hurricane tracks and wind intensity that guide emergency management. Technological improvements from higher resolution weather forecast models and improved satellite observations are helping improve hurricane forecasts. The figure below shows the improvement over several decades in the NHC’s forecasted location of storms, referred to as “track error.” See figure 1 below.

An average track error at 48 hours of about 50 nautical miles is impressive in a meteorological context. However, a track uncertainty of just 50 nautical miles for Hurricane Irma’s predicted Florida landfall in 2017 meant the difference between a costly Miami landfall or a relatively benign Everglades landfall. As seen in the figure above, we are approaching a track forecast accuracy limit for one to two days, arising from the inherent unpredictability of weather. Over the past decade, the greatest improvements have been in the three- to five-day track forecasts.

A recent analysis conducted by Climate Forecast Applications Network (CFAN) compared track errors from different global and regional weather forecast models, all of which are considered by the NHC in preparing its forecast. The European model, operated by the European Center for Medium Range Weather Forecasting (ECMWF) and supported by 22 European countries, consistently outperformed the U.S. models maintained by the National Oceanic and Atmospheric Administration (NOAA) for track forecasts beyond two days. At five days lead time, the ECMWF model had an average track error for the 2017 season of 120 nautical miles, compared with 148 nm for the official NHC forecasts.

Innovators in the private sector apply proprietary algorithms to improve upon the NOAA and ECMWF model forecasts. At CFAN, ECMWF forecasts are corrected for model biases based on historical track errors. For 2017, CFAN’s bias-corrected storm tracks resulted in five-day average track error of 114 nautical miles – 26% lower than the average track error for the official NHC forecast.
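The bias-correction idea described above can be sketched very simply: estimate the mean historical forecast-minus-observed position error at each lead time, then subtract it from the raw model track. This is a loose illustration of the concept, not CFAN's proprietary method; the bias values and forecast positions (in plain latitude/longitude degrees) are invented:

```python
# Mean historical (lat, lon) forecast-minus-observed error by lead hour
# -- hypothetical values for illustration
mean_bias = {24: (0.1, -0.2), 48: (0.2, -0.4), 120: (0.5, -1.0)}

def correct_track(raw_track):
    """Subtract the mean historical error at each lead time.

    raw_track: {lead_hour: (lat, lon)} -> bias-corrected positions.
    Lead times without a known bias are passed through unchanged.
    """
    corrected = {}
    for lead, (lat, lon) in raw_track.items():
        b_lat, b_lon = mean_bias.get(lead, (0.0, 0.0))
        corrected[lead] = (lat - b_lat, lon - b_lon)
    return corrected

# A hypothetical raw model track for one storm
forecast = {24: (25.0, -78.0), 48: (27.5, -80.0), 120: (33.0, -75.0)}
print(correct_track(forecast)[120])  # → (32.5, -74.0)
```

Real calibrations are more sophisticated (errors are usually decomposed into along-track and cross-track components and conditioned on storm characteristics), but the principle of removing systematic model drift is the same.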

Forecasts beyond five days (120 hours) are becoming increasingly important to the insurance community, especially with the development of insurance-linked securities and catastrophe bonds. The superior global weather forecasts provided by the European model (ECMWF) produced Atlantic hurricane tracks for 2017 with an average track error of 200 nautical miles out to eight days in advance. The proprietary track calibrations and synthetic tracks produced by CFAN from the European model maintain an average track error of 200 nautical miles even beyond 10 days, for the longest-lived storms.

Forecasting of storm wind intensity (as measured by maximum sustained wind speed) is also of key importance. The NHC’s intensity forecasts are slowly improving – the NHC’s intensity forecast errors at time horizons of two to five days average from 10 to 20 knots (10 knots = 11.5 mph) over the past several years. The greatest challenge in short-term hurricane forecasting remains the prediction of rapid intensification, as occurred with Hurricane Harvey in August 2017, immediately before landfall. The NHC has invested considerable resources in the development of high-resolution regional models to improve prediction of hurricane intensity. The prediction of rapid intensification remains elusive, although there is considerable research underway on this topic.

Seasonal and longer-term forecasts

Advances have been made in forecasting the probability of track locations on weekly timescales out to a month in advance. Monthly forecasts based on global weather forecast models are provided by several private sector weather forecast providers. Beyond the timescale of about a month, however, global models show little skill in predicting hurricanes. Hence, most seasonal forecasting efforts, particularly beyond timescales of six months, focus on data-driven statistical methods that examine longer-term trends in the global atmosphere.

Sea surface temperatures in the Atlantic and the tropical Pacific are key predictors for seasonal forecasts of Atlantic hurricane activity. El Niño (warmer tropical Pacific sea surface temperatures) and La Niña (cooler tropical Pacific sea surface temperatures) patterns have a strong influence, with La Niña being associated with higher levels of Atlantic hurricane activity.
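The El Niño/La Niña distinction drawn above comes down to a threshold on tropical Pacific SST anomalies: by the commonly used convention, anomalies of +0.5°C or more in the Niño-3.4 region indicate El Niño conditions and -0.5°C or less indicate La Niña, with anything between counted as neutral. A small sketch of that classification, with invented anomaly values:

```python
def enso_phase(nino34_anomaly_c):
    """Classify ENSO phase from a Nino-3.4 SST anomaly in degC."""
    if nino34_anomaly_c >= 0.5:
        return "El Nino"
    if nino34_anomaly_c <= -0.5:
        return "La Nina"
    return "neutral"

# Illustrative anomaly values only
for anom in (1.2, 0.1, -0.8):
    print(anom, enso_phase(anom))
```

Operational definitions additionally require the anomaly to persist for several overlapping three-month seasons before an event is declared, so a single month's value is only a hint.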

Atmospheric circulation patterns also have some long-term “memory” that is useful for seasonal forecasts. CFAN’s research has identified additional predictors of seasonal Atlantic hurricane activity through examination of global and regional interactions among circulations in the ocean and in the lower and upper atmosphere. The predictors are identified through data mining, interpreted in the context of climate dynamics analysis, and then subjected to statistical tests in historical forecasts.

See also: Hurricane Harvey’s Lesson for Insurtechs  

While forecasts issued around June 1 for the coming season generally have skill better than climatology, the late May/early June forecasts for the 2018 Atlantic hurricane season suggest different outcomes. Predictions range from low activity (CFAN and Tropical Storm Risk) to average activity (Colorado State University) to near- or above-normal activity (NOAA Climate Prediction Center). While many of the late May/early June 2017 forecasts predicted an above-normal season, none of the publicly reported forecasts predicted the extreme activity that was observed during the 2017 season.

At longer forecast horizons, forecast skill is increasingly diminished. The greatest challenge in making seasonal forecasts in April and earlier is the springtime “predictability barrier” for El Niño/La Niña, whereby random weather events in the tropical Pacific Ocean during spring can determine its longer seasonal evolution. Seasonal forecasts from the latest version of the European model show substantially improved forecast skill for El Niño and La Niña across the spring predictability barrier, which improves the prospects for seasonal hurricane forecasts issued in late March/early April. La Niña generally heralds an active hurricane season, whereas El Niño is generally associated with a weak hurricane season. However, the occurrence of El Niño or La Niña accounts for only about half of the year-to-year variation in Atlantic hurricane activity. In particular, extremely active years such as 2017, 2004 and 2005 were not characterized by much of a signal from La Niña.

The greatest challenge for seasonal predictions of hurricane activity is to forecast the possibility of an extremely active hurricane season such as observed during 2017, 2005, 2004 and 1995. CFAN’s seasonal forecast models capture the extremes in 1995 and 2017 but not 2004 and 2005. Improved understanding of the causes of the extreme activity during 2004 and 2005 is an active area of research.

The most important target of seasonal forecasts is the number of landfalling hurricanes and their likely locations. The number of U.S. landfalling hurricanes in one year has varied from zero to six since 1920. The number of landfalling hurricanes is only moderately correlated with overall seasonal activity. It is notable that the period of overall elevated hurricane activity from 1995 to 2014 overlapped with a historic 2006-2014 drought of major hurricane and Florida landfalls.

Several seasonal forecasters provide a prediction of landfalls. These forecasts may specify the number of U.S. and Caribbean landfalls or the probability of a U.S. landfall (tropical storm, hurricane, major hurricane). Few forecast providers attempt to predict the location of landfalls. New research conducted by CFAN scientists has uncovered strong relationships between U.S. landfall totals and spring atmospheric circulation over the Arctic, which tends to precede summer dynamic conditions in the western North Atlantic and the Gulf of Mexico.

Certain insurance contracts with hurricane exposure typically take effect Jan. 1 of each year, and for this reason there has been a desire for Atlantic hurricane forecasts to be issued in December for the following season. Such contracts often are written for a period of a year or even longer time horizons. Because of the apparent lack of hurricane predictability on this time scale, in December Colorado State University provides a qualitative forecast discussion, rather than a forecast. CFAN research has identified some sources of hurricane predictability on timescales from 12 to 48 months. Research is underway to exploit this predictability into skillful annual and inter-annual predictions of Atlantic hurricane activity.

Five- to 10-year outlooks

Some atmospheric modelers provide a five-year outlook of annual hurricane activity, focused on landfall frequency. A key element of such outlooks is the state of the Atlantic Multidecadal Oscillation (AMO), an ocean circulation pattern, with an associated Atlantic sea surface temperature signature, that alternates over 25- to 40-year periods between phases of increased and suppressed hurricane activity.

The year 1995 marked the transition to the warm phase of the AMO, which has been an active period for Atlantic hurricanes. In the warm phase of the AMO, sea surface temperatures in the tropical Atlantic are anomalously warm compared with the rest of the global tropics. These conditions produce weaker vertical wind shear and a stronger West African monsoon system that are conducive to increased hurricane activity in the North Atlantic.

There has been a great deal of uncertainty about the status of the AMO, complicated by the overall global warming trend. According to the AMO index produced by NOAA, the current positive (warm) AMO phase has not yet ended. In contrast, an alternative AMO definition, the standardized Klotzbach and Gray AMO Index, indicates the AMO has generally been in a negative (cooler) phase since 2014 – and May 2018 had the lowest value since 2015.
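A standardized AMO-style index of the kind referenced above can be illustrated by taking a series of basin-average North Atlantic SST anomalies and expressing each year as a z-score of the series. This is a deliberately simplified, single-variable sketch with made-up anomaly values; the actual Klotzbach and Gray index also folds in sea level pressure:

```python
import statistics

# Hypothetical annual-mean North Atlantic SST anomalies (degC)
sst_anom = [0.30, 0.25, 0.10, -0.05, -0.20, -0.15]

mean = statistics.mean(sst_anom)
stdev = statistics.stdev(sst_anom)

# Standardize: each year's anomaly as a z-score of the whole series
index = [round((x - mean) / stdev, 2) for x in sst_anom]
print(index)  # negative values mark the cooler (negative) phase
```

Under a definition like this, a run of increasingly negative index values, as Klotzbach and Gray's index has shown since 2014, would be read as the basin drifting toward the cool phase, whereas NOAA's differently constructed index can still read positive.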

An intriguing development is underway in the Atlantic in 2018. The figure below shows sea surface temperature anomalies in the Atlantic for May. You see an arc of cold blue temperature anomalies extending from the equatorial Atlantic, up the coast of Africa and then in an east-west band just south of Greenland and Iceland. This pattern is referred to as the Atlantic ARC pattern. See figure 2 below.

A time series of sea surface temperature anomalies in the ARC region since 1880, depicted below, shows that temperature changes occur in sharp shifts occurring in 1902, 1926, 1971 and 1995. On the bottom graph, the ARC temperatures show a precipitous drop over the past few months. Is this just a cool anomaly, similar to 2002? Or does this portend a shift to cool phase of the AMO? See figure 3 below.

Figure 3. Top: ARC temperatures from 1880-2017. The black lines reflect the cold and warm regimes of the Atlantic Multidecadal Oscillation. Bottom: ARC temperatures from 1982 through June 2017.

Based on past shifts, a forthcoming shift to the cool phase of the AMO is expected to have profound impacts:

  • diminished Atlantic hurricane activity
  • increased U.S. rainfall
  • decreased rainfall over India and the Sahel region of Africa
  • shift in north Atlantic fish stocks
  • acceleration of sea level rise on northeast U.S. coast.

The figures below depict how the AMO has a substantial impact on Atlantic hurricanes. The top figure shows the time series of the number of major hurricanes since 1920. The warm phases of the AMO are shaded in yellow; there are substantially higher numbers of major hurricanes during those periods. A similar effect of the AMO is seen on the Accumulated Cyclone Energy (ACE). Seasonal ACE is a measure of the overall activity of a hurricane season that accounts for the number, strength and duration of all tropical storms in the season. See figure 4 below.

The warm versus cool phase of the AMO has little impact on the frequency of U.S. landfalling hurricanes generally. However, the phase of the AMO has a substantial impact on Florida landfalls. During the previous cold phase, no season had more than one Florida landfall, while during the warm phase there have been multiple years with as many as three landfalls. A major hurricane striking Florida is more than twice as likely during the warm phase as during the cool phase. These variations in Florida landfalls have had a substantial impact on development in the state: The spate of hurricanes starting in 1926 killed the economic boom that began in 1920, while Florida’s population and development accelerated in the 1970s, aided by a period of low hurricane activity.

New developments in decadal scale prediction are combining global climate model simulations with statistical models. Such predictions have shown improved skill relative to climatological and persistence forecasts on the decadal time scale.

See also: Tornadoes: Can We Stop the Cycle?  

2018 Atlantic hurricane season

The recent tropical Pacific La Niña event is now over; the tropical Pacific is trending to neutral, with an El Niño watch underway. Sea surface temperatures in the subtropical Atlantic are currently the lowest seen since 1982. For the 2018 Atlantic hurricane season, many forecasters who previously predicted a normal or active season are now lowering their forecasts, considering the trend toward El Niño and the cool temperatures observed in the tropical Atlantic.

Based on the overall expectations for low Atlantic hurricane activity in 2018, combined with forecasts of the probability of a U.S. landfall ranging from 50% to 100%, we can expect 2018 to be a year with smaller economic losses from landfalling hurricanes than average.

Looking at longer time horizons, there is a potential game-changer in play – a possible shift to the cold phase of the Atlantic Multidecadal Oscillation that would herald multiple decades of suppressed Atlantic hurricane activity that would have a substantial impact on reduced landfalls, particularly in Florida.

New Era in Modeling Catastrophic Risk

The 2018 hurricane season opened with the arrival of subtropical storm Alberto on the coast of Florida. Natural disasters such as these regularly imperil human lives and trillions of dollars of infrastructure. Although we can’t stop them, we can limit their financial repercussions through the development of more accurate predictions based on an updated approach to modeling catastrophic risk.

The Flawed Assumption

Stationarity is the name for the concept of data remaining unchanged—or stationary—over time. When applied to climate science, it refers to the assumption that the earth’s climate is not changing. The vast majority of climate scientists believe the stationarity assumption is incorrect, and any approaches based on this assumption are fundamentally flawed.

Yet traditional catastrophic climate risk models are built on the assumption of stationarity. They project the future based on past statistics and the assumption of a static climate. Insurers actually use this approach with reasonable success for regional, national and international insurance policy portfolios. However, when stationarity is applied to risk analyses for specific structures or large commercial properties, the model breaks down.

Localized Assets

The problem is that risks to localized assets are not homogeneous across regions and properties. Localized predictions require data that accounts for the dynamics of the local environment.

Those dynamics include not only a changing climate but human-engineered alterations as well. Simply breaking ground for a new building affects potential flooding scenarios. To accurately assess and mitigate potential risk, developers, municipalities and insurance companies need models that resolve the individual block and street and are not constrained by stationarity.

Creating a dynamic model that collects and analyzes data with such localized resolution is not a simple matter of “downscaling” old methods. It requires a different strategy and discipline, with single-site analysis as a core objective.

See also: Role of Big Data in Fighting Climate Risk  

Risk Modeling Reimagined

Incorporating natural and human-architected factors in a dynamic, integrated model is fundamental to an asset-focused solution that delivers accurate, actionable information. Such a solution requires comprehensive and current data, powerful big data analytics and a flexible design that can easily incorporate new modeling techniques as they become available.

At Jupiter Intelligence, our solution is built on a cloud-based platform designed specifically for the rigors of climate analysis and links data, probabilistic and scenario-based models and advanced validation. ClimateScore runs multiple models based on a changing climate, such as weather research and forecasting. ClimateScore’s models are continuously fine-tuned using petabytes of constantly refreshed data from millions of ground-based and orbital sensors. Novel machine learning techniques reduce local biases of scientific simulations and help the system continually improve as new observations become available.

By forgoing stationarity and combining the flexibility of a cloud model, current data from multiple sources and state-of-the-art analytics, machine learning and artificial intelligence, this approach produces asset-level predictions that are accurate from two hours to 50 years into the future.

Case Study: Miami

Understanding how developed Miami’s coast has become with localized data down to the individual block and street can help insurance companies, municipalities and developers assess the potential risk and determine cost-effective solutions.

Engineering firms need this data to evaluate the potential effects of flooding at a particular site and to simulate how well individual coastal protection measures protect properties and neighborhoods from flooding over the life of those structures.

Public agencies also need this granularity to figure out how vulnerable their assets (ports, airports, transit, wastewater treatment and drinking water facilities) are to a changing climate. Similarly, private entities want to assess exposed assets (substations, buildings, generators and data centers) and critical systems that may need to be redesigned to handle changing conditions. One critical condition to evaluate is the capacity of the electrical grid to handle peak demand during longer and more intense heat waves.

See also: Low-Risk Doesn’t Mean No-Risk 

New Risk-Transfer Mechanisms

Stationarity-based catastrophic risk models were never intended to assess risks to specific assets. To mitigate asset-level risk, all aspects of the private sector, as well as government bodies at the international, national and local levels, must make informed decisions based on accurate, current, highly localized data.

Property values, liability risk and lives are at stake. With dynamic models, current data and modern analytics, mitigating risk is feasible. This type of information resource also will support new risk transfer mechanisms, including private insurance—and help reform obsolete mitigation strategies.

This article was originally published at Brink News.

What Insurers Can Teach Others on ERM

The risk management practices of insurance companies have been scrutinized by rating agencies, regulators, analysts and others for years because insurers are financial institutions that deal with high levels of risk that, improperly managed, could not only hurt their creditworthiness but damage the financial well-being of their customers. As a result of this scrutiny, insurers have developed robust and comprehensive risk management processes, increasingly known as enterprise risk management (ERM). The ERM process covers the entire company, from strategy setting to core business operations and even relationships with external stakeholders. The maturity of insurers’ models means that there are some best practices worthy of emulation or adaptation by other industries.

A selection of these is presented in this article: aggregation of risk, correlation of risk and opportunity risk management.

Aggregation of risks

Within the ERM process step of “risk identification,” insurers pay special attention to aggregation of risk. How much of the same risk can be prudently taken, and how much risk is represented by one catastrophic event?

A simple example would relate to how much property insurance is being written in Florida, which is prone to hurricane losses. Or, how much workers’ compensation is being written for one industry group that could be affected by a pervasive occupational health hazard such as mesothelioma.

A proper assessment requires: 1) knowledge of what business is being written (sold), 2) fine-tuned understanding of that business (e.g., not all property in the state of Florida is subject to the same degree of hurricane loss), 3) recognition of what could be a potential risk issue within a book of business (e.g., workers in industries that still handle asbestos or operate in older buildings that have not been remediated).

Having taken account of accumulations, insurers proceed to reduce their exposure to them. This can take many forms, including: 1) writing less business within the geography, customer segment or type of coverage making up the accumulation, 2) adding exclusions or sub-limits to the insurance policy to eliminate or reduce what is covered if the risk produces a loss, 3) requiring or helping customers to make themselves less vulnerable to the risk and 4) developing rapid responses to minimize the extent of loss once the risk materializes.
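As a rough illustration of the accumulation check described above, the sketch below sums exposure by geography and peril and flags any total above a tolerance. The policies, regions and limit are all hypothetical:

```python
from collections import defaultdict

# Hypothetical book of business: (region, peril, total insured value in $M)
policies = [
    ("FL-coastal", "hurricane", 120),
    ("FL-coastal", "hurricane", 95),
    ("FL-inland",  "hurricane", 40),
    ("TX-gulf",    "hurricane", 60),
]

AGGREGATION_LIMIT = 150  # assumed maximum tolerated exposure per region, $M

# Sum exposure across all policies sharing the same region and peril.
exposure = defaultdict(float)
for region, peril, tiv in policies:
    exposure[(region, peril)] += tiv

# Flag any accumulation that exceeds the insurer's tolerance.
for (region, peril), total in exposure.items():
    flag = "OVER LIMIT" if total > AGGREGATION_LIMIT else "ok"
    print(f"{region}/{peril}: {total:.0f}M [{flag}]")
```

In practice insurers run this kind of roll-up with far finer segmentation (construction type, distance to coast, policy terms), but the principle of summing and capping correlated exposure is the same.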

Moving outside the insurance company realm, any company can be subject to a variety of types of aggregations that can be above a normal, acceptable range of risk. Some examples might include:

• Shopping center management companies with many centers in neighborhoods with poor economic outlooks
• Banks with loan portfolios too heavily balanced toward governments or businesses in countries with low ratings for economic or political stability
• OEM manufacturers that supply parts to only one industry — one that may be in the process of technological obsolescence or some other life cycle dip
• Consumer goods manufacturers with narrow product lines that are tied to one demographic group that is fickle or is becoming economically pinched

Consider a large company with many silos, one that is not very good at sharing information and not tightly managed. What would happen if: 1) one unit of that company placed its call center in one of the BRIC countries (Brazil, Russia, India and China), 2) another unit opened a major manufacturing plant in that country, 3) another unit outsourced its IT code development to that country and 4) the finance unit invested in bonds from that same country — and that country suddenly had a debilitating natural catastrophe, the government or currency collapsed or a nationwide problem developed? The point is that the company in the example should be aware that it is creating an aggregated risk potential by having so many ties to that country with varying exposures.

Any significant concentration of geography, market segmentation or product offering can pose a risk to a business.

What makes ERM so powerful is that all important risks get identified, whether insurable or not, especially strategic risks, and that these risks get addressed through mitigation action plans. It is surprising how often companies do not see the magnitude or variations of risks they are facing; an effective ERM process should prevent that blindness.

Having identified an aggregation risk, companies can create mitigation plans for managing the risk. Mitigation tactics for aggregation risks in non-insurance businesses could include:

• Diversification in geographic spread
• Diversification in product portfolio
• Diversification in customer segmentation
• Innovation around uses of current products
• Innovation around ways to be more profitable with current products such that margins could increase while sales decrease
• Growth limits in risky areas; growth goals in less risky areas

Correlation of risks

Insurers have also become adept at identifying correlated risks. These are risks that may not appear to be connected but could be realized as part of the same event. Or they could be risks that have a cause and effect relationship on each other — a domino effect.

Correlated risks could dramatically strain an insurer’s ability to pay claims or remain fiscally viable. A hurricane, for example, might trigger not only covered property damage but also business interruption, supply chain losses, losses from canceled events and so on. Unless the insurer understands the totality of correlated losses, it cannot determine how much business it should write in any single hurricane-prone territory. Also correlated to the hurricane is an increase in the cost of repairing and rebuilding property because of what is termed “demand surge”: the inflation that occurs when goods or services are in greater demand after a major event. So, the insurer is not only paying out on claims from different policies (or lines of business) but may also be paying more than usual because of inflated costs.
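The compounding effect of correlated losses and demand surge can be sketched with hypothetical figures; the loss amounts and the 25% surge factor below are assumptions for illustration only:

```python
# Hypothetical losses ($M) from one hurricane across correlated lines
# of business, all triggered by the same event.
losses = {
    "property": 80.0,
    "business_interruption": 30.0,
    "event_cancellation": 10.0,
}

DEMAND_SURGE = 1.25  # assumed 25% post-event inflation of repair costs

# Demand surge inflates repair and rebuilding costs, so it applies to
# the property line; the other lines are added at face value.
gross = (losses["property"] * DEMAND_SURGE
         + losses["business_interruption"]
         + losses["event_cancellation"])

naive = sum(losses.values())  # what the lines sum to if priced in isolation
print(f"Correlated gross loss: ${gross:.0f}M vs ${naive:.0f}M in isolation")
```

The gap between the two totals is exactly the kind of exposure a line-by-line view misses and a correlation-aware ERM view captures.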

The concept of correlated risk is not very prevalent in non-insurance companies but could be just as serious an exposure. Consider an electrical power company. It knows that its dependence on an adequate supply of water leads to a risk that drought could affect its output capabilities and its customer satisfaction. The utility may not be fully cognizant of the correlated risks. Therefore, its risk mitigation and contingency planning may not include those risks. These might include: 1) the risk that government subsidies or support could be cut as the government attends to other issues arising from drought; 2) the risk that the cost of water or expense for routing the water supply will increase because of low water levels; 3) the risk that malfunctions will occur with power plant equipment because of lower or inconsistent water supply feeds, or 4) the risk that business customers that do not get sufficient water for their operations may sue the supplier. Without a robust ERM process to help identify both insurable and non-insurable risks, these risks may go unrecognized and unmitigated and without an effective response plan.

In fact, all companies fear that “perfect storm” where many risks materialize at once that could damage and destabilize the business. Yet, some correlations might have been identifiable and action taken to ameliorate the risks, had an effective ERM strategy been in place.

Opportunity risks

There is risk both in taking and in missing a potential opportunity. It may be too much to ask businesses to identify the risks and calculate the cost of not taking every opportunity that management decides against for strategic, risk-related or other reasons. However, within an ERM-oriented business, it is expected that the risks of taking or avoiding an opportunity are considered and addressed.

When an insurer first offers a new type of coverage for exposures such as supply chain, cyber or reputation, the risk is great. That is because there is often no historical loss data upon which to estimate losses and price the product, and no historical basis for setting up an adequate reserve against initial losses. Additionally, there is no guarantee that enough business will be written to create a large enough pool of policyholders (the law of large numbers) to spread the odds of loss enough to produce favorable outcomes.
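The law-of-large-numbers point can be shown with a quick simulation: as the pool of policyholders grows, the realized loss ratio clusters ever more tightly around its expected value. The loss probability, severity and pool sizes below are hypothetical:

```python
import random

random.seed(0)

P_LOSS = 0.05      # assumed annual probability of a claim per policy
SEVERITY = 100.0   # assumed fixed claim severity
PREMIUM = P_LOSS * SEVERITY  # pure premium, with no expense loading

def loss_ratio_deviation(n_policies: int, trials: int = 500) -> float:
    """Average absolute deviation of the realized loss ratio from 1.0."""
    dev = 0.0
    for _ in range(trials):
        claims = sum(1 for _ in range(n_policies) if random.random() < P_LOSS)
        dev += abs((claims * SEVERITY) / (n_policies * PREMIUM) - 1.0)
    return dev / trials

for n in (100, 1000, 10000):
    print(f"{n:>6} policies: mean |loss ratio - 1| = "
          f"{loss_ratio_deviation(n):.3f}")
```

With only 100 policies the realized loss ratio swings widely around expectation; with 10,000 it settles close to 1.0, which is why a small book of a new coverage is so much riskier than a mature one.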

The ERM process that insurers employ compels them to look for opportunity risks and to devise ways to ameliorate the risks. How do insurers do this? They build their risk mitigation action plans using expertise across their many functions.

For new product risk, insurers might start out by: 1) offering low limits, 2) requiring higher deductibles or self-insured retentions, 3) buying more reinsurance or partnering with a reinsurer on the new book of business, or 4) charging prices that may appear to be high but that take into account the risk-adjusted cost of capital.
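The combined effect of the first three levers can be sketched with hypothetical figures: a deductible and limit carve out the layer the insurer pays, and quota-share reinsurance cedes part of that layer. All dollar amounts and percentages below are assumed for illustration:

```python
def net_loss(gross: float, deductible: float, limit: float,
             reinsurer_share: float) -> float:
    """Insurer's retained loss after applying a deductible, a policy
    limit, and quota-share reinsurance (all figures hypothetical)."""
    # The insurer's layer: losses above the deductible, capped at the limit.
    covered = max(0.0, min(gross - deductible, limit))
    # Quota-share reinsurance cedes a fixed fraction of that layer.
    return covered * (1.0 - reinsurer_share)

# A $500k gross loss on a new cyber product with a $50k deductible,
# a $250k limit, and 40% quota-share reinsurance (assumed values).
print(net_loss(500_000, 50_000, 250_000, 0.40))  # insurer retains 150000.0
```

Of a $500,000 gross loss, the insurer retains only $150,000, which shows how layering these mitigants lets an insurer experiment with a new coverage while capping its downside.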

In other industries, new products also pose opportunity risks. Key questions to ask include: Will the new product reach the required ROI set for it within the timeframe set? Will the new product cannibalize some existing product or products? Will the new product create issues related to product recall, patent infringement or other lawsuits?

Through the application of a robust ERM process, all or most of these risks can be identified and mitigation action plans developed. This creates a safety net for the company and makes it more likely that the company will become comfortable and proficient at product innovation. There are many types of opportunity risk beyond new products, and ERM can help with each of them.