
Hurricane Joaquin: Why the Model Matters

It has been fascinating watching the progression of the forecasted path for Hurricane Joaquin — what a perfect example this is of the importance of a modern data and analytics platform!

The big news is that the hurricane is not expected to make landfall on the East Coast of the U.S., but the new forecast depends as much on analytics and big data as it does on actual changes in the storm’s path. The spotlight is now on the European Centre for Medium-Range Weather Forecasts (the European model) vs. the Global Forecast System (GFS) run by the National Weather Service. The New York Times has a great article discussing the reasons for the changing forecast and, crucially, the differences between the two models.

This is an excellent lesson for insurers: it shows the power of modern data and analytics in action, and what happens to models that do not use the advanced capabilities available today. Fortunately, investment in analytics continues to rise, as detailed in SMA’s recent report, Maturing Technologies in Insurance. Almost three in four insurers are increasing their investment in analytics over the next three years. In fact, 48% of P&C insurers plan to increase their analytics investments by more than 10% annually during that time.

In recent conversations with key CAT modelers, we have learned that coming releases will put their weather data and insights to work at a more granular level than ever before. The advance of these CAT modeling tools creates opportunities for insurers in search of better predictive capabilities for weather events. An upgrade to the GFS model is planned for the end of the year, taking advantage of soon-to-be-available computing capacity. Once it is up and running, it will be interesting to see how the upgraded GFS model compares with the current European model, especially when applied to future CAT events.

Insurers can take the continuing story of Hurricane Joaquin as a wake-up call — not only is analytics a critical area for investment, but the quality of the information and the computing capacity available have a major impact on how useful predictive modeling can be.

Modeling Flood — the Peril of Inches

“Baseball is a game of inches” – Branch Rickey

Property damage from flooding is quite different from that caused by other catastrophic perils such as hurricane, tornado or earthquake. Estimating flood losses requires a far higher level of geospatial precision. Not only do we need to know precisely where a property is located and the distance to the nearest potential flooding source, but we also need to know the elevation of the property relative to its nearby surroundings and to the source of flooding. Underwriting flood insurance is a game of inches, not ZIP codes.

With flood, a couple of feet can make the difference between being in a flood zone or not, and a few inches of elevation can increase or decrease loss estimates by orders of magnitude. This realization helps explain the current financial mess of the National Flood Insurance Program (NFIP). In hindsight, even if the NFIP had possessed perfect actuarial knowledge about the risk of flood, its destiny was preordained simply because it lacked the other necessary tools.

This might lead you to believe that insuring flood is essentially impossible. Until just a few years ago, you would have been right. But, since then, interesting things have happened.

In the past decade, technologies such as data storage, processing, modeling and remote sensing (i.e., mapping) have improved enormously. It is now possible to measure and store the topographical features of the entire U.S.; indeed, it has been done. Throw in analytical servers able to process trillions of calculations in seconds, and crunching massive amounts of data becomes relatively easy. Meanwhile, the science behind flood modeling, including meteorology, hydrology and topography, has advanced to the point where this new geospatial information and processing power can be used to produce models with real predictive capabilities. These are not your grandfather’s flood maps. There are now models and analytics that provide estimates for both the frequency AND the severity of flood loss at a specific location, an enormous leap forward from zone or ZIP code averaging. Like baseball, flood insurance is a game of inches. And it is now a game that astute insurance professionals can play and profit from.

For the underwriting of insurance, having dependable frequency and severity loss estimates at a location level is gold. No single flood model will provide all the answers, but there are definitely enough data, models and information available to determine frequency and severity metrics for flood and enable underwriters to segment exposure effectively. Low-, moderate- and high-risk exposures can be discerned and segregated, which means risk-based, actuarial pricing can be confidently implemented. The available data and risk models can also drive the design of flood mitigation actions (with accurate credit incentives attached to them) and marketing campaigns.

With the new generation of models, all three types of flooding can be evaluated, either individually or as a composite, and their risk segmented appropriately. The available geospatial datasets and analytics support estimates of flood levels, flood depths and the likelihood of water entering a property, based on the elevation of the structure, the floors of occupancy and the relationship between the two.

In the old days, if your home was in a FEMA A or V zone but possibly sat above their “base flood” (a hypothetical flood with a 1% annual probability), you had to spend hundreds of dollars on an elevation certificate and then petition the NFIP, at further cost, in the hope of getting your home re-designated. Today, it is not complicated to place the structure in a geospatial model and estimate flood likelihood and depths in a way that can be integrated with actuarial information to calculate rates: each building gets rated based on where it is, where the water is and the likelihood of the water inundating the building.
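To make the game of inches concrete, here is a minimal sketch, in Python with entirely invented numbers and names, of how a location-level depth-exceedance curve and a depth-damage function might be combined into an expected annual loss for one building. Real flood models are far more sophisticated; this only illustrates the arithmetic that replaces zone or ZIP code averaging.

```python
# Hypothetical sketch of a location-level expected annual flood loss.
# The exceedance points and damage curve are invented for illustration;
# a real model derives them from hydrology, hydraulics and terrain data.

# Annual exceedance probability -> flood depth (feet) at the parcel,
# already adjusted for the ground elevation at this address.
depth_exceedance = [
    (0.10, 0.0),   # 10-year flood stays at grade here
    (0.02, 0.5),   # 50-year flood: ~6 inches of water at grade
    (0.01, 1.5),   # 100-year ("base") flood
    (0.002, 4.0),  # 500-year flood
]

FIRST_FLOOR_HEIGHT = 1.0   # feet above grade; this is where inches matter
BUILDING_VALUE = 300_000   # replacement cost of the structure


def damage_ratio(depth_above_floor):
    """Toy depth-damage curve: share of building value lost."""
    if depth_above_floor <= 0:
        return 0.0
    return min(0.05 + 0.15 * depth_above_floor, 1.0)


def expected_annual_loss():
    """Integrate loss over the exceedance curve with simple trapezoids."""
    eal = 0.0
    for (p_hi, d_lo), (p_lo, d_hi) in zip(depth_exceedance, depth_exceedance[1:]):
        loss_lo = damage_ratio(d_lo - FIRST_FLOOR_HEIGHT) * BUILDING_VALUE
        loss_hi = damage_ratio(d_hi - FIRST_FLOOR_HEIGHT) * BUILDING_VALUE
        eal += (p_hi - p_lo) * (loss_lo + loss_hi) / 2.0
    return eal


print(f"Expected annual flood loss: ${expected_annual_loss():,.0f}")
```

Raising or lowering FIRST_FLOOR_HEIGHT by a few inches moves the result materially, which is precisely why elevation data at the structure level, rather than the zone level, changes the economics of underwriting flood.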

In fact, the new models have essentially made the FEMA flood maps irrelevant to flood loss analysis. We don’t need to evaluate what flood zone the property is in; we just need an address. Homeowners don’t need to spend hundreds of dollars on elevation certificates; the models already have that data stored. Indeed, much of the underwriting required to price flood risk can be handled with two or three additional questions on a standard homeowners insurance application, saving the homeowner, agent and carrier time and frustration. The process we envision would create a distinctive competitive advantage for the enterprising carrier, one that, done correctly, would create and capture real value throughout the distribution chain. This is what disruption looks like before it happens.

In summary, the tools are now available to measure and price flood risk. Capital is flooding (sorry, we couldn’t help ourselves) into the insurance sector, seeking opportunities to be put to work. While we understand the industry’s skepticism about handling flood, the risk can now be understood well enough to create products that people desperately need. Insuring flood would be a shot in the arm for an industry that has grown stale at offering anything new. Billions of dollars of premium are waiting for the industry to capitalize on. One thing the current data and analytics make clear is this: there are high-, medium- and low-risk locations waiting to be insured using actuarial methods. As long as flood insurance is being rated by zone (whether FEMA zone or ZIP code), there is cherry-picking to be done.

Who wants to get their ladder up the cherry tree first? And who will be last?

Updating Your Models for Hurricane Season

June 1 opened the North Atlantic hurricane season, with this year marking the 10th anniversary of one of the costliest storms to make landfall in the U.S. — Hurricane Katrina. Each year, hurricane season puts catastrophe (CAT) models to the test, with potentially millions of dollars riding on their accuracy. The loss estimates calculated by CAT models can play an important role in protecting your organization from financial loss.

The models have changed a lot over the past several years. Hurricane Andrew in 1992, for example, exposed the shortcomings of the traditional actuarial methods insurers had used to model catastrophe risk. And the billions of dollars in insured losses from Hurricane Katrina in 2005 helped lead to today’s CAT modeling rigor and its widespread acceptance and use by the industry.

New Storms Change CAT Models

CAT models use algorithms to estimate the potential losses from a catastrophic event. Over the 10 years since Katrina, CAT modeling has become more sophisticated because of technology improvements and the greater availability of data. After a significant storm, the models are updated to reflect the new data and a larger body of knowledge. These changes can considerably affect your property insurance and risk management strategies.
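For readers who want to see the mechanics, here is a hedged sketch, in Python with invented numbers, of the core calculation behind those loss estimates: an event loss table (simulated events, each with an annual rate and a modeled loss) rolled up into an average annual loss and a simple exceedance probability. Vendor models are vastly richer, layering hazard footprints, vulnerability curves and exposure data beneath each of these numbers.

```python
import math

# Hypothetical event loss table for one portfolio. Each simulated event has
# an annual occurrence rate and a modeled loss; all numbers are invented.
event_loss_table = [
    {"event": "EQ-moderate", "rate": 0.050, "loss":   2_000_000},
    {"event": "HU-cat1",     "rate": 0.020, "loss":  10_000_000},
    {"event": "HU-cat3",     "rate": 0.005, "loss":  80_000_000},
    {"event": "HU-cat5",     "rate": 0.001, "loss": 400_000_000},
]

# Average annual loss (AAL): rate-weighted sum of event losses.
aal = sum(e["rate"] * e["loss"] for e in event_loss_table)


def prob_loss_exceeds(threshold):
    """Annual probability of at least one event with loss >= threshold,
    treating event occurrences as independent Poisson arrivals."""
    total_rate = sum(e["rate"] for e in event_loss_table if e["loss"] >= threshold)
    return 1.0 - math.exp(-total_rate)


print(f"Average annual loss: ${aal:,.0f}")
print(f"P(an $80M+ loss event in a year): {prob_loss_exceeds(80_000_000):.2%}")
```

A model update that revises event rates or vulnerability assumptions shifts every number in a table like this, which is why loss estimates for the same portfolio can move between hurricane seasons even when nothing about the portfolio has changed.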

CAT modeling factors for U.S. hurricane exposures have changed several times in the last few years. Consider the items below as you prepare for this year’s hurricane season:

  • Check your policy, including deductibles, coverage limits and sublimits, to ensure they’re adequate and realistic; check that exclusions are acceptable.
  • Ensure the quality of your CAT modeling data. Incomplete data causes more uncertainty for insurers; improving the data enables more accurate loss estimates and reduces the uncertainty for the underwriters.
  • Take a big picture view of your CAT exposures. By modeling your worldwide portfolio, you can identify regional drivers, which can help put U.S. hurricane risks in perspective. Also, using actuarial resources after a CAT or non-CAT claim can help evaluate your organization’s total cost of risk (TCOR), which can better inform how you use your risk management resources.

If you have locations in CAT-prone areas, you can fine-tune their CAT loss estimates by understanding how those estimates have changed with each model update. Aligning your risk data with CAT modeling changes can yield better outputs for the insurers that underwrite your risks.


Catastrophe Models Allow Breakthroughs

“In business there are two ways to make money; you can bundle or you can unbundle.” –Jim Barksdale

We have spent a series of articles introducing catastrophe models and describing the remarkable benefits they have provided the P&C industry since their introduction (article 1, article 2, article 3, article 4). CAT models have enabled the industry to pull the shroud off of quantifying catastrophic risk and have finally given (re)insurers the ability to price and manage their exposure to the violent and unpredictable effects of large-scale natural and man-made events. In addition, while not a panacea, the models have leveled the playing field between insurers and reinsurers. Through the use of the models, insurers have more insight than ever before into their exposures and the pricing mechanics behind catastrophic risk. As a result, they can now negotiate terms with confidence, whereas prior to the advent of the models and other similar tools, reinsurers had the upper hand in information and research.

We also contend that CAT models are the predominant cause of the reinsurance soft market, via the entry of alternative capital from the capital markets. And yet, with all the value that CAT models have unleashed, we still have a collective sour taste in our mouths about how little these invaluable tools have benefited consumers, the ones who ultimately make the purchasing decisions and, thus, justify the industry’s very existence.

There are, in fact, now ways to benefit customers by, for instance, bundling earthquake coverage with homeowners insurance in California and helping companies deal with hidden volatility in their supply chains.

First, some background:

Bundling Risks

Any definition of insurance usually addresses the concept of risk transfer: the mechanism that ensures full or partial financial compensation for loss or damage caused by events beyond the control of the insured. In addition, the law of large numbers applies: the principle that the average of a large number of independent, identically distributed random variables tends to fall close to the expected value. This result can be used to show that adding risks to an insured pool tends to reduce the variation of the average loss per policyholder around the expected value. When each policyholder’s contribution to the pool’s resources exceeds the expected loss payment, the entry of additional policyholders reduces the probability that the pool’s resources will be insufficient to pay all claims. Thus, an increase in the number of policyholders strengthens the insurance mechanism by reducing the probability that the pool will fail.
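To illustrate the principle, here is a brief simulation sketch in Python, with made-up loss assumptions: each policyholder independently faces a 1% annual chance of a $100,000 loss, and the volatility of the pool’s average loss shrinks as the pool grows.

```python
import random
import statistics

random.seed(42)

LOSS_PROBABILITY = 0.01   # hypothetical 1% chance of a claim per year
LOSS_AMOUNT = 100_000     # hypothetical size of each claim
# Expected loss per policyholder is therefore $1,000 regardless of pool size.


def stdev_of_average_loss(pool_size, trials=1_000):
    """Standard deviation of the pool's average loss across simulated years."""
    averages = []
    for _ in range(trials):
        total = sum(LOSS_AMOUNT for _ in range(pool_size)
                    if random.random() < LOSS_PROBABILITY)
        averages.append(total / pool_size)
    return statistics.stdev(averages)


for n in (100, 1_000, 10_000):
    print(f"pool of {n:>6}: std. dev. of average loss ~ ${stdev_of_average_loss(n):,.0f}")
```

The expected loss per policyholder stays at $1,000 in every case; what changes is how tightly the realized average clusters around it, which is what lets a larger pool operate safely on a thinner margin above the expected loss.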

Our collective experiences in this world are risky, and we humans have consistently sought to shed the financial consequences of risk to third parties. Insurance companies meet that need by deploying a large capital base, relying on the law of large numbers and, perhaps most importantly, leveraging the concept of spread of risk: selling insurance in multiple areas to multiple policyholders to minimize the danger that all policyholders will experience losses simultaneously.

Take the peril of earthquake. In California, 85% to 90% of all homeowners do NOT maintain earthquake coverage, even though earthquake is the predominant peril in that state. (Traditional homeowners policies exclude earth movement as a covered peril.) News articles point to the price of the coverage as the limiting factor, and that seems to make sense given the peril’s natural volatility. Or does it?

Is the cost of losses from earthquakes in California considerably different from, say, the cost of losses from hurricanes in Florida, where the wind peril is typically included in homeowners insurance forms? Earthquakes are far more localized than hurricanes, but the loss severity can be more pronounced in those localized regions. Hurricanes that strike Florida can be expected with higher frequency than large, damage-causing earthquakes that shake California. In the final analysis, the average projected loss costs are similar between the two perils, yet one has nearly a 100% take-up rate while the other sits at roughly 10%. Why is that so? The answer lies in the law of large numbers, or in this case the lack thereof.
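A back-of-the-envelope illustration, using purely hypothetical numbers, shows how a rare, severe peril and a frequent, milder one can carry the same expected annual loss per home:

```python
# Hypothetical numbers only: expected annual loss per home is roughly
# annual event frequency times average loss when the event strikes.
quake_eal     = 0.002 * 200_000   # rare shake, large average loss
hurricane_eal = 0.020 *  20_000   # more frequent wind, smaller average loss
print(quake_eal, hurricane_eal)   # both come out to $400 per year
```

The expected cost per home can look alike even though the experience of the two perils, and an insurer’s ability to pool them, differs sharply.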

Rewind the clock to the 1940s. If you were a homeowner then, the property insurance world looked very different from today’s. You would have needed to purchase virtually a separate policy for each peril: a fire, theft and liability policy, and then a windstorm policy, to adequately cover your home. Packaging those perils into one convenient, comprehensive policy was thought to be cost-prohibitive. History has proven otherwise.

The bundling of perils creates a margin of safety from a P&C insurer’s perspective. Take two property insurers that offer fire coverage. Company A offers monoline fire, whereas Company B packages fire as part of a comprehensive homeowners policy. If both companies use identical pricing models, Company B can actually charge less for fire protection than Company A, simply because Company B’s additional premium affords peril diversification. Company B has the luxury of using premiums from other perils to help offset losses, whereas Company A is stuck with only its single-source fire premium and thus must build into its pricing an allowance for the possibility that it is wrong. Company B must also make such an allowance in case its pricing is wrong, but it can apply a smaller one because of the built-in safety margin.
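A hedged sketch of that safety margin, in Python with invented figures: assume the fire and windstorm books have equal expected losses, independent outcomes and different volatilities, and compare the relative volatility (coefficient of variation) of the monoline book with the bundled one.

```python
import math

# Invented figures for illustration: expected annual losses and standard
# deviations for a fire book and a windstorm book of equal size.
fire_expected, fire_std = 1_000_000, 600_000
wind_expected, wind_std = 1_000_000, 800_000

# Company A: monoline fire. Relative volatility drives the risk loading.
cv_monoline = fire_std / fire_expected

# Company B: fire and wind bundled. Independence means variances add.
bundled_expected = fire_expected + wind_expected
bundled_std = math.sqrt(fire_std**2 + wind_std**2)
cv_bundled = bundled_std / bundled_expected

print(f"Monoline coefficient of variation: {cv_monoline:.2f}")  # 0.60
print(f"Bundled coefficient of variation:  {cv_bundled:.2f}")   # 0.50
```

Under any pricing rule that loads premium in proportion to relative volatility, Company B can carry a smaller allowance per dollar of expected loss than Company A for the same level of security, which is the margin of safety described above.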

This brings us back to the models. It is easy to see why earthquake and other perils, such as flood, were excluded from homeowners policies in the past. Without models, it was nearly impossible to estimate future losses with any reliable precision, leaving insurers unable to collect enough premium to compensate for the inevitable catastrophic event. Enter the National Flood Insurance Program (NFIP), which stepped in to offer flood coverage but never approached it from a fundamentally sound underwriting perspective. Instead, in an effort to make the coverage affordable to the masses, the NFIP severely underpriced its only product and is now tens of billions of dollars in the red. Other insurers bravely offered the earthquake peril via endorsement and were devastated after the Northridge earthquake in 1994. In both cases, various market circumstances, including the lack of adequate modeling capabilities, contributed to underpricing and adverse risk selection as the most risk-prone homeowners gobbled up the cheap coverage.

Old legacies die hard, but the models stand ready to help responsibly underwrite and manage catastrophic risk, even for perils such as windstorm, earthquake and flood, where coverage has traditionally been limited and expensive.

The next wave of P&C industry innovation will come from imaginative and enterprising companies that use CAT models to bundle risks economically and lower costs for consumers. We envision a future in which more CAT risk is bundled into traditional products. As they continue to improve, CAT models will give the industry the confidence needed to include earthquake and flood cover in all property lines at full limits and with flexible, lower deductibles. In the future, earthquake and flood hazards will be standard covered perils in traditional property forms, and the industry will one day look back and wonder why its products had not evolved sooner.

Unbundling Risks

Insurance policies, as contracts, can be clumsy in handling complicated exposures. Insurers, for example, have the hardest time handling supply chain and contingent business interruption exposures, and understandably so. Because of globalization and intense competition, multinational companies are continuously seeking value in the inputs for their products. A widget in a product can be produced in China one year, the Philippines the next, Thailand the year after that, and so on. It is time-consuming and resource-intensive to keep track not only of where and in what volume a company’s widgets are manufactured, but also of the risks surrounding each manufacturing plant that could interrupt production or delivery. We would be hard-pressed to blame underwriters for wanting to exclude or significantly sublimit exposures related to supply chain or business interruption; after all, underwriters have enough difficulty just managing the actual property exposures inherent in these types of risks.

It is precisely this type of opportunity that calls for the industry to create specialized programs: unbundle the exposure from the remainder of the policy and treat it as a separate exposure, with dedicated resources to analyze, price and manage the risk.

Take a U.S. semiconductor manufacturer with supply exposure in Southeast Asia. As was the case with the 2011 Thailand floods and the 2011 Tohoku earthquake and tsunami, this hypothetical manufacturer is likely exposed to supply chain risks of which it is unaware. It is also likely that the property insurance policy meant to indemnify the manufacturer for covered losses in its supply chain will fall short of expectations. An enterprising underwriter could carve out this exposure and transfer it to a new form. In that form, the underwriter can work with the manufacturer to clarify policy wording, liberalize coverage, simplify claims adjusting and provide needed additional capacity. As a result, the manufacturer gets a risk transfer mechanism that aligns more precisely with the balance-sheet-affecting risks it is exposed to. The insurer gets a new line of business that can provide a significant source of new revenue, using tools such as CAT models and other analytics to price and manage those specific risks. With some ingenuity, the situation can be a win/win all around.

What if you are a manufacturer or importer that relies on the Port of Los Angeles or Miami International Airport (or any other major international port) to move your goods in and out of markets? This is another area where commercial policies handle business exposure poorly, or not at all. CAT models stand ready to provide the analytics required to transfer the risks of these choke points from business balance sheets to insurers. All that is required is the vision to recognize the opportunity and the sense to use the toolsets now available to invent solutions rather than relying on legacy groupthink.

At the end of the day, the next wave of innovation will not come directly from models or analytics. While the models and analytics will continue to improve, real innovation will come from creative individuals who recognize the risks that are causing market discomfort and then use these wonderful tools to build products and programs that transfer those risks more effectively than ever. Those same individuals will understand that the insured comes first, and that rather than retrofitting dated products to modern-day business problems, the industry must develop new products and services to maintain its relevance. The only factors limiting true innovation in property insurance are imagination and a willingness to stop clinging to the past.