
How to Minimize Flood Losses

The Weather Network reports that “the first three months of the 2020 Atlantic hurricane season have featured numerous… records” as we enter the official peak in September. Hurricane Hanna, the first hurricane of the season, brought heavy rain and flash flooding that, according to Karen Clark & Co. (KCC), caused $350 million of damage to Texas automobiles and properties. The Insurance Information Institute (III) reported that Hurricane Laura, which hit on Aug. 26, caused insured wind and flood losses “well into the billions of dollars.”

It’s vital for insurers to understand the flooding impact of hurricanes vs. the tropical storms they develop into after they hit land. In June 2020, the federal government stated, “New NOAA-funded research finds that across all major Atlantic hurricanes affecting the southeastern and eastern U.S. during the twentieth century, the largest areas and heaviest intensities of rainfall over land occur after major hurricanes become tropical storms, not during hurricanes or even major hurricanes.” Therefore, if more record-breaking hurricane seasons occur, the losses to insurers from tropical storms are likely to be the biggest risk.

Many insurers now publish flood plans that homeowners and businesses can use, and they encourage flood defenses and other resilience measures to mitigate losses. Even so, losses continue to mount, a trend many attribute to the poor quality of the flood warnings people receive.

National and local flood warnings currently have two weaknesses: a lack of detailed information on the precise locations at risk of flooding, and too many false alarms. Both stem from the unpredictability of weather; every storm is different. For instance, Hurricane Laura’s flood warnings, which predicted flooding up to 20 feet deep and 40 miles inland, did not materialize because the eye of the storm hit 40 miles east of the forecast track. Such traditional forecasts use library-based approaches, which do not consider flow routing processes. As a result, they have very low levels of accuracy and are unable to provide depth or time forecasts.

Over the last 19 years, university and commercial research conducted in conjunction with the U.K. government has developed the next generation of flood forecasting. It uses a live modeling approach, which solves the library-based problems through hydrodynamic modeling, explicitly routing flood water over the landscape. This new forecasting technology has been proven globally, including against Superstorm Sandy, which hit New York City in 2012, where the model achieved 90% accuracy. These results prompted Loughborough University to commercialize the service through a spinout business, Previsico.

See also: Now Comes the Flood Season

This new approach provides real-time warnings at an individual property level, with a new forecast generated every three hours that predicts flooding up to 48 hours ahead of time. Critically, this approach works for storm surge and river and surface water flooding, including flash flooding.

Case studies show, for instance, that two New York art galleries that incurred $6.3 million in losses in Superstorm Sandy would have had no losses if the new approach had been in force, because a timely warning would have allowed the artwork to be moved to safety.

Lloyd’s Lab case studies show that reductions in flood losses from immovable assets can range from 10% to above 90% when a property has protection such as flood defenses and other resilience measures. While many hurricane victims may have some flood protection, they need accurate warnings to ensure the protection is in place in time.

The National Flood Insurance Program (NFIP) provides buildings and content insurance support to businesses and homeowners alike. However, this still leaves insurers exposed to business interruption and automobile claims when hurricanes hit. 2017’s Hurricane Harvey brought 20 trillion gallons of rainfall, the equivalent of nearly 1 million gallons per person living in Texas, according to the federal government. Thousands of businesses were affected and incurred billions in losses that could have been avoided. For instance, Big Star Honda in Houston lost 600 vehicles and had to cease operating for five days. With actionable flood warnings, the cars could have been easily moved to safety.

The U.S. private flood insurance market is growing, as Lloyd’s and re-insurers offer capacity to try to fill the $40 billion U.S. flood insurance gap. This is important because the National Association of Insurance Commissioners found that 50% of flood losses are outside FEMA’s high risk areas, and 99% of properties that fall outside the zones have no flood insurance. The insurance gap is only going to increase, as NOAA states that, “With future warming, hurricane rainfall rates are likely to increase, as will the number of very intense hurricanes, according to both theory and numerical models.”

See also: A Way Forward on Flood Insurance?

Flood insurance is a substantial opportunity, with gains possible across a carrier’s business, especially in a hardening market. Underwriters can manage loss ratios more effectively by agreeing with customers on their responsibility to use flood warnings and plans. Marketing can offer flood warnings as a value-added service to customers, and claims departments can respond to flooding more efficiently. Finally, boards armed with a real-time property-level flood loss estimate can manage their exposure.

A Way Forward on Flood Insurance?

In the mess that is flood insurance in the U.S., a bright spot emerged late last month when First Street Foundation released a major report on the issue, along with a model that will go a long way toward making assessment of flood risk more accurate and transparent.

The report serves first and foremost as a wake-up call. It says, for instance, that 70% more homes are within a “100-year” flood zone than are designated as such by the Federal Emergency Management Agency (FEMA). That means 6 million households face flood risks they don’t anticipate, yet aren’t eligible for the National Flood Insurance Program. In Chicago, 13% of properties are at risk, according to First Street Foundation’s report, while FEMA puts that figure at less than 1%. The report says Washington, D.C., and Utah have five times the risk that FEMA sees, while Wyoming, Montana and Idaho have four times the risk.

Those sorts of figures are quite the clarion call, but First Street Foundation goes even further by providing the beginnings of a solution: data. Its model evaluates the risk for 142 million properties in the continental U.S., based on an exhaustive array of different inputs that not only are as accurate as possible for today but that project how risks will develop because of climate change. The model lets you search any address for free.

The model from First Street Foundation, a nonprofit research and technology group, should provide short-term benefits while laying the groundwork for smarter long-term policy decisions.

In the short run, potential buyers will understand their odds better and can either pass on a higher-risk property or can mitigate the risks by buying insurance or retrofitting the building. Banks will see the risks more clearly when writing mortgages — and some 30-year mortgages written today will still be in force in 2050, by which point the report projects at least 11% more properties will be at substantial risk of flooding. Insurers will price more accurately. Government — the 800-pound gorilla on flood policy — will have a better handle on what public works to undertake to protect vulnerable areas and what areas to steer clear of because the flood dangers are just too high.

(My entirely unrepresentative check on homes where I’ve lived over the decades struck me as spot on: All were ranked at the lowest level of risk, except for a condo I owned in Hoboken, N.J., that included the ground floor and that, in fact, flooded twice in the decade I owned it.)

In the long run, better information should allow flood risk to be allocated in a mostly rational manner, with homeowners and insurers mostly splitting the liability, but with government in the background to help with out-of-the-blue catastrophes.

We’ve all heard the stories about homes on the coast that get wiped out by storms, then rebuilt, only to be wiped out again, sometimes more than once. Having more accurate data should lead, in time, to underwriting decisions and government policy that reduce or even eliminate such craziness.

First Street Foundation describes its report and model as a necessary but insufficient first step. That sounds right. The report is insufficient on its own because lots of other companies and groups will have to fine-tune the group’s data and, in general, deepen our understanding of flood risk. At ITL, we’ve long appreciated the work done by reThought and Hazard Hub, among others, but many firms will have to step up. And regulators, not known for turning on a dime, will need to become comfortable with using data that exists for each individual property, rather than thinking in broad, imprecise terms like flood plains.

But the report is a necessary, and very welcome, first step.

Stay safe.

Paul

P.S. Here is an intriguing piece from a sister publication, Risk & Insurance, on how insurance could help address systemic problems in police departments. The idea would be to require that police officers carry professional liability insurance. Police departments would cover the average cost of the insurance, but each officer deemed a high risk by actuaries (based on number and type of civilian complaints against them, for instance) would have to cover the additional premium payments. The hope would be to price bad officers out of work before they could do something that would wind up on the news.

I’m not at all sure the idea would work. Institutional forces such as police unions would resist like crazy, and there is surely enough uncertainty about how to weight risk factors that they’d be able to piece together an argument. But I found the idea innovative, so I figured I’d share the article. Maybe there’s a way to build on the idea.

P.P.S. Here are the six articles I’d like to highlight from the past week:

4 Post-COVID-19 Trends for Insurers

It’s not all gloom and doom. A crisis usually functions as a great breeding ground for innovation.

The Case for Paying COVID BII Claims

Is it reasonable to assume coverage for a COVID-19-related BII claim in the absence of a virus exclusion? The answer has to be, yes.

How Risk Managers Must Adapt to COVID

To modernize at the scale and speed required, “low-code” application development tools should be incorporated within the enterprise.

COVID: How Carriers Can Recover

Does RFP stand for “Request for Proposal” or “Really Frustrating Process”? Carriers can and must do better.

Strategic Planning in the COVID-19 Era

As insurers develop plans for 2021, the question is, where to start? Traditional processes may need to be supplemented with scenario planning.

ERM Shows Its Worth in Pandemic

Companies with sound ERM practices were better-positioned to deal with the pandemic than those with less sound or no ERM.

Micro-Censusing: Future of Flood

The ability to micro-census – that is, to gather granular data about individual homes and businesses and use it to inform underwriting – will lead to the biggest changes in flood insurance since the launch of the National Flood Insurance Program (NFIP) in 1968.

Most visible among these changes will be the transformation of NFIP policies and the rise of private flood insurance, which micro-censusing makes possible (read: potentially profitable at scale) for the first time. Here, I’ll examine the rise of micro-censusing in insurance; its likely applications in the flood market; and the potential impact on NFIP and private products, the agents selling them and the Americans they’ll protect.

The Rise of Micro-Censusing and Its Applications in the Flood Market

In the last two decades, innovation in the insurance industry has been powered by better data. Data collection from smart devices is changing how health and auto insurers price policies, and data from services like Google Maps is changing how underwriters assess business insurance applications.

In homeowner’s insurance, providers are pulling publicly available, address-specific data – from roof type to proximity to a fire hydrant – and feeding it to algorithms to assess risks.

All of these can be considered examples of micro-censusing because they use data at the individual rather than demographic level to determine risk, which makes for much more accurate assessments.

Now that readily available tech makes micro-censusing possible and practical, it’s become wildly popular. With micro-census data, insurance providers can price policies more accurately and manage risk far better than was historically possible.

This is a boon for markets like flood, where existing risk models are often outdated. Micro-censusing makes it possible to assess risk on a property-by-property basis in something close to real time. 

In practice, this means, for example, that the houses on the lower-lying part of a street, where water tends to pool during heavy rainstorms, could receive vastly different quotes from those at the top of a modest hill, where puddles typically don’t form. Homeowners at both locations would receive more precise quotes.
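As a hypothetical illustration of that street-level differentiation (the rate table, elevations and the halving rule below are invented for this sketch and do not come from any insurer's actual model), a property-level rater might scale a base flood rate by each home's elevation above the street's low point:

```python
# Hypothetical sketch: property-level flood pricing driven by micro-census
# data. The base rate, elevations and the "halve risk per meter" rule are
# invented for illustration only.

def annual_flood_premium(base_rate, insured_value, elevation_above_low_point_m):
    """Scale a base flood rate by relative elevation.

    Properties at the street's low point pay the full base rate; each
    meter of elevation above it halves the assumed flood risk (an
    invented, illustrative relationship).
    """
    risk_factor = 0.5 ** elevation_above_low_point_m
    return base_rate * risk_factor * insured_value

# Two houses on the same street, same insured value ($300,000),
# same base rate (0.4% of value per year):
low_lying = annual_flood_premium(0.004, 300_000, elevation_above_low_point_m=0)
hilltop = annual_flood_premium(0.004, 300_000, elevation_above_low_point_m=3)

print(f"Low-lying house:  ${low_lying:,.0f}/yr")  # $1,200/yr
print(f"House 3 m uphill: ${hilltop:,.0f}/yr")    # $150/yr
```

The point of the sketch is only that the same street yields very different quotes once elevation enters the rating, which is exactly what a demographic-level (zone-wide) rate cannot do.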

Risk Rating 2.0: How NFIP Is Leading the Way

FEMA is redesigning its flood insurance products, thanks in part to micro-censusing breakthroughs. In October 2021, the NFIP plans to roll out Risk Rating 2.0, an all-new rating methodology.

While not many details about Risk Rating 2.0 are public, the update is expected to change NFIP policies in a few fundamental ways. 

First, micro-censusing capabilities are expected to introduce property-specific risk assessment capabilities, which will make way for flood insurance policies that are tailored to each household.

The new rating engine is also expected to help agents accurately price and sell policies. More rating clarity will help policyholders better understand their property’s flood risk and how that risk is captured in their cost of insurance.

Perhaps most importantly, NFIP’s rating characteristics under Risk Rating 2.0 include the cost to rebuild a home, which means that NFIP will aim to give more affordable quotes to owners of lower-value homes. In other words, the system will be able to provide fairer policies to all homeowners.

See also: Flood Insurance: Are the Storm Clouds Lifting?  

The Impact of Micro-Censusing on Private Insurance

In addition to changing the way the NFIP rates policies, micro-censusing technology is also drawing private insurers to enter the flood market. The new availability of data means they can now more confidently assess and underwrite risks around the country. 

The implications of this are significant: With private insurers entering the market, there’s sure to be an increase in available products, which means greater opportunity for Americans to protect their homes and greater opportunity for agents to grow their books and better serve their customers. 

Greater product availability will also put less strain on federal disaster funds. Today, 20% of all NFIP claims come from properties that aren’t in high-risk areas. Those properties receive a third of all federal disaster assistance for flooding, in part because they’re not required to carry flood insurance.

In other words, the impact of micro-censusing (and other technology) on flood insurance can’t really be overstated, especially in an era where FEMA’s official position is “anywhere it can rain, it can flood.”

See also: 5 FAQs on Private Flood Insurance  

Micro-Censusing Will Bring Macro Changes to America’s Flood Insurance

Today, the typical homeowner faces a 10% chance of fire loss over the course of a 30-year mortgage but a 30% chance of flood loss. Yet 85% of homeowners have fire insurance and just 15% have flood insurance. These are clear indicators that flood insurance in America needs a makeover.
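That 30% figure is consistent with simple return-period arithmetic: a home in a “100-year” flood zone has a 1% chance of flooding in any given year, which compounds over the life of a mortgage. A quick check (assuming, as a simplification, that each year is independent):

```python
# Probability of at least one flood over a 30-year mortgage for a
# property with a 1-in-100 (1% annual) flood chance, assuming
# independent years.
annual_prob = 1 / 100
years = 30

prob_at_least_one = 1 - (1 - annual_prob) ** years
print(f"{prob_at_least_one:.1%}")  # 26.0%
```

The result, roughly 26%, lines up with the order of magnitude quoted above; the article's 30% presumably reflects a somewhat higher average annual probability across at-risk homes.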

Micro-censusing has the power to spark dramatic change in the industry. The granular data it provides will lead to the entry of more private insurers and the improvement of NFIP policies, which will increase agents’ ability to find appropriate coverage for their customers. Overall, this will mean better flood protection for at-risk Americans.

Using High-Resolution Data for Flood Risk

More than 200 million people and two-thirds of the 48 contiguous states are at risk from flooding, according to Edward Clark, director of the U.S. National Water Center. This demonstrates the major threat that flooding poses to the reinsurance industry in the U.S. In light of this hazard, the U.S. private flood insurance market is growing, with $600 million in premiums reported in 2017, an increase of $217 million over 2016. But why isn’t this figure higher? And why do approximately 85% of U.S. homeowners lack flood insurance policies?

One of the key reasons, among many, for low private insurance penetration stems from the inadequacy of current flood data, such as FEMA’s, to fully assess this hazard.

Changes in landscape

Many parts of the U.S. have experienced extensive redevelopment since the creation of the industry-standard FEMA flood maps, and these redevelopment changes haven’t been adequately captured in flood maps—until now.

To illustrate this, take Sherwood Park in Palm Shores, Florida, which has seen rapid and major development since the 1980s. Areas that were once rural wetlands have been re-landscaped and developed into desirable real estate.

See also: How Non-Standard Became the Standard  

Figure 1 shows a historical map of the Palm Shores development in 1981. Palm Shores as an urban area was then confined to the southern part of the area shown, with areas to the north and west entirely rural with several lakes and ponds. This is contrasted with Figure 2, which shows aerial imagery of the Palm Shores development in 2016—an entirely different, more urbanized, landscape today.

Figure 1: Historical map of Palm Shores development in 1981. 

Figure 2: Aerial imagery of Palm Shores development in 2016, showing an entirely different, more urbanized landscape today. Basemap: U.S. Geological Survey Historical Topographic Map Collection, 1983 ed., accessed via topoView.

Flood maps from the 1980s

To fully and accurately represent today’s flood hazard in these changing areas, it’s vital for insurers to use the most recent flood maps available, which use contemporary, best-available elevation data.

Figure 3 shows FEMA’s flood hazard zones for the Palm Shores area, showing some patches of Zone A flood zones in red, largely restricted to the west. The housing development in the bottom center is Sherwood Park, classified by FEMA as being in a Zone X area of minimal flood hazard.

Figure 3: FEMA flood hazard zones for the Sherwood Park housing development, Palm Shores, FL. 

Figure 4, on the other hand, shows JBA’s 1-in-100-year flood map, the equivalent of a FEMA Zone A map, which shows a very different picture. The areas of flood hazard are distributed across more of the overall area, with the Sherwood Park area now being represented as flood-prone.

Figure 4: JBA’s 1-in-100-year flood map, showing areas of flood hazard in Sherwood Park (dark orange indicates more severe flooding). Contains Microsoft® Bing™ Maps, © 2010-2017 Pirmin Kalberer & Mathias Walker, Sourcepole AG.

This disparity in flood hazard mapping is better understood in the context of the altered landscape: FEMA’s mapping for this area largely corresponds to the landscape of the 1980s rather than the landscape of today. Insurers that rely on mapping built from this outdated data may easily over- or underestimate flood hazard in areas that have experienced redevelopment since the map’s creation.

See also: Hurricane Harvey: An Insurtech Case Study  

The need for high-resolution data

This urbanization in areas across the U.S. also highlights the need for high-resolution data, especially when we consider that more than 20% of all NFIP flood claims relate to properties outside FEMA-designated high-risk flood hazard areas.

Figure 5 shows downtown Miami when mapped at 30m resolution (left) and when mapped at 5m resolution (right). The 5m resolution illustrates the flow of water down narrow features such as walkways and roads much more effectively, whereas the 30m mapping can result in under- or overestimation of the hazard.
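The difference in detail is easy to quantify with simple geometry (these are generic calculations, not figures from the maps above): each 30m cell covers the same ground as 36 cells of a 5m grid, and any feature narrower than a cell, such as a typical 10m-wide road, is effectively averaged away at 30m resolution:

```python
# How much detail a 5 m flood grid adds over a 30 m grid
# (simple geometry, not data from the maps above).
coarse, fine = 30, 5  # cell edge lengths in meters

cells_per_coarse_cell = (coarse / fine) ** 2
print(f"5 m cells per 30 m cell: {cells_per_coarse_cell:.0f}")  # 36

# A road ~10 m wide spans two 5 m cells, so the fine grid can route
# water along it; at 30 m the road is a third of one cell and vanishes
# into the cell average.
road_width = 10
print(f"5 m cells across the road:  {road_width / fine:.0f}")    # 2
print(f"30 m cells across the road: {road_width / coarse:.2f}")  # 0.33
```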

Figure 5: 30m flood map (left) vs JBA 5m flood map (right). Basemap data ©Mapbox ©OpenStreetMap.

In light of these changes in landscape, it’s vital that insurers use the right tools for today’s world. JBA’s flood maps include the most up-to-date elevation data and can be used within SpatialKey to help insurers better understand risk in the context of their portfolios for more informed and confident underwriting.

How Different Flood Types Affect Risk

For insurers to most effectively understand flood risk, they must have access to data that provides a full picture of the hazard, including the different flood types that might affect a property: fluvial, pluvial and storm surge. Although it may seem that flood is just flood, different types can produce various impacts on a property, causing different levels of damage.

Fluvial, pluvial and storm surge: Why it matters

Much of the U.S. is prone to both fluvial flooding (when rivers overtop their banks) and pluvial flooding (when water accumulates across the surface of the ground as a result of heavy rainfall). However, many coastal regions also experience storm surge flooding, which is a result of increased sea levels caused by weather events.

Storm surge flooding is extremely damaging due to the salinity of the water, while pluvial flooding is typically cleaner and quick to recede, likely resulting in lower-cost claims.

Without a view of these different drivers of flooding, insurers cannot understand the full exposure to their portfolios or fully engage with the private flood insurance market.

Use case: Jacksonville, Fla.

The need to understand all the drivers of flood can be illustrated using a residential property on 2nd Avenue, Jacksonville, Fla. Jacksonville is one of the five cities on the U.S. East Coast most vulnerable to hurricanes, and it is at high risk from flooding, having experienced widespread storm surge and flooding during hurricanes Irma and Matthew.

The residential property shown in Figure 1 originally fell into a FEMA Zone X (designated as minimal flood risk).

Figure 1: Contains data from the FEMA National Flood Hazard Layer.

However, when we look at its location on the JBA flood map, we can see some differences in analysis. The JBA flood map identifies this location as at very severe risk of flooding (Figure 2, below), from both fluvial and storm surge flooding, whereas using FEMA data alone would not account for either flood type or differentiate between fluvial and pluvial flood. Accessing data sources in addition to FEMA helps provide a more comprehensive understanding of the risk.

Figure 2

The complex interplay between flood types

The risk is particularly high for hurricane-prone areas like Jacksonville, where storm surges often coincide with inland flooding. It’s important to represent this complex interplay during the mapping process instead of tackling each flood type separately. JBA’s storm surge mapping has been developed in partnership with leading hurricane modelers Applied Research Associates, ensuring that hurricane activity is fully accounted for. Additionally, surge data has been used to modify JBA’s inland flood mapping process to reflect the fact that, during a hurricane, rivers can’t flow out to sea as they can in normal conditions. Flood waters then back up, exacerbating fluvial flooding. For insurers to obtain a complete understanding of the hazard, flood maps must fully represent this relationship.
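A minimal sketch of the multi-peril idea, with invented depths (none of the numbers or the function below come from JBA's actual models): a portfolio tool holding separate hazard layers per flood type can report, for each property, the governing peril and its worst-case depth rather than underwriting against a single layer:

```python
# Hypothetical sketch: combine per-peril 1-in-100-year flood depths at a
# property into a single worst-case hazard figure. All depths are
# invented for illustration.

def combined_flood_depth(depths_by_peril):
    """Return the governing peril and its depth: the maximum across
    the fluvial, pluvial and storm surge layers."""
    peril = max(depths_by_peril, key=depths_by_peril.get)
    return peril, depths_by_peril[peril]

# A coastal property where surge dominates, even though a river-only
# map would show just the modest fluvial figure:
property_depths_m = {"fluvial": 0.4, "pluvial": 0.1, "storm_surge": 1.2}
peril, depth = combined_flood_depth(property_depths_m)
print(f"Governing peril: {peril}, depth: {depth} m")  # storm_surge, 1.2 m
```

Taking the per-peril maximum is only a first approximation; as the paragraph above notes, surge and fluvial flooding interact (backed-up rivers deepen inland flooding), so a full model adjusts the layers jointly rather than treating them as independent.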

Even though FEMA recently re-mapped the area as a Zone A, demonstrating that the area is at risk of flooding, the drivers of the flood are still not clear. As such, underwriting against the FEMA map alone could misrepresent the insurance coverage required.

See also: FEMA Flood Maps Aren’t Good Enough  

It’s clear that having a view of the different drivers of flood risk is vital for effectively understanding and underwriting the risk, especially in areas where hurricanes can be a major source of flood-driven losses.