Over the past year, flood insurance has become more prominent in the media and trade publications. Normally, only catastrophic events (e.g., hurricanes) capture so much attention, but the combination of several massive floods and the continued progress of private flood legislation has started conversations that are overdue. Both the nature of these storms and floods and their impact on property owners are getting close attention, and that is welcome, because it is changing the way people think about underwriting flood insurance.
Recently, Jeri Xu of Swiss Re published an article that illustrates such a change of perception. She offers a very useful way to think of the rain events (what NOAA calls 1-in-a-thousand-year rain storms) that have caused some of the most serious recent floods (e.g., the 2016 floods in Texas, West Virginia, Maryland and Louisiana). Because these flood-causing storms are localized at roughly the county level, and there are about 3,000 counties in the country, it is not unreasonable to expect three flood-causing thousand-year rain storms somewhere in the U.S. every year. With this insight, Xu transforms the extremely rare into the commonplace and reconciles the headlines with the statistics.
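Xu's back-of-the-envelope argument can be sketched in a few lines. This is a minimal illustration, assuming the article's round numbers (3,000 counties, each independently seeing a 1-in-1,000-year storm with annual probability 1/1000); under those assumptions the national count of such storms per year is approximately Poisson with mean 3.

```python
# Rough sketch of the "three thousand-year storms per year" arithmetic.
# Figures are the round numbers from the text, not NOAA data.
from math import exp

counties = 3000
p_annual = 1 / 1000            # per-county chance of a 1,000-year storm in a year
expected_storms = counties * p_annual   # expected events nationwide per year

# With a Poisson approximation, the chance of at least one such storm
# somewhere in the country in a given year:
p_at_least_one = 1 - exp(-expected_storms)

print(expected_storms)             # 3.0
print(round(p_at_least_one, 3))    # 0.95
```

In other words, under these assumptions a "once-in-a-millennium" storm is a near-certainty somewhere in the country every single year, which is exactly the reframing Xu offers.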
A bit of caution is needed when equating rain events with flood events – for the sake of this argument, let’s assume a millennial downpour does result in flooding (that is not a stretch).
Xu and the headlines are teaching us to stop wondering when a serious flood is going to happen – it is way more important to understand where the damage will be when the serious flood does happen.
The accepted and common way to guess where the flooding will occur is the 100-year floodplain on FEMA’s FIRMs. However, according to this article from David Bull, 85% of the losses in Baton Rouge and Lafayette were outside the 100-year floodplain and uninsured. Clearly, the FIRMs do not help underwriters (or homeowners) understand flood risk (neither where, nor when). Indeed, the FIRMs were never intended for that, as they are rate maps, not risk maps.
This approach is comparable to how wind (and, lately, storm surge) is underwritten. Karen Clark & Co. has taken such an approach for hurricane: The software assumes an event (the firm calls them characteristic events, or CEs) and then calculates the expected loss based on that CE happening. There is good reason for this: Underwriters should assume a handful of hurricanes will land on the coast in a given year, just as they should assume a handful of significant inland flood events will occur annually. Working with that logic makes it less important to wonder when something will happen.
It has long been documented that flood losses occur beyond flood zones. Looking at flood risk by where, not when, is an effective way for underwriters to manage their business while considering this fact. More importantly, it is a view of risk that supports the creation of insurance products that can help narrow the protection gap in the U.S., because it is unacceptable to have 85% of damaged homes (in Louisiana, of all places) without flood coverage.
With our past few posts on ITL, we have been exploring how insurers can continue to bring more private capacity to U.S. flood (Note: Everything we talk about for U.S. flood is also relevant for Canada flood). We have explored here how technology, data and analytics exist to handle flood in an adequately sophisticated manner, and we have described here the market opportunity that exists. Now it’s worth exploring how a flood program could be introduced, starting from scratch, moving through cherry-picking mischaracterized risks and arriving at a full, mass-market solution.
What’s a FIRM? It’s not what you think
First, let’s take a quick look at how National Flood Insurance Program (NFIP) rates are determined: the Flood Insurance Rate Maps, or FIRMs. For the NFIP, FIRMs solve two core problems – identifying which properties must have flood insurance and how much to charge for it. The first function is for banks, giving them an easy answer for whether a property to be lent against requires flood insurance – this is what the Special Flood Hazard Area (SFHA) is for. Anything within the SFHA is deemed to be in a 100-year flood zone (basically, A and V zones) and requires flood insurance for a mortgage. The second function sets the pricing and conditions for the NFIP to sell the actual policies. The complexity of solving these two problems should not be underestimated for a country of this size. But it must be remembered that a FIRM is a rating device, not a risk model.
Considering that FIRMs are a rating device built on a huge scale, it makes perfect sense that some generalizations needed to be made in the delineation of the various flood zones. The banks needed a general guideline to know when flood insurance was needed, and the NFIP needed rates to be distributed in a way that could produce a broad enough risk pool to generate enough premium to be solvent. While the SFHA has served the banks well enough over the years, the rating of properties has not been so successful. There are plenty of reasons the NFIP is deep in debt (see page 6 of this report); suffice it to say that the rates set by FIRMs do not result in a solvent NFIP.
The fact that the FIRMs are a flawed rating device based on geographical generalizations means there are cherries to be picked. By applying location-based flood risk analytics to properties in the SFHA, a carrier can begin to find where the NFIP has overrated the risk. Using risk assessments based on geospatial analysis (such as measurements to water) and their own data (such as NFIP claims history), a carrier can undercut the NFIP on specific properties where the risk fits its own appetite. Note to cherry-pickers: Make sure your analytics account for the height of the building above ground, because you won’t have elevation certificates for this type of underwriting. So far, cherry-picking has focused on the SFHA for a couple of reasons – homeowners there need to have coverage, and the NFIP rates are the highest. There is no reason, though, that cherry-picking can’t be done effectively in X zones and beyond.
The same data and analytics used for cherry-picking can be used more broadly to create a mass-market solution. By adjusting the dials on the flood risk analytics – and flood risk analytics really should be configurable – you can calibrate them to quantify the flood risk at low-risk locations. In other words, flood risk can be parsed into however many bins are needed to underwrite flood risk on any property in the country. With the risk segmented, rates can be defined that can (and should) be applied as a standard peril on all homeowner policies. Flood risk can be underwritten like fire risk.
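The binning idea above can be sketched in a few lines. This is a hypothetical illustration only – the score scale, bin edges and addresses are invented; in practice the scores would come from a configurable geospatial flood model, and the edges would be set by each carrier's appetite.

```python
# Hypothetical sketch: segmenting location-level flood risk scores into
# underwriting bins. All scores and thresholds are invented for illustration.
def flood_bin(score: float) -> str:
    """Map a 0-100 flood risk score to a rating bin (edges are assumptions)."""
    if score < 20:
        return "low"
    elif score < 60:
        return "moderate"
    else:
        return "high"

# A toy portfolio of address -> model score:
portfolio = {"123 Elm St": 12.5, "9 River Rd": 71.0, "44 Hill Ave": 35.0}
segmented = {addr: flood_bin(score) for addr, score in portfolio.items()}
print(segmented)
# {'123 Elm St': 'low', '9 River Rd': 'high', '44 Hill Ave': 'moderate'}
```

The point is that once every location carries a score, the number of bins – and the rate attached to each – is a business decision, not a mapping constraint.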
Insurers have traditionally been confident underwriting fire risk. But consider this: While fire rating is based on construction type, distance to fire hydrants and distance to the nearest fire station, flood risk can be assessed with parameters that can be measured with similar confidence but with greater correlation to a potential loss.
Flood will be the new fire
Insurers have been satisfied to leave flood risk to the Feds, and that was prudent for generations. But technology has evolved, and enterprising carriers can now craft an underwriting strategy to put flood risk on their books. Fire was once considered too high-risk to underwrite consistently, but as confidence grew on how to manage the risk it became a staple product of property insurers. Now, insurers are dipping their toes into flood risk. As others follow, confidence will grow, and flood will become the new fire.
Munich Re is known as a conservative giant of international reinsurance, so it might seem odd that it is joining the National Flood Insurance Program (NFIP) in covering U.S. flood. A quick look at the opportunity shows why the plan makes sense.
U.S. inland flood insurance is an untapped source of non-correlated premium unlike any other in the world. The market is dominated by an incumbent market maker that is in trouble because it offers an inferior product that cannot price risk correctly (this paper nicely summarizes the problems at NFIP). So, here is what the new entrants are seeing:
Contrary to industry beliefs, flood is insurable. The tools are present to accurately segment risk.
Carriers offering flood capacity will differentiate themselves from competitors. This will give them a leg up on the competition in a market that is highly homogeneous. Carriers not offering flood will likely disappear.
The market is massive, with potentially 130 million homes and tens of billions of dollars at stake.
Let’s go into details.
Capital Into a Ripe Market
The U.S. Flood Market
As most readers of Insurance Thought Leadership already know, many carriers have flood on the drawing board right now. The Munich Re announcement was not really a surprise. We all know there will be more announcements coming soon.
Let’s summarize the market reasons for the groundswell of private insurance in U.S. flood.
The most obvious characteristic of the market is the size. For the sake of this post, we’ll just consider homes and homeowner policies. Whether one considers the number of NFIP policies in force as the market size (about 5.4 million policies in 2014), the number of insurable buildings (133 million homes) or something in between, there is clearly a big market. And the NFIP presents itself as the ideal competitor – big, with a mandate not necessarily compatible with business results.
So, there is no doubt that a market exists. Can it be served? Yes, because the risk can be rated and segmented.
Low-Risk Flood Hazard
To be clear: A low-risk-flood property has a profile with losses estimated to be low-frequency and low-severity. In other words: Flood events would rarely happen, and they would not cause much damage if they did. For many readers, the phrase “low-risk flood” is an oxymoron. We strongly disagree. Common sense and technology can both illustrate how flood risk can be segmented efficiently and effectively into risk categories that include “low.”
Let’s start with common sense. Flood loss occurs because of three possible types of flood: coastal surge, fluvial/river or rain-induced/pluvial (here is more information on the three types of flood). The vast majority of U.S. homeowners are not close enough to coastal or river flooding to have a loss exposure (here is a blog post that explores the distribution of NFIP policies). Thus, the majority of American homeowners are only exposed to excess surface water getting into the home. We’d be willing to wager that most of the ITL readership does not purchase flood insurance, simply because they don’t need it. That is the common-sense way of thinking of low-risk flood exposure.
How does the technology handle this?
There is software available now that can be used to identify low-risk flood locations (as defined by each carrier), supported by the necessary geospatial data and analytics. Historically, this was not the case, but advances in remote sensing and computing capacity (as we explored here) make it entirely reasonable now, with location-based flood risk assessment the norm in several European countries. Distance to water, elevations, localized topographical analyses and flood models can all be used to assess flood risk with a high degree of confidence. In fact, claims are now best used as a handy ingredient in a flood score rather than as a prime indicator of flood risk.
How to Deliver Flood Insurance in the U.S.
Deliver Flood Insurance to What Kind of Market?
Readers must be wondering about the size of the market, because we offered two distinctly different possibilities above – is it 5 million to 10 million possible policies, or 130 million? The difference is huge – it is the difference between a niche market and a mass market.
The approach taken by flood insurers thus far is for a niche market. The current approach probably has long-term viability in high-risk flood, and the early movers that are now underwriting there are establishing solid market shares, cherry-picking from the NFIP portfolio.
On a large scale, though, the insurance industry’s approach needs to be for a mass market.
Here is a case study describing the mass market opportunity:
Using InsitePro (see image below), you can see that the property is miles away from any coastal areas, rivers or streams. More importantly, the home is elevated relative to its surroundings, so water flows away from the property, which is deemed low-risk.
The area has no history of flooding, and this particular community has one of the most modern drainage systems in the state.
Screenshot of InsitePro, courtesy of Intermap Technologies. FEMA zones in red and orange
Using Google Maps street view, we can estimate that the property is two to three feet above street level, which adds another layer of safety. Also, this view confirms that the area is essentially flat, so the property is not at the bottom of a bathtub.
And, as with most homes in California, this property has no basement, so if water were to get into the house it would need to keep rising to cause further damage.
To an underwriter, it should be clear that this home has minimal risk from flooding. As a sanity check, she could compare losses from flood for this property (and properties like it in the community) to other hazards such as fire, earthquake, wind, lightning, theft, vandalism or internal water damage. How do they compare? What are the patterns?
For this specific home, the NFIP premium for flood coverage is $430, which provides $250,000 in building limit and $100,000 in contents protection. The price includes the $25 NFIP surcharge.
This is a mind-boggling amount of premium for the risk involved. Consider that for roughly the same price you can get a full homeowners policy that covers all of these perils: fire, earthquake, wind, lightning, theft AND MORE! It is crazy to equate the risk of flood to the risk of all those standard homeowner perils combined. We provided this example to show that, even without all the mapping and software tools available for pricing, we can quickly conclude that the NFIP pricing for these low-risk policies is absurdly high. Whatever the price “should” be for these types of risks, can you see that it MUST be a fraction of the price of a traditional homeowners policy? Still skeptical? Consider that Lloyd’s is marketing its low-risk flood policies as “inexpensive,” and brokers tell us privately that many base-level policies will be 50% to 75% less expensive than NFIP equivalents.
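The price comparison above is simple arithmetic. The NFIP figures are the ones quoted for this specific home, and the 50%–75% discount range is the brokers' private estimate cited in the text; nothing here is an actual private-market quote.

```python
# Arithmetic behind the price comparison. Inputs are the NFIP quote and the
# brokers' 50-75% discount range quoted in the text, not actual market quotes.
nfip_premium = 430          # annual NFIP premium, incl. the $25 surcharge
building_limit = 250_000    # NFIP building coverage limit
contents_limit = 100_000    # NFIP contents coverage limit

# Brokers' estimate: base-level private policies 50% to 75% cheaper than NFIP.
private_low = nfip_premium * (1 - 0.75)   # deepest estimated discount
private_high = nfip_premium * (1 - 0.50)  # shallowest estimated discount

print(private_low, private_high)   # 107.5 215.0
```

So under the brokers' estimate, the same low-risk homeowner would pay roughly $108 to $215 a year in the private market – a fraction of what a full multi-peril homeowners policy costs.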
The news gets even better. There are tens of millions of houses like this case example, with technology now available to quickly find them. These risks aren’t the exception; these risks can be a market in their own right. Let the mental arithmetic commence!
Summary: Differentiate or Die!
The Unwanted Commodity
Most consumers of personal lines products don’t have the time or the ability to evaluate an insurance policy to determine whether it provides good value. Regrettably, most agents and brokers don’t have the time to help them either. So, when shopping for a product that they hope they will never use and that they are incapable of truly understanding, consumers will focus on the one thing they do understand: price.
Competing on price becomes a race to the bottom (yay! – another soft market) and to death. But there is an opportunity here – carriers that compete on personal lines/homeowner insurance with benefits that are immediately apparent (like value, flexibility, service, conditions and, inevitably, price) have a rare chance to stake out significant new business, or to solidify their own share.
The flood insurance market is real, and it’s big enough for carriers to establish a healthy and competitive environment where service and quality will stand out, along with price. Carriers that would like to avoid dinosaur status can remain relevant and competitive, with no departure from insurance fundamentals – rate a risk, price it and sell it. It’s obvious, right?
Which carriers will be decisive and bold and begin to differentiate by offering flood capacity? Which carriers will evolve to keep pace or even lead the pack into the next generation of homeowner products? More importantly, which of you will lose market share and cease to exist in 10 years because you didn’t know what innovation looks like?
Property damage from flooding is quite different from that of other catastrophic perils such as hurricane, tornado or earthquake. Unlike with those perils, estimating losses from flood requires a higher level of geospatial exactness. Not only do we need to know precisely where the property is located and the distance to the nearest potential flooding source, but we also need to know the elevation of the property relative to its nearby surroundings and to the source of flooding. Underwriting flood insurance is a game of inches, not ZIP codes.
With flood, a couple of feet can make the difference between being in a flood zone or not, and a few inches of elevation can increase or decrease loss estimates by orders of magnitude. This realization helps explain the current financial mess of the National Flood Insurance Program (NFIP). In hindsight, even if the NFIP had perfect actuarial knowledge about the risk of flood, its destiny was preordained simply because it lacked the tools to apply that knowledge at the level of the individual property.
This might make the reader believe that insuring flood is essentially impossible. Until just a few years ago, you’d have been right. But, since then, interesting things have happened.
In the past decade, technologies like data storage, processing, modeling and remote sensing (i.e., mapping) have improved incredibly. All of a sudden, it is possible to measure and store all topographical features of the U.S. — it has been done. Throw in analytical servers able to process trillions of calculations in seconds, and processing massive amounts of data becomes relatively easy. Meanwhile, the science around flood modeling, including meteorology, hydrology and topography, has developed in a way that the new geospatial information and processing power can be used to produce models with real predictive capabilities. These are not your grandfather’s flood maps. There are now models and analytics that provide estimates for frequency AND severity of flood loss for a specific location, an incredible leap forward from zone or ZIP code averaging. Like baseball, flood insurance is a game of inches. And now it’s also a game that can be played and profited from by astute insurance professionals.
For the underwriting of insurance, having dependable frequency and severity loss estimates at a location level is gold. There is no single flood model that will provide all the answers, but there are definitely enough data, models and information available to determine frequency and severity metrics for flood to enable underwriters to segment exposure effectively. Low-, moderate- and high-risk exposures can be discerned and segregated, which means risk-based, actuarial pricing can be confidently implemented. The available data and risk models can also drive the design of flood mitigation actions (with accurate credit incentives attached to them) and marketing campaigns.
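Why are location-level frequency and severity estimates "gold"? Because together they give the actuarial pure premium: expected annual loss is simply frequency times severity. A minimal sketch, with all numbers invented for illustration:

```python
# Minimal illustration of actuarial pure premium = frequency x severity.
# The frequencies and severities below are invented for illustration only.
def pure_premium(annual_frequency: float, expected_severity: float) -> float:
    """Expected annual flood loss for a location."""
    return annual_frequency * expected_severity

# A low-risk location: a 1-in-500-year flood chance with modest damage.
low_risk = pure_premium(0.002, 15_000)    # -> $30/year expected loss

# A high-risk location: a 1-in-20-year flood chance with heavy damage.
high_risk = pure_premium(0.05, 80_000)    # -> $4,000/year expected loss

print(low_risk, high_risk)
```

Two orders of magnitude separate these two hypothetical locations – which is exactly why a flat zone-averaged rate misprices both, and why location-level segmentation supports actuarially sound pricing.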
With the new generation of models, all three types of flooding can be evaluated, either individually or as a composite, and have their risk segmented appropriately. The available geospatial datasets and analytics support estimations of flood levels, flood depths and the likelihood of water entering a property by knowing the elevation of the structure, floors of occupancy and the relationship between the two.
In the old days, if your home was in a FEMA A or V zone but you were possibly safe from their “base flood” (a hypothetical 1% annual probability flood), you’d have to spend hundreds of dollars to get an elevation certificate and then petition the NFIP, at further cost, hoping to get a re-designation of your home. Today, it’s not complicated to place the structure in a geospatial model and estimate flood likelihood and depths in a way that can be integrated with actuarial information to calculate rates – each building getting rated based on where it is, where the water is and the likelihood of the water inundating the building.
In fact, the new models have essentially made the FEMA flood maps irrelevant in flood loss analysis. We don’t need to evaluate what flood zone the property is in. We just need an address. Homeowners don’t need to spend hundreds of dollars for elevation certificates; the models already have that data stored. Indeed, much of the underwriting required to price flood risk can be handled with two to three additional questions on a standard homeowners insurance application, saving the homeowner, agent and carrier time and frustration. The process we envision would create a distinctive competitive advantage for the enterprising carrier, one that creates and captures real value throughout the distribution chain, if done correctly. This is what disruption looks like before it happens.
In summary, the tools are now available to measure and price flood risk. Capital is flooding (sorry, we couldn’t help ourselves) into the insurance sector, seeking opportunities to be put to work. While we understand the industry’s skepticism about handling flood, the risk can be understood well enough to create products that people desperately need. Insuring flood would be a shot in the arm to an industry that has become stale at offering anything new. Billions of dollars of premium are waiting for the industry to capitalize on. One thing the current data and analytics make clear is this: There are high-, medium- and low-risk locations waiting to be insured based on actuarial methods. As long as flood insurance is being rated by zone (whether FEMA zone or ZIP code), there is cherry-picking to be done.
Who wants to get their ladder up the cherry tree first? And who will be last?