
Parametric Insurance: 12 Firms to Know

If you’ve read Part One of this series, you’ll have had your crash course in parametric insurance and can now call yourself an expert (relatively speaking). In that article, I promised a short overview of the 12 companies I think are worth looking at as examples of how parametric insurance works and of what the future might look like. But first, a quick summary of Part One:

  • Parametric insurance is emerging as a way to provide financial protection against losses that are often hard, or even impossible, to get insured for.
  • Parametric insurance has been around for over 20 years. Today, it makes up around 15% of issued catastrophe bonds in a $100 billion market.
  • Access to more data, and more reliable ways of transferring that data, is opening up an opportunity to take parametric insurance to companies small, medium and large. That access is also offering new ways to address the global insurance gap.

Now, here’s my list, ordered approximately by theme, rather than any specific ranking. Leaderboards have their place, but outside the established market for catastrophe bonds the area of parametric insurance is still too diverse, and broadly unproven, to attempt to rank companies. At least not yet.

AIR Worldwide (Owned by Verisk)

AIR’s U.S. hurricane catastrophe model was used for the first major catastrophe bond, issued by USAA, in 1997, providing $400 million of protection. AIR continues to provide one of the two most commonly used suites of catastrophe models. Reinsurers, brokers and insurers run cat models to price and manage natural catastrophe risk in most major countries. AIR has probably been the most frequently used modeling agent for U.S. hurricane catastrophe bonds in the last decade, and the company provided the analysis for the WHO pandemic catastrophe bond.


RMS

RMS and AIR have been jostling for position as the leader in providing analysis for insurance-linked securities (ILS) and catastrophe bonds to the global insurance industry. The RMS capital markets team has been behind some of the most complex and innovative catastrophe bonds, and RMS has been particularly strong in creating well-designed parametric triggers. Examples of bonds that RMS has worked on include Golden Goal, which provided $262 million of terrorism cover for FIFA for the 2006 World Cup. RMS was also behind the New York Metropolitan Transportation Authority (MTA) $200 million storm surge bond in 2013, issued following MTA’s unexpected and largely uninsured losses of $4.7 billion from Hurricane Sandy.

As I mentioned in Part One, Artemis has the most comprehensive directory of catastrophe bonds issued, if you want to learn more about what RMS, AIR and EQECAT (since acquired by CoreLogic) have worked on.

See also: Growing Case for Parametric Coverage  

New Paradigm Underwriters

This is one version of what the future may look like as parametric insurance moves upstream to the corporate market. Founded in 2013, New Paradigm pre-dates the term “insurtech,” but as an MGA using new technology in a smart way it is one of the pioneers in this space. The company offers supplemental U.S. hurricane insurance for businesses that want added coverage for exclusions in the conventional policies on offer from insurers. The company’s first product used an index derived from recorded wind speed as the payout trigger, and the company is now diversifying into terrorism cover.

A quick side note here. It was discovered early in the history of parametric hurricane bonds that the conventional windspeed recorders relied on to measure windspeed (and hence to determine whether a bond had triggered) had a tendency to blow away when conditions got unusually gusty. New Paradigm, and others structuring windstorm parametric triggers, now use data from the WeatherFlow network, which has installed over 100 windspeed recorders designed to survive winds of 140 mph while requiring no external power.
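To make the trigger mechanics concrete, here is a minimal sketch of a tiered wind-speed trigger in Python. The thresholds, payout fractions and function name are all invented for illustration; real contracts specify agreed measurement stations, time windows and tiers in far more detail:

```python
def wind_payout(peak_gust_mph: float, limit: float) -> float:
    """Toy tiered parametric trigger: pay a fraction of the policy limit
    based on the peak gust recorded at an agreed station. Thresholds and
    fractions are illustrative, not any insurer's actual terms."""
    tiers = [(150, 1.00), (130, 0.50), (110, 0.25)]  # (mph threshold, fraction of limit)
    for threshold, fraction in tiers:
        if peak_gust_mph >= threshold:
            return limit * fraction
    return 0.0
```

Under these made-up tiers, an insured with a $1 million limit would receive $250,000 for a 120 mph peak gust, and nothing if the gust stays below 110 mph. No claims adjuster is needed: the recorded wind speed alone determines the payment.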


FloodFlash

Adam Rimmer and Ian Bartholomew, the founders of FloodFlash, started their careers at RMS and got their passion for parametric insurance while working on the New York MTA bond. The company’s seed funding came from U.K. investor and incubator Insurtech Gateway three years ago. Still relatively modestly funded (£2 million, according to Crunchbase), FloodFlash is one of the best examples of parametric insurance being used today to provide a solution where traditional insurers have declined cover.

In the U.K., most homeowners get flood protection thanks to the government’s Flood Re initiative, but commercial businesses are excluded. FloodFlash models the flood risk at a high resolution and sells building-specific parametric insurance. The company operates as an MGA and sells via brokers. A FloodFlash sensor is attached to a building and triggers a payment almost instantly when the water rises to a pre-agreed depth. With hundreds of clients already signed up in the U.K., FloodFlash proved its worth after Storm Ciara hit the country earlier this year — “the fastest payout by a parametric insurance product that I’ve ever seen,” according to Steve Evans of Artemis.

Global Parametrics

Global Parametrics was launched in 2016, the brainchild of Professor Jerry Skees, and is run today by Hector Ibarra, formerly of the World Bank and Partner Re. With funding that includes support from the U.K. government’s DFID and Germany’s KfW, the company is building parametric products to support organizations and people in the developing world who lack insurance coverage or can’t afford it. Global Parametrics has commissioned its own models for climate-related losses around the world and is building out partnerships with other leading providers. Its customers include microfinance lending organizations and NGOs such as VisionFund. The company provides funding through disaster recovery payments, which can be used to help get vulnerable communities back on their feet after a flood, drought or other natural disaster. The team is well connected and has strong technical chops; definitely one to watch. Catch Hector live or listen to the recording of our chat on our BrightTALK channel.

Descartes Underwriting

It’s one thing to build the technology for parametric insurance, but someone needs to have the confidence to underwrite it. Descartes is a Paris-based specialty insurer and is open-minded about what it covers as long as it gets “proper data.” Coverage so far has included property damage, business interruption from natural catastrophes, losses from droughts and losses from excessively high or low temperatures. Descartes has covered industries such as agriculture, mining, construction and renewable energy, and supports banks in protecting their loans and assets. Sebastien Piguet, co-founder and head of underwriting at Descartes, spoke to us on stage in April last year, and you can hear him on Episode 23 of the InsTech London podcast.

Jumpstart Recovery

Getting claims paid from traditional insurance cover can take weeks, or even longer after a major catastrophe, but the costs kick in immediately. California earthquake insurance is expensive, and there are few affordable alternatives to the rather limited, state-backed California Earthquake Authority. Kate Stillwell, an engineer and earthquake modeler, started Jumpstart in 2015 with the aim of providing much-needed funds to increase the financial resilience of communities and provide economic stimulus immediately after an earthquake. Jumpstart accesses the peak ground velocity of the earthquake recorded by the USGS (U.S. Geological Survey) and aims to pay claims within 24 hours. The cover is currently limited to $10,000 per person, for residents of California only, and users need to certify, by text, that there has been damage and loss. Jumpstart has been supported by SCOR’s Channel Syndicate.


Exante

One of the best ways to reduce loss from natural disasters is to provide funds to help people act before an event even happens. There is a lot of work going on to improve resilience against natural disasters through improvements in construction, often at a city or state level, but actions taken by individuals before disasters hit can also make a big difference. No one’s yet figured out how to forecast earthquakes, but hurricanes are another matter: Chris Lee, Dublin-based founder of Exante, launched his company in 2019 with backing from Shipyard Technology Ventures. Its aim is to help increase hurricane resilience for companies and their staff with a new approach to using parametric cover. Exante has designed a payout approach that is developed and calibrated using near-term forecasts of U.S. hurricane severity and landfall. If a hurricane looks likely to make landfall, funds will be released in the hours before it strikes. Payments will be made directly to Exante’s clients’ employees to help cover the costs of protecting their homes or evacuation expenses. It’s early days yet for the company, but contingency finance for risk prevention is a smart way to reduce losses.

African Risk Capacity

The African Risk Capacity (ARC) is a specialized agency of the African Union established to help governments improve their abilities to plan for, prepare for and respond to extreme weather events and natural disasters. ARC is using parametric triggers to provide contingency funding, and ARC Insurance creates pools of risk across Africa, which are then insured in the global markets. One of ARC’s parametric covers had a wobble in 2016, when a major drought in Malawi caused a large loss for farmers but, because of a problem in how the modeled index was set up, the cover didn’t trigger a payment as intended. ARC ended up agreeing to pay a contribution toward the costs, but the wobble is a reminder that parametric insurance is sensitive to modeling assumptions and data, and that payouts may not always match the financial losses suffered (a problem termed “basis risk”).
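The Malawi episode is, at heart, a story about basis risk, and it can be shown in a few lines of Python. Every number below is invented; the point is only that an index calibrated slightly off can leave a large real loss entirely uncompensated:

```python
def index_payout(observed_index: float, trigger: float, limit: float) -> float:
    """Binary parametric cover: pay the full limit only when the modeled
    index crosses the agreed trigger level. All values are hypothetical."""
    return limit if observed_index >= trigger else 0.0

# A severe drought year: real losses are large, but the modeled index was
# set up in a way that understates conditions on the ground, so the cover
# never triggers. The shortfall is the basis risk borne by the insured.
actual_loss = 8_000_000
payout = index_payout(observed_index=0.62, trigger=0.70, limit=10_000_000)
shortfall = actual_loss - payout
```

Here the index falls just short of the trigger, the payout is zero, and the insured absorbs the full $8 million loss. Careful index design, and fallback clauses for exactly this situation, are how structurers try to keep that gap small.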

See also: Travel Insurance: An Exemplary Experience  


Blink

Paul Prendergast set up Blink in 2016 to provide flight cancellation insurance and earlier this year announced the launch of “Blink Parametric.” Back in the normal world we knew a few weeks ago, Blink Travel offered a cash payout or vouchers for hotel stays to customers who missed a flight, all fully automated. A recent development is Blink Energy & IoT, aimed at domestic appliance insurance and industrial IoT, offering protection against problems such as an unexpected increase or decrease in energy usage. Blink’s partners include Generali, Munich Re and Manulife. Paul reckons he’ll have three million customers by the end of this year.


Arbol

Arbol was set up in 2018 by former banker and commodities trader Siddhartha Jha to provide weather-related crop cover for farmers and others. The team is using highly localized data sets accessed from IoT sensors and satellites to create bespoke cover down to individual field level and is selling it through an established network of insurers and brokers. The market in the U.S. for agriculture insurance is limited by government subsidies, but demand globally is significant, and a lack of crop insurance, particularly in developing countries, is one of the biggest contributors to the global insurance protection gap. I’ll be recording an interview with Siddhartha later this year.


Qomplx

Formerly known as Fractal, Qomplx has the experience, beefy technology and access to data needed to rapidly analyze risk across many industries. The insurance business is headed by President Alastair Speare-Cole, previously chief underwriting officer of Qatar Re, CEO of broker JLT Re and chairman of Aon Benfield Securities. Qomplx has a number of initiatives in the pipeline. It recently launched its first parametric product, WonderCover, backed by Chaucer and offering cyber and terrorism cover for small and medium-sized enterprises (SMEs). Alastair and his team supported our live chat event on April 30 on our BrightTALK channel.

In conclusion…

It’s not possible to get every company offering parametric insurance onto a list of 12, and this is certainly not intended to be the definitive top 12. (Although, unlike some lists of “top insurtech companies” I’ve come across, at least all these companies are still in business at the time of writing.) None of the main brokers are mentioned, but the big three (or should that be two?) are key in working with insurers and insureds to help communicate and structure all but the smallest risks. As a supporter of InsTech London, Aon gets a shout out here as one of the longest-standing experts in this field.

There are other companies we’re watching closely and have had on stage at InsTech London. Please let me know of other (decent) companies you are aware of with parametric solutions.

And look out for more live events on this topic soon. I’ll also be hosting chats on post-pandemic coverages. Registration on BrightTALK.

Finally, if you are a company that would like to be considered for a future article, being a member of InsTech London or having a great photo of your equipment or your tech….

If you enjoyed this, found it useful or maybe both, then you may find something of interest in my other articles. You can also hear me talking to the industry’s leaders and innovators each week on the InsTech London podcast (available on Apple Podcasts, Spotify, etc.). And for a weekly check-in on what’s going on and what we think about it, you can get our two-minute, handcrafted newsletter delivered to you each Wednesday morning – sign up here.

An Opportunity in Resilience Analytics?

In my post last month, I discussed why the insurtech revolution should be focusing more on addressing the protection gap, thereby growing the pool of insurable risks, rather than figuring out how best to eat the insurance incumbents’ lunch.

At a conference in February, Tom Bolt of Lloyd’s noted that an increase of 1% in insurance penetration can lead to a 13% drop in uninsured losses and a 22% drop in taxpayers’ share of the loss. The key to increasing penetration is lowering distribution costs to make products more affordable. That is where insurtech can come in. Many recent startups have business models looking to tackle the excessive intermediation costs that exist in the current insurance value chain.

Sadly, when a catastrophe strikes areas of low insurance penetration, those communities not only suffer from the difficulties of having to seek aid—which can take three-plus months to reach affected zones—but also face the prospect of a significant drag on economic growth. It is unsurprising, therefore, that governments in vulnerable countries are keen to improve their “resilience” and seek solutions to better prepare themselves for catastrophes by working with the likes of the World Bank, the UN and the recently established Insurance Development Forum (IDF). Interestingly, AIR Worldwide recently announced its Global Resilience Practice, which will be led by former U.S. presidential adviser Dr. Daniel Kaniewski.

See also: InsurTech Need Not Be a Zero-Sum Game  

As well as providing low-cost distribution models in new markets, a related opportunity I see for insurtech is working together with the insurance industry in the growing field of resilience analytics. As Robert Muir-Wood recently pointed out on RMS’ blog, the claims data gathered by insurers — which historically has been used for the pricing and managing of risk — have the potential to also be used to reduce the potential for damage before the event. Insurtech companies could work with government authorities to pool this claims data, leveraging it with other key data from external sources and then using the results to influence urban resilience strategies. There are inevitable doubts over the willingness of insurers to share their data, but agile and thoughtful startups are likely better placed to be able to find insights in a world of abundant unstructured data than the more technologically challenged incumbents.

The current size of the protection gap is a failure of the insurance industry, and any companies that can help address it will not only be first movers in new markets but will also be adding social value and much-needed resilience to vulnerable communities all over the world.

Competing in an Age of Data Symmetry

For centuries, people have lived in a world where data was largely proprietary, creating asymmetry. Some had it. Others did not. Information was a currency. Some organizations held it, and profited from it. We are now entering an era of tremendous data balance — a period of data symmetry that will rewrite how companies differentiate themselves.

The factors that move the world toward data symmetry are time, markets, investment and disruption.

Consider maps and the data they contained. Not long ago, paper maps, travel books and documentaries offered the very best views of geographic locations. Today, Google allows us to cruise nearly any street in America and get a 360° view of homes, businesses and scenery. Electronic devices guide us along the roadways and calculate our ETA. A long-established map company such as Rand McNally now has to compete with GPS up-and-comers, selling “simple apps” with the same information. They all have access to the same data. When it comes to the symmetry of geographic data, the Earth is once again flat.

Data symmetry is rewriting business rules across industries and markets every day. Insurance is just one industry where it is on the rise. For insurers to overcome the new equality of data access, they will need to understand both how data is becoming symmetrical and how they can re-envision their uniqueness in the market.

It will be helpful to first understand how data is moving from asymmetrical to symmetrical.

Let’s use claims as an example. Until now, the insurer’s best claims data was found in its own stockpile of claims history and demographics. An insurer that was adept at managing this data and applied actuarial science would find itself in a better position to assess risk. Competitively, it could rise to the top of the pack by pricing appropriately and acquiring appropriately.

Today, all of that information is still very relevant. However, in the absence of that information, an insurer could also rely upon a flood of data streams coming from other sources. Risk assessment is no longer confined to historical data, nor is it confined to answers to questions and personal reports. Risk data can be found in areas as simple as cell phone location data — an example of digital exhaust.

Digital exhaust as a source of symmetry

Digital exhaust is the data trail that all of us leave on the digital landscape. Recently, the New York City Housing Authority wished to determine if the “named” renter was the one actually living in a rent-controlled apartment. A search of cell phone tower location records, cross-referenced to a renter’s information, was able to establish the validity of renter occupation. That is just one example of digital exhaust data being used as a verification tool.

Another example can be found in Google’s Waze app. Because I use Waze, Google now holds my complete driving history — a telematics treasure trove of locations, habits, schedules and preferences. The permissions language allows Waze to access my calendars and contacts. With all of this, in conjunction with other Google data sets, Google can create a fairly complete picture of me. This, too, is digital exhaust. As auto insurers are proving each day, cell phone data may be more informative to proper pricing than previous claims history. How long is it until auto insurers begin to look at location risk, such as too much time spent in a bar or frequent driving through high-crime ZIP codes? If ZIP codes matter for where a car is parked each night, why wouldn’t they matter for where it spends the day?

Data aggregators as a source of symmetry

In addition to digital exhaust, data aggregators and scoring are also flattening the market and bringing data symmetry to markets. Mortgage lenders are a good example from outside the industry. Most mortgage lenders pay far more attention to comprehensive credit scores than an individual’s performance within their own lending operation. The outside data matters more than the inside data, because the outside data gives a more complete picture of the risk, compiled from a greater number of sources.

Within insurance, we can find a dozen or more ways that data acquisition, consolidation and scoring are bringing data symmetry to the industry. Quest Diagnostics supplies scored medical histories and pharmaceutical data to any life insurer willing to pay for it. RMS, AIR Worldwide, EQECAT and others turn meteorological and geographical data into shared risk models for P&C insurers.

That kind of data transformation can happen in nearly any stream of data. Motor vehicle records are scored by several agencies. Health data streams could also be scored for life and health insurers. Individual scores could be automatically evaluated and combined into overall scores. Insurers could simply dial up or dial down their acceptance based on their risk tolerance and pricing. Data doesn’t seem to stay hidden. It has value. It wants to be collected, sold and used.
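As a sketch of that “dial up or dial down” idea, here is a toy Python example that blends hypothetical external sub-scores into one weighted overall score and applies a tolerance threshold. The score names, weights and 0–100 scale are assumptions for illustration, not any vendor’s actual scoring scheme:

```python
def overall_score(subscores: dict[str, float], weights: dict[str, float]) -> float:
    """Blend externally sourced sub-scores (0 = best risk, 100 = worst)
    into one weighted overall score. Names and weights are hypothetical."""
    total_weight = sum(weights[name] for name in subscores)
    return sum(subscores[name] * weights[name] for name in subscores) / total_weight

def accept(score: float, tolerance: float) -> bool:
    """Dial-up/dial-down acceptance: write any risk at or below the
    insurer's current tolerance setting."""
    return score <= tolerance

applicant = {"motor_vehicle_record": 40.0, "health_stream": 60.0}
weights = {"motor_vehicle_record": 2.0, "health_stream": 1.0}
score = overall_score(applicant, weights)  # (40*2 + 60*1) / 3
```

An insurer with a tolerance of 50 would accept this applicant; tightening the dial to 40 would decline the same risk, with no change to the underlying data.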

Consider all the data sources I will soon be able to tap into without asking any questions. (This assumes I have permissions, and barring changes in regulation.)

  • Real-time driving behavior.
  • Travel information.
  • Retail purchases and preferences.
  • Mobile statistics.
  • Exercise or motion metrics.
  • Household or company (internal) data coming from connected devices.
  • Household or company (external) data coming from geographic databases.

These data doors, once opened, will be opened for all. They are opening on personal lines first, but they will open on commercial lines, as well.

Now that we have established that data symmetry is real, and we see how it will place pressure upon insurers, it makes sense to look at how insurers will use data and other devices to differentiate themselves. In Part 2 of this blog, we’ll look at how this shift in data symmetry is forcing insurers to ask new questions. Are there ways they can expand their use of current data? Are there additional data streams that may be untapped? What does the organization have or do that is unique? The goal is for insurers to innovate around areas of differentiation. This will help them rise above the symmetry, embracing data’s availability to re-envision their uniqueness.

No, Insurance Will Not Be Disrupted

I recently had the pleasure of attending the Insurance Disrupted conference in Palo Alto (put on by the Silicon Valley Innovation Center in partnership with Insurance Thought Leadership). This was the single best insurance conference I have ever attended. I was surrounded by hundreds of hopeful, smart, problem-solving professionals from disparate backgrounds and industries all trying to make a difference in insurance without money being the prime motivator.

I was so encouraged by what transpired at the conference, the connections that I made and what I believe would be the promise of a new future that I began to pen this article on my flight home. But something just did not sit right with me as I wrote. Three weeks have gone by, and I am beginning to understand why I felt the way I did; at the end of the day, insurance will NOT be disrupted.

For all the promise of big data, the Internet of Things, autonomous vehicles and peer-to-peer insurance, there was nothing presented at this conference that struck me as disruptive in the way the tech industry is generally thinking of the term today. When technologists think of disruption, they immediately point to Uber and Airbnb, which disrupted the taxi/livery and travel accommodations industries. The taxi industry is literally fighting for its survival. No, that will not be the fate of insurance. Insurance will be a lot more difficult to shake up or disrupt.

Here’s why:

  1. At the core, insurance customers are leasing the potential to access capital. That capital is sitting in predominantly liquid assets. Not real estate, not taxi medallions. How do you make a big pile of money irrelevant?
  2. The modern form of the industry is 300 years old, and the math is pretty solid (that’s why they call it actuarial science). We sell a product whose costs are unknown at the time of purchase. That means scale and immense capital are required to cover worst-case scenarios, which rules out any new business model without that potential. Peer-to-peer providers just won’t be able to get sufficient scale to efficiently use capital to cover risk. And if they aggressively pursue scale, then they just become another insurance company, so what’s the point?
  3. Getting a better glimpse into those unknown expenses can create massive competitive advantages. This is where big data and the IoT creators are looking to disrupt, as big data and IoT will generate incredibly large data sets that can be used to accurately predict, avoid and mitigate future losses. I have no doubt that these new technologies will make an impact on the industry, but I am less convinced of their disruptive nature. Insurers have already established non-actuarial, big data departments where fraud detection and credit scoring are just two of many predictive models being created. IoT devices will slowly be adopted by most insurers as they look to get competitive edges, but the follow-the-leader paradigm of the industry will mean that any edge will disappear quickly, and we will all be running hard just to stay in place. These technologies are impressive. I would classify them as solid innovations for the industry, but not disruptive. (Disclaimer: I bought a smart battery from Roost.)
  4. Autonomous vehicles represent the one area where some chaos can occur. But notice I use the word “chaos” and not “disruption.” If autonomous vehicles can live up to expectations, then they will be a great service to society, reducing deaths and increasing efficiency. Risk will transfer from a personal lines business to commercial lines, and that could be chaotic for heavy personal lines auto writers such as State Farm and Progressive. But will this be disruptive? Will State Farm or Progressive be fighting for their survival the way that medallion owners in the New York City taxi system are? Again, I doubt it. State Farm is sitting on about $70 billion in surplus capital, and it generally writes at a 100 combined ratio, working the float and cash flow model. I think State Farm and large auto insurers like them will be just fine, and technologies such as autonomous vehicles will be more of an annoyance than an existential threat. And like others, I don’t think autonomous cars are nearly as ready to take over our roads as many seem to think.
  5. For better or worse, state-by-state regulation of insurance is intense and nebulous. Ask Zenefits. The battlefield is already uncertain, and scrutiny by a regulator with political ambitions can kill your disruptive product quickly. Any technology you create that could benefit the majority of buyers while raising the price for some other group would, on its own, be grounds for a regulator to squash you as that vocal minority raises its collective voice. In Florida, the state may even create a company to compete against you, writing business at a loss. Insurance regulation might be the ultimate disruption killer.
  6. There was not one presentation on natural catastrophes, which happen to be my area of expertise. How we underwrite, manage and think about natural catastrophe risk has changed quite a bit over the past 20 years. In fact, CAT models have been and may continue to be the most disruptive force in insurance, and yet there is little technology can do to disrupt that area of the industry. I would have been very excited if we had discussions about new business models to help customers with the problems the industry is currently facing with getting adequate flood or earthquake cover to homeowners. If someone had proposed a new product that removed the exclusions of flood and earthquake from the homeowners policy, now, THAT would be disruptive! Alas, nothing on NatCat, and so we will continue to have thousands of homeless families following big storms and earthquakes.

I don’t think insurance will be disrupted, not in the way folks from Silicon Valley are used to doing it. But the future of insurance will look very different than today. Very digital. Streamlined. Less clunky, more efficient. If “disruption” comes to insurance, it is likely going to require the replacement of the current set of leaders with new ones cultured in this digital age and influenced by the successes of technology to make change happen to their business models.

Paul Vandermarck from RMS (a CAT modeling vendor) perhaps summed it up best when he said that no matter how all of this change to the industry plays out, we know of one sure winner: the customer. And that’s how it should be.

Riding Out the Storm: the New Models

In our last article, When Nature Calls, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. Those massive losses were a direct result of an industry overconfident in its ability to gauge the frequency and severity of catastrophic events. Insurers were using only history and their limited experience as their guide, resulting in a tragic loss of years’ worth of policyholder surplus.

The turmoil of this period cannot be overstated. Many insurers went insolvent, and those that survived needed substantial capital infusions to continue functioning. Property owners in many states were left with no affordable options for adequate coverage and, in many cases, were forced to go without any coverage at all. The property markets seized up. Without the ability to properly estimate how catastrophic events would affect insured properties, it looked as though the market would remain broken indefinitely.

Luckily, in the mid-1980s, two people on different sides of the country were already working on solutions to this daunting problem. Both had asked themselves: If the problem is a lack of data because of the rarity of recorded historical catastrophic events, could we plug the historical data available now, along with mechanisms for how catastrophic events behave, into a computer and then extrapolate the full picture of the historical data needed? Could we then take that data and create a catalog of millions of simulated events occurring over thousands of years and use it to tell us where and how often we can expect events to occur, as well as how severe they could be? The answer was unequivocally yes, but with caveats.
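The simulation idea described above can be caricatured in a few lines of Python: draw event counts and severities at random for many simulated years, then read off how often a given loss level is exceeded. The arrival rate and severity distribution here are invented for illustration; real catastrophe models build their event catalogs from physical modeling of storms and quakes, not a single statistical distribution:

```python
import random

def annual_loss(rng: random.Random, events_per_year: float = 0.6) -> float:
    """One simulated year: Poisson event arrivals (via exponential gaps)
    with a heavy-tailed severity per event. All parameters illustrative."""
    t, total = rng.expovariate(events_per_year), 0.0
    while t < 1.0:
        total += rng.paretovariate(1.5) * 1e8  # loss per event, USD
        t += rng.expovariate(events_per_year)
    return total

def exceedance_prob(losses: list[float], threshold: float) -> float:
    """Fraction of simulated years whose total loss exceeds the threshold."""
    return sum(loss > threshold for loss in losses) / len(losses)

rng = random.Random(7)
catalog = [annual_loss(rng) for _ in range(10_000)]  # 10,000 simulated years
p_1bn = exceedance_prob(catalog, 1e9)                # P(annual loss > $1bn)
```

Even this toy version shows the shift in mindset: instead of asking “what has history shown us?”, the modeler asks “across ten thousand plausible years, how often does a loss this big occur?”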

In 1987, Karen Clark, a former insurance executive out of Boston, formed Applied Insurance Research (now AIR Worldwide). She spent much of the 1980s with a team of researchers and programmers designing a system that could estimate where hurricanes would strike the coastal U.S., how often they would strike and ultimately, based on input insurance policy terms and conditions, how much loss an insurer could expect from those events. Simultaneously, on the West Coast at Stanford University, Hemant Shah was completing his graduate degree in engineering and attempting to answer those same questions, only he was focusing on the effects of earthquakes occurring around Los Angeles and San Francisco.

In 1988, Clark released the first commercially available catastrophe model for U.S. hurricanes. Shah released his earthquake model a year later through his company, Risk Management Solutions (RMS). Their models were incredibly slow, limited and, according to many insurers, unnecessary. However, for the first time, loss estimates were being calculated based on actual scientific data of the day along with extrapolated probability and statistics in place of the extremely limited historical data previously used. These new “modeled” loss estimates were not in line with what insurers were used to seeing and certainly could not be justified based on historical record.

Clark’s model generated hurricane storm losses in the tens of billions of dollars while, up until that point, the largest insured loss ever recorded did not even reach $1 billion! Insurers scoffed at the comparison. But all of that quickly changed in August 1992, when Hurricane Andrew struck southern Florida.

Using her hurricane model, Clark estimated that insured losses from Andrew might exceed $13 billion. Even in the face of heavy industry doubt, Clark published her prediction. She was immediately derided and questioned by her peers, the press and virtually everyone else, who said her estimates were unprecedented and far too high. In the end, though, when actual losses, as recorded by Property Claims Services, exceeded $15 billion, a virtual catastrophe-model feeding frenzy began. Insurers quickly changed their tune and began asking AIR and RMS for model demonstrations. The property insurance market would never be the same.

So what exactly are these revolutionary models, now affectionately referred to as “cat models”?

Regardless of the model vendor, every cat model uses the same three components:

  1. Event Catalog – A catalog of hypothetical stochastic (randomized) events, which informs the modeler about the frequency and severity of catastrophic events. The events contained in the catalog are based on millions of years of computerized simulations using recorded historical data, scientific estimation and the physics of how these types of events are formed and behave. Additionally, for each of these events, associated hazard and local intensity data is available, which answers the questions: Where? How big? And how often?
  2. Damage Estimation – The models employ damage functions, which describe the mathematical relationship between event intensity and building damage, covering structural and nonstructural components as well as contents, given the local intensity to which each structure is exposed. The damage functions have been developed by experts in wind and structural engineering, are based on published engineering research and analyses, and have been validated against extensive damage surveys undertaken in the aftermath of catastrophic events and billions of dollars of actual industry claims data.
  3. Financial Loss – The financial module calculates the final losses after applying all limits and deductibles on a damaged structure. These losses can be linked back to events with specific probabilities of occurrence. Now an insurer not only knows what it is exposed to, but also what its worst-case scenarios are and how frequently those may occur.
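
The three components above chain together naturally, and a minimal sketch makes the flow concrete. Everything here is an invented placeholder for illustration (the vulnerability curve, wind speeds, event rates and policy terms), not taken from any commercial model.

```python
# Component 2: a toy damage function mapping local wind speed to the
# fraction of building value lost. Real curves come from engineering
# research; this quadratic shape is purely illustrative.
def damage_ratio(wind_mph: float) -> float:
    if wind_mph < 75:
        return 0.0
    return min(1.0, ((wind_mph - 75) / 100) ** 2)

# Component 3: the financial module, applying deductible and limit
# to the ground-up loss on a single damaged structure.
def insured_loss(value: float, wind_mph: float,
                 deductible: float, limit: float) -> float:
    ground_up = value * damage_ratio(wind_mph)
    return max(0.0, min(ground_up - deductible, limit))

# Component 1: event catalog entries as (event id, annual rate,
# local wind speed at the insured site) - all hypothetical.
catalog = [("EV1", 0.04, 90), ("EV2", 0.01, 130), ("EV3", 0.002, 165)]

# Expected annual loss for a $500k home, $10k deductible, $400k limit:
# weight each event's insured loss by how often it occurs.
eal = sum(rate * insured_loss(500_000, wind, 10_000, 400_000)
          for _, rate, wind in catalog)
print(f"expected annual loss: ${eal:,.2f}")
```

Because each simulated event carries an occurrence rate, the same machinery also yields worst-case scenarios and how frequently they recur, not just an average.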


When cat models first became commercially available, industry adoption was slow. It took Hurricane Andrew in 1992 followed by the Northridge earthquake in 1994 to literally and figuratively shake the industry out of its overconfidence. Reinsurers and large insurers were the first to use the models, mostly due to their vast exposure to loss and their ability to afford the high license fees. Over time, however, much of the industry followed suit. Insurers that were unable to afford the models (or who were skeptical of them) could get access to all the available major models via reinsurance brokers that, at that time, also began rolling out suites of analytic solutions around catastrophe model results.

Today, the models are ubiquitous in the industry. Rating agencies require model output based on prescribed model parameters in their supplementary rating questionnaires to understand whether or not insurers can economically withstand certain levels of catastrophic loss. Reinsurers expect insurers to provide modeled loss output on their submissions when applying for reinsurance. The state of Florida has even set up a commission, the Florida Commission on Hurricane Loss Projection Methodology, which consists of “an independent body of experts created by the Florida Legislature in 1995 for the purpose of developing standards and reviewing hurricane loss models used in the development of residential property insurance rates and the calculation of probable maximum loss levels.”
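
A "probable maximum loss" of the kind the Florida commission reviews is, mechanically, just a quantile read off the model's simulated annual losses. The sketch below shows the idea with invented loss figures (in $M) over a hypothetical 10,000-year simulation; it is a simplification of how PML is computed in practice.

```python
def pml(annual_losses, return_period_years):
    """Loss exceeded on average once every `return_period_years` years."""
    losses = sorted(annual_losses, reverse=True)
    rank = len(losses) // return_period_years  # exceedance count
    return losses[max(rank - 1, 0)]

# Hypothetical annual losses ($M) over 10,000 simulated years:
# mostly loss-free years, with a thin tail of severe ones.
sim = [0.0] * 9_600 + [5.0] * 300 + [25.0] * 90 + [80.0] * 10

print(pml(sim, 100))    # 100-year PML   -> 25.0
print(pml(sim, 1000))   # 1,000-year PML -> 80.0
```

This is why regulators and rating agencies can ask pointed questions like "can you withstand your 100-year loss?": the model turns a vague worry into a specific number with an attached frequency.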

Models are available for tropical cyclones, extratropical cyclones, earthquakes, tornadoes, hail, coastal and inland flooding, tsunamis and even for pandemics and certain types of terrorist attacks. The first set of models started out as simulated catastrophes for U.S.-based perils, but models now exist globally, covering Europe, Australia, Japan, China and South America.

In an effort to get ahead of the potential impact of climate change, all leading model vendors even provide U.S. hurricane event catalogs, which simulate potential catastrophic scenarios under the assumption that the Atlantic Ocean sea-surface temperatures will be warmer on average. And with advancing technologies, open-source platforms are being developed, which will help scores of researchers working globally on catastrophes to become entrepreneurs by allowing “plug and play” use of their models. This is the virtual equivalent of a cat modeling app store.

Catastrophe models have provided the insurance industry with an innovative solution to a major problem. Ironically, the solution itself is now an industry in its own right, as estimated revenues from model licenses now annually exceed $500 million (based on conversations with industry experts).

But how have the models performed over time? Have they made a difference in the industry’s ability to help manage catastrophic loss? Those are not easy questions to answer, but we believe they have. The chaos from Hurricane Andrew and the Northridge earthquake taught the industry some invaluable lessons. The horrific 2004 and 2005 hurricane seasons saw four hurricanes ravage Florida in a single year, followed by a year in which two major hurricanes struck the Gulf Coast, one of them Hurricane Katrina, the single most costly natural disaster in history. Yet there were no ensuing major insurance company insolvencies. This was a profound success.

The industry withstood a two-year period of major catastrophic losses. Clearly, something had changed. Cat models played a significant role in this transformation. The hurricane losses from 2004 and 2005 were large and painful, but did not come as a surprise. Using model results, the industry now had a framework to place those losses in proper context. In fact, each model vendor’s catalog contains many simulated hurricane events that resemble Hurricane Katrina. Insurers knew, from the models, that Katrina could happen and were therefore prepared for that possible, albeit unlikely, outcome.

However, with the universal use of cat models in property insurance come other issues. Are we misusing these tools? Are we becoming overly dependent on them? Are models being treated as a panacea for vexing business and scientific questions instead of as a simple framework for understanding potential loss?

Next in this series, we will illustrate how modeling results are being used in the industry and how overconfidence in the models could, once again, lead to crisis.