Heading Toward a Data Disaster

On July 6, 1988, the Piper Alpha oil platform exploded, killing 167 people. Much of the insurance sat within what became known as the London Market Excess of Loss (LMX) Spiral, a tightly knit and badly managed web of insurance policies. Losses cascaded up and around the market, and the same insurers were hit again and again. It took 14 years to settle all the claims. The final cost exceeded $16 billion, more than 10 times the initial estimate.

The late 1980s were a bad time to be in insurance. Piper Alpha added to losses already hitting the market from asbestos, storms in Europe and an earthquake in San Francisco. During this period, more than 34,000 underwriters and Lloyd’s Names paid out between £100,000 and £5 million. Many were ruined.

Never the same again

In the last 30 years, regulation has tightened, and analytics have improved significantly. Since 1970, 19 of the largest 20 catastrophes were caused by natural hazards. Only one, the World Trade Center attack in 2001, was man-made. No insurance companies failed as a result of any of these events. Earnings may have been depressed and capital may have taken a hit, but reinsurance protections behaved as expected.

But this recent ability to absorb the losses from physically destructive events doesn’t mean that catastrophes will never again be potentially fatal for insurers. New threats are emerging. The modeling tools of the last couple of decades are no longer sufficient.

Lumpy losses

Insurance losses are not evenly distributed across the market. Every year, one or more companies still suffer losses out of all proportion to their market share. They experience a “private catastrophe.” The company may survive, but the leaders of the business frequently experience unexpected and unwanted career changes.

See also: Data Prefill: Now You See It, Now You Don’t  

In the 1980s, companies suffered massive losses because the insurance market failed to appreciate the increasing connectivity of its own exposures and lacked the data and the tools to track this growing risk. Today, all companies have the ability to control their exposures to loss from the physical assets they insure. Managing the impact of losses to intangible assets is much harder.

A new class of modelers

The ability to analyze and manage natural catastrophe risk led to the emergence of a handful of successful natural catastrophe modeling companies over the last 20 years. A similar opportunity now exists for a new class of companies to emerge that can build the models to assess the new “man-made” risks.

Risk exposure is increasingly moving toward intangible values. According to CB Insights, only 20% of the value of the S&P 500 companies today is made up of physical assets. It was 80% 40 years ago. These non-physical assets are more ephemeral: reputation, supply networks, intellectual property and cyber assets.

Major improvements in safety procedures, risk assessment and the awareness of the destructive potential of insurance spirals make a repeat of the type of loss seen after Piper Alpha extremely unlikely. The next major catastrophic losses for the insurance market are unlikely to be physical. They will occur because of a lack of understanding of the full reach, and contagion, of intangible losses.

The most successful new analytic companies of the next two decades will include those that are key to helping insurers measure and manage their own exposures to these new classes of risk.

The big data deception

Vast amounts of data are becoming available to insurers, both free open data and tightly held transactional data. Smart use of data is expected to radically change how insurers operate and create opportunities for new entrants into the market. Thousands of companies have already emerged in the last few years offering products to help insurers make better decisions about risk selection, price more accurately, service clients better, settle claims faster and reduce fraud.

But too much data, poorly managed, blurs critical signals. It increases the risk of loss. In less than 20 years, the industry has moved from being blinded by lack of data to being dazzled by the glare of too much.

Data governance processes and compliance officers became widespread in banks after the 2008 credit crunch. Most major insurance companies have risk committees, and all are required to maintain a risk register. Yet ensuring that data management processes are of the highest quality is not always a board-level priority.

Looking at the new companies attracting attention and funding, very few appear to be offering solutions to help insurers solve this problem. Some, such as CyberCube, offer specific solutions to manage exposure to cyber risk across a portfolio. Others, such as Atticus DQPro, are quietly deploying tools across London and the U.S. to help insurers keep on top of their own evolving risks. Providing excellent data compliance and management solutions may not be as attention-grabbing as artificial intelligence or blockchain, but it offers a higher probability of success in an otherwise crowded innovation space.

Past performance is no guide to the future, but, as Mark Twain noted, even if history doesn’t repeat itself, it often rhymes. Piper Alpha wasn’t the only nasty surprise in the last 30 years. Many events had a disproportionate impact on one or more companies. The signs of impending disaster may have been blurred, but they were not invisible. Some companies suffered more than others. Jobs were lost. Each event spawned new regulation. But these events also created opportunities to build companies and products to prevent a future repeat. Looking for a problem to solve? Read on.

1. Enron Collapse (2001)

Enron, one of the largest and most powerful companies in the world, collapsed once shareholders realized the company’s success had been dramatically (and fraudulently) overstated. Insurers lost $3.5 billion from collapsed securities and insurance claims. Chubb and Swiss Re each reported losses of over $700 million. Jeff Skilling, the CEO, was sentenced to 14 years in prison. One of the reasons for poor internal controls was that bonuses for the risk management team were influenced by appraisals from the people they were meant to be policing.

2. Hurricane Katrina and the Floating Casinos (2005)

At $83 billion, Hurricane Katrina is still the largest insured loss ever. No one anticipated the scale of the storm surge, the failure of the levees and the subsequent flooding. There were a lot of surprises. One of the largest contributors to the loss, from property damage and business interruption, was the floating casinos, which were ripped from their moorings and torn apart. Many underwriters had assumed the casinos were land-based, unaware that Mississippi’s 1990 law legalizing casinos had required all gambling to take place offshore.

3. Thai Flood Losses (2011)

After heavy rainfall lasting from June to October 2011, seven major industrial zones in Thailand were flooded to depths of up to 3 meters. The resulting insurance loss is the 13th-largest global insured loss ever ($16 billion in today’s value). Before 2011, many insurers didn’t record exposures in Thailand because the country was never considered a catastrophe-prone area. Data on the location and value of the large facilities of global manufacturers wasn’t offered or requested. The first time insurers realized that so many of their clients had facilities so close together was when the claims started coming in. French reinsurer CCR, set up primarily to reinsure French insurers, was hit with 10% of the total losses. Munich Re, along with Swiss Re, paid claims in excess of $500 million and called the floods a “wake-up call.”

See also: The Problems With Blockchain, Big Data  

4. Tianjin Explosion (2015)

With an insured loss of $3.5 billion, the explosions at the Tianjin port in China are the largest man-made insurance loss in Asia. The property, infrastructure, marine, motor vehicle and injury claims hit many insurers. Zurich alone suffered close to $300 million in losses, well in excess of its market share. The company admitted later that the accumulation was not detected because different information systems did not pick up exposures that crossed multiple lines of business. Martin Senn, the CEO, left shortly afterward.

5. Financial Conduct Authority Fines (2017 and onward)

Insurers now also face the risk of being fined by regulators, and not just over GDPR-related issues. The FCA, the U.K. regulator, levied fines totaling £230 million in 2017. Liberty Mutual Insurance was charged £5 million (failure in claims handling by a third party) and broker Blue Fin £4 million (not reporting a conflict of interest). Deutsche Bank received the largest fine of £163 million for failing to impose adequate anti-money laundering processes in the U.K., topped up later by a further fine of $425 million from the New York Department of Financial Services.

Looking ahead

“We’re more fooled by noise than ever before,” Nassim Nicholas Taleb writes in his book Antifragile.

We will see more data disasters and career-limiting catastrophes in the next 20 years. Figuring out how to keep insurers one step ahead looks like a great opportunity for anyone looking to stand out from the crowd in 2019.

Why Warren Buffett Is Surely Wrong

The Berkshire Hathaway annual report is one of my favorite reads. I always find a mountain of wisdom coupled with humility from one of my mentors, Warren Buffett. He doesn’t know he’s my mentor, but I treat him as one by reading and reflecting on what is in these annual letters. I would recommend you do the same; they’re available free of charge online.

There was a doozy of a sentence in the latest—right there on page 8—discussing the performance of Berkshire’s insurance operations, which make up the core of Berkshire’s business: “We believe that the annual probability of a U.S. mega-catastrophe causing $400 billion or more of insured losses is about 2%.” Sorry, Mr. Buffett, I have some questions for you on that one.

See also: Whiff of Market-Based Healthcare Change?  

After reading those words, I quickly ran a CATRADER industry exceedance probability (EP) curve for the U.S. My analysis included the perils of hurricane (including storm surge), earthquake (including tsunami, landslide, liquefaction, fire-following and sprinkler leakage), severe thunderstorm (which includes tornadoes, hail and straight-line wind), winter storm (which includes wind, freezing temperatures and winter precipitation) and wildfire. I used the latest estimates of take-up rates (the percentage of properties actually insured against these perils) and took into account demand surge (the increase in the price of labor and materials that can follow disasters).

Bottom line, we believe that the probability of $400 billion in insured losses from a single mega-catastrophe in a given year is far more remote—between 0.1% and 0.01% EP, or a 1-in-1,000- to 1-in-10,000-year return period. Buffett is putting this at a 2% EP (a 50-year return period), which is a gulf of difference. In fact, it is worth noting that, by AIR estimates, the costliest disaster in U.S. history in the last 100-plus years—indexed to today’s dollars and today’s exposures—was the Great Miami Hurricane of 1926. Were that to recur today, AIR estimates it would cost the industry roughly $128 billion.
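To see just how wide that gulf is, recall that the return period is simply the reciprocal of the annual exceedance probability. A minimal sketch of the arithmetic, using only the figures quoted above:

```python
# Return period (years) implied by an annual exceedance probability (EP).
def return_period(annual_exceedance_prob):
    return 1.0 / annual_exceedance_prob

for label, p in [("Buffett's estimate", 0.02),
                 ("AIR upper bound", 0.001),
                 ("AIR lower bound", 0.0001)]:
    print(f"{label:20s} {p:.2%} EP -> 1-in-{return_period(p):,.0f}-year loss")

# Buffett's estimate   2.00% EP -> 1-in-50-year loss
# AIR upper bound      0.10% EP -> 1-in-1,000-year loss
# AIR lower bound      0.01% EP -> 1-in-10,000-year loss
```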

So, what is Buffett basing his view on? In 2009, he famously said “Beware of geeks bearing formulas…. Indeed, the stupefying losses in mortgage-related securities came in large part because of flawed, history-based models used by salesmen, rating agencies and investors.” Obviously, this was a reference to the 2008 financial crisis and the use of models by banks. His view on catastrophe models, however, has never been made public.

See also: Why Risk Management Certifications Matter 

Assuming Buffett’s point of view was not model-based, it would be good to know his methods. For example, was it based on some estimate of insured exposures and a probable maximum loss (PML) percentage, as was common in the days before catastrophe models? He didn’t get to that level of detail. It would also be interesting to know whether this is the view of Ajit Jain, his head of insurance operations and now vice chairman of Berkshire.

I have lots of questions, and I am not holding my breath for Buffett to respond to this blog. But it would be great to get your thoughts below.

Using Catastrophes to Rethink Claims (Part 2)

Digital technologies have the power to redesign insurance. A host of benefits lie within the front-end customer experience, but insurers stand to gain just as much from the digitalization of back-end operations, including the claims process.

  • According to the Insurance Information Institute, 64% of insurance premiums (nearly 2/3 of every dollar) are used to pay claims and adjustment expenses.
  • Digital answers to loss reduction and loss prevention stand to yield the greatest profit for insurers.

This is critically important in an era when catastrophic claims loom ever larger and have the potential to do significant damage to an insurer’s bottom line.

In our last blog, we looked at the impact of digital technologies on the claims value chain. The hurricanes, hail storms, tornadoes and fires of 2017 have given all insurers a wake-up call to claims preparedness. We listed out many of the ways that technology has the potential to improve catastrophe management and claims experience.

In this blog, we’ll look more closely at operations. How can operations look at catastrophic events differently? How can they prepare for both internal and external impacts? Can our redesigned claims experience create real value?

A holistic approach to catastrophic claims

From Aug. 12-23, Hurricane Harvey was tracked as it fluctuated between a tropical wave and a tropical storm. It suddenly started gaining strength as a tropical storm on Aug. 24 and, overnight, rapidly intensified into a Category 4 hurricane, making landfall at Rockport, TX, on the 25th.

In the days prior, NASA and many U.S. insurers shared a concern. What would happen to the city of Houston? Hurricane Harvey was not only going to do catastrophic damage, displace thousands of people and potentially disrupt many businesses, it was going to hit the Johnson Space Center, home to the operations for the International Space Station (ISS). The ISS receives as many as 50,000 different commands from Houston in any one month as controllers make corrective maneuvers and steer the station away from harmful debris. Systems could not go down without raising safety concerns for the ISS.

See also: Using Catastrophes to Rethink Claims  

NASA is adept at preparing for anything. When they aren’t launching rockets, monitoring rovers and steering the space station, they are running through scenarios to improve their ability to handle any situation. In this case, they did have a plan. They remained vigilant, and they rode out the storm.

Insurers now have NASA-like capabilities to use data and digital technologies to their advantage. They, too, can look at the entire sphere of a catastrophic event and find ways to protect themselves and their insureds while optimizing every asset.

Pre-Crisis Efforts

Insurers are known for standing by their clients in the aftermath, but what about standing by their clients before a crisis hits?

Last month, in one of the biggest evacuations ever ordered in the U.S., roughly 6.3 million people in Florida — more than one-quarter of the state’s population — were told to clear out from areas threatened by Hurricane Irma. Another 540,000 were directed to move away from the Georgia coast.

Insurers can wait for the government to issue general warnings, but they can take control of communications with their customers by adding digital capabilities. Insurers are armed with ever-improving risk models. They have unprecedented access to customer-specific, property-specific and economic-related data as well as other types of valuable data, including climate and location data, traffic data and telematics data. Insurers are increasingly pulling this big data together for real-time analysis, cognitive learning, insight and decision-making … including to minimize or eliminate claims.

Insurers are also in a better position to help properly categorize storms and crises, leveraging digital technologies such as analytics and artificial intelligence. Using this data, insurers can personalize communications through digital channels that reach the people in the risk zones with highly specific loss and risk prevention measures. Texts, e-mails and automated phone calls with clear directives can greatly help customers who are often panicking, lack detailed information or are indecisive. This is insurance transformation at its best. Preventive communications can be targeted individually, giving insureds all of the vital information they need to protect themselves and their property, an area of increasing customer interest and expectation that we identified in The Rise of the New Insurance Customer and The Rise of the New Small Medium Business Customer research.

In Majesco’s recent white paper, Changing Insurance for the Digital Age, we discuss how the future is not only becoming more foreseeable, but it is also becoming more volatile. The result is that insurers will be shifting from risk coverage to risk prevention. The most sought-after portion of the claims value chain may be the network of data and technologies that prevent or reduce claims. These prevention services will soon be new sources of revenue.

During the Catastrophe

During the 2016 Fort McMurray wildfires in Alberta, Canada, an estimated 88,000 people were displaced from their homes. Most had no idea where to go, and many evacuees found themselves without adequate short-term housing. The digitally enabled insurer can help direct insureds to likely locations of refuge and even offer housing discounts or pre-paid lodging to those who are displaced. This kind of communication and service will build strong loyalty among insureds, while providing marketing with a host of compelling stories with happy endings.

What happens, however, when the insurance organization itself is in the same line of storms, fires or earthquakes?

NASA, during Hurricane Harvey, had contingency plans in place. Should ground teams need to be evacuated, they would be moved temporarily to Round Rock, TX, then for a longer term to the space center in Huntsville, Alabama.

Many insurers had their claims adjuster crews prepared, and drone capabilities in place, but were less prepared if their systems and operations were affected, primarily because of their large on-premises operations. While many insurers have disaster recovery plans, they must ask themselves if their operations can scale rapidly to handle the onslaught of calls, claims and needs. How will operations be managed remotely if staff are unable to get access? Which customer service centers may need to be relocated, and how will they handle the potential of consecutive catastrophic events … like what we saw in Houston and then Florida?

See also: Catastrophes and ‘Do Little’ Syndrome  

Cloud business platforms are perfect for handling claims operational requirements. They are always on, most often are located off-site, are easily scalable and often are managed by someone outside the organization … providing access to critical resources. While systems are attempting to handle hundreds of adjusters at once, customer service may also be handling thousands of customer inquiries. Insurers need the capacity and stability that a cloud platform can supply. Cloud platforms are the perfect foundation for constructing a catastrophe-proof claims value chain.

Post-Crisis Restoration

In addition, the right cloud platforms can handle the plug-and-play technologies that are increasingly used by claims departments for post-claim services. They can handle images and video from drones, photos from mobile phones and tablets, simulation data, cognitive computing decisions (with speed) and a high volume of communications — automated, human and chatbot.

These same technologies will aid in adjuster mobilization, prioritization and routing. In the next blog in this series, we will look specifically at how digital technologies will help claims to build relationships with customers. Where are the effective touchpoints in the claims process? How can an insurer “take control” in the relationship and gently guide policyholders into best practices and safer circumstances? What can insurers do to stand by their policyholders during the restoration or rebuilding process? At the same time, we will look at how insurers in the future will use data and AI to spot fraud and cut costs.

For a deeper look into the data strategies and predictive analytics that are having an impact on the complete insurance value chain, be sure to read Majesco’s report, Winning in a New Age of Insurance: Insurance Moneyball.

Where Have the Hurricanes Gone?

Last year’s hurricane season passed off relatively quietly. Gonzalo, a Category 2 hurricane, hit Bermuda in October 2014, briefly making the world’s headlines, but it did relatively little damage, apart from uprooting trees and knocking out power temporarily to most of the island’s inhabitants.

It is now approaching 10 years since a major hurricane hit the U.S., when four powerful hurricanes — Dennis, Katrina, Rita and Wilma — slammed into the country in the space of a few months in 2005.

It shouldn’t be so quiet. Why? Put simply, the warmer the Atlantic Ocean is, the more potential there is for storms to develop. The temperatures in the Atlantic basin (the expanse of water where hurricanes form, encompassing the North Atlantic Ocean, the Gulf of Mexico and the Caribbean Sea) have been relatively high for roughly the past decade, meaning that there should have been plenty of hurricanes.

There have been a number of reasons put forward for why there has been a succession of seasons when no major storms have hit the U.S. They include: a much drier atmosphere in the Atlantic basin because of large amounts of dust blowing off the Sahara Desert; the El Niño effect; and warmer sea surface temperatures causing hurricanes to form further east in the Atlantic, meaning they stay out at sea rather than hitting land.

Although this is by far the longest run in recent times of no big storms hitting the U.S., it isn’t abnormal to go several years without a big hurricane. “From 2000 to 2003, there were no major land-falling hurricanes,” says Richard Dixon, group head of catastrophe research at Hiscox. “Indeed, there was only one between 1997 and 2003: Bret, a Category 3 hurricane that hit Texas in 1999.”

There then came two of the most devastating hurricane seasons on record in 2004 and 2005, during which seven powerful storms struck the U.S.

The quiet before the storm

An almost eerie calm has followed these very turbulent seasons. Could it be that we are entering a new, more unpredictable era when long periods of quiet are punctuated by intense bouts of violent storms?

“Not necessarily,” Dixon says. “Neither should we be lulled into a false sense of security just because no major hurricanes — that is Category 3 or higher — have hit the U.S. coast.”

There have, in fact, been plenty of hurricanes in recent years — it’s just that very few of them have hit the U.S. Those that have — Irene in 2011 and Sandy in 2012 — had only Category 1 hurricane wind speeds by the time they hit the U.S. mainland, although both still caused plenty of damage.

The number of hurricanes that formed in the Atlantic basin each year between 2006 and 2013 was generally in line with the average for the period since 1995, when ocean temperatures rose relative to the “cold phase” that stretched from the early 1960s to the mid-1990s.

On average, around seven hurricanes have formed each season in the period 2006-2013, roughly three of which have been major storms. “So, although we haven’t seen the big land-falling hurricanes, the potential for them has been there,” Dixon says.

Why the big storms that have brewed have not hit the U.S. comes down to a mixture of complicated climate factors — such as atmospheric pressure over the Atlantic, which dictates the direction, speed and intensity of hurricanes, and wind shear, which can tear a hurricane apart.

There have been several near misses: Hurricane Ike, which hit Texas in 2008, was close to being a Category 3, while Hurricane Dean, which hit Mexico in 2007, was a Category 5 — the most powerful category of storm, with winds in excess of 155 miles per hour.

That’s not to say there is not plenty of curiosity as to why there have recently been no powerful U.S. land-falling hurricanes. This desire to understand exactly what’s going on has prompted new academic research. For example, Hiscox is sponsoring postdoctoral research at Reading University into the atmospheric troughs known as African easterly waves. Although it is known that many hurricanes originate from these waves, there is currently no understanding of how the intensity and location of these waves change from year to year and what impact they might have on hurricane activity.

Breezy optimism?

The dearth of big land-falling hurricanes has both helped and hurt the insurance industry. Years without any large bills to pay from hurricanes have helped the global reinsurance industry’s overall capital to reach a record level of $575 billion by January 2015, according to data from Aon Benfield.

But, as a result, competition for business is intense, and prices for catastrophe cover have been falling, a trend that continued at the latest Jan. 1 renewals.

Meanwhile, the values at risk from an intense hurricane are rising fast. Florida — perhaps the most hurricane-prone state in the U.S. — is experiencing a building boom. In 2013, permissions to build $18.2 billion of new residential property were granted in Florida, the second-highest amount in the country behind California, according to U.S. government statistics.

“The increasing risk resulting from greater building density in Florida has been offset by the bigger capital buffer the insurance industry has built up,” says Mike Palmer, head of analytics and research at Hiscox Re. But, he adds: “It will still be interesting to see how the situation pans out if there’s a major hurricane.”

Of course, a storm doesn’t need to be a powerful hurricane to create enormous damage. Sandy was downgraded from a hurricane to a post-tropical cyclone before making landfall along the southern New Jersey coast in October 2012, but it wreaked havoc as it churned up the northeastern U.S. coast. The estimated overall bill has been put at $68.5 billion by Munich Re, of which around $29.5 billion was picked up by insurers.

Although Dixon acknowledges that the current barren spell of major land-falling hurricanes is unusually long, he remains cautious. “It would be dangerous to assume there has been a step change in major-land-falling hurricane behavior.”

Scientists predict that climate change will lead to more powerful hurricanes in coming years. If global warming does lead to warmer sea surface temperatures, evidence shows that big storms will tend to grow in intensity.

Even without the effects of climate change, the factors are still in place for there to be some intense hurricane seasons for at least the next couple of years, Dixon argues. “The hurricane activity in the Atlantic basin in recent years suggests to me that we’re still in a warm phase of sea surface temperatures — a more active hurricane period, in other words. So we certainly shouldn’t think that 2015 will necessarily be as quiet as the past few have been.”

Storm warning

Predictions of hurricanes are made on a range of timescales, and the skill involved in these varies dramatically. On short timescales (from days to as much as a week), forecasts of hurricane tracks are now routinely made with impressive results. For example, Hurricane Gonzalo was forecast to pass very close to Bermuda more than a week before it hit the island, giving its inhabitants a chance to prepare. Such advances in weather forecasting have been helped by vast increases in computing power and by “dynamical models” of the atmosphere.

These models use a grid system that encompasses all or part of the globe and calculate climatic factors, such as sea surface temperature and atmospheric conditions, for each grid square. Using this information and a range of equations, they can then forecast the behavior of the atmosphere over the coming days, including the direction and strength of tropical storms.

But even though computing power has improved massively in recent years, each of the grid squares in the dynamical models typically corresponds to an area of many square miles, so it’s impossible to take into account every cloud or thunderstorm in that grid that would contribute to a hurricane’s strength. This, combined with the fact that it is impossible to know the condition of the atmosphere everywhere, means there will always be an element of uncertainty in the forecast. And while these models can do very well at predicting a hurricane’s track, they currently struggle to do as good a job with storm intensity.

Pre-season forecasts

Recent years have seen the advent of forecasts aimed at predicting the general character of the coming hurricane season some months in advance. These seasonal forecasts have been attracting increasing media fanfare and go as far as forecasting the number of named storms, of powerful hurricanes and even of land-falling hurricanes.

Most are not based on complicated dynamical models (although these do exist) but tend to be based on statistical models that link historical data on hurricanes with atmospheric variables, such as El Niño. But as Richard Dixon, Hiscox’s group head of catastrophe research, says: “There is a range of factors that can affect the coming hurricane season, and these statistical schemes only account for some of them. As a result, they don’t tend to be very skillful, although they are often able to do better than simply basing your prediction on the historical average.”
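As a toy illustration of the kind of statistical scheme Dixon describes (entirely synthetic data and a made-up relationship, not a real forecast model), one can condition the historical hurricane count on a single atmospheric variable, such as an El Niño flag, and compare the result with the plain historical average:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 50-year record: assume fewer Atlantic hurricanes, on average, in El Niño years.
el_nino = rng.random(50) < 0.3
counts = rng.poisson(lam=np.where(el_nino, 5.0, 8.0))

climatology = counts.mean()  # the unconditional historical average
conditional = {flag: counts[el_nino == flag].mean() for flag in (True, False)}

print(f"Historical average:         {climatology:.1f} hurricanes")
print(f"Forecast, El Niño year:     {conditional[True]:.1f} hurricanes")
print(f"Forecast, non-El Niño year: {conditional[False]:.1f} hurricanes")
```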

It would be great if the information contained in seasonal forecasts could be used to help inform catastrophe risk underwriting, but as Mike Palmer, head of analytics and research for Hiscox Re, explains, this is a difficult proposition. “Let’s say, for example, that a seasonal forecast predicts an inactive hurricane season, with only one named storm compared with an average of five. It would be tempting to write more insurance and reinsurance on the basis of that forecast. However, even if it turns out to be true, if the single storm that occurs is a Category 5 hurricane that hits Miami, the downside would be huge.”

Catastrophe models

That’s not to say that there is no useful information about hurricane frequency that underwriters can use to inform their underwriting. Catastrophe models provide the framework to allow them to do just that. These models have become the dominant tools by which insurers try to predict the likely frequency and severity of natural disasters. “A cat model won’t tell you what will happen precisely in the coming year, but it will let you know what the range of possible outcomes may be,” Dixon says.

The danger comes if you blindly follow the numbers, Palmer says. That’s because although the models will provide a number for the estimated cost, for example, of the Category 5 hurricane hitting Miami, that figure masks an enormous number of assumptions, such as the expected damage to a wooden house as opposed to a brick apartment building.

These variables can cause actual losses to differ significantly from the model estimates. As a result, many reinsurers are increasingly using cat models as a starting point to working out their own risk, rather than using an off-the-shelf version to provide the final answer.

2 Shortcuts for Quantifying Risk

Most companies that take up risk management start out with subjective frequency-severity assessments of each of their primary risks. These values are then used to construct a heat map, and the risks that are farthest away from the zero point of the plot are judged to be of most concern.

This is a good way to jump-start a discussion of risks and to develop an initial process for prioritizing early risk management activities. But it should never be the end point for insurers. Insurers are in the risk business. The two largest categories of risks for insurers — insurance and investment — are always traded directly for money. Insurers must have a clear view of the dollar value of their risks. And on reflection, insurance risk managers will recognize that there is actually never a single pair of frequency and severity that can accurately represent their risks. Each of the major risks of an insurer has many, many possible pairs of frequency and severity.

For example, almost all insurers with exposure to natural catastrophes have access to analysis of their exposure to loss using commercial catastrophe models. These models produce loss amounts at a frequency of 1 in 10, 1 in 20, 1 in 100, 1 in 200, 1 in 500, 1 in 1,000 and any frequency in between. There is not a single one of these frequency-severity pairs that by itself defines catastrophe risk for that insurer.

Once an insurer recognizes that all of its risks have this characteristic, it can take advantage of one of the most useful tools for portraying the risks of the enterprise: the risk profile. In a risk profile, each risk is portrayed according to the possible loss at a single frequency. One common choice is a 1-in-100 frequency; in Europe, Solvency II regulations focus all insurers on the 1-in-200 loss. Ultimately, an insurer will want to develop a robust model, like the catastrophe model, for each of its risks to support the risk profile. But before spending all of that money, there are two shortcuts available to rated insurers that cost little to no additional money.
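As an illustrative sketch of how a risk profile can be assembled (hypothetical loss distributions and category names, not AM Best’s method), take each risk’s simulated annual losses and read off the loss at the chosen return period, here 1-in-100:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 100_000  # simulated years

# Hypothetical annual loss distributions by risk category ($ millions).
simulated_losses = {
    "nat cat":  rng.pareto(2.5, n_years) * 10,
    "premium":  rng.lognormal(mean=2.0, sigma=0.6, size=n_years),
    "reserve":  rng.lognormal(mean=1.8, sigma=0.5, size=n_years),
    "equities": rng.normal(loc=0.0, scale=15.0, size=n_years).clip(min=0),
}

return_period = 100
percentile = 100 * (1 - 1 / return_period)  # 99th percentile = 1-in-100 annual loss

profile = {risk: np.percentile(losses, percentile)
           for risk, losses in simulated_losses.items()}
total = sum(profile.values())

for risk, loss in sorted(profile.items(), key=lambda kv: -kv[1]):
    print(f"{risk:9s} 1-in-{return_period} loss: {loss:6.1f}m  share of profile: {loss / total:5.1%}")
```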

SRQ Stress Tests

In 2008, AM Best started asking each rated insurer to talk about its top five risks.

Then, in 2011, in the new ERM section of the supplemental rating questionnaire (SRQ), Best asked insurers to identify the potential impact of the largest threat for six risk types. For many years, AM Best has calculated its estimate of the capital needed by insurers for losses in five categories and eventually added an adjustment for a sixth — natural catastrophe risk.

Risk profile is one of the primary areas of focus for good ERM programs and is closely related to these questions and calculations. A risk profile is a view of all the main risks of an insurer that allows management and other audiences to compare the size of the various risks on a relative basis. Often, when insurers view their risk profile for the first time, they find that it is not exactly what they expected. As they look at their risk profile in successive periods, they find that changes to it prompt key strategic discussions. Insurers that have been looking at their risk profile for quite some time find the discussion with AM Best and others about their top risks to be a matter of distilling the detailed conversations they have already had internally, rather than stretching to find something to say, as plagues other insurers. The difference is usually obvious to an experienced listener from the rating agency.

Risk Profile From the SRQ Stress Tests

Most insurers will say that insurance (or underwriting) risk is the most important risk of the company. The chart below, showing the risk profile averaged across 31 insurers, paints a very different picture. On average, underwriting risk was 24% of the risk profile and market risk was 30%. Twenty of the 31 companies had a higher value for market risk than underwriting risk. For those 20 insurers, this exercise in viewing their risk profile shows that management and the board should be giving equal or even higher amounts of attention to their investment risks.

Chart: Average risk profile of 31 insurers, based on the AM Best SRQ stress tests

Stress tests are a good way for insurers to get started with looking at their risk profile. The six AM Best categories can be used to allow comparisons with studies, or the company can use its own categories so that the risk profile lines up with the main concerns of its strategic planning discussions. But be careful: check the results from the AM Best SRQ stress tests to make sure you are not ignoring any major risks. To be fully effective, the risk profile needs to include all of the company’s risks. For 20 of these 31 insurers, that may mean acknowledging that they have more equity risk than underwriting risk – and planning accordingly.

Risk Profile From the BCAR Formula

The chart below portrays the risk profiles of a different group of 12 insurers. These risk profiles were determined using the AM Best BCAR formula without analyst adjustments. For this group of companies, on this basis, premium risk is the largest single category. And while there are again six risk categories, they are a somewhat different list. The underwriting risk category from the SRQ is here split into three categories: premium, reserve and nat cat. Together, those three categories represent more than 60% of the risk profile of this group of insurers. Operational, liquidity and strategic risks, which make up 39% of the SRQ average risk profile, are missing here. Reinsurer credit risk is shown here to be a major risk category, with 17% of the risk; combined investment and reinsurer credit is only 7% of total risk in the SRQ risk profile.

Chart: Risk profiles of 12 insurers, based on the AM Best BCAR formula without analyst adjustments

Why are the two risk profiles so different in their views about insurance and investment risks? This author would guess that insurers are more confident of their ability to manage insurance risks, so the losses they estimate in the stress tests are less severe than the AM Best view reflected in the BCAR formula. The opposite is true for investment, particularly equity risk: AM Best’s BCAR formula assumes only a 15% loss on equities, while most insurers with a stock portfolio had, as recently as 2008, experienced 30% to 40% losses. So insurers evaluate their investment risk as much higher than AM Best believes.

Neither set seems to be the complete answer. From looking at these two groups, it makes sense to consider using nine or more categories: premiums, reserves, nat cat, reinsurer credit, bond credit, equities, operational, strategic and liquidity risk. Insurers with multiple large insurance lines may want to add several splits to the premium and reserve categories.

Using Risk Profile for Strategic Planning and Board Discussions

Risk profile can be the focus for bringing enterprise risk into the company’s strategic discussions. The planning process would start with a review of the expected risk profile at the start of the year and look at the impact on risk profile of any major proposed actions as a part of the evaluation of those plans. Each major plan can be discussed regarding whether it increases concentration of risks for the insurer or if it is expected to increase diversification. The risk profile can then be a major communication tool for bringing major management decisions and proposals to the board and to other outside audiences. Each time the risk profile is presented, management can provide explanations of the causes of each significant change in the profile, whether it be from management decisions and actions or because of major changes in the environment.

Risk Profile and Risk Appetite

Once an insurer has a repeatable process in place for portraying enterprise risk as a risk profile, this risk profile can be linked to the risk appetite. The pie charts above focus attention on the relative size of the main types of risks of the insurer. The bar chart below features the sum of the risks. Here the target line represents the expected sum of all of the risks, while the maximum is an aggregate risk limit based upon the risk appetite.

Chart: Total risk for the year versus the target and the maximum risk tolerance, expressed as RBC levels

In the example above, the insurer has a target for risk at 90% of a standard (in this case, the standard corresponds to a 400% RBC level, so the target is an RBC ratio of roughly 440%). The plan is for risk at a level that produces a 480% RBC level, and the maximum tolerance is for risk that would produce a 360% RBC. The 2014 actual risk taking puts the insurer at a 420% RBC level, more risk than the target but significantly less than the maximum. After reviewing the 2014 actual results, management made plans for 2015 that would come in just at the 440% RBC target. That review of the 2014 actuals included consideration of the increase in profits associated with the additional risk. When management made the adjustment to reach the target for 2015, its first consideration was to reduce less profitable activities. Management was able to make adjustments that significantly improved the return on its risk taking at a fully utilized level of operation.
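A minimal sketch of the arithmetic behind these figures, assuming a fixed capital base and that required capital scales proportionally with the amount of risk taken (so the RBC ratio is simply the standard ratio divided by risk utilization):

```python
# Assumption: capital is held fixed and "risk taken" scales required capital proportionally,
# so RBC ratio = capital / required capital = standard ratio / risk utilization.

STANDARD_RATIO = 4.00  # risk exactly at the standard corresponds to a 400% RBC ratio

def rbc_ratio(utilization):
    """RBC ratio implied by taking `utilization` (1.0 = the standard amount of risk)."""
    return STANDARD_RATIO / utilization

def utilization(rbc):
    """Risk utilization implied by an observed RBC ratio."""
    return STANDARD_RATIO / rbc

print(f"Target, 90% of standard risk: {rbc_ratio(0.90):.0%} RBC ratio")      # ~444%, i.e. the ~440% target
print(f"Maximum tolerance, 360% RBC:  {utilization(3.60):.0%} of standard")  # ~111% of standard risk
print(f"2014 actual, 420% RBC:        {utilization(4.20):.0%} of standard")  # ~95%, above target, below max
```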