How CAT Models Lead to Soft Prices

In our first article in this series, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. In the second article, we looked at how, beginning in the mid-1980s, people began developing models that could help insurers avoid a repeat of those staggering losses. In this article, we look at how modeling results are being used in the industry.

Insurance is a unique business. In most other businesses, the costs of operation are either known or can be fairly estimated. The insurance industry, however, needs to estimate expenses for things that are extremely rare or have never happened before: flood damage to a bridge in New York City, the theft of a precious heirloom from your home, a fire at a factory, or even Jennifer Lopez injuring her hind side. No other industry has to make so many critical business decisions as blindly as the insurance industry. Even when an insurer can accurately estimate the loss to a single policyholder, without the ability to accurately estimate multiple losses occurring simultaneously, which is what happens during natural catastrophes, it is still operating blindly. Fortunately, the introduction of CAT models greatly enhances the insurer’s ability to estimate both the expenses (losses) associated with a single policyholder and concurrent claims arising from a single occurrence.

When deciding which risks to insure, how much to insure them for and how much premium is required to accept the risk profitably, there are essentially two metrics that can provide the clarity needed to do the job. Whether you are a portfolio manager managing the cumulative risk for a large line of business, an underwriter receiving a submission from a broker to insure a factory or an actuary responsible for pricing exposure, you minimally need to know:

  1. On average, what will potential future losses look like?
  2. What is a reasonable worst-case loss scenario, or the probable maximum loss (PML)? (Both metrics are illustrated in the sketch below.)
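
To make these two metrics concrete, here is a minimal sketch in Python. The loss process is entirely invented (the event frequency, the severity parameters and the 250-year return period are illustrative assumptions, not output from any vendor model), but it shows how the average annual loss (AAL) and the PML are read off the same simulated year-loss table:

```python
import numpy as np

# Hypothetical year-loss table: one simulated annual portfolio loss per
# modeled year. All figures are illustrative.
rng = np.random.default_rng(42)
n_years = 10_000

occurs = rng.random(n_years) < 0.05            # roughly a 1-in-20-year event
severity = rng.lognormal(mean=16.0, sigma=1.2, size=n_years)
annual_loss = np.where(occurs, severity, 0.0)

# Metric 1: average annual loss (AAL), the long-run expected loss per year.
aal = annual_loss.mean()

# Metric 2: probable maximum loss (PML), a tail quantile of annual loss;
# here, the 1-in-250-year loss (99.6th percentile).
pml_250 = np.quantile(annual_loss, 1 - 1 / 250)

print(f"AAL: {aal:,.0f}")
print(f"250-year PML: {pml_250:,.0f}")
```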

Those two metrics alone supply enough information for an insurer to make critical business decisions in these key areas:

  • Risk selection
  • Risk-based pricing
  • Capacity allocation
  • Reinsurance program design

Risk Selection

Risk selection includes an underwriter’s determination of the class (such as preferred, standard or substandard) to which a particular risk is deemed to belong, its acceptance or rejection and (if accepted) the premium.

Consider two homes: a $1 million wood frame home and a $1 million brick home, both located in Los Angeles. Which home is riskier to the insurer? Before the advent of catastrophe models, the determination was based on historical data and, essentially, opinion. Insurers could have hired engineers who would have informed them that brick homes are much more susceptible than wood frame homes to damage under earthquake stresses. But it was not until the introduction of the models that insurers could finally quantify how much financial risk they were exposed to. They discovered, to their surprise, that on average brick homes are four times riskier than wood frame homes and twice as likely to sustain a complete loss (full collapse). That data had not been widely known among insurers.
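
As a rough illustration of that relativity, consider the sketch below. The mean damage ratios are hypothetical figures chosen to match the four-to-one relationship quoted above; real vulnerability curves come from the models themselves:

```python
# Hypothetical mean damage ratios: the fraction of home value lost per year
# to earthquake, averaged over simulated events. Figures are illustrative.
home_value = 1_000_000

mean_damage_ratio = {"wood frame": 0.01, "brick masonry": 0.04}

for construction, mdr in mean_damage_ratio.items():
    expected_loss = home_value * mdr
    print(f"{construction}: expected annual earthquake loss ~ ${expected_loss:,.0f}")

# With these assumed ratios, the brick home carries four times the expected
# loss of the wood frame home -- the kind of relativity the models quantify.
```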

Knowing how two or more different risks (or groups of risks) behave, both in absolute terms and relative to one another, gives insurers a foundation to intelligently set underwriting guidelines that play to their strengths and exclude risks they cannot or do not wish to absorb, based on their risk appetite.

Risk-Based Pricing

Insurance is rapidly becoming more of a commodity, with customers often choosing their insurer purely on the basis of price. As a result, accurate ratemaking has become more important than ever. In fact, a Towers Perrin survey found that 96% of insurers consider sophisticated rating and pricing to be either essential or very important.

Multiple factors go into determining premium rates, and, as competition increases, insurers are introducing innovative rate structures. The critical question in ratemaking is: What risk factors or variables are important for predicting the likelihood, frequency and severity of a loss? Although there are many obvious risk factors that affect rates, subtle and non-intuitive relationships can exist among variables that are difficult, if not impossible, to identify without applying more sophisticated analyses.

Returning to our example of the two homes in Los Angeles, catastrophe models tell us two very important things: roughly what the premium to cover earthquake loss should be, and that the premium for masonry homes should be approximately four times that for wood frame homes.
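
A toy calculation shows how that relativity flows straight through to premium. The risk load and expense ratio below are illustrative assumptions, not market figures, and the expected losses are carried over from the hypothetical sketch above:

```python
def risk_based_premium(aal, risk_load=0.25, expense_ratio=0.30):
    """Toy premium formula: load the average annual loss for risk and expenses.
    Both load factors are illustrative assumptions."""
    return aal * (1 + risk_load) / (1 - expense_ratio)

wood_premium = risk_based_premium(10_000)   # hypothetical wood frame AAL
brick_premium = risk_based_premium(40_000)  # hypothetical brick AAL

print(f"wood frame: ${wood_premium:,.0f}, brick: ${brick_premium:,.0f}")
print(f"relativity: {brick_premium / wood_premium:.1f}x")  # the 4x carries through
```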

The concept of absolute and relational pricing using catastrophe models is revolutionary. Many in the industry may balk at our term “revolutionary,” but insurers using the models to establish appropriate price levels for property exposures have a massive advantage over public entities such as the California Earthquake Authority (CEA) and the National Flood Insurance Program (NFIP), which do not adhere to risk-based pricing.

The NFIP and CEA, like most quasi-governmental insurance entities, differ in their pricing from private insurers along multiple dimensions, mostly because of constraints imposed by law. Innovative insurers recognize that billions of valuable premium dollars are at stake on risks for which the CEA, the NFIP and similar programs significantly overcharge, again because of constraints that prevent them from pricing competitively.

Thus, using average and extreme modeled loss estimates not only helps insurers manage their portfolios effectively but also enables them, especially those with more robust risk appetites, to identify underserved markets and seize valuable market share. From a risk perspective, catastrophe models even make it possible to calculate a return on the capital deployed.

It is incumbent upon insurers to identify the risks they don’t wish to underwrite and to answer questions such as: Are wood frame houses less expensive to insure than homes made of joisted masonry? What is the relationship between claims severity and a particular home’s loss history? Traditional univariate pricing analyses are outdated; insurers have turned to multivariate statistical pricing techniques to understand the relationships among multiple risk variables. With that in mind, insurers also need to consider other factors, such as marketing costs, conversion rates and customer buying behavior, to price risks accurately. Gone are the days of unsophisticated pricing and risk selection. Innovative insurers today cross industry lines, paying more and more attention to how others manage data and assign value to risk.
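
One common multivariate workhorse is the generalized linear model (GLM). The sketch below fits a Poisson frequency GLM to synthetic data; the variable names, effect sizes and data are invented for illustration and are not drawn from any actual book of business:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic policy data with a construction effect and a loss-history effect.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "construction": rng.choice(["wood_frame", "joisted_masonry"], size=n),
    "prior_claims": rng.poisson(0.3, size=n),
})
lam = np.exp(-2.0
             + 0.6 * (df["construction"] == "joisted_masonry")
             + 0.4 * df["prior_claims"])
df["claims"] = rng.poisson(lam)

# Poisson frequency GLM with a log link: each fitted coefficient, exponentiated,
# is a rate relativity holding the other variables fixed.
model = smf.glm("claims ~ construction + prior_claims",
                data=df, family=sm.families.Poisson()).fit()
print(np.exp(model.params))
```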

Capacity Allocation

In the (re)insurance industry, (re)insurers accept risks only if those risks are within the capacity limits they have established based on their risk appetites. “Capacity” means the maximum limit of liability offered by an insurer during a defined period. Often, especially when it comes to natural catastrophes, some risks have a much greater accumulation potential, typically as a result of dependencies between individual risks.

Take houses and automobiles. A high concentration of those exposure types may well be affected by the same catastrophic event, whether a hurricane, severe thunderstorm or earthquake. That concentration could put a reinsurer (or insurer) in the unenviable position of being overly exposed to a single catastrophic occurrence. Having a means to adequately control accumulations of exposure is critical in the risk management process. Capacity allocation enables companies to allocate valuable risk capacity to specific perils within specific markets and accumulation zones to limit their exposure, and CAT models allow insurers to measure how capacity is being used and how efficiently it is being deployed.
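
In its simplest form, accumulation control is bookkeeping: sum the limits written by peril and accumulation zone, then compare the totals with the capacity the carrier is willing to commit. Here is a minimal sketch, with hypothetical zones, policies and capacity figures:

```python
from collections import defaultdict

# Hypothetical in-force policies and capacity appetites; all figures invented.
policies = [
    {"zone": "LA-Z1", "peril": "earthquake", "limit": 40_000_000},
    {"zone": "LA-Z1", "peril": "earthquake", "limit": 35_000_000},
    {"zone": "FL-Z3", "peril": "hurricane",  "limit": 20_000_000},
]
capacity = {("LA-Z1", "earthquake"): 60_000_000,
            ("FL-Z3", "hurricane"): 50_000_000}

# Accumulate total limit by (zone, peril), then flag any capacity breaches.
accumulation = defaultdict(float)
for p in policies:
    accumulation[(p["zone"], p["peril"])] += p["limit"]

for key, total in accumulation.items():
    cap = capacity[key]
    status = "OVER CAPACITY" if total > cap else f"{total / cap:.0%} of capacity used"
    print(key, f"${total:,.0f}", status)
```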

Reinsurance Program Design

With the advent of CAT models, insurers now have the ability to simulate different combinations of treaties and programs to find the right fit, balancing risk and return. Before CAT models, it took gut instinct to estimate the probability of attachment of one layer versus another or the average annual losses for a per-risk treaty covering millions of exposures. The models estimate the risk and can calculate millions of potential claims transactions, which would be nearly impossible without computers and simulation.
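
For example, given a set of simulated event losses, the probability of attachment and the expected loss to an excess-of-loss layer fall out of a few lines of arithmetic. The sketch below uses invented loss parameters and a hypothetical $30 million xs $20 million layer:

```python
import numpy as np

# Simulated per-event ground-up losses; the distribution is illustrative only.
rng = np.random.default_rng(1)
event_losses = rng.lognormal(mean=15.0, sigma=1.5, size=50_000)

attachment, limit = 20_000_000, 30_000_000   # a 30M xs 20M layer

# Loss ceded to the layer for each simulated event.
layer_losses = np.clip(event_losses - attachment, 0, limit)

prob_attach = (event_losses > attachment).mean()   # probability of attachment
expected_layer_loss = layer_losses.mean()          # expected loss to the layer

print(f"P(attachment): {prob_attach:.2%}")
print(f"Expected loss to layer per event: {expected_layer_loss:,.0f}")
```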

It is now well-known how soft the current reinsurance market is. Alternative capital has been a major driving force, but we consider the maturation of CAT models to have played an equally important role in this trend.

First, insurers using CAT models to underwrite, price and manage risk can now intelligently present their exposure and effectively defend their position on terms and conditions. Gone are the days when reinsurers would have the upper hand in negotiations; CAT models have leveled the playing field for insurers.

Second, alternative capital could not have the impact it is currently having without the language of finance, and CAT models speak that language. The models provide the statistics that financial firms need when looking to allocate capital in this area. Risk transfer becomes much more fungible once transferor and transferee share a common view of the probability of loss. No CAT models, no loss estimates. No loss estimates, no alternative capital. No alternative capital, no soft market.

A Needed Balance

By now, and for good reason, the industry has placed much of its trust in CAT models to selectively manage portfolios to minimize PML potential. Insurers and reinsurers alike need the ability to quantify and identify peak exposure areas, and the models stand ready to help understand and manage portfolios as part of a carrier’s risk management process. However, a balance between the need to bear risk and the need to preserve a carrier’s financial integrity in the face of potential catastrophic loss is essential. The idea is to pursue a blend of internal and external solutions to ensure two key factors:

  1. The ability to identify, quantify and estimate the chances of an event occurring and the extent of likely losses, and
  2. The ability to set adequate rates.

Once companies have an understanding of their catastrophe potential, they can formulate underwriting guidelines that act as control valves on their catastrophe loss potential and, most importantly, identify those exposures that can still meet underwriting criteria for any given risk appetite, even in high-risk regions. Underwriting criteria for catastrophe-prone exposure must be used as a set of benchmarks, not simply as a blind gatekeeper.

In our next article, we examine two factors that could derail the progress made by CAT models in the insurance industry. Model uncertainty and poor data quality threaten to raise skepticism about the accuracy of the models, and that skepticism could inhibit further progress in model development.

Is Price Optimization Really an Evil Idea?

There seems to be a lot of misperception about what price optimization really is, largely driven by publicized assumptions that it will only serve the best interests of the company and hurt the consumer.

Basically, price optimization boils down to applying analytics to available information to develop more quantitative and targeted pricing policies and processes. Price optimization is already used extensively in many industries. The benefits and rewards to both companies and customers are plentiful, with the customer rewards being highly visible.

Through the use of price optimization, retailers are able to present highly personalized and appealing offers to their customers based on past shopping and buying patterns coupled with predictions of customer wants and needs. Retailers are able to keep their best customers informed of sales and special offers that are of real value to them. The travel industry uses price optimization to manage profitability and, equipped with insights that allow it to fine-tune the metrics, is able to offer very attractive options to travelers. Capacity that would otherwise have gone unused attracts happy customers and often brings them back.

For the insurance industry, it is important to understand that price optimization does not replace risk-based pricing; rather, it is the next level of sophistication for risk-based pricing. With price optimization, insurers can explore product options and find an optimal balance point among all the options and constraints within complicated rating orders and large sets of data. This makes it possible to construct and present more appealing, more targeted product and service offerings. Personalized offerings can be shaped to meet personalized needs, and pricing grounded in the law of large numbers can be tuned to the individual situation.
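
A stylized example of that balance point: choose the rate change that maximizes expected profit given a demand (retention) curve. The logistic retention curve and cost figures below are made-up assumptions; a production model would be calibrated to actual quote and renewal data and bounded by regulatory and business constraints:

```python
import numpy as np

# Hypothetical single-policy figures: current premium and expected cost.
current_premium, expected_cost = 1_000.0, 850.0

def retention(rate_change):
    """Made-up renewal probability that falls as the price rises."""
    return 1 / (1 + np.exp(8 * rate_change - 1.0))

# Search a grid of rate changes for the profit-maximizing balance point.
rate_changes = np.linspace(-0.10, 0.25, 200)
premiums = current_premium * (1 + rate_changes)
expected_profit = retention(rate_changes) * (premiums - expected_cost)

best = rate_changes[np.argmax(expected_profit)]
print(f"profit-maximizing rate change: {best:+.1%}")
print(f"retention at that price: {retention(best):.0%}")
```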

Today, price optimization is being used most often by insurers in personal lines — in many cases, those that are trying to innovate and capitalize on the next wave of analytics. The goal is to improve the bottom line and increase market share by using newly available types of analytics, models, tools and methods. These insurers don’t see price optimization as an independent exercise; they view it as a key part of the business’s journey to the next level of maturity. Recognizing that rate changes and the resulting customer reactions have an immediate and very significant tie to new business and renewals, and understanding that informed consumers expect offers that meet their personalized requirements, insurers see optimization as a journey that is essential for profitable growth in personal lines.

It is only a matter of time before the principles involved are applied to commercial risk pricing, especially for smaller and middle markets. As the comfort level increases and experience with the insights and tools matures, price optimization will likely become a significant aspect of the collaboration and negotiation process for mid-market and even large, complex cases.

The business benefits of price optimization are undeniable. Improved insights give insurers greater ability to achieve specific financial objectives for growth and profit. Fortified with intelligence, including a better understanding of customer demand and buying behaviors by segment, insurers can make business decisions and tradeoffs based on agreed-upon metrics rather than emotion and historical understandings that sometimes morph over the years.

While the benefits are clear, the reality is that price optimization is a complex endeavor. It involves deep analytics, advanced business intelligence and ready access to complete and accurate data. Many companies are spending lots of time and resources building sophisticated models of loss cost, expenses and customer demand, incorporating competitive position and market data. Price optimization brings them all together, aligning to specific business goals and the regulatory framework, enabling companies to clearly understand the trade-offs between various pricing strategies.

The use of price optimization in the insurance industry is still small in terms of the number of companies that have implemented it or are conducting pilots. It is, however, important to note that price optimization is being adopted by the largest insurance companies — those that have the most market share — so the portion of the industry that is being affected is significant. It won’t be long before a very large percentage of the premiums being written will be based on rates developed using advanced analytics capabilities that involve price optimization.

In many insurance companies, there are both real and perceived hurdles that impede progress in price optimization. Project capacity is limited, and price optimization does not always make the list of top-priority efforts. For some insurers, there is an inherent cultural resistance to change, particularly when today’s models have been delivering growth. Price optimization is complex; it requires special skills — deep experience in predictive modeling and advanced analytics. Price optimization involves a transformation of the entire pricing process.

But the insurers that are embracing and implementing price optimization are finding ways to overcome these challenges. Most national insurers obviously have the volume of data necessary to get price optimization right, but they can also be burdened with an overwhelming amount of data that originates from multiple sources and isn’t always clean and consistent. In contrast, it is not unusual for regional insurers to think they don’t have enough data. The reality is that most insurers have more than enough data to build and use customer demand models.

Price optimization will work for more insurers than one might expect. Now is the time to lay the groundwork for competing effectively in the long run.