
Obamacare: Where Do We Stand Today?

The healthcare industry is changing – same old headline. Since we’ve been in the industry, the “unsustainable” cost increases have been the talk every year, yet somehow we have not reached a tipping point. So what’s different now? How has ACA affected the healthcare industry, and more specifically the insurance companies?

The drafters of ACA set up a perfect adverse-selection scenario: Come one, come all, with no questions asked. First objective met: 20 million individuals now have coverage.

Next objective: Provide accurate pricing for these newly insured.

Insurance companies have teams of individuals who assess risk, so they can establish an appropriate price for the insurance protection. We experience this underwriting process with every type of insurance – home, life, auto. In fact, we see this process at every financial institution, like banks, mortgage companies and credit card companies. If a financial institution is to stay in business (and an insurance company is a financial entity), it has to manage risks, e.g., lend money to people who can repay the loan. Without the ability to assess the risk of the 20 million individuals, should we be surprised that one national insurance carrier lost $475 million in 2015, while another lost $657 million on ACA-compliant plans?

If you’re running a business and a specific line has losses, your choices are pretty clear – either clean it up or get out.

See Also: Healthcare Quality and How to Define It

Risk selection is complex. When you add this complexity to the dynamics of network contracting tied to membership scale, there is a reason why numerous companies have decided to get out of health insurance. In 1975, there were more than 2,000 companies selling true health insurance plans, and now there are far fewer selling true health insurance to the commercial population. Among the ones that got out were some big names – MetLife, Prudential, Travelers, NYLife, Equitable, Mutual of Omaha, etc. And now we’re about to be down to a few national carriers, which is consistent with other industries – airline, telecommunications, banking, etc.

Let’s play this one out for the 20 million newly covered individuals. The insurance companies have significant losses on ACA-compliant plans. Their next step – assess the enrolled risk and determine if they can cover the expected costs. For those carriers that decide to continue offering ACA-compliant plans, they will adjust the premiums accordingly. While the first-year enrollees are lulled into the relief of coverage, they then get hit with either a large increase or a notice to find another carrier. In some markets, the newly insured may be down to only one carrier option. The reason most individuals do not opt for medical coverage is that they can’t afford it. If premiums increase 15% or more, how many of the 20 million have to drop coverage because premiums are too expensive? Do we start the uninsured cycle all over again?

Net net, ACA has enabled more people to have health insurance, but at prices that are even less sustainable than before. ACA offers a web of subsidies to low-income people, which simply means each of us, including businesses, will be paying for part or all of their premium through taxes. As companies compete globally, this additional tax burden will affect the cost of services being sold. As our individual taxes increase, we reduce our spending. While ACA has the right intention of expanded coverage, the unintended consequences of the additional cost burden on businesses and individuals will have an impact on job growth.

While it’s hard for anyone to dispute the benefits of insurance for everyone, we first need to address the drivers behind the high cost of healthcare, so we can make health insurance prices more affordable. Unfortunately, ACA steered us further in the wrong direction. Self-insured employers are the key to lead the way in true reform of the cost and quality of healthcare.

How to Think About the Rise of the Machines

The first machine age, the Industrial Revolution, saw the automation of physical work. We live in the second machine age, where there is increasing augmentation and automation of manual and cognitive work.

This second machine age has seen the rise of artificial intelligence (AI), or “intelligence” that is not the result of human cogitation. It is now ubiquitous in many commercial products, from search engines to virtual assistants. AI is the result of exponential growth in computing power, memory capacity, cloud computing, distributed and parallel processing, open-source solutions and global connectivity of both people and machines. The sheer volume and speed at which structured and unstructured data (e.g., text, audio, video, sensor) is being generated have made it a necessity to process that data quickly and to generate meaningful, actionable insights from it.

Demystifying Artificial Intelligence

The term “artificial intelligence” is often misused. To avoid any confusion over what AI means, it’s worth clarifying its scope and definition.

  • AI and Machine Learning—Machine learning is just one area or sub-field of AI. It is the science and engineering of making machines “learn.” That said, intelligent machines need to do more than just learn—they need to plan, act, understand and reason.
  • Machine Learning and Deep Learning—“Machine learning” and “deep learning” are often used interchangeably. Deep learning is actually a type of machine learning that uses multi-layered neural networks to learn. There are other approaches to machine learning, including Bayesian learning, evolutionary learning and symbolic learning.
  • AI and Cognitive Computing—Cognitive computing does not have a clear definition. It can be viewed as a subset of AI that focuses on simulating human thought process based on how the brain works. It is also viewed as a “category of technologies that uses natural language processing and machine learning to enable people and machines to interact more naturally to extend and magnify human expertise and cognition.” Cognitive computing is a subset of AI, not an independent area of study.
  • AI and Data Science—Data science refers to the interdisciplinary field that incorporates statistics, mathematics, computer science and business analysis to collect, organize and analyze large amounts of data to generate actionable insights. The types of data (e.g., text, audio, video) and the analytic techniques (e.g., decision trees, neural networks) that both data science and AI use are very similar.

Differences, if any, may be found in the purpose: Data science aims to generate actionable insights for businesses, irrespective of any claims about simulating human intelligence, while AI may explicitly pursue the simulation of human intelligence.
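To make the distinction between machine learning and deep learning concrete, here is a minimal sketch (our illustration, using scikit-learn and a synthetic data set, not any insurer's actual system) that trains a classical decision tree and a small multi-layer neural network on the same data and compares their accuracy.

```python
# Illustrative only: two machine learning approaches on the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a structured insurance data set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A classical, rule-like learner.
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
# A small multi-layer neural network, the building block of deep learning.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

print("decision tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print("neural network accuracy:", accuracy_score(y_test, net.predict(X_test)))
```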

Self-Driving Cars

When the U.S. Defense Advanced Research Projects Agency (DARPA) ran its 2004 Grand Challenge for automated vehicles, no car was able to complete the 150-mile challenge. In fact, the most successful entrant covered only 7.32 miles. The next year, five vehicles completed the course. Now, every major car manufacturer plans to have a self-driving car on the road within five to 10 years, and the Google Car has clocked more than 1.3 million autonomous miles.

See Also: What You Must Know About Machine Learning

AI techniques—especially machine learning and image processing—help create a real-time view of what happens around an autonomous vehicle and help it learn and act from past experience. Amazingly, most of these technologies didn’t even exist 10 years ago.


Emerging Risk Identification Through Man-Machine Learning

“People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.” —Pedro Domingos, author of The Master Algorithm

Emerging Risks & New Product Innovation

Identifying emerging risks (e.g., cyber, climate, nanotechnology), analyzing observable trends, determining if there is an appropriate insurance market for these risks and developing new coverage products in response historically have been creative human endeavors. However, collecting, organizing, cleansing, synthesizing and even generating insights from large volumes of structured and unstructured data are now typically machine learning tasks. In the medium term, combining human and machine insights offers insurers complementary, value-generating capabilities.

Man-Machine Learning

Artificial general intelligence (AGI) that can perform any task a human can is still a long way off. In the meantime, combining human creativity with mechanical analysis and synthesis of large volumes of data—in other words, man-machine learning (MML)—can yield immediate results.

For example, in MML, the machine learning component sifts through daily news from a variety of sources to identify trends and potentially significant signals. The human-learning component provides reinforcement and feedback to the ML component, which then refines its sources and weights to offer broader and deeper content. Using this type of MML, risk experts can identify emerging risks and monitor their significance and growth. MML can further help insurers identify potential customers, understand key features, tailor offers and incorporate feedback to refine product introduction.
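As an illustration of how such an MML loop could be wired up, the following sketch (with purely hypothetical headlines and labels, using scikit-learn) has a machine learning component score news items for emerging-risk relevance and then update itself incrementally from a risk expert's feedback.

```python
# A minimal man-machine learning (MML) sketch. All headlines and labels are invented.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)   # stateless, so no re-fitting needed
model = SGDClassifier(random_state=0)              # linear classifier that supports partial_fit

# Seed labels from a risk expert: 1 = emerging-risk signal, 0 = noise.
seed_docs = ["nanotech liability claim filed", "quarterly earnings beat estimates"]
seed_labels = [1, 0]
model.partial_fit(vectorizer.transform(seed_docs), seed_labels, classes=[0, 1])

# Daily loop (sketch): the machine flags candidates, human feedback refines the model.
todays_news = ["new ransomware strain hits hospitals", "local sports team wins"]
scores = model.decision_function(vectorizer.transform(todays_news))
flagged = [doc for doc, s in zip(todays_news, scores) if s > 0]
print("flagged for expert review:", flagged)

# The human-learning component: expert labels feed back into the model.
human_feedback = {"new ransomware strain hits hospitals": 1, "local sports team wins": 0}
docs, labels = zip(*human_feedback.items())
model.partial_fit(vectorizer.transform(list(docs)), list(labels))
```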

Computers That “See”

In 2009, Fei-Fei Li and other AI scientists at Stanford AI Laboratory created ImageNet, a database of more than 15 million digital images, and launched the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The ILSVRC awards substantial prizes to the best object detection and object localization algorithms.

The competition has made major contributions to the development of “deep learning” systems, multilayered neural networks that can recognize human faces with more than 97% accuracy, as well as recognize arbitrary images and even moving videos. Deep learning systems can now process real-time video, interpret it and provide a natural language description.
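As a rough illustration of how accessible these capabilities have become, the sketch below loads a network pretrained on ImageNet and classifies a local image. It assumes TensorFlow/Keras is installed, and "crash_photo.jpg" is a hypothetical file name, not a real data set.

```python
# Illustrative only: ImageNet-style classification with a pretrained deep network.
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")          # convolutional network pretrained on ImageNet

img = image.load_img("crash_photo.jpg", target_size=(224, 224))  # hypothetical file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {prob:.2%}")             # the model's top three guesses
```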

Artificial Intelligence: Implications for Insurers

AI’s initial impact relates primarily to improving efficiencies and automating existing customer-facing, underwriting and claims processes. Over time, its impact will be more profound; it will identify, assess and underwrite emerging risks and identify new revenue sources.

  • Improving Efficiencies—AI is already improving efficiencies in customer interaction and conversion ratios, reducing quote-to-bind and FNOL-to-claim resolution times and increasing speed to market for products. These efficiencies are the result of AI techniques speeding up decision-making (e.g., automating underwriting, auto-adjudicating claims, automating financial advice, etc.).
  • Improving Effectiveness—Because of the increasing sophistication of its decision-making capabilities, AI will soon improve the targeting of prospects and their conversion into customers, refine risk assessment and risk-based pricing, enhance claims adjustment and more. Over time, as AI systems learn from their interactions with the environment and with their human masters, they are likely to become more effective than humans and to replace them in many roles. Advisers, underwriters, call center representatives and claims adjusters will likely be most at risk.
  • Improving Risk Selection and Assessment—AI’s most profound impact could well result from its ability to identify trends and emerging risks and assess risks for individuals, corporations and lines of business.

Its ability to help carriers develop new sources of revenue from risk- and non-risk-based information will also be significant.

See Also: How Machine Learning Changes the Game

Starting the Journey

Most organizations already have a big data and analytics or data science group. (We have addressed elsewhere how organizations can create and manage these groups.) The following are specific steps for incorporating AI techniques within a broader data science group:

  1. Start from business decisions—Catalogue the key strategic decisions that affect the business and the related metrics that need improvement (e.g., better customer targeting to increase conversion ratio, reducing claims processing time to improve satisfaction, etc.).
  2. Identify appropriate AI areas—Solving any particular business problem will, very likely, involve more than one AI area. Ensure that you map all appropriate AI areas (e.g., NLP, machine learning, image analytics) to the problem you want to address.
  3. Think big, start small—AI’s potential to influence decision making is huge, but companies will need to build the right data, techniques, skills and executive decision-making to exploit it. Have an evolutionary path toward more advanced capabilities. AI’s full power will become available when the AI platform continuously learns from both the environment and people (what we call the “dynamic insights platform”).
  4. Build training data sets—Create your own proprietary data sets for training your algorithms and measuring their accuracy. For example, create your own proprietary database of “crash images” and benchmark the accuracy of your existing algorithms against them. You should consistently aim to improve the accuracy of the algorithms against comparable human decisions.
  5. Pilot with parallel runs—Build a pilot of your AI solution using existing vendor solutions or open-source tools. Conduct parallel runs of the AI solution with human decision makers (see the sketch after this list). Compare and iteratively improve the performance/accuracy of the AI solution.
  6. Scale and manage change—Once the AI solution has proven itself, scale it with the appropriate software/hardware architecture and institute a broad change management program to change the internal decision-making mindset.
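To illustrate step 5, here is a minimal sketch of a parallel run on a hypothetical claims-triage pilot: the same cases are decided by the AI model and by human adjusters, and both are benchmarked against the eventual outcome. All figures are invented for illustration.

```python
# Illustrative parallel run: AI decisions vs. human decisions on the same cases.
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical results for 8 claims: 1 = fast-track, 0 = refer to adjuster.
ground_truth    = [1, 0, 1, 1, 0, 0, 1, 0]   # outcome established after settlement
model_decisions = [1, 0, 1, 0, 0, 0, 1, 1]   # pilot AI solution
human_decisions = [1, 0, 0, 1, 0, 1, 1, 0]   # existing manual process

print("model accuracy:", accuracy_score(ground_truth, model_decisions))
print("human accuracy:", accuracy_score(ground_truth, human_decisions))
print("model vs. human agreement:\n",
      confusion_matrix(human_decisions, model_decisions))
```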

5 Value Levers for Auto Telematics

Telematics could be one of the most relevant digital innovations in the insurance industry, directly affecting results. Worldwide diffusion of telematics-based motor insurance policies is still at an early stage, but best-practice markets have achieved penetration levels higher than 20% of the motor portfolio. Diffusion is growing fast, with well-recognized benefits across the motor insurance value chain.

Looking across countries at best practices, it is possible to identify five value-creation levers:

  1. Risk selection
  2. Pricing (risk-based)
  3. Value-added services
  4. Loss control
  5. Loyalty and behavior modification programs

1. Risk selection

Telematics can be used indirectly or directly to select risks at the underwriting stage. Products subject to steady monitoring through telematics indirectly discourage purchase by risky clients, limiting adverse selection and fraudulent intent.

Data collection can directly improve the overall quality of the underwriting process, allowing price adjustments or covenants and options related to what the monitoring finds.

For instance, Progressive’s Snapshot provides:

  • a device that measures client driving style;
  • a predictive approach based on data collection;
  • a discount based on information gathered.

2. Pricing (risk-based)

Through telematics, continuous monitoring of the “quantity” and “level” of risk has become possible. The risk can be calculated on the basis of information monitored continuously, directly determining pricing for individual customers, including on the basis of usage. Premiums can be adjusted within the policy year, or a discount can be applied the following year.

There are solutions such as PAYD (pay as you drive) policies that monitor mileage (with different weights for different times of day and itineraries) and compute a premium adjustment. PHYD (pay how you drive) policies, instead, integrate the mileage information with an analysis of the client’s driving style, defined through driving behaviors (the number and intensity of accelerations and braking events, driving timetables, speed and other variables).
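To show how such a premium adjustment might be computed, here is a purely illustrative PHYD-style sketch; the weights, caps and discount formula are hypothetical, not any insurer's actual rating plan.

```python
# Hypothetical PHYD-style premium adjustment: weighted mileage plus a driving-style factor.
def phyd_premium(base_premium, miles_by_period, driving_score):
    """miles_by_period: dict like {"day": miles, "night": miles};
    driving_score in [0, 1], where 1 = smoothest driving observed."""
    period_weights = {"day": 1.0, "night": 1.5}            # riskier hours weigh more
    weighted_miles = sum(period_weights[p] * m for p, m in miles_by_period.items())
    usage_factor = min(weighted_miles / 10_000, 2.0)        # capped usage loading
    behavior_factor = 1.3 - 0.5 * driving_score             # good drivers earn a discount
    return round(base_premium * usage_factor * behavior_factor, 2)

# Example: a mostly daytime, smooth driver pays well below the base premium.
print(phyd_premium(800, {"day": 6_000, "night": 1_000}, driving_score=0.9))
```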

3. Value-added services

Value-added services can be offered to the insured by the insurer or partners to exploit data detected and sent via telematics. Some examples related to the automobile business are:

  • Car antitheft systems through an installed black box;
  • Emergency services with automatic claim detection or buttons for direct-dialing the assistance center;
  • The possibility to link the telematics device to a payment system (and confirm via smartphone app) to authorize all car-related transactions, such as parking, tolls and refueling.

4. Loss control

Telematics — based on a box installed within the car — also allows for the use of data detected by sensors to limit the loss ratio of the motor portfolio. In this sense, telematics enables the development of claims management processes that are faster and more efficient, by anticipating:

  • The actual verification of the claim (anticipation of the first notice of loss);
  • The direct contact with the client for description of the claim;
  • The attempt to use agreed body shops.

The use of structured information coming from telematics sensors optimizes claim evaluation, improving fraud detection and providing more information for any subsequent court proceedings.

5. Loyalty and behavior modification programs

Behavioral programs are approaches that exploit information gathered on driving behavior to steer clients toward less risky habits.

This can be achieved through the installation of telematics devices and the measurement of risky behaviors.

Discovery’s Vitality Drive has applied this approach with a proposition based on:

  • “Black box” requested by the client to have access to the loyalty system, with a monthly fee;
  • Drive style monitoring and reporting through feedback;
  • Incentives for other “virtuous behaviors” (car maintenance, driving courses, …);
  • Cash back on fuel expenses, related to the driving-style score and other monitored behaviors.

The telematics business evolution — from a niche underwriting solution focused on younger and low-mileage drivers to a mainstream solution broadly applied across motor portfolios — requires the creation of an integrated approach based on all five levers. This approach has the potential to be a real game changer in the motor insurance business.

Survey: Predictive Modeling Lifts Profits

The breadth and depth of predictive modeling applications have grown, but, of equal importance, the percentage of participants reporting a positive impact on profitability has dramatically increased, Towers Watson’s most recent predictive modeling survey finds.

Our 2014 Predictive Modeling Benchmarking Survey indicates the use of predictive modeling in risk selection and rating has increased significantly for all lines of business over the last year, continuing a long-term trend. For instance, in the personal auto business, 97% of participants said that in 2014 they used predictive modeling in underwriting/risk selection or rating/pricing, compared with 80% in 2013, a 17-percentage-point increase. For standard commercial property/commercial multiperil (CMP)/business-owner peril (BOP), the number jumped 19 percentage points, to 51%, during the same time period (Figure 1). In fact, the percentage of participants that currently use predictive modeling increased for every line of business covered in the survey.

Figure 1. The use of predictive modeling in risk selection/rating has increased significantly for all lines of business over the last year

Does your company group currently use or plan to use predictive modeling in underwriting/risk selection or rating/pricing for the following lines of business?

Sophisticated risk selection and rating techniques are particularly important in personal lines, where models have now penetrated most of the market. An overwhelming 92% of survey participants cited these techniques as essential drivers of performance or success. To a significant degree, this was also true for small to mid-sized commercial carriers, with 44% citing sophisticated risk selection and rating techniques as essential and another 42% identifying them as very important.

Even as the use of predictive modeling extends to more lines of business, there is an increasing depth in its use. Predictive modeling applications are increasingly being deployed by insurance companies more broadly across their organizations as their confidence in modeling increases. For example, 57% of survey participants currently use predictive modeling techniques for underwriting and risk selection, and another 33% have plans to use them over the next two years. Although a more modest 28% currently use predictive modeling to evaluate fraud potential, a sizable additional 36% anticipate using it for this purpose over the next two years. Survey participants report plans to deploy predictive modeling applications in areas including claim triage, evaluation of litigation potential, target marketing and agency management. These applications will favorably affect loss costs, expenses and premium growth.

THE BOTTOM LINE

Eighty-seven percent of our survey participants report that predictive modeling improved profitability last year, an increase of eight percentage points over 2013 (Figure 2). The increase continues a pattern of growth over several years.

Figure 2. Companies implementing predictive models have increasingly seen favorable profitability impacts over time

What impact has predictive modeling had in the following areas?


A positive impact on rate accuracy helps explain the improvement. In fact, the percentage of carriers citing a positive impact on rate accuracy has increased every year since 2010, when 70% cited a positive impact. In three of the past four years, the increase in carriers citing a positive impact has been roughly 10 percentage points. In this year’s survey, nearly all (98%) of the respondents reported that predictive modeling has improved their rate accuracy. Improved rate accuracy has both top- and bottom-line benefits: It boosts revenue because it enables insurers to price more effectively in very competitive markets, retaining existing customers and attracting potential customers with rates that accurately reflect their level of risk. At the same time, rate accuracy drives profit because it also helps carriers identify and write more profitable business, rather than focusing solely on market share and price.

More accurate rates also improve loss ratios, which have improved in parallel, according to our survey participants. In 2014, 91% of survey participants cited the favorable impact of predictive modeling on loss ratios, an increase of 14 percentage points over 2013. When premiums more accurately reflect risk, losses are more likely to be properly funded.

TOP-LINE GROWTH

The bottom-line fundamentals — profitability, rate accuracy and loss ratio improvement — identified in our survey are complemented by top-line benefits. Positive impacts were registered on renewal retention (55%), underwriting appetite (46%) and market share (41%).

THE NEXT STEP

Sophisticated risk selection and rating are cited as essential by many of our participants, but our survey indicates that, despite favorable trends, insurers are still far from leveraging sophisticated modeling techniques to their fullest, even in pricing. Two-thirds of participants aren’t currently using price integration (the overlay of customer behavior and loss cost models to create metrics that measure different rate scenarios) for any products. A few are past price integration and are currently implementing price optimization (harnessing a mathematical search algorithm to a price integration framework to maximize profit, volume and other business metrics) for some products.
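To make price integration and optimization more tangible, the sketch below overlays a hypothetical loss cost figure with a hypothetical demand (conversion) curve and grid-searches for the profit-maximizing rate. Real implementations use fitted models, many rating cells and regulatory constraints; every number here is invented.

```python
# Illustrative price optimization: combine a loss cost estimate with a demand curve.
import numpy as np

expected_loss_cost = 620.0            # from a loss cost model (hypothetical)
expenses = 110.0                      # fixed expense load (hypothetical)

def conversion_probability(rate):
    """Hypothetical customer-behavior model: demand falls as the rate rises."""
    return 1.0 / (1.0 + np.exp((rate - 900.0) / 80.0))

rates = np.arange(700, 1200, 5)
expected_profit = conversion_probability(rates) * (rates - expected_loss_cost - expenses)
best = rates[np.argmax(expected_profit)]
print(f"profit-maximizing rate: {best}, expected profit per quote: "
      f"{expected_profit.max():.2f}")
```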

The disparity between what is viewed as the optimal use of modeling techniques and the current level of implementation needs to be bridged if insurers want to leverage predictive modeling as a competitive advantage to identify and capture profitable business. Increasingly, insurers are making greater use of analytics, including by-peril rating (which replaces rating at the broad, line-of-business level with specific rating by coverage), proprietary symbols (customized vehicle classifications for personal automobile policies) and territorial and credit analysis.

Those insurance companies that can’t employ sophisticated risk identification and management tools face the possibility of losing profitable business and suffering adverse selection.

MORE PROGRESS IS STILL POSSIBLE

Profitability is hard-earned in the current competitive property/casualty market, and predictive modeling is recognized by a steadily growing number of companies as an invaluable tool for improving both top- and bottom-line performance, which ultimately shows up in earnings growth. Our survey suggests that insurers are increasingly comfortable with predictive modeling and are using it in a growing number of capacities. However, participant responses also indicate that many benefits offered by predictive modeling and other more sophisticated analytical tools have not yet been realized, such as treating data as an asset and more effectively using predictive modeling applications to improve claim and other functional results. Improving performance on these issues alone could make a significant difference in the profitability of insurance companies and offers all the more reason to explore new ways to benefit from data-driven analytics and predictive modeling.

ABOUT THE SURVEY

Towers Watson conducted a web-based survey of U.S. and Canadian property/casualty insurance executives from Sept. 3 through Oct. 22, 2014. The results discussed in this article represent the views of 52 U.S. insurance executives. Responding companies represent a significant share of the U.S. property/casualty insurance market for both personal lines carriers (17%) and commercial lines carriers (22%).

How CAT Models Lead to Soft Prices

In our first article in this series, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. In the second article, we looked at how, beginning in the mid-1980s, people began developing models that could help insurers avoid a recurrence of those staggering losses. In this article, we look at how modeling results are being used in the industry.


Insurance is a unique business. In most other businesses, expenses associated with costs of operation are either known or can be fairly estimated. The insurance industry, however, needs to estimate expenses for things that are extremely rare or have never happened before. Things such as the damage to a bridge in New York City from a flood or the theft of a precious heirloom from your home or the fire at a factory, or even Jennifer Lopez injuring her hind side. No other industry has to make so many critical business decisions as blindly as the insurance industry. Even in circumstances in which an insurer can accurately estimate a loss to a single policyholder, without the ability to accurately estimate multiple losses all occurring simultaneously, which is what happens during natural catastrophes, the insurer is still operating blindly. Fortunately, the introduction of CAT models greatly enhances both the insurer’s ability to estimate the expenses (losses) associated with a single policyholder and concurrent claims from a single occurrence.

When making decisions about which risks to insure, how much to insure them for and how much premium is required to profitably accept the risk, there are essentially two metrics that can provide the clarity needed to do the job. Whether you are a portfolio manager managing the cumulative risk for a large line of business, an underwriter getting a submission from a broker to insure a factory or an actuary responsible for pricing exposure, what you minimally need to know is:

  1. On average, what will potential future losses look like?
  2. On average, what are the reasonable worst case loss scenarios, or the probable maximum loss (PML)?

Those two metrics alone supply enough information for an insurer to make critical business decisions in these key areas (a worked sketch of both metrics follows the list):

  • Risk selection
  • Risk-based pricing
  • Capacity allocation
  • Reinsurance program design
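As a minimal illustration of those two metrics, the sketch below computes the average annual loss (AAL) and a 1-in-250-year PML from a hypothetical set of simulated annual losses standing in for CAT model output.

```python
# Illustrative only: AAL and PML from simulated annual portfolio losses.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for CAT model output: mostly modest years, occasional severe ones.
annual_losses = rng.lognormal(mean=15.0, sigma=1.2, size=10_000)

average_annual_loss = annual_losses.mean()                        # metric 1: AAL
pml_1_in_250 = np.percentile(annual_losses, 100 * (1 - 1 / 250))  # metric 2: PML at the
                                                                  # 1-in-250-year level
print(f"AAL: {average_annual_loss:,.0f}")
print(f"1-in-250 PML: {pml_1_in_250:,.0f}")
```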

Risk Selection

Risk selection includes an underwriter’s determination of the class (such as preferred, standard or substandard) to which a particular risk is deemed to belong, its acceptance or rejection and (if accepted) the premium.

Consider two homes: a $1 million wood frame home and a $1 million brick home, both located in Los Angeles. Which home is riskier to the insurer? Before the advent of catastrophe models, the determination was based on historical data and, essentially, opinion. Insurers could have hired engineers who would have informed them that brick homes are much more susceptible to damage than wood frame homes under earthquake stresses. But it was not until the introduction of the models that insurers could finally quantify how much financial risk they were exposed to. They were shocked to discover that, on average, brick homes are four times riskier than wood frame homes and are twice as likely to sustain a complete loss (full collapse). This was data not well-known by insurers.

Knowing how two or more different risks (or groups of risks) behave, both in absolute terms and relative to each other, gives insurers a foundation for intelligently setting underwriting guidelines that play to their strengths and exclude risks they will not or cannot absorb, based on their risk appetite.

Risk-Based Pricing

Insurance is rapidly becoming more of a commodity, with customers often choosing their insurer purely on the basis of price. As a result, accurate ratemaking has become more important than ever. In fact, a Towers Perrin survey found that 96% of insurers consider sophisticated rating and pricing to be either essential or very important.

Multiple factors go into determining premium rates, and, as competition increases, insurers are introducing innovative rate structures. The critical question in ratemaking is: What risk factors or variables are important for predicting the likelihood, frequency and severity of a loss? Although there are many obvious risk factors that affect rates, subtle and non-intuitive relationships can exist among variables that are difficult, if not impossible, to identify without applying more sophisticated analyses.

Returning to our example of the two homes in Los Angeles, catastrophe models tell us two very important things: roughly what the premium to cover earthquake loss should be, and that the premium for masonry homes should be approximately four times that for wood frame homes.
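A back-of-the-envelope sketch of that relational pricing point, using hypothetical modeled loss figures and loadings:

```python
# Hypothetical technical premium: modeled average annual loss plus illustrative loadings.
def risk_based_premium(modeled_aal, risk_load=0.35, expense_load=0.15):
    return modeled_aal * (1 + risk_load + expense_load)

wood_frame_aal = 900.0              # hypothetical modeled AAL for the wood frame home
masonry_aal = 4 * wood_frame_aal    # models find brick roughly four times riskier

print("wood frame premium:", risk_based_premium(wood_frame_aal))
print("masonry premium:   ", risk_based_premium(masonry_aal))
```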

The concept of absolute and relational pricing using catastrophe models is revolutionary. Many in the industry may balk at our term “revolutionary,” but insurers using the models to establish appropriate price levels for property exposures have a massive advantage over public entities such as the California Earthquake Authority (CEA) and the National Flood Insurance Program (NFIP) that do not adhere to risk-based pricing.

The NFIP and CEA, like most quasi-government insurance entities, differ in their pricing from private insurers along multiple dimensions, mostly because of constraints imposed by law. Innovative insurers recognize that there are literally billions of valuable premium dollars at stake for risks for which the CEA, the NFIP and similar programs significantly overcharge – again, because of constraints that forbid them from being competitive.

Thus, using average and extreme modeled loss estimates not only ensures that insurers are managing their portfolios effectively, but enables insurers, especially those that tend to have more robust risk appetites, to identify underserved markets and seize valuable market share. From a risk perspective, a return on investment can be calculated via catastrophe models.

It is incumbent upon insurers to identify the risks they don’t wish to underwrite as well as answer such questions as: Are wood frame houses less expensive to insure than homes made of joisted masonry? and, What is the relationship between claims severity and a particular home’s loss history? Traditional univariate pricing analysis methodologies are outdated; insurers have turned to multivariate statistical pricing techniques and methodologies to best understand the relationships between multiple risk variables. With that in mind, insurers need to consider other factors, too, such as marketing costs, conversion rates and customer buying behavior, just to name a few, to accurately price risks. Gone are the days when unsophisticated pricing and risk selection methodologies were employed. Innovative insurers today cross industry lines by paying more and more attention to how others manage data and assign value to risk.

Capacity Allocation

In the (re)insurance industry, (re)insurers only accept risks if those risks are within the capacity limits they have established based on their risk appetites. “Capacity” means the maximum limit of liability offered by an insurer during a defined period. Oftentimes, especially when it comes to natural catastrophe, some risks have a much greater accumulation potential, and that accumulation potential is typically a result of dependencies between individual risks.

Take houses and automobiles. A high concentration of those exposure types may very well be affected by the same catastrophic event – whether a hurricane, severe thunderstorm, earthquake, etc. That risk concentration could potentially put a reinsurer (or insurer) in the unenviable position of being overly exposed to a catastrophic single-loss occurrence.  Having a means to adequately control exposure-to-accumulation is critical in the risk management process. Capacity allocation enables companies to allocate valuable risk capacity to specific perils within specific markets and accumulation zones to minimize their exposure, and CAT models allow insurers to measure how capacity is being used and how efficiently it is being deployed.
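A minimal sketch of this kind of accumulation control, with hypothetical policies, zones and capacity limits:

```python
# Illustrative accumulation check: sum insured aggregated by accumulation zone
# and compared with the capacity allocated to that zone.
policies = [
    {"zone": "LA-quake", "sum_insured": 1_000_000},
    {"zone": "LA-quake", "sum_insured": 2_500_000},
    {"zone": "Miami-wind", "sum_insured": 4_000_000},
]
capacity_limits = {"LA-quake": 3_000_000, "Miami-wind": 5_000_000}

accumulation = {}
for p in policies:
    accumulation[p["zone"]] = accumulation.get(p["zone"], 0) + p["sum_insured"]

for zone, exposure in accumulation.items():
    status = "OVER LIMIT" if exposure > capacity_limits[zone] else "within limit"
    print(f"{zone}: {exposure:,} of {capacity_limits[zone]:,} ({status})")
```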

Reinsurance Program Design

With the advent of CAT models, insurers now have the ability to simulate different combinations of treaties and programs to find the right fit, optimizing their risk and return. Before CAT models, it took gut instinct to estimate the probability of attachment of one layer over another or to estimate the average annual losses for a per-risk treaty covering millions of exposures. The models estimate the risk and can calculate the millions of potential claims transactions, which would be nearly impossible to do without computers and simulation.
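As a simple illustration, the sketch below tests one hypothetical excess-of-loss layer against simulated annual losses, estimating the probability of attachment and the expected ceded loss; both the loss distribution and the layer terms are invented.

```python
# Illustrative reinsurance layer analysis against simulated annual losses.
import numpy as np

rng = np.random.default_rng(1)
annual_losses = rng.lognormal(mean=16.0, sigma=1.0, size=10_000)  # stand-in CAT output

attachment, limit = 20_000_000, 30_000_000          # hypothetical layer: 30m xs 20m
ceded = np.clip(annual_losses - attachment, 0, limit)

print("probability of attachment:", (annual_losses > attachment).mean())
print(f"expected ceded loss: {ceded.mean():,.0f}")
```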

It is now well-known how soft the current reinsurance market is. Alternative capital has been a major driving force, but we consider the maturation of CAT models as having an equally important role in this trend.

First, insurers using CAT models to underwrite, price and manage risk can now intelligently present their exposure and effectively defend their position on terms and conditions. Gone are the days when reinsurers would have the upper hand in negotiations; CAT models have leveled the playing field for insurers.

Second, alternative capital could not have the impact that it is currently having without the language of finance. CAT models speak that language. The models provide necessary statistics for financial firms looking to allocate capital in this area. Risk transfer becomes so much more fungible once there is common recognition of the probability of loss between transferor and transferee. No CAT models, no loss estimates. No loss estimates, no alternative capital. No alternative capital, no soft market.

A Needed Balance

By now, and for good reason, the industry has placed much of its trust in CAT models to selectively manage portfolios to minimize PML potential. Insurers and reinsurers alike need the ability to quantify and identify peak exposure areas, and the models stand ready to help understand and manage portfolios as part of a carrier’s risk management process. However, a balance between the need to bear risk and the need to preserve a carrier’s financial integrity in the face of potential catastrophic loss is essential. The idea is to pursue a blend of internal and external solutions to ensure two key factors:

  1. The ability to identify, quantify and estimate the chances of an event occurring and the extent of likely losses, and
  2. The ability to set adequate rates.

Once companies have an understanding of their catastrophe potential, they can effectively formulate underwriting guidelines to act as control valves on their catastrophe loss potential but, most importantly, even in high-risk regions, identify those exposures that still can meet underwriting criteria based on any given risk appetite. Underwriting criteria relative to writing catastrophe-prone exposure must be used as a set of benchmarks, not simply as a blind gatekeeper.

In our next article, we examine two factors that could derail the progress made by CAT models in the insurance industry. Model uncertainty and poor data quality threaten to raise skepticism about the accuracy of the models, and that skepticism could inhibit further progress in model development.