Why Warren Buffett Is Surely Wrong

The Berkshire Hathaway annual report is one of my favorite reads. I always find a mountain of wisdom coupled with humility from one of my mentors, Warren Buffett. He doesn’t know he’s my mentor, but I treat him as one by reading and reflecting on what is in these annual letters. I would recommend you do the same; they’re available free of charge online.

There was a doozy of a sentence in the latest—right there on page 8—discussing the performance of Berkshire’s insurance operations, which make up the core of Berkshire’s business: “We believe that the annual probability of a U.S. mega-catastrophe causing $400 billion or more of insured losses is about 2%.” Sorry, Mr. Buffett, I have some questions for you on that one.

After reading those words, I quickly ran a CATRADER industry exceedance probability (EP) curve for the U.S. My analysis included the perils of hurricane (including storm surge), earthquake (including tsunami, landslide, liquefaction, fire-following and sprinkler leakage), severe thunderstorm (which includes tornadoes, hail and straight-line wind), winter storm (which includes wind, freezing temperatures and winter precipitation) and wildfire. I used the latest estimates of take-up rates (the percentage of properties actually insured against these perils) and took into account demand surge (the increase in the price of labor and materials that can follow disasters).

Bottom line: we believe the probability of $400 billion in insured losses from a single mega-catastrophe in a given year is far more remote, with an annual EP between 0.1% and 0.01% (a 1,000- to 10,000-year return period). Buffett puts it at a 2% EP (a 50-year return period), which is a gulf of difference. In fact, it is worth noting that, by AIR estimates, the costliest disaster in U.S. history in the last 100-plus years—indexed to today’s dollars and today’s exposures—was the Great Miami Hurricane of 1926. Were that to recur today, AIR estimates it would cost the industry roughly $128 billion.
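The gap between the two views is easiest to see when annual exceedance probabilities are converted to return periods, since a return period is simply the reciprocal of the annual EP. A minimal sketch (the figures are the ones quoted above, not model output):

```python
def return_period(annual_ep: float) -> float:
    """Convert an annual exceedance probability to a return period in years."""
    return 1.0 / annual_ep

# Buffett's figure: 2% annual probability of a $400B+ insured loss
print(return_period(0.02))    # a 50-year event

# AIR's range: 0.1% down to 0.01% annual EP
print(return_period(0.001))   # a 1,000-year event
print(return_period(0.0001))  # a 10,000-year event
```

In other words, the two estimates differ not by a few percentage points but by one to two orders of magnitude in implied return period.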

So, what is Buffett basing his view on? In 2009, he famously said “Beware of geeks bearing formulas…. Indeed, the stupefying losses in mortgage-related securities came in large part because of flawed, history-based models used by salesmen, rating agencies and investors.” Obviously, this was a reference to the 2008 financial crisis and the use of models by banks. His view on catastrophe models, however, has never been made public.

Assuming Buffett’s point of view was not model-based, it would be good to know his methods. For example, was it based on some estimate of insured exposures and a PML%, as was common in the pre–catastrophe model days? He didn’t get to that level of detail. It would also be interesting to know whether this is the view of Ajit Jain, his head of insurance operations and now vice chairman of Berkshire.

I have lots of questions, and I am not holding my breath for Buffett to respond to this blog. But it would be great to get your thoughts below.

New Insurance Models: The View From Asia

Recently, I chaired the 4th annual Asia Insurance CIO Technology summit in Jakarta, Indonesia. The experience brought me into contact with an entirely different set of insurers and insurance technology players. I was rewarded with a fresh view on the challenges and opportunities of insurance during an era of disruptive innovation, as well as a new perspective on how Asian insurers are creating and launching products, defining new channels and new models to out-innovate the competition.

I should state at the outset that Asian insurers aren’t doing everything differently than North American and European insurers. It is a global era. In many ways, their competitive issues are similar. We are all having the same conversations. As I considered the similarities, however, it made the small differences stand out. Just as Asia is hours ahead of the Western world throughout the day, I had the strange feeling that I was listening to the ends of conversations that are only beginning in other parts of the world. Because populations, cultures, use of digital technology and the nature of businesses vary, I thought I would share a short list of insights from my eavesdropping in an effort to shed light on how disruption is being embraced elsewhere and how it could ripple through the industry. I’ll center my thoughts on models, mandates and marketing.

Models

Everyone is discussing models. Business models. Technology models. Distribution models. Transaction models. There is good reason. It’s a model v. model world, and Asia-Pacific insurers know that the model is the center of a business. For the outer layer to be responsive, the business model can’t be a slow-moving leviathan. Disruption has the disturbing tendency to render perfectly good models obsolete. Creating a responsive, obsolescence-proof business model is of great interest to Asian insurers, which are responding to radically different consumer expectations and competitive models than in prior decades.

Traditional insurers at the conference (as well as challengers) are aggressively rethinking the insurance business model. Some believe that insurance will be run more in an open ecosystem, becoming more fragmented and niche-focused, building on the micro concept. If an insurer can embed products in other business models/industries, especially those with high-frequency transactions, then it captures the opportunity of both a new distribution channel and a new product. New Distribution Channel + New Product = New Market Opportunity.

These are areas where insurers can see quantum leaps in growth, yet they are also the areas where insurers are most susceptible to start-ups beating them to the punch.

Mandates

Three clear mandates stood out above all others for Asian insurers – the role of CIOs, the necessity of new cybersecurity solutions and a new, enterprise-wide look at analytics.

For CIOs, the clarion call was for a rapid advancement and widening of scope for their role within the insurance organization. CIOs must become change agents and grow in influence. They must be active in technology review and adoption, more collaborative with CMOs regarding digital platforms and data sharing and more effective at translating business vision into system and process transformation.

Cybersecurity is a never-ending mandate that also seems never to have a perfect solution. It was universally agreed that today’s security measures have the frustrating trait of being mostly temporary solutions. Blockchain technology (currently in use by Bitcoin, among others) was discussed as a more permanent solution for many security issues. Blockchain use makes transaction fraud far more difficult: verification of transaction authenticity is instant and can be performed by any trusted source, from any trusted location.

On a broader note, however, it was conceded that security is no longer just an IT issue but a board-level, organization-wide imperative. Boards must fund and address cybersecurity across three aspects: confidentiality, availability and integrity.

Enterprise-wide analytics was another organizational mandate. Some Asian insurers are moving toward using end-to-end analytics solutions that cross the enterprise in an effort to gain a single client view and execute a targeted pipeline, with unified campaigns and advertising. Analytics will also give them risk- and assessment-based pricing, improved predictability for loss prevention and better management of claims trends, recovery and services.

Marketing

Insurers are rapidly moving from product-driven to customer-driven strategies and from traditional distribution channels (such as agents) to an array of channels based on customer choice. At the same time that Asian insurers are looking at relevant business models, they are diving deeply into how marketing tactics may completely shift from a central hub to a decentralized “micro” model. The industry spark has been a short list of both established insurers and start-ups that are capturing new business through new marketing methods, new partnerships and new market spaces.

ZhongAn, for example, is selling return insurance for anything bought on Alibaba. Huatai Life is promoting unit-linked policies on JD.com and selling A&H insurance via a WeChat app. PICC Life has found a distribution partner in Qunar.com, an online travel information provider. These examples require a completely different, high-volume, interaction-based, data-rich, small-issue marketing plan. That kind of marketing will prove to be of great value to insurers that have added flexible, transaction-capable core insurance systems that are cloud-based and can scale rapidly.

Aggregators are now commonplace in insurance, and Asian insurers are looking at how this channel will affect their business, as well as how to use aggregators as a tool for competitive advantage. GoBear, currently selling in Singapore and Thailand, was given as a prime example of how aggregators represent the future of insurance shopping. GoBear isn’t just an aggregator. It is an innovator, revamping the concept of insurance relationships. GoBear Matchmaker, for example, will allow a prospect to pick insurance but also allow the insurer to pick prospects/clients. GoBear Groups will leverage groups/crowd sourcing.

What do these M’s add up to?

Insurance business models, mandates and marketing are all ripe for inspection and change. In some ways, Asian insurers are in a better position for these ground-shaking industry changes because so many of them recognize the stakes involved and the cultural shift required to thrive. Asian populations and culture are ready to embrace technology solutions to meet consumer demands. As all insurers globally address their models, mandates and marketing, it will be fascinating and educational to see how quickly the different markets adapt and emerge as innovative leaders, and how these regional innovations will influence other regions as they turn into global solutions.

One thing was clear to me in my time in Jakarta – Asian insurers are optimistic, active and excited about the road ahead.

Catastrophe Models Allow Breakthroughs

“In business there are two ways to make money; you can bundle or you can unbundle.” –Jim Barksdale

We have spent a series of articles introducing catastrophe models and describing the remarkable benefits they have provided the P&C industry since their introduction (article 1, article 2, article 3, article 4). CAT models have enabled the industry to pull the shroud off quantifying catastrophic risk and finally given (re)insurers the ability to price and manage their exposure to the violent and unpredictable effects of large-scale natural and man-made events. In addition, while not a panacea, the models have leveled the playing field between insurers and reinsurers. Via the use of the models, insurers have more insight than ever before into their exposures and the pricing mechanics behind catastrophic risk. As a result, they can now negotiate terms with confidence, whereas prior to the advent of the models and other similar tools, reinsurers had the upper hand with information and research.

We also contend that CAT models are the predominant cause of the reinsurance soft market via the entry of alternative capital from the capital markets. And yet, with all the value that CAT models have unleashed, we still have a collective sour taste in our mouths as to how these invaluable tools have benefited consumers, the ones who ultimately make the purchasing decisions and, thus, justify the industry’s very existence.

There are, in fact, now ways to benefit customers by, for instance, bundling earthquake coverage with homeowners insurance in California and helping companies deal with hidden volatility in their supply chains.

First, some background:

Bundling Risks

Any definition of insurance usually addresses the concept of risk transfer: the mechanism that ensures full or partial financial compensation for the loss or damage caused by event(s) beyond the control of the insured. In addition, the law of large numbers applies: the principle that the average of a large number of independent identically distributed random variables tends to fall close to the expected value. This result can be used to show that the entry of additional risks to an insured pool tends to reduce the variation of the average loss per policyholder around the expected value. When each policyholder’s contribution to the pool’s resources exceeds the expected loss payment, the entry of additional policyholders reduces the probability that the pool’s resources will be insufficient to pay all claims. Thus, an increase in the number of policyholders strengthens the insurance by reducing the probability that the pool will fail.
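The pooling effect described above can be made concrete with a quick simulation. The loss distribution here is hypothetical (a 5% chance of a 100,000 loss per policy), but the pattern holds for any independent, identically distributed risks: the variation of the average loss per policyholder shrinks as the pool grows.

```python
import random

random.seed(0)  # make the illustration repeatable

def avg_loss_std(pool_size: int, trials: int = 1000) -> float:
    """Std. deviation of the average loss per policyholder across simulated years.

    Each policy independently has a 5% chance of a 100,000 loss (hypothetical).
    """
    averages = []
    for _ in range(trials):
        total = sum(100_000 for _ in range(pool_size) if random.random() < 0.05)
        averages.append(total / pool_size)
    mean = sum(averages) / trials
    return (sum((a - mean) ** 2 for a in averages) / trials) ** 0.5

# The expected loss per policy is 5,000 regardless of pool size,
# but the spread around that expectation falls as the pool grows
for n in (10, 100, 1000):
    print(n, round(avg_loss_std(n)))
```

The standard deviation of the per-policy average falls roughly with the square root of the pool size, which is exactly why adding policyholders reduces the probability that the pool’s resources fall short.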

Our collective experiences in this world are risky, and we humans have consistently desired the ability to shed the financial consequences of risk to third parties. Insurance companies exist by using their large capital base, relying on the law of large numbers, but, perhaps most importantly, leveraging the concept of spread of risk, the selling of insurance in multiple areas to multiple policyholders to minimize the danger that all policyholders will experience losses simultaneously.

Take the peril of earthquake. In California, 85% to 90% of all homeowners do NOT maintain earthquake coverage even though earthquake is the predominant peril in that state. (Traditional homeowners policies exclude earth movement as a covered peril). News articles point to the price of the coverage as the limiting factor, and that makes sense because of that peril’s natural volatility. Or does it make sense?

Is the cost of losses from earthquakes in California considerably different from, say, the cost of losses from hurricanes in Florida, where the wind peril is typically included in most homeowners insurance forms? Earthquakes are a lot more localized than hurricanes, but the loss severity can also be more pronounced in those localized regions. Hurricanes that strike Florida can be expected with higher frequency than large damage-causing earthquakes that shake California. In the final analysis, the average projected loss costs are similar between the two perils, but one has nearly a 100% take-up rate vs. the other at roughly 10%. But why is that so? The answer lies in the law of large numbers, or in this case the lack thereof.

Rewind the clock to the 1940s. If you were a homeowner then, the property insurance world looked very different than it does today. You would have needed to purchase separate policies for each peril: a fire, theft and liability policy, and then a windstorm policy, to adequately cover your home. Packaging those perils into one convenient, comprehensive policy was thought to be cost-prohibitive. History has proven otherwise.

The bundling of perils creates a margin of safety from a P&C insurer’s perspective. Take two property insurers that offer fire coverage. Company A offers monoline fire, whereas Company B packages fire as part of a comprehensive homeowners policy. If both companies use identical pricing models, then Company B can actually charge less for fire protection than Company A simply because the additional premium from Company B affords peril diversification. Company B has the luxury of using premiums from other perils to help offset losses, whereas Company A is stuck with only its single-source fire premium and, thus, must make allowances in its pricing for the possibility that it could be wrong. Company B must also make such allowances, but they can be smaller because of the built-in safety margin.
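The diversification argument can be sketched with toy numbers. The loss statistics and the risk-load factor below are entirely hypothetical; the point is only that for independent perils, variances add, so the combined volatility is less than the sum of the standalone volatilities, and the risk load that must be built into the fire premium shrinks when fire is bundled:

```python
# Illustrative annual loss statistics for two independent perils
# (hypothetical numbers, not actual rating data)
fire_mean, fire_std = 300.0, 600.0
wind_mean, wind_std = 200.0, 500.0

k = 0.10  # hypothetical risk-load factor applied to volatility

# Company A, monoline fire: must load against fire's standalone volatility
fire_monoline_premium = fire_mean + k * fire_std

# Company B, bundled policy: independent perils, so variances add and the
# combined standard deviation is less than the sum of the standalone ones
bundle_std = (fire_std ** 2 + wind_std ** 2) ** 0.5

# Fire's share of the bundled risk load, allocated in proportion to its std
fire_share = fire_mean + k * bundle_std * fire_std / (fire_std + wind_std)

print(fire_monoline_premium, fire_share)  # Company B can charge less for fire
```

Under these assumptions, Company B’s implied fire premium comes out below Company A’s even though both priced from the same expected loss.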

This brings us back to the models. It is easy to see why earthquake and other perils, such as flood, were excluded from homeowners policies in the past. Without models, it was nearly impossible to estimate future losses with any reliable precision, leaving insurers unable to collect enough premium to compensate for the inevitable catastrophic event. Enter the National Flood Insurance Program (NFIP), which stepped in to offer flood coverage but never fundamentally approached it from a sound underwriting perspective. Instead, in an effort to make the coverage affordable to the masses, the NFIP severely underpriced its only product and is now tens of billions of dollars in the red. Other insurers bravely offered the earthquake peril via endorsement and were devastated after the Northridge earthquake in 1994. In both cases, various market circumstances, including the lack of adequate modeling capabilities, contributed to underpricing and adverse risk selection as the most risk-prone homeowners gobbled up the cheap coverage.

Old legacies die hard, but models stand ready to help responsibly underwrite and manage catastrophic risk, even when the availability of windstorm, earthquake and flood insurance has been traditionally limited and expensive.

The next wave of P&C industry innovation will come from imaginative and enterprising companies that use CAT models to economically bundle risks designed to lower the costs to consumers. We view a future where more CAT risk will be bundled into traditional products. As they continue to improve, CAT models will afford the industry the confidence needed to include earthquake and flood cover for all property lines at full limits and with flexible, lower deductibles. In the future, earthquake and flood hazards will be standard covered perils in traditional property forms, and the industry will one day look back from a product standpoint and wonder why it had not evolved sooner.

Unbundling Risks

Insurance policies as contracts can be clumsy in handling complicated exposures. For example, insurers have the hardest time handling supply chain and contingent business interruption exposures, and rightly so. Because of globalization and extreme competition, multinational companies are continuously seeking value in the inputs for their products. A widget in a product can be produced in China one year, the Philippines the next, Thailand the following year and so on. It is time-consuming and resource-intensive to keep track of not only how much of a company’s widgets are manufactured, but also what risks exist surrounding the manufacturing plant that could interrupt production or delivery. We would be hard-pressed to blame underwriters for wanting to exclude or significantly sublimit exposures related to supply chain or business interruption; after all, underwriters have enough difficulty just managing the actual property exposures inherent in these types of risks.

It is precisely this type of opportunity that makes sense for the industry to create specialized programs. Unbundle the exposure from the remainder of the policy and treat it as a separate exposure with dedicated resources to analyze, price and manage the risk.

Take a U.S. semiconductor manufacturer with supply exposure in Southeast Asia. As was the case with the 2011 Thailand floods or the 2011 Tohoku earthquake and tsunami, this hypothetical manufacturer is likely exposed to supply chain risks of which it is unaware. It is also likely that the property insurance policy meant to indemnify the manufacturer for covered losses in its supply chain will fall short of expectations. An enterprising underwriter could carve out this exposure and transfer it to a new form. In that form, the underwriter can work with the manufacturer to clarify policy wording, liberalize coverage, simplify claims adjusting and provide needed additional capacity. As a result, the manufacturer gets a risk transfer mechanism that more precisely aligns with the balance-sheet risks it is exposed to. The insurer gets a new line of business that can provide a significant source of new revenue, using tools such as CAT models and other analytics to price and manage those specific risks. By applying some ingenuity, the situation can be a win/win all around.

What if you are a manufacturer or importer and rely on the Port of Los Angeles or Miami International Airport (or any other major international port) to transport your goods in and out of markets? This is another area where commercial policies handle business exposure poorly, or not at all. CAT models stand ready to provide the analytics required to transfer the risks of these choke points from business balance sheets to insurers. All that is required is vision to recognize the opportunity and the sense to use the toolsets now available to invent solutions rather than relying on legacy groupthink.

At the end of the day, the next wave of innovation will not come directly from models or analytics. While the models and analytics will continue to improve, real innovation will come from creative individuals who recognize the risks that are causing market discomfort and then use these wonderful tools to build products and programs that transfer those risks more effectively than ever. Those same individuals will understand that the insured comes first, and that rather than retrofitting dated products to suit a modern-day business problem, the advent of new products and services is an absolute necessity to maintain the industry’s relevance. The only limiting factor preventing true innovation in property insurance is imagination and a willingness to no longer cling to the past.

3 Warning Signs of Adverse Selection

The top 25 insurers hold 70% of the market share in workers’ compensation, and, as the adoption of data and predictive analytics continues to grow in the insurance industry, so does the divide between insurers with competitive advantage and those without it. One of the largest outcomes of this analytics revolution is the increasing threat of adverse selection, which occurs when a competitor undercuts the incumbent’s pricing on the best risks and avoids writing poor-performing risks at inadequate prices.

Every commercial lines carrier faces it, whether it knows it or not. A relative few are actively using adverse selection offensively to carve out new market opportunities from less sophisticated opponents. An equally small crowd knows that they are the unwilling victims of adverse selection, with competitors currently replacing their best long-term risks with a bunch of poor-performing accounts.

It’s the much larger middle group that’s in real trouble — those that are having their lunch quietly stolen each and every day, without even realizing it.

Three Warning Signs of Adverse Selection
Adverse selection is a particularly dangerous threat because it is deadly to a portfolio yet only recognizable after the damage has been done. However, there are specific warning signs to look out for that indicate your company is vulnerable:

  1. Loss Ratios and Loss Costs Climb – When portfolio loss ratios are climbing, it is easy to blame market conditions and the competition’s “irrational pricing.” If you or your colleagues are talking about the crazy pricing from the competition, it could be a sign that your competitor has better information to assess the same risks. For example, in 2009, Travelers Insurance, known to be using predictive analytics for pricing, had a combined ratio of 89% while the P&C industry as a whole had a combined ratio of 101%.
  2. Rates Go Up, and Volume Declines – As loss ratios increase along with losses per earned exposure, the actuarial case emerges: Manual rates are inadequate to cover expected future costs. In this situation, tension grows among the chief decision makers. Raising rates will put policy retention and volumes at risk, but failing to raise rates will cut deeply into portfolio profitability. Often in the early stages of this warning sign, insurers opt to raise rates, which makes it tougher on both acquisition and retention. After another policy cycle, there is often a lurking surprise: The actuary will find that the rate increase was insufficient to cover the higher projected future losses. At this point, adversely selected insurers raise rates again (assuming their competitors are doing the same). The cycle repeats, and adverse selection has taken hold.
  3. Reserves Become Inadequate – When actuaries express signs of mild reserve inadequacy, the claims department often argues that reserving practices haven’t changed but that loss frequency and severity have increased. This leads to major decreases in return on assets (ROA) and forces insurers to downsize and focus on a niche specialization to survive, with little hope of future growth. The fundamental problem leading to this occurrence is that the insurer cannot identify and price risk with the accuracy that competitors can.
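The rate-increase cycle in warning sign 2 can be sketched as a toy simulation. All the numbers here are hypothetical: a book split between good risks (expected loss 600) and bad risks (expected loss 1,200), a single flat rate, and an assumption that each rate hike drives 40% of the remaining good risks to better-priced competitors.

```python
def spiral(cycles: int = 4):
    """Toy adverse-selection spiral: each rate hike drives off good risks."""
    good, bad = 1000, 1000   # hypothetical policy counts by risk quality
    rate = 1000.0            # flat premium charged to every risk
    history = []
    for _ in range(cycles):
        expected_loss = (good * 600 + bad * 1200) / (good + bad)
        history.append((rate, expected_loss / rate))  # (rate, loss ratio)
        rate *= (expected_loss / rate) * 1.05  # raise rates to chase losses
        good = int(good * 0.6)                 # overcharged good risks defect
    return history

for rate, lr in spiral():
    print(f"rate={rate:.0f}  loss ratio={lr:.2f}")
```

Rates rise every cycle, yet the book’s average expected loss rises with them, because each increase changes the mix of who stays. That is the spiral the warning signs describe: the rate action looks adequate against last year’s book but is priced for a book that no longer exists.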

Predictive Analytics Evens the Playing Field
The easiest way to prevent your business from being adversely selected is to start with the foundation of your risk management — the underwriting. Traditional insurance companies rely only on their own data to price risks, but more analytically driven companies use a diversified set of data to prevent sample bias.

For small to mid-sized businesses that can’t afford to build out their internal data assets, there are third-party sources and solutions that can provide underwriters with the insight to make quicker and smarter pricing decisions. Having access to large quantities of granular data allows insurers to assess risk more accurately and win the right business for the best price while avoiding bad business.

Additionally, insurers are using predictive analytics to expand their scope of influence in insurance. With market share consolidation on the rise, insurers in niche workers’ compensation markets face even more pressure, not only to protect their current business but also to underwrite risks in new markets with confidence and expand their book of business. According to a recent Accenture survey, 72% of insurers are struggling with maintaining underwriting and pricing discipline. The trouble will only increase as insurers attempt to expand into new territories without the wealth of data needed to write these new risks appropriately. The market will divide into companies that use predictive models to price risks more accurately and those that do not.

At the very foundation of any adversely selected insurer is the inability to price new and renewal business accurately. Overhauling your entire enterprise overnight to be data-driven and equipped to utilize advanced data analytics is an unreasonable goal. However, beginning with a specific segment of your business is not only reasonable but will help you fight adverse selection and lower your loss ratio.

This article first appeared on wci360.com.

Top 6 Myths About Predictive Modeling

Even if you’ve been hiding under a rock for the past 25 years, it’s almost impossible to avoid hearing about how companies are turning around their results through better modeling or how new companies are entering into insurance using the power of predictive analytics.

So now you’re ready to embrace what the 21st century has to offer and explore predictive analytics as a mainstream tool in property/casualty insurance. But misconceptions are still commonplace.

Here are the top six myths dispelled:

Myth: Predictive modeling is mostly a technical challenge.
Fact: The predictive model is only one part of the analytics solution. It’s just a tool, and it needs to be managed well to be effective.

The No. 1 point of failure in predictive analytics isn’t technical or theoretical (i.e., something wrong with the model) but rather a failure in execution. This realization shifts the burden of risk from the statisticians and model builders to the managers and executives. The carrier may have an organizational readiness problem or a management and measurement problem. The fatal flaw that’s going to derail a predictive analytics project isn’t in the model, but in the implementation plan.

Perhaps the most common manifestation of this is when the implementation plan around a predictive model is forced upon a group:

  • Underwriters are told that they must not renew accounts above a certain score
  • Actuaries are told that the models are now going to determine the rate plan
  • Managers are told that the models will define the growth strategy

In each of these cases, the plan is to replace human expertise with model output. This almost never ends well. Instead, the model should be used as a tool to enhance the effectiveness of the underwriter, actuary or manager.

Myth: The most important thing is to use the right kind of model.
Fact: The choice of model algorithm and the calibration of that model to the available data are almost never the most important things. Instead, the biggest challenge is merely having a credible body of data upon which to build a model. In “The Unreasonable Effectiveness of Data,” Google research directors Halevy, Norvig and Pereira wrote:

“Invariably, simple models and a lot of data trump more elaborate models based on less data.”

No amount of clever model selection and calibration can overcome the fundamental problem of not having enough data. If you don’t have enough data, you still have some options: You could supplement in-house data with third-party, non-insurance data, append insurance industry aggregates and averages or possibly use a multi-carrier data consortium, as we are doing here at Valen.

Myth: It really doesn’t matter which model I use, as long as it’s predictive.
Fact: Assuming you have enough data to build a credible model, choosing the right model still matters a great deal — though maybe not for the reason you’d think.

The right model might not be the one that delivers the most predictive power; it also has to be the model that has a high probability of success in application. For example, you might choose a model that has transparency and is intuitive, not a model that relies on complex machine-learning techniques, if the intuitive model is one that underwriters will use to help them make better business decisions.

Myth: Predictive modeling only works well for personal lines.
Fact: Personal lines were the first areas of success for predictive modeling, owing to the large, homogeneous populations that they serve. But commercial lines aren’t immune to the power of predictive modeling. There are successful models producing risk scores for workers’ compensation, E&S liability and even directors & officers risks. One of the keys to deploying predictive models to lines with thin policy data is to supplement that data, either with industry-wide statistics or with third-party (not necessarily insurance) data.

Myth: Better modeling will give me accurate prices at the policy level.
Fact: Until someone invents a time machine, the premiums we charge at inception will always be wrong. For policies that end up being loss-free, we will charge too much. For the policies that end up having losses, we will charge too little. This isn’t a bad thing, however. In fact, this cross-subsidization is the fundamental purpose of insurance and is necessary.

Instead of being 100% accurate at the policy level, the objective we should aim for in predictive analytics is to segment the entire portfolio of risks into smaller subdivisions, each of which is accurately priced. See the difference? Now the low-risk policies can cross-subsidize one another (and enjoy a lower rate), and the high-risk policies will also cross-subsidize one another (but at a high rate). In this way, the final premiums charged will be fairer.
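The segmentation idea can be shown in a few lines. The portfolio below is hypothetical: each number is a risk’s true expected loss. Flat pricing charges everyone the portfolio mean; segmented pricing splits the book into tiers and prices each tier at its own mean, so cross-subsidy stays within a tier:

```python
# Hypothetical portfolio: each entry is one risk's true expected annual loss
risks = [200, 250, 300, 900, 1000, 1100]

# Unsegmented: one flat rate for everyone (the portfolio mean)
flat_rate = sum(risks) / len(risks)

# Segmented: split into low- and high-risk tiers, price each at its own mean
low = [r for r in risks if r < flat_rate]
high = [r for r in risks if r >= flat_rate]
low_rate = sum(low) / len(low)
high_rate = sum(high) / len(high)

print(flat_rate)            # every risk pays the same under flat pricing
print(low_rate, high_rate)  # cross-subsidy now stays within each tier
```

Total premium collected is the same either way; what changes is fairness. Under the flat rate, low-risk policies heavily subsidize high-risk ones; under the tiered rates, each group subsidizes only its peers.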

Myth: Good models will give me the right answers.
Fact: Good models will answer very specific questions, but, unless you’re asking the right questions, your model isn’t necessarily going to give you useful answers. Take time during the due diligence phase to figure out what the key questions are. Then when you start selecting or building models, you’ll be more likely to select a model with answers to the most important questions.

For example, there are (at least) two very different approaches to loss modeling:

  • Pure premium (loss) models can tell you which risks have the highest potential for loss. They don’t necessarily tell you why this is true, or whether the risk is profitable.
  • Loss ratio models can tell you which risks are the most profitable, where your rate plan may be out of alignment with risk or where the potential for loss is highest. However, they may not necessarily be able to differentiate between these scenarios.
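The two targets above are built from the same policy records; the difference is only the denominator. A minimal sketch with made-up policy data (payroll standing in for the exposure base, as in workers’ compensation):

```python
# Two modeling targets derived from the same hypothetical policy records
policies = [
    {"premium": 1000, "loss": 0,    "payroll": 50},
    {"premium": 1500, "loss": 2400, "payroll": 80},
    {"premium": 800,  "loss": 400,  "payroll": 40},
]

for p in policies:
    p["pure_premium"] = p["loss"] / p["payroll"]  # loss per unit of exposure
    p["loss_ratio"] = p["loss"] / p["premium"]    # loss relative to price charged

# A high pure premium flags loss potential; a high loss ratio flags
# unprofitability, which may instead reflect a misaligned rate plan.
print([(p["pure_premium"], p["loss_ratio"]) for p in policies])
```

The second policy scores highest on both targets here, but the two measures can diverge: a well-priced hazardous risk has a high pure premium and a healthy loss ratio, while an underpriced benign risk shows the reverse. That divergence is exactly why the target must match the question being asked.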

Make sure that the model is in perfect alignment with the most important questions, and you’ll receive the greatest benefit from predictive analytics.