
Catastrophe Models Allow Breakthroughs

“In business there are two ways to make money; you can bundle or you can unbundle.” –Jim Barksdale

We have spent a series of articles introducing catastrophe models and describing the remarkable benefits they have provided the P&C industry since their introduction (article 1, article 2, article 3, article 4). CAT models have enabled the industry to pull the shroud off quantifying catastrophic risk and have finally given (re)insurers the ability to price and manage their exposure to the violent and unpredictable effects of large-scale natural and man-made events. In addition, while not a panacea, the models have leveled the playing field between insurers and reinsurers. Via the models, insurers have more insight than ever before into their exposures and the pricing mechanics behind catastrophic risk. As a result, they can now negotiate terms with confidence, whereas before the advent of the models and other similar tools, reinsurers had the upper hand in information and research.

We also contend that CAT models are the predominant cause of the reinsurance soft market, because they enabled the entry of alternative capital from the capital markets. And yet, with all the value that CAT models have unleashed, we still have a collective sour taste in our mouths as to how little these invaluable tools have benefited consumers, the ones who ultimately make the purchasing decisions and, thus, justify the industry’s very existence.

There are, in fact, now ways to benefit customers by, for instance, bundling earthquake coverage with homeowners insurance in California and helping companies deal with hidden volatility in their supply chains.

First, some background:

Bundling Risks

Any definition of insurance usually addresses the concept of risk transfer: the mechanism that ensures full or partial financial compensation for loss or damage caused by events beyond the control of the insured. In addition, the law of large numbers applies: the principle that the average of a large number of independent, identically distributed random variables tends to fall close to the expected value. This result can be used to show that adding risks to an insured pool tends to reduce the variation of the average loss per policyholder around the expected value. When each policyholder’s contribution to the pool’s resources exceeds the expected loss payment, the entry of additional policyholders reduces the probability that the pool’s resources will be insufficient to pay all claims. Thus, an increase in the number of policyholders strengthens the pool by reducing the probability that it will fail.
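
In symbols (a standard textbook restatement, not taken from the article itself), with per-policy losses assumed independent and identically distributed, the volatility of the average loss shrinks with the square root of the pool size:

```latex
% Sketch of the law of large numbers for an insurance pool.
% Assumptions: L_1, ..., L_n are i.i.d. per-policy losses with
% mean \mu and variance \sigma^2 (these labels are ours, for illustration).
\[
  \bar{L}_n = \frac{1}{n}\sum_{i=1}^{n} L_i,
  \qquad
  \mathbb{E}\bigl[\bar{L}_n\bigr] = \mu,
  \qquad
  \operatorname{Var}\bigl(\bar{L}_n\bigr) = \frac{\sigma^2}{n}.
\]
% If each policyholder pays a premium p > \mu, Chebyshev's inequality
% bounds the probability that average losses exceed premiums:
\[
  P\bigl(\bar{L}_n > p\bigr) \;\le\; \frac{\sigma^2}{n\,(p-\mu)^2}
  \;\longrightarrow\; 0 \quad \text{as } n \to \infty.
\]
```

This is exactly the “strengthening” described above: the premium cushion p − μ stays fixed while the noise around the average loss shrinks as the pool grows.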

Our collective experiences in this world are risky, and we humans have consistently desired the ability to transfer the financial consequences of risk to third parties. Insurance companies exist by using their large capital base, relying on the law of large numbers, but, perhaps most importantly, leveraging the concept of spread of risk: selling insurance in multiple areas to multiple policyholders to minimize the danger that all policyholders will experience losses simultaneously.

Take the peril of earthquake. In California, 85% to 90% of all homeowners do NOT maintain earthquake coverage, even though earthquake is the predominant peril in that state. (Traditional homeowners policies exclude earth movement as a covered peril.) News articles point to the price of the coverage as the limiting factor, and that makes sense because of the peril’s natural volatility. Or does it?

Is the cost of losses from earthquakes in California considerably different from, say, the cost of losses from hurricanes in Florida, where the wind peril is typically included in most homeowners insurance forms? Earthquakes are far more localized than hurricanes, but the loss severity can also be more pronounced in those localized regions. Hurricanes strike Florida with higher frequency than large, damage-causing earthquakes shake California. In the final analysis, the average projected loss costs are similar between the two perils, yet one has nearly a 100% take-up rate while the other sits at roughly 10%. Why is that so? The answer lies in the law of large numbers, or, in this case, the lack thereof.

Rewind the clock to the 1940s. If you were a homeowner then, the property insurance world looked very different than it does today. You would have needed to purchase a separate policy for virtually every peril: a fire policy, a theft and liability policy and then a windstorm policy to adequately cover your home. Packaging those perils into one convenient, comprehensive policy was considered cost-prohibitive. History has proven otherwise.

The bundling of perils creates a margin of safety from a P&C insurer’s perspective. Take two property insurers that offer fire coverage. Company A offers monoline fire, whereas Company B packages fire as part of a comprehensive homeowners policy. If both companies use identical pricing models, then Company B can actually charge less for fire protection than Company A, simply because Company B’s additional premium affords peril diversification. Company B has the luxury of using premiums from other perils to help offset fire losses, whereas Company A is stuck with its single-source fire premium and, thus, must build allowances into its pricing for the possibility that it is wrong. Company B must make allowances, too, in case its pricing is wrong, but it can apply smaller ones because of the built-in safety margin.
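
As a rough numerical sketch of that safety margin (the figures and loss moments below are hypothetical assumptions, and the two perils are assumed independent), bundling two perils with identical per-policy loss moments cuts aggregate volatility per dollar of expected loss by a factor of about the square root of two:

```python
import numpy as np

# Hypothetical inputs for illustration only, not market data.
# Assumptions: 10,000 policies per book; fire and wind losses are
# independent of each other and i.i.d. across policies.
n = 10_000
mu_fire, sd_fire = 300.0, 2_500.0  # per-policy annual fire loss: mean, std dev
mu_wind, sd_wind = 300.0, 2_500.0  # per-policy annual wind loss: mean, std dev

# Company A: monoline fire. For i.i.d. policies, the aggregate mean
# scales with n while the aggregate std dev scales with sqrt(n).
mean_a = n * mu_fire
sd_a = np.sqrt(n) * sd_fire

# Company B: fire and wind bundled in one homeowners policy.
# Variances of independent perils add before the sqrt(n) scaling.
mean_b = n * (mu_fire + mu_wind)
sd_b = np.sqrt(n) * np.sqrt(sd_fire**2 + sd_wind**2)

# Volatility per dollar of expected loss drives the pricing allowance
# each insurer must hold in case its model is wrong.
print(f"Company A volatility ratio: {sd_a / mean_a:.3f}")  # ~0.083
print(f"Company B volatility ratio: {sd_b / mean_b:.3f}")  # ~0.059, about 1/sqrt(2) of A's
```

Under these assumptions, Company B needs roughly 30% less safety margin per premium dollar than Company A, which is the room it has to undercut Company A on fire rates.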

This brings us back to the models. It is easy to see why earthquake and other perils, such as flood, were excluded from homeowners policies in the past. Without models, it was nearly impossible to estimate future losses with any reliable precision, leaving insurers unable to collect enough premium to compensate for the inevitable catastrophic event. Enter the National Flood Insurance Program (NFIP), which stepped in to offer flood coverage but never approached it from a fundamentally sound underwriting perspective. Instead, in an effort to make the coverage affordable to the masses, the NFIP severely underpriced its only product and is now tens of billions of dollars in the red. Other insurers bravely offered the earthquake peril via endorsement and were devastated after the Northridge earthquake in 1994. In both cases, various market circumstances, including the lack of adequate modeling capabilities, contributed to underpricing and adverse risk selection as the most risk-prone homeowners gobbled up the cheap coverage.

Old legacies die hard, but models stand ready to help insurers responsibly underwrite and manage catastrophic risk, even though windstorm, earthquake and flood insurance has traditionally been limited in availability and expensive.

The next wave of P&C industry innovation will come from imaginative, enterprising companies that use CAT models to bundle risks economically and lower costs for consumers. We envision a future in which more CAT risk is bundled into traditional products. As the models continue to improve, they will afford the industry the confidence to include earthquake and flood cover in all property lines at full limits and with flexible, lower deductibles. Earthquake and flood will one day be standard covered perils in traditional property forms, and the industry will look back from a product standpoint and wonder why it had not evolved sooner.

Unbundling Risks

Insurance policies, as contracts, can be clumsy in handling complicated exposures. For example, insurers have the hardest time handling supply chain and contingent business interruption exposures, and understandably so. Because of globalization and extreme competition, multinational companies continuously seek value in the inputs for their products. A widget in a product may be produced in China one year, the Philippines the next, Thailand the year after and so on. It is time-consuming and resource-intensive to keep track of not only where and in what volume a company’s widgets are manufactured, but also what risks surrounding each manufacturing plant could interrupt production or delivery. We would be hard-pressed to blame underwriters for wanting to exclude or significantly sublimit exposures related to supply chain or business interruption; after all, underwriters have enough difficulty managing the actual property exposures inherent in these types of risks.

It is precisely this type of exposure for which it makes sense for the industry to create specialized programs: unbundle the exposure from the remainder of the policy and treat it as a separate risk, with dedicated resources to analyze, price and manage it.

Take a U.S. semiconductor manufacturer with supply exposure in Southeast Asia. As the 2011 Thailand floods and the 2011 Tohoku earthquake and tsunami showed, this hypothetical manufacturer is likely exposed to supply chain risks of which it is unaware. It is also likely that the property insurance policy meant to indemnify the manufacturer for covered losses in its supply chain will fall short of expectations. An enterprising underwriter could carve out this exposure and transfer it to a new form, in which the underwriter works with the manufacturer to clarify policy wording, liberalize coverage, simplify claims adjusting and provide needed additional capacity. As a result, the manufacturer gets a risk transfer mechanism that aligns more precisely with the risks that actually affect its balance sheet. The insurer gets a new line of business that can provide a significant source of revenue, using tools such as CAT models and other analytics to price and manage those specific risks. With some ingenuity, the situation can be a win/win all around.

What if you are a manufacturer or importer that relies on the Port of Los Angeles or Miami International Airport (or any other major international port) to move goods in and out of markets? This is another area where commercial policies handle business exposure poorly, or not at all. CAT models stand ready to provide the analytics required to transfer the risks of these choke points from business balance sheets to insurers. All that is required is the vision to recognize the opportunity and the sense to use the toolsets now available to invent solutions rather than relying on legacy groupthink.

At the end of the day, the next wave of innovation will not come directly from models or analytics. While both will continue to improve, real innovation will come from creative individuals who recognize the risks causing market discomfort and then use these wonderful tools to build products and programs that transfer those risks more effectively than ever. Those same individuals will understand that the insured comes first and that, rather than retrofitting dated products to a modern-day business problem, new products and services are an absolute necessity if the industry is to maintain its relevance. The only things standing in the way of true innovation in property insurance are a lack of imagination and an unwillingness to let go of the past.

4 Technologies That Are Changing Risk

This summarizes a session at RIMS headlined by Google Risk Manager Kelly Crowder and Google Global Safety Manager Erike Young. I served as the event host and moderator, teeing up the subject matter. We focused on four major areas of technology that are driving transformative change in the way we do things and, thus, changing risk. Disruptive technology, as the panel pointed out, forces risk managers and insurers to imagine and forecast how various advancements affect safety, risk assessment, regulatory and legal parameters and insurance implications.

Albert Einstein set the course for the future when he said: “The true sign of intelligence is not knowledge but imagination.” Ideas can reach beyond probable or practical restraints.

Google takes that notion to heart at Google X, a semi-secret Silicon Valley lab that aims, through research and development, to advance scientific knowledge and fuel discoveries that can change the world. “What if” abstract concepts, known at Google as “moonshots,” are tireless experiments that often fail but occasionally produce disruptive technology. The mantra is “fail fast, fail often, fail forward.” Learn and change. Sergey Brin, one of Google’s co-founders, and scientist Astro Teller (Captain of Moonshots) seek to improve existing technologies by a factor of 10. Google began with the self-driving car in 2010. Google X now includes a life sciences division involved in bionics.

As with the radical transportation shift to horseless carriages 130 years ago, the technologies are changing risk in profound ways, but the positive and negative impact of new technology can be hard to predict.

Starting with Botsourcing and Robotics, the panel highlighted the trend of service industries, manufacturers, medical providers and first responders using robots and artificial intelligence to find safer, more efficient and more cost-effective ways of serving clients or conducting business. While the more dangerous occupational risks and blue-collar jobs are expected to become safer and more efficient, it remains uncertain whether the demand for labor will continue to grow as technology marches forward. Within 10 years, more than 40% of the workforce is expected to be affected by or replaced with robotics.

One positive sign noted in the presentation is that many American companies using robotics and 3D printing technologies are moving production facilities from overseas back to the U.S., creating jobs at home in the process. New job skills will become necessary to sustain broad-based prosperity. With respect to the highly advanced robots expected to integrate into society, the panel questioned whether their cognition will ever replace emotionally oriented skills. Will the warmth of human interaction remain a value in the future?

Another area of advancement is Surveillance and Wearable Biometrics. The Internet of Things represents the embedding of physical objects with sensors and connectivity. Devices like smart thermostats, as Google pointed out, are able to learn from our behavior patterns to anticipate our needs at home or work on a 24-hour basis. Our security and monitoring systems are tied to public safety, medical providers and our smartphones. Data collection is growing at an enormous pace, effectively tracking our every move. This, as the panel pointed out, has created concern for privacy and for the increasing vulnerability to cyber threats.

Fixed and mobile surveillance cameras have facial identification technology. Unmanned aerial vehicles (UAVs), also known as drones, can be preprogrammed to operate autonomously, although the panel pointed out that current FAA restrictions require an operator following visual line-of-sight rules below 400 feet of altitude. Within the next few years, autonomous drone surveillance and product delivery systems are expected.

Utilities can use drones to monitor power transmission lines at one-tenth the cost of a helicopter, with safety and efficiency a helicopter cannot match. Public safety departments can use UAVs to assess damage as well as risks. Four U.S. insurers are currently using human-operated drones to assess property damage claims arising from natural disasters. The panel showed photos of insect-like UAVs the size of a fingertip.

Wearable biometrics are much more sophisticated than Apple Watches and Fitbits. Google explained its quest to improve health monitoring systems. With 9.3% of the U.S. population (29 million people) suffering from diabetes, Google has developed, with Novartis, a revolutionary contact lens that monitors glucose levels and corrects vision much like an autofocus camera. Other panel photos showed tattoo-like patches, thinner than a human hair, that stick to the skin. Using microfluidic construction, these nearly invisible patches monitor bodily functions such as EKG and EEG readings and transmit the data wirelessly, 24/7. Similar monitors, known as smarty pants, can be sewn into underclothes and bras.

Exoskeleton Technologies are being developed by more than a dozen major manufacturers, as the panel demonstrated, and their products are expanding human capacity and endurance far beyond most expectations. These wearable machines combine human intelligence and machine power to achieve nearly any conceivable task. Used by the military, public safety agencies, hazmat teams and industry, and for medical rehabilitation, exoskeletons let humans perform feats that would have been physically impossible a few years ago. Neural interfaces with biological signals allow paraplegics to relearn lost functions. Some patients can actually experience running a four-minute mile or playing certain sports. Lifting weights of 40 to 60 pounds becomes painless and commonplace, and new technology allows a person to run, without falling, with 200 pounds of weight on their back. A la “Iron Man,” exoskeleton suits are being designed into wearable fabrics with micro energy packs.

This area of technology has the greatest potential to protect workers from soft tissue strains and back injuries. It also serves the dual purpose of advancing an injured worker’s rehabilitation and recovery without the inherent risk of reinjury. As the panel pointed out, experts expect industrial injuries to be reduced by as much as 70% as exoskeleton technology is woven into the workplace as personal protective equipment (PPE). Perhaps the bigger question, with an aging workforce and population, is the unknown cost and whether employers, insurers or individuals will bear the expense.

The fourth and final technology covered by the panel was Autonomous Transportation Systems and Devices. Google pioneered self-driving vehicles and leads in developing the associated technology, but autonomous vehicles are now being produced and tested by a growing number of manufacturers. In March 2015, Delphi sent a driverless Audi SUV on a 3,400-mile trip through 15 states, from San Francisco to New York City, in eight days without an accident. Auto manufacturers are approaching self-driving incrementally, with self-braking, self-parking and other autonomous safety features. Google has inspired a jump to a fully autonomous vehicle with no steering wheel or brakes. These self-driving vehicles perform 7,000 safety processes per second at high speeds, with far safer results than any human driver.

Self-driving vehicles, including trucks, are expected to be commonplace within 20 years or sooner. In a recent national survey of drivers, 44% said they are looking forward to autonomous vehicles. Respondents cited safety as their first priority; their second reason was the expectation that they would no longer pay for car insurance, which averages $820 per licensed vehicle per year in the U.S. Statisticians expect a drastic reduction in injuries as well as in violations like DUI, speeding and running red lights. With 35,000 motor vehicle deaths each year in the U.S., increased safety, coupled with freeway efficiency gains that could ultimately exceed tenfold, will make this a disruptive technology that seems long overdue.

As the Google risk management team pointed out, insurers don’t know how to react or respond to the inevitable switch to autonomous vehicles. Even on a road test basis, auto insurance underwriters are scratching their heads trying to assess the risk implications.

As the panel pointed out to the inquisitive audience during the Q&A session, it may be relatively simple to determine the impact of new technology on a measurable, scientific basis. But the big challenge for risk managers is imagining the implications these technological advancements will have for our organizations, workforces and insurers. Auto insurers have at least $500 billion in annual premiums at stake in the U.S. alone. What happens to that revenue when we shed our need to get behind the wheel every day?

Google also pointed out that each of these technological areas carries a wide range of regulatory implications. While Google attempts to notify every conceivable regulatory entity as it develops and tests new products, there often aren’t clear legal or regulatory guidelines in place. How will regulators promulgate new rules, regulations and laws as these science fiction-like inventions come to reality?

As Dr. Seuss said so profoundly, “Think and Wonder. Wonder and Think.”

ITL and its 400-plus thought leaders are providing the kind of wisdom and insight we will need to help bring all the parties together to solve these challenges. We welcome you to the conversation.
