
Cyber: Black Hole or Huge Opportunity?

You own a house. It burns down. Your insurer only pays out 15% of the loss.

That’s a serious case of under-insurance. You’d wonder why you bothered with insurance in the first place. In reality, massive under-insurance is very rare for conventional property fire losses. But what about cyber insurance? In 2017, the total global economic loss from cyber attacks was $1.5 trillion, according to the Cambridge University Centre for Risk Studies. But only 15% of that was insured.

I chaired a panel on cyber at the Insurtech Rising conference in September. Sarah Stephens from JLT and Eelco Ouwerkerk from Aon represented the brokers; Andrew Martin from DynaRisk and Sidd Gavirneni from Zeguro represented the two cyber startups. I asked them why we are seeing such a shortfall. Are companies not interested in buying, or is the insurance market failing to deliver the necessary protection for cyber today? And is this an opportunity for insurtech start-ups to step in?

High demand, but not the highest priority

We’ll hit $4 billion in cyber insurance premium by the end of this year. Allianz has predicted $20 billion by 2025. And most industry commentators believe 30% to 40% annual growth will continue for the next few years.

A line of business growing at more than 30% per year, with combined ratios around 60%, at a time when insurers are struggling to find new sources of income, is not to be sniffed at.

But the risks are getting bigger. My panelists had no problem rattling off new threats to be concerned with as we look ahead to 2019: cryptocurrency hacks, increasing use of the cloud, ransomware, GDPR, greater connectivity through sensors, driverless cars; even blockchain itself could be vulnerable. Each technical innovation represents a new threat vector. Cyber insurance is growing, but so is the gap between the economic and insured loss.

The demand is there, but there are a lot of competing priorities. Today’s premiums represent less than 0.1% of the $4.8 trillion global property/casualty market. Let’s try to put that in context. If the ratio of premium between cyber and all other insurance were the same as the ratio of time spent thinking about cyber and other types of risk, how long would a risk manager allocate to cyber risk? Even someone thinking about insurance all day, every day for a full working year would spend less than seven minutes a month on cyber.
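That back-of-the-envelope figure is easy to check. The sketch below uses the premium numbers from the article; the working-month assumptions are my own, so the result lands in the same single-digit-minutes ballpark rather than at exactly seven.

```python
# Rough check of the "minutes per month" analogy. Premium figures are
# from the article; the working-month assumptions (8 h/day, 21 days)
# are illustrative.
cyber_premium = 4e9        # ~$4B global cyber premium
total_premium = 4.8e12     # ~$4.8T global property/casualty premium
share = cyber_premium / total_premium  # well under 0.1%

minutes_per_month = 8 * 60 * 21  # 10,080 working minutes a month
cyber_minutes = share * minutes_per_month

print(f"Cyber share of premium: {share:.3%}")
print(f"Time a risk manager would allocate: {cyber_minutes:.1f} min/month")
```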

It’s not because we are unaware of the risks. Cyber is one of the few classes of insurance that can affect everyone. The NotPetya virus attack, launched in June 2017, caused $2.7 billion of insured loss by May 2018, according to PCS, and losses continue to rise. That makes it the sixth-largest catastrophe loss of 2017, a year with major hurricanes and wildfires. Yet the NotPetya event is rarely mentioned as an insurance catastrophe and appears to have had no impact on availability of cover or terms. Rates are even reported to be declining significantly this year.

See also: How Insurtech Boosts Cyber Risk  

Large corporates are motivated buyers. They have an appetite for far greater coverage than limits that cap out at $500 million. Less than 40% of SMEs in the U.S. and U.K. had cyber insurance at the end of 2017, but that is far greater penetration than five years ago. The insurance market has an excess of capital to deploy. As the tools evolve, insurance limits will increase. Greater limits mean more premium, which in turn creates more revenue to justify higher fees for licensing new cyber tools. Everyone wins.

Maybe.

Growing cyber insurance coverage is core to the strategy of many of the largest insurers.

Cyber cover has been available since at least 2004. Some of the major insurers have had an appetite for providing it for a decade or more. AIG is the largest writer, with more than 20% of the market. Chubb, Axis, XL Catlin and Lloyd’s insurer Beazley entered the market early and continue to increase their exposure to cyber insurance. Munich Re has declared that it wants to write 10% of the cyber insurance market by 2020 (when it estimates premium will be $8 billion to $10 billion). All of these companies are partnering with established experts in cyber risk, and start-ups, buying third-party analytics and data. Some, such as Munich Re, also offer underwriting capacity to MGAs specializing in cyber.

The major brokers are building up their own skills, too. Aon acquired Stroz Friedberg in 2016. Both Guy Carpenter and JLT announced relationships earlier this year with cyber modeling company and Symantec spin-off CyberCube. Not every major insurer is a cyber enthusiast. Swiss Re CEO Christian Mumenthaler declared that the company would stay underweight in its cyber coverage. But most insurers are realizing they need to be active in this market. According to Fitch, 75 insurers wrote more than $1 million each of annual cyber premiums last year.

But are the analytics keeping up?

Despite the existence of cyber analytic tools, part of the problem is that demand for insurance is constrained by the extent to which even the most credible tools can measure and manage the risk. Insurers are rightly cautious, and some are skeptical, about the extent to which data and analytics can be used to price cyber insurance. The inherent uncertainties of any model are compounded by a risk that is rapidly evolving, driven by motivated “threat actors” continually probing for weaknesses.

The biggest barrier to growth is the ability to confidently diversify cyber insurance exposures. Most insurers, and all reinsurers, can offer conventional insurance at scale because they expect losses to come from only a small part of their portfolio. Notwithstanding the occasional wildfire, fire risks tend to be spread out in time and geography, and losses are largely predictable year to year. Natural catastrophes such as hurricanes or floods can create unpredictable and large local concentrations of loss but are limited to well-known regions. Major losses can be offset with reinsurance.

Cyber crosses all boundaries. In today’s highly connected world, corporate and country boundaries offer few barriers to a determined and malicious assailant. The largest cyber writers understand the risk for potential contagion across their books. They are among the biggest supporters of the new tools and analytics that help understand and manage their cyber risk accumulation.

What about insurtech?

Insurer, investor or startup – everyone today is looking for the products that have the potential to achieve breakout growth. Established insurers want new solutions to new problems; investment funds are under pressure to deploy their capital. A handful of new companies are emerging, either to offer insurers cyber analytics or to sell cyber insurance themselves. Some want to do both. But is this sufficient?

The SME sector is becoming fertile ground for MGAs and brokers starting up or refocusing their offerings. But with such a huge, untapped market (85% of loss not insured), why aren’t cyber startups dominating the insurtech scene by now? The number of insurtech companies offering credible analytics for cyber seems disproportionately small relative to the opportunity and growth potential. Do we really need another startup offering insurance for flight cancellation, bicycle insurance or mobile phone damage?

While the opportunity for insurtech startups is clear, this is a tough area to succeed in. Building an industrial-strength cyber model is hard. Convincing an insurer to make multimillion-dollar bets on the basis of what the model says is even more difficult. Not everyone is going to be a winner. Some of the companies emerging in this space are already struggling to make sustainable commercial progress. Cyber risk modeler Cyence roared out from stealth mode fueled by $40 million of VC funding in September 2016 and was acquired by Guidewire a year later for $265 million. Today, the company appears to be struggling to deliver on its early promises, with rumors of clients returning the product and changes in key personnel.

The silent threat

The market for cyber is not just growing vertically. There is the potential for major horizontal growth, too. Cyber risks affect the mainstream insurance markets, which presents another source of threat, but also an opportunity.

Most of the focus on cyber insurance has been on the affirmative cover – situations where cyber is explicitly written, often as a result of being excluded from conventional contracts. Losses can also come from “silent cyber,” the damage to physical assets triggered by an attack that would be covered under a conventional policy where cyber exclusions are not explicit. Silent cyber losses could be massive. In 2015, the Cambridge Risk Centre worked with Lloyd’s to model a power shutdown of the U.S. Northeast caused by an attack on power generators. The center estimated a minimum of $243 billion economic loss and $24 billion in insured loss.

In the current market conditions, cyber can be difficult to exclude from more traditional coverage such as property fire policies, or may just be overlooked. So far, there have been only a handful of small reported losses attributed to silent cyber. But now regulators are starting to ask companies to account for how they manage their silent cyber exposures. It’s on the future list of product features for some of the existing models. Helping companies address regulatory demands is an area worth exploring for startups in any industry.

See also: Breaking Down Silos on Cyber Risk  

Ultimately, we don’t yet care enough

We all know cyber risk exists. Intuitively, we understand an attack on our technology could be bad for us. Yet, despite the level of reported losses, few of us have personally, or professionally, experienced a disabling attack. The well-publicized attacks on large, familiar corporations, including, most recently, British Airways, have mostly affected only single companies. Data breach has been by far the most common type of loss. No one company has yet been completely locked out of its computer systems. WannaCry and NotPetya were unusual in targeting multiple organizations, with far more aggressive attacks that disabled systems, but on a very localized basis.

So, most of us underestimate both the risk (how likely), and the severity (how bad) of a cyber attack in our own lives. We are not as diligent as we should be in managing our passwords or implementing basic cyber hygiene. We, too, spend less than seven minutes a month thinking about our cyber risk.

This lack of deep fear about the cyber threat (some may call it complacency) goes further than increasing our own vulnerabilities. It is also the reason we have more startups offering new ways to underwrite bicycles than we do companies with credible analytics for cyber.

Rationally, we know the risk exists and could be debilitating. Emotionally, our lack of personal experience means that cyber remains “interesting” but not “compelling” either as an investment or startup choice.

Getting involved

So, let’s not beat up the incumbents again. Insurance has a slow pulse rate. Change is geared around an annual cycle of renewals. It evolves, but slowly. Insurers want to write more cyber risk, but not blindly. The growth of the market relies on the tools to measure and manage the risk. The emergence of a new breed of technology companies, such as CyberCube, that combine deep domain knowledge in cyber analytics with an understanding of insurance and catastrophe modeling, is setting the standard for new entrants.

Managing cyber risk will become an increasingly important part of our lives. It’s not easy, and there are few shortcuts, but there are still plenty of opportunities to get involved helping to manage, measure and insure the risk. When (not if) a true cyber mega-catastrophe does happen, attitudes will change rapidly. Those already in the market, whether as investors, startups or forward-thinking insurers, will be best-positioned to meet the urgent need for increased risk mitigation and insurance.

How to Avoid Failed Catastrophe Models

Since commercial catastrophe models were introduced in the 1980s, they have become an integral part of the global (re)insurance industry. Underwriters depend on them to price risk, management uses them to set business strategies, and rating agencies and regulators consider them in their analyses. Yet new scientific discoveries and claims insights regularly reshape our view of risk, and a customized model that is fit-for-purpose one day can quickly become obsolete if it is not updated promptly for changing business practices and advances in our understanding of natural and man-made events.

Despite the sophisticated nature of each new generation of models, new events sometimes expose previously hidden attributes of a particular peril or region. In 2005, Hurricane Katrina caused economic and insured losses in New Orleans far greater than expected because models did not consider the possibility of the city’s levees failing. In 2011, the existence of a previously unknown fault beneath Christchurch and the fact the city sits on an alluvial plain of damp soil created unexpected liquefaction in the New Zealand earthquake. And in 2012, Superstorm Sandy exposed the vulnerability of underground garages and electrical infrastructure in New York City to storm surge, a secondary peril in wind models that did not consider the placement of these risks in pre-Sandy event sets.

Such surprises affect the bottom lines of (re)insurers, who price risk largely based on the losses and volatility suggested by the thousands of simulated events analyzed by a model. However, there is a silver lining for (re)insurers. These events advance modeling capabilities by improving our understanding of the peril’s physics and damage potential. Users can then often incorporate such advances themselves, along with new technologies and best practices for model management, to keep their company’s view of risk current – even if the vendor has not yet released its own updated version – and validate enterprise risk management decisions to important stakeholders.

See also: Catastrophe Models Allow Breakthroughs  

When creating a resilient internal modeling strategy, (re)insurers must weigh cost, data security, ease of use and dependability. Complementing a core commercial model with in-house data and analytics and standard formulas from regulators, and reconciling any material differences in hazard assumptions or modeled losses, can help companies of all sizes manage resources. Additionally, the work protects sensitive information, allows access to the latest technology and support networks and mitigates the impact of a crisis to vital assets – all while developing a unique risk profile.

To the extent resources allow, (re)insurers should analyze several macro- and micro-level considerations when evaluating the merits of a given platform. On the macro level, customization is almost always desirable unless the company’s own underwriting and claims data dominated the vendor’s development methodology, especially at the bottom of the loss curve, where there is more claims data. If a large insurer with robust exposure and claims data was heavily involved in the vendor’s product development, the model’s vulnerability assumptions and loss payout and development patterns will likely mirror those of the company itself, so less customization is necessary. Either way, users should validate modeled losses against historical claims from both their own company’s and an industry perspective, taking care to adjust for inflation, exposure changes and non-modeled perils, to confirm the reasonability of return periods in portfolio and industry occurrence and aggregate exceedance-probability curves. Without this important step, insurers may find that their modeled loss curves differ materially from observed historical results, as illustrated below.
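As a rough sketch of that validation step, assuming purely simulated stand-in data (no real portfolio or model output), the return-period comparison might look like this:

```python
import random

random.seed(42)

# Stand-ins for real data: 10,000 simulated annual maximum losses from
# a catastrophe model, and 25 years of historical annual maxima already
# adjusted for inflation and exposure growth (the adjustments matter).
modeled = sorted(random.lognormvariate(16.0, 1.2) for _ in range(10_000))
observed = sorted(random.lognormvariate(16.2, 1.0) for _ in range(25))

def return_period_loss(sorted_losses, years):
    """Empirical loss exceeded on average once every `years` years."""
    idx = int((1 - 1 / years) * (len(sorted_losses) - 1))
    return sorted_losses[idx]

# Compare modeled vs. observed losses at common return periods; large
# gaps flag where the model needs customization.
for rp in (2, 5, 10, 25):
    m = return_period_loss(modeled, rp)
    o = return_period_loss(observed, rp)
    print(f"{rp:>3}-yr return period: modeled {m:14,.0f} vs observed {o:14,.0f}")
```

In practice, the same comparison would be run against both the insurer's own claims and industry loss databases, on occurrence and aggregate bases.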

A micro-level review of model assumptions and shortcomings can further narrow the odds of a “shock” loss. As such, it is critical to precisely identify risks’ physical locations and characteristics, as loss estimates may vary widely within a short distance – especially for flood, where elevation is an important factor. When a model’s geocoding engine or a national address database cannot assign location, there are several disaggregation methodologies available, but each produces different loss estimates. European companies will need to be particularly careful regarding data quality and integrity as the new General Data Protection Regulation, which may mean less specific location data is collected, takes effect.

Equally important as location are a risk’s physical characteristics, as a model will estimate a range of possibilities without this information. If the assumption regarding year of construction, for example, differs materially from the insurer’s actual distribution, modeled losses for risks with unknown construction years may be under- or overestimated. The exhibit below illustrates the difference between an insurer’s actual data and a model’s assumed year-of-construction distribution based on regional census data in Portugal. In this case, the model assumes an older distribution than the actual data shows, so losses on risks with unknown construction years may be overstated.
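A minimal sketch of that comparison, with hypothetical distributions and made-up relative damage factors standing in for the Portugal exhibit, shows how an assumed-older building stock inflates losses on unknown-year risks:

```python
# Hypothetical shares by construction era (each distribution sums to 1).
# The model assumes an older stock than the insurer's actual book.
model_assumed  = {"pre-1960": 0.40, "1960-1990": 0.40, "post-1990": 0.20}
insurer_actual = {"pre-1960": 0.20, "1960-1990": 0.45, "post-1990": 0.35}

# Illustrative relative damage factors: older construction is assumed
# more vulnerable. These numbers are invented for the sketch.
damage_factor = {"pre-1960": 1.5, "1960-1990": 1.0, "post-1990": 0.7}

def expected_damage(distribution):
    """Damage factor averaged over the construction-era mix."""
    return sum(share * damage_factor[era] for era, share in distribution.items())

bias = expected_damage(model_assumed) / expected_damage(insurer_actual) - 1
print(f"Losses on unknown-year risks overstated by roughly {bias:.0%}")
```

Under these assumptions the model's older mix overstates expected damage by about 15%, which is the kind of gap the micro-level review is meant to surface.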

There is also no database of agreed property, contents or business interruption valuations, so if a model’s assumed valuations are under- or overstated, the damage function may be inflated or diminished to balance to historical industry losses.

See also: How to Vastly Improve Catastrophe Modeling  

Finally, companies must also adjust “off-the-shelf” models for missing components. Examples include overlooked exposures like a detached garage; new underwriting guidelines, policy wordings or regulations; or the treatment of sub-perils, such as a tsunami resulting from an earthquake. Loss adjustment difficulties are also not always adequately addressed in models. Loss leakage – such as when adjusters cannot separate covered wind loss from excluded storm surge loss – can inflate results, and complex events can drive higher labor and material costs or unusual delays. Users must also consider the cascading impact of failed risk mitigation measures, such as the malfunction of cooling generators in the Fukushima nuclear power plant after the Tohoku earthquake.

If an insurer performs regular, macro-level analyses of its model, validating estimated losses against historical experience and new views of risk, while also supplementing missing or inadequate micro-level components appropriately, it can construct a more resilient modeling strategy that minimizes the possibility of model failure and maximizes opportunities for profitable growth.

The views expressed herein are solely those of the author and do not reflect the views of Guy Carpenter & Company, LLC, its officers, managers, or employees.

This article was originally published on Brink.

Do Brokers, Agents Owe Fiduciary Duty?

Insurers, insureds and even their attorneys frequently incorrectly assume that insurance agents and brokers owe fiduciary duties to their insureds. While the law is not completely clear regarding the applicability of agency principles and fiduciary duties in this area, legal precedent can offer some guidance on the issue.

Currently, there is no appellate precedent permitting an insured to sue its agent or broker under a common law action for breach of fiduciary duty. However, California courts have so far been unwilling to rule that a cause of action based on common law agency principles is completely inapplicable to brokers and agents.

Demonstrative of the court’s unwillingness to create a bright-line rule is the heavily litigated case of Workmen’s Auto Insurance Company v. Guy Carpenter & Co., Inc. In 2011, the court of appeal in Workmen’s answered the question regarding fiduciary duties of brokers and agents definitively in the negative, ruling that “an insurance broker cannot be sued for breach of fiduciary duty.” The ruling finally provided the guidance and rule necessary to put the issue to rest. However, the relief was short-lived; in 2012, after a rehearing that affirmed the court’s ruling, the opinion was vacated and depublished, again leaving the law in this area without any clear precedent to follow. After rehearing, the court deleted the quotation stating the new rule from its summary of opinion, instead stating “these authorities do not close the door on fiduciary duty claims against insurance brokers.”

Prior to Workmen’s, several cases made steps toward supporting the idea that no fiduciary duty is owed. In Kotlar v. Hartford Fire Insurance Company, the court held that an insurance broker need only use reasonable care to represent its client and declined to apply a higher standard such as that applied to an attorney. The court found that the broker’s duties are defined by negligence law, not fiduciary law. In Hydro-Mill Company, Inc. v. Hayward, Tilton & Rolapp Insurance Associates, Inc., the court expanded on Kotlar, finding that the standard of professional negligence applied, but refused to recognize a separate cause of action for breach of fiduciary duty against the insurance broker.

The California Supreme Court previously held in Vu v. Prudential Property & Casualty Insurance Company that the insurer-insured relationship “is not a true ‘fiduciary relationship’ in the same sense as the relationship between trustee and beneficiary, or attorney and client.” The court went on to state that any special or additional duties applicable to the broker or agent were only the result of the unique nature of the insurance contract, and “not because the insurer is a fiduciary.” The court in Hydro-Mill applied the concept in Vu, finding that if an insurer does not owe fiduciary duties, then a broker and agent could not.

In Jones v. Grewe, an insured sued its broker for misrepresenting coverage, negligence and breach of fiduciary duty. The court declined to recognize the cause of action for breach of fiduciary duty, holding that the broker had only the obligation to use reasonable care and no fiduciary duty absent an express agreement or a holding out otherwise.

Despite the fact that it appears from these precedents that the courts are unwilling to find a fiduciary duty exists, the California Supreme Court in Liodas v. Sahadi ruled that the existence of a fiduciary relationship could not be ruled upon, as the issue is not one of law, but of fact. This rule appears to be restated in the Workmen’s unpublished opinion.

While it is clear the courts are hesitant to find a fiduciary duty is owed by agents and brokers to their insured, the fact remains that it is still a possibility and, under the right circumstances and facts, could be found. Thus, it is important that agents and brokers not only use reasonable care when procuring insurance, but that they do not hold themselves out as being a fiduciary, or expressly agree to the existence of a fiduciary relationship. So long as agents and brokers do not create circumstances in which a fiduciary relationship is agreed to or implied, the law will infer no such relationship exists.

5 Innovations in Microinsurance

Earlier this year, a group of eight leading insurers and brokers established a consortium to promote microinsurance ventures in developing countries, unsurprisingly called Microinsurance Venture Incubator (MVI). Together, AIG, Aspen Insurance, XL Catlin, Guy Carpenter, Marsh & McLennan, Hamilton Insurance, Transatlantic Reinsurance and Zurich plan to launch 10 microinsurance ventures over the next 10 years.

While conventional insurance targets middle- to high-income urban dwellers, microinsurance targets rural residents living on the edge of poverty. Most popular are microinsurance products that offer life, health, accident or property insurance.

However, to really be the “can-do” coverage for the poor, it is not enough for microinsurance to be affordable and accessible; it also has to be tailored to the unique environment in which it is being offered. After all, context is king.

So with the context of “poor people deserve innovation too,” here are five examples of innovative microinsurance schemes that target different risk pools:

1. The Use of Technology to Combat Fraud

Insurers providing livestock insurance in India have been struggling with high claims ratios, mostly because of fraud. Typically, to get coverage, a veterinarian would place an external plastic tag on the animal’s ear as an indication that that specific animal is insured. However, there were effectively no controls in place, and insurers learned that these plastic tags somehow made their way onto dead cattle far too frequently.

Nowadays, India’s IFFCO-Tokio (ITGI) insurance company is using radio frequency identification (RFID) chips that are injected under the skin of the animal (which is less painful than tagging!). These chips are accessible through a reader, which allows an insurance official to easily verify that the RFID reading coincides with the identification number on the policy when a farmer reports a claim. This results in fewer fraudulent cases and faster claim processing.

It would be almost a fairy-tale ending were it not for the high price of these microchips. Nonetheless, ITGI is using a combination of external plastic tags and RFID chips to control its costs while still preventing excessive fraud. It’s working.

2. Forming Index-Based Insurance to Build Trust

Another promising innovation is index-based insurance, where an external indicator triggers payments to clients rather than the traditional “I’m calling to report a claim.”

Kilimo Salama, AKA Safe Farming, combines a mobile phone payment system with solar-powered weather stations to offer farmers in Kenya “pay as you plant” insurance.

Here’s how it works:

  • A farmer goes to an approved dealer and buys a bag of fertilizer, paying 5% extra for climate coverage.
  • The dealer scans a special bar code, which immediately registers the policy with the insurance provider and sends a text message confirming the insurance policy to the farmer’s mobile phone.
  • When data transmitted from a particular weather station indicates drought or other extreme condition is taking place, the farmer registered with that station automatically receives payouts via a mobile money transfer service.
  • Similarly, a more recent entrant called ClimateSecure says it will “work hand-in-hand with [its] clients, meteorologists, financial experts and other brokers in order to build indexes that most accurately reflect [their] clients’ risk.”
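The payout step in that flow can be sketched as a simple trigger rule. The drought threshold and the linear payout scale below are invented for illustration; they are not Kilimo Salama’s actual terms.

```python
# Hypothetical index-insurance trigger: payouts depend only on weather
# station readings, never on a filed claim. All parameters are made up.
DROUGHT_THRESHOLD_MM = 50  # seasonal rainfall below this triggers a payout

def payout_due(station_rainfall_mm: float, sum_insured: float) -> float:
    """Payout owed to one farmer registered with this weather station."""
    if station_rainfall_mm >= DROUGHT_THRESHOLD_MM:
        return 0.0
    # Scale the payout linearly with the rainfall shortfall.
    shortfall = (DROUGHT_THRESHOLD_MM - station_rainfall_mm) / DROUGHT_THRESHOLD_MM
    return round(sum_insured * shortfall, 2)

print(payout_due(80, 100.0))  # normal season -> 0.0
print(payout_due(20, 100.0))  # severe shortfall -> 60.0
```

Because the index, not a loss adjuster, decides the payout, this design removes the claims process that low-income farmers distrust, at the cost of basis risk when a farmer's actual loss differs from what the station measured.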

3. Targeting the Cash Poor by Relaxing Liquidity Constraints

In China, pork accounts for roughly 48% of livestock production, with most pigs raised in small numbers by rural families in their backyards, exposing Chinese hog farmers to the risk of hog diseases. Yet, despite the obvious benefits of microinsurance products, demand is still low because of cash constraints and a lack of trust in insurance providers.

Yet a pig insurance scheme that offered credit vouchers, allowing farmers to take up insurance while delaying the premium payment until the end of the insured period (coinciding with when pigs are sold), saw insurance premiums go up by 11%.

By the same token, telecommunications companies embed insurance premiums in their service contracts, with the advantage of offering (oftentimes free) coverage as part of a pre-existing plan. In Africa, for instance, free insurance is linked to phone airtime usage; the more airtime one buys, the more coverage one gets.

4. Product Bundling to Attract Customers

The 2014 winner of the prestigious Hult Prize, NanoHealth, is a social enterprise that not only offers microinsurance but also tackles chronic diseases by providing door-to-door diagnostics via its network of community health workers, which it equips with a low-cost point-of-care device called Doc-in-a-Bag. This startup is slowly but surely creating India’s largest slum-based electronic medical record system and disease landscape map.

5. Coverage Within Reach via Garbage In, Coverage Out

Forget bitcoin: garbage is the new currency at Indonesian startup Garbage Clinical Insurance (GCI), founded by a 26-year-old doctor named Gamal Albinsaid. Through GCI, community residents are encouraged to recycle and get healthcare coverage at the same time, because trash is converted into funds that can later be used to pay for medical insurance.

In sum, in this micro world of microinsurance, where only 260 million of the world’s low-income citizens are covered, words like big data and claim history could not matter less. What matters is how quickly an insurer can scale, how low its margins can go and how clearly it can communicate its offering to the low-income farmer, all in the name of for-profit social enterprise.

Expect more entrants.