Tag Archives: Risk modeling

How Machine Learning Transforms Insurance

We like our insurance carriers to be risk-averse. So it should come as no surprise they are often last to innovate. Insurers need to feel very comfortable with their risk predictions before making a change. Well, machine learning is writing a new chapter in the old insurance book. There are three key reasons why this is happening now:

  1. New insurtech players are grabbing market share and setting new standards. Traditional carriers have no choice but to follow suit.
  2. Customers are expecting Netflix/Spotify-like personalization, and have no problem changing providers — this trend is expected to grow as we see more millennials maturing out of their parents’ policies.
  3. Getting started with machine learning is becoming VERY easy because of open source frameworks, accelerated hardware, pre-trained models available via APIs, validated algorithms and an explosion of online training.

As with any innovation, it only takes two things for widespread adoption:

  1. Potential to improve business goals.
  2. Ease of establishing pilots.

With time, we see that successful pilots become products. Teams are hired/trained, resources are allocated, business goals gain more “appetite” and models are tweaked.

For P&C carriers, we see the opportunity to improve business goals and easily pilot machine learning in the following areas:

Risk Modeling

Given the complex and behavioral nature of risk factors, machine learning is ideal for predicting risk. The challenge lies in regulatory oversight and the fact that most historical data is still unstructured. This is why we often see machine learning applied to new products such as those using data from IoT sensors in cars (telematics) and homes (connected home). But innovative carriers are not limited to these. They use pre-trained machine learning models to structure their piles of unstructured data: transcription APIs coupled with natural language understanding (NLU) extract features from recorded call-center calls, and handwriting recognition with NLP/NLU tools does the same for written records, ultimately enabling the discovery of new risk factors through unsupervised learning models.

See also: 4 Ways Machine Learning Can Help  

Underwriting

Carriers can get an actuarial lift even without designing and filing new actuarial models, by using machine learning to better predict risk factors in existing (filed) models. For example, a carrier may have already filed a mileage-based rate plan for auto insurance but rely on user-reported or otherwise inaccurate estimates to determine mileage. Machine learning can help predict mileage driven in a less biased, more accurate way. Similarly, APIs to pre-trained chatbots using lifelike speech and translators can turn website underwriting forms into more engaging and personalized chats that have a good chance of reducing soft fraud.

Claims Handling

Claims handling is a time-intensive task, often involving manual work by claims adjusters onsite. Innovative carriers already have policyholders take pictures and videos of their damaged assets (home, car…) and compare them with baseline or similar assets. Carriers could easily leverage existing APIs for image processing, coupled with bot APIs, to build a high-precision model, even at the expense of low recall. Compared with having 100% of the book handled manually, a triage bot that automates even a mere 20% of claims (with high precision) lets carriers start with a low-risk service that's on par with new insurtech players and improve ratios over time. Such a tool can even be used by adjusters themselves, reducing their time and cost.
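The precision-over-recall trade-off described above boils down to a confidence threshold. This is a minimal sketch, assuming a hypothetical image model that returns a confidence score per claim; the threshold value is illustrative:

```python
# Tuned high on purpose: the bot auto-handles only claims it is very sure
# about (high precision) and routes the rest to adjusters (low recall).
AUTO_APPROVE_THRESHOLD = 0.95

def triage(claims):
    """Split (claim_id, confidence) pairs into auto-handled and manual queues."""
    auto, manual = [], []
    for claim_id, confidence in claims:
        if confidence >= AUTO_APPROVE_THRESHOLD:
            auto.append(claim_id)
        else:
            manual.append(claim_id)
    return auto, manual

# Example scores as they might come back from an image-processing API
scores = [("C1", 0.99), ("C2", 0.40), ("C3", 0.97), ("C4", 0.80)]
auto, manual = triage(scores)
```

Raising the threshold shrinks the automated share of the book but keeps the error rate of the automated slice low, which is exactly the low-risk starting point the article describes.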

Coverages

While personalized pricing may be regulator-challenged, personalizing the insurance product offering is expected in this Netflix/Spotify age. As basic coverage is commoditizing, carriers differentiate their products based on riders and value-added services, not to mention full product offerings based on life events. Carriers can (with consent, of course) leverage social media data to tailor and personalize the offering. Similarly, marketing departments can use readily available recommendation algorithms to match and promote content about the benefits of certain riders/value-adds to relevant customers at the relevant time.

Distribution

The world of insurance distribution is growing in complexity. Carriers are struggling with the growing power of intermediaries, and agents are having a hard time optimizing their efforts due to the lack of predictability of commissions. Point-of-sale and affiliation programs are growing, and with them the need for new distribution incentive models. Both traditional and new distribution channels could benefit from machine learning. Brokers, point-of-sale partners and carriers can leverage readily available machine learning models and algorithms designed for retail to forecast channel premiums. Carriers can grow direct channels without growing headcount, using pre-trained chatbots, NLU and lifelike speech APIs.

See also: Machine Learning: A New Force  

Machine learning is part of our everyday lives. Innovative insurers are now jumping on the ML wagon with ever-growing ease; which carriers will be left behind?

Insurtech Innovator – HazardHub

“HazardHub combined risk data information for every property in the U.S. with cutting edge technology so we can deliver risks for any property, across the country, with an API for lightning speed, giving unprecedented levels of insight about properties in the U.S.,” says Bob Frady, CEO of HazardHub.


How to Apply ERM to Cyber Risks

The advent of new technologies has enabled risk stakeholders to perform enhanced data analytics to gain more insights into the customer, risk assessment, financial risk management and quantification of operational risk.

Companies manage many risks aligned to their risk profile and risk appetite. They do so through risk awareness and risk assessment. Visionaries and early adopters do so dynamically, using mathematics (stochastic or actuarial) and simulations based on historical loss data to correlate all the risks of the enterprise into one holistic view. Factors to consider include:

Cyber risk. Operational risk affects every organization on an equal basis and is often quantified as a percentage of gross written premiums. Cyber risks are no different from any other risk in terms of risk management and risk transfer. However, IT departments, even with the best of intentions, can increase cyber risk through their strategy, and there is no silver bullet to protect the company. Keyless signature infrastructure (KSI) enables companies to plan data-breach strategies in which systems administrators are no longer involved in the security process. This will bring great comfort to risk managers who see new technology being introduced that would otherwise increase cyber risk.

Risk mitigation. Insurance and reinsurance are not alternatives to enterprise risk management (ERM). Risk transfer programs should be used to address structural residual risk. From EY’s experience, companies can identify risks and adopt leading practices to ease the process of finding the right cover at the right price — with the correct reinsurance optimization. The insurance industry should insist upon this enterprise level of risk mitigation before it issues cover for large risks and data breaches.

Risk modeling. The exercise in Figure 1 uses a robust industrial risk modeling tool to look at cyber risk. The red is the tail value at risk (TVAR) and the area that needs to be mitigated by risk transfer mechanisms. Reinsurance, the most obvious mechanism, is not the replacement for leading-practice risk management. The assumption is that data integrity standards have already been adopted here, so we are looking at the residual risk mitigation following that implementation.

[Figure 1: cyber loss distributions before and after reinsurance]

The bottom graph shows the situation prior to reinsurance, where small claims are aggregated and a long tail cuts into the company’s risk-based capital limits. The top graph shows a leaner risk situation after the application of reinsurance, bringing the exposure back into the comfort zone.

The standard deviation process will depend on how the regulator views cyber risk and solvency. Currently, solvency models are geared on average to a 1-in-200-year event, which may be suitable for earthquake and other peril risks but is likely to be different for cyber risks and to vary by country risk appetite.

Other risk transfer mechanisms. In addition to reinsurance, cyber captives are used to address continuing risk. A point worth noting is the potential to mathematically create a “cyber index,” in the same manner that weather and stock market indices appear in macroeconomic models, representing the correlation of market risk exposure to other enterprise risks. This cyber index could be built from the data patterns of cyber catastrophe models and other data, then used as a threshold to trigger a data-breach claims process following notification of a breach.
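The index-as-trigger idea reduces to a parametric condition. This is a minimal sketch with a hypothetical index scale and threshold; real triggers would be contractually defined:

```python
# Hypothetical: a claims process opens only when (a) a breach has been
# notified and (b) the cyber index breaches a pre-agreed threshold.
CYBER_INDEX_TRIGGER = 70.0

def claim_triggered(index_reading: float, breach_notified: bool) -> bool:
    """Parametric trigger: both conditions must hold to open the claims process."""
    return breach_notified and index_reading >= CYBER_INDEX_TRIGGER
```

Because payout depends on an observable index rather than a loss adjustment, such triggers settle quickly, which is the appeal of parametric structures for cyber.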

Special-purpose vehicles (SPVs). This risk transfer approach is used in conjunction with capital market investors and sponsors, and it is similar to the catastrophe bond investments that protect countries from earthquake risk. It creates a bond shared by government and private industry to pay and share claims by loss bands in the event of a large or black-swan event. While these partnerships are very effective, such bonds often have a 10-year span, and a shorter-lived vehicle would be more suitable for cyber.

Sidecars. For natural catastrophes, these two-year vehicles have been referred to as sidecars, an SPV derivative of a captive where investors invest in a risk via A-rated hedge funds. If the event has not taken place within a given time frame, investors receive their money back with interest. This makes cyber risk part of an uncorrelated portfolio investment for chief investment officers. They can also base investment on the severity level of the attack, so investments are not lost on all events.

It will take time for this SPV approach to evolve over reinsurance and captives, but with good data quality, proper event models, ratings and adoption of KSI and other standards in the IT space, the capability to use capital markets to risk-transfer cyber risks will emerge. Data integrity standards would increase investor confidence in such SPVs.

For the full report on which this article is based, click here.