
Competing in an Age of Data Symmetry

For centuries, people have lived in a world where data was largely proprietary, creating asymmetry. Some had it. Others did not. Information was a currency. Some organizations held it, and profited from it. We are now entering an era of tremendous data balance — a period of data symmetry that will rewrite how companies differentiate themselves.

The factors that move the world toward data symmetry are time, markets, investment and disruption.

Consider maps and the data they contained. Not long ago, paper maps, travel books and documentaries offered the very best views of geographic locations. Today, Google allows us to cruise nearly any street in America and get a 360° view of homes, businesses and scenery. Electronic devices guide us along the roadways and calculate our ETA. A long-established map company such as Rand McNally now has to compete with GPS up-and-comers, selling “simple apps” with the same information. They all have access to the same data. When it comes to the symmetry of geographic data, the Earth is once again flat.

Data symmetry is rewriting business rules across industries and markets every day. Insurance is just one industry where it is on the rise. For insurers to overcome the new equality of data access, they will need to understand both how data is becoming symmetrical and how they can re-envision their uniqueness in the market.

It will be helpful to first understand how data is moving from asymmetrical to symmetrical.

Let’s use claims as an example. Until now, the insurer’s best claims data was found in its own stockpile of claims history and demographics. An insurer that was adept at managing this data and applying actuarial science would find itself in a better position to assess risk. Competitively, it could rise to the top of the pack by pricing risks appropriately and acquiring the right business.

Today, all of that information is still very relevant. However, even without it, an insurer can now draw on a flood of data streams coming from other sources. Risk assessment is no longer confined to historical data, nor is it confined to answers to questions and personal reports. Risk data can be found in sources as simple as cell phone location data — an example of digital exhaust.

Digital exhaust as a source of symmetry

Digital exhaust is the data trail that all of us leave on the digital landscape. Recently, the New York City Housing Authority wished to determine whether the “named” renter was the one actually living in a rent-controlled apartment. A search of cell phone tower location records, cross-referenced with the renter’s information, was able to establish whether the named renter actually occupied the apartment. That is just one example of digital exhaust data being used as a verification tool.

Another example can be found in Google’s Waze app. Because I use Waze, Google now holds my complete driving history — a telematics treasure trove of locations, habits, schedules and preferences. The permissions language allows Waze to access my calendars and contacts. With all of this, in conjunction with other Google data sets, Google can create a fairly complete picture of me. This, too, is digital exhaust. As auto insurers are proving each day, cell phone data may be more informative for proper pricing than prior claims history. How long until auto insurers begin to look at location risk, such as too much time spent in a bar or frequent driving through high-crime ZIP codes? If ZIP codes matter for where a car is parked each night, why wouldn’t they matter for where it spends the day?

Data aggregators as a source of symmetry

In addition to digital exhaust, data aggregators and scoring services are also flattening the playing field and bringing data symmetry to markets. Mortgage lenders are a good example from outside the industry. Most mortgage lenders pay far more attention to comprehensive credit scores than to an individual’s performance within their own lending operation. The outside data matters more than the inside data, because the outside data gives a more complete picture of the risk, compiled from a greater number of sources.

Within insurance, we can find a dozen or more ways that data acquisition, consolidation and scoring are bringing data symmetry to the industry. Quest Diagnostics supplies scored medical histories and pharmaceutical data to any life insurer willing to pay for it. RMS, AIR Worldwide, EQECAT and others turn meteorological and geographical data into shared risk models for P&C insurers.

That kind of data transformation can happen in nearly any stream of data. Motor vehicle records are scored by several agencies. Health data streams could also be scored for life and health insurers. Individual scores could be automatically evaluated and combined into overall composite scores. Insurers could simply dial up or dial down their acceptance based on their risk tolerance and pricing. Data doesn’t seem to stay hidden. It has value. It wants to be collected, sold and used.
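To make the mechanics concrete, here is a minimal sketch of how composite scoring and a risk-tolerance dial might work. The score names, weights and threshold are hypothetical assumptions for illustration, not any aggregator’s actual product or API.

```python
# Hypothetical sketch: combining third-party risk scores into one composite
# score and applying an insurer-defined risk tolerance ("dial").
from dataclasses import dataclass

@dataclass
class ApplicantScores:
    mvr: float     # motor vehicle record score, 0 (best) to 100 (worst) -- assumed scale
    health: float  # scored medical/pharmaceutical history, same assumed scale
    credit: float  # credit-based insurance score, inverted onto the same scale

# Weights reflect the insurer's view of each stream's predictive value (assumed).
WEIGHTS = {"mvr": 0.5, "health": 0.3, "credit": 0.2}

def composite_score(s: ApplicantScores) -> float:
    """Weighted blend of the individual scores."""
    return WEIGHTS["mvr"] * s.mvr + WEIGHTS["health"] * s.health + WEIGHTS["credit"] * s.credit

def decision(s: ApplicantScores, risk_tolerance: float) -> str:
    """Dial risk_tolerance up to accept more risk, down to accept less."""
    score = composite_score(s)
    return ("accept" if score <= risk_tolerance else "refer or decline") + f" (composite {score:.1f})"

print(decision(ApplicantScores(mvr=22, health=35, credit=18), risk_tolerance=40))
```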

Consider all the data sources I will soon be able to tap into without asking any questions (assuming I have the necessary permissions, and barring changes in regulation):

  • Real-time driving behavior.
  • Travel information.
  • Retail purchases and preferences.
  • Mobile statistics.
  • Exercise or motion metrics.
  • Household or company (internal) data coming from connected devices.
  • Household or company (external) data coming from geographic databases.

These data doors, once opened, will be opened for all. They are opening on personal lines first, but they will open on commercial lines, as well.

Now that we have established that data symmetry is real, and we see how it will place pressure upon insurers, it makes sense to look at how insurers will use data and other devices to differentiate themselves. In Part 2 of this blog, we’ll look at how this shift in data symmetry is forcing insurers to ask new questions. Are there ways they can expand their use of current data? Are there additional data streams that may be untapped? What does the organization have or do that is unique? The goal is for insurers to innovate around areas of differentiation. This will help them rise above the symmetry, embracing data’s availability to re-envision their uniqueness.

How Connected Cars Will Change Claims

There is a long road ahead before the full potential of telematics is reached, but, from an international perspective, it is clear that the Italian market has already accumulated the greatest experience in using telematics within the auto insurance value chain. One of the key characteristics of the Italian experience is the capacity of certain companies to innovate the way they handle claims, thanks to the data collected from the black box.

The benefits of telematics data for handling claims are significant and can be divided into three main categories: a proactive approach, objective information and loss prevention and mitigation.

First, telematics offers insurance companies the unique opportunity to assume an active role that starts immediately after the incident. Traditionally, the company would wait to hear from the insured person that a crash had occurred.

Based on my experience, one aspect that turns out to be key to setting up the telematics approach is getting real-time data about the incident to the people in charge of claims management; usually, this information reaches only the insurance company’s assistance department. This data is crucial for two subsequent processes:

  1. Provide a great customer experience after the crash. Think of how much information can be gathered directly from telematics data without having to ask the client for it. The whole experience delivered to the customer when interacting with the company is becoming more and more important; recent net promoter score studies show that the economic value of a “promoter client” is more than two times higher than that of a “detractor.”
  2. Activate claims management earlier. For example, the insurer can guide the client toward preferred auto repair centers right after the accident, as sketched below. This maximizes the capacity to achieve savings while delivering an optimized customer experience that actually solves the customer’s issues.
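As a rough illustration of that proactive flow, here is a minimal sketch of how a black-box crash event could be routed straight into claims: pre-filling a first notice of loss and suggesting a preferred repair center. The event fields, the severity threshold and the naive distance calculation are assumptions for the example, not any carrier’s actual system.

```python
# Hypothetical sketch: turning a telematics crash event into a pre-filled
# first notice of loss (FNOL) and a repair-center suggestion.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CrashEvent:
    policy_id: str
    timestamp: datetime
    latitude: float
    longitude: float
    peak_g: float              # peak deceleration reported by the black box

SEVERE_G_THRESHOLD = 4.0       # assumed cut-off between minor and severe impacts

def handle_crash(event: CrashEvent, repair_centers: list[dict]) -> dict:
    """Build an FNOL record and pick the closest preferred repair center."""
    nearest = min(
        repair_centers,
        key=lambda c: (c["lat"] - event.latitude) ** 2 + (c["lon"] - event.longitude) ** 2,
    )
    return {
        "policy_id": event.policy_id,
        "occurred_at": event.timestamp.isoformat(),
        "location": (event.latitude, event.longitude),
        "severity": "severe" if event.peak_g >= SEVERE_G_THRESHOLD else "minor",
        "suggested_repair_center": nearest["name"],
    }

event = CrashEvent("POL-123", datetime.now(), 45.4642, 9.1900, peak_g=2.3)
print(handle_crash(event, [{"name": "Officina Rossi", "lat": 45.47, "lon": 9.18}]))
```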

Second, telematics makes it possible to gather a structured set of objective data that can improve the understanding of the dynamics of the claim. The data can also provide an estimate of the damage. This information improves decision making throughout the claims management process. It also helps the claims manager decide whether detailed follow-up (such as additional inspections) is needed, which further reduces the time required. The information extracted from telematics data is the main factor improving the efficiency and effectiveness of the settlement (liquidation) process. Last but not least, this information is highly valuable from a legal point of view.

These two characteristics combined allow a significant reduction in the time spent managing the different phases of the claims process, time that has proven to be directly related to the amount the company pays. Taking the knowledge supplied by telematics about the dynamics of the claims event (in the case of minor damage) and combining it with the final claim cost by car brand and model will allow the company to make a settlement proposal just a few hours after the crash. On the one hand, there is a clear benefit in terms of costs; on the other, there is a significant improvement in the driver’s experience.
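A simplified sketch of that idea follows: an early settlement estimate for minor damage, built from the crash dynamics plus an assumed table of average repair costs by brand and model. All figures are invented for illustration.

```python
# Illustrative only: estimating an early settlement proposal for a minor claim
# from crash severity plus assumed average repair costs by brand and model.
AVG_REPAIR_COST = {               # assumed historical averages, per (brand, model)
    ("Fiat", "500"): 1800.0,
    ("Volkswagen", "Golf"): 2400.0,
}

SEVERE_G_THRESHOLD = 4.0          # severe crashes go to a human adjuster

def early_settlement(brand: str, model: str, peak_g: float):
    """Return a proposal for minor impacts, or None when an adjuster is needed."""
    base = AVG_REPAIR_COST.get((brand, model))
    if base is None or peak_g >= SEVERE_G_THRESHOLD:
        return None
    # Scale the historical average by impact severity as a crude proxy.
    return round(base * min(1.0, peak_g / SEVERE_G_THRESHOLD), 2)

print(early_settlement("Fiat", "500", peak_g=2.1))   # 945.0 in this toy example
```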

Third, loss prevention and mitigation was the first area explored when telematics pilot projects began in Italy, with the focus on recovering stolen vehicles. Big data analysis has enhanced this capacity by allowing the automatic identification (based on data received from the telematics device) of a driving style that differs from that of the car’s owner.

This mitigating capacity no longer applies only to professionally installed solutions. It has now partially extended to the new self-installed solutions: The act of uninstalling the device activates an alert. Similarly, value-added services can mitigate the risks linked to the driver and his car. For example, weather-condition alerts or vehicle maintenance notifications can help influence client behavior and lead to a lower risk rate for the driver.

3 Game Changers — and How to Survive

The follow-the-leader principle works on a trail that has proven to be relatively safe from perils and predators. However, when new frontiers are breached, a new kind of leadership is required for survival.

Insurers have generally been able to just follow the leader for ages, but now a new frontier has been breached. The insurance industry is vulnerable to three game changers that consumers are eager to embrace.

Drawing on remarks I made recently at a keynote for the National Association of Mutual Insurance Companies Annual Conference, here are the game changers:

The first big disrupter is data collection. Insurance is built on the principle of using accurate data and statistics to build underwriting financial models that serve to predict behavior and events from an actuarial or probability standpoint. London’s Edward Lloyd figured this out when he opened his coffee shop in 1688, and people started selling insurance to merchants and ship owners. His motto was fidentia, Latin for confidence. We now refer to “confidence factors” when estimating future losses.

Insurers have been notorious for using forms to collect data. But, today, a person is subjected to more new information in one day than a person in the Middle Ages saw in his entire life. If modern competitors to the insurance industry can obtain more accurate data in a faster and more in-depth manner, they may beat insurers at their own game.

With cloud computing and its vast data storage and retrieval capability, trillions of bits of information relating to insureds are available. Data sources capture profile patterns ranging from personal Internet searches to satellite surveillance data. Relevant data can be mined and analyzed to build a risk model for every insurable consumer or business peril, from property and vehicle insurance to earthquake and weather insurance.

The five biggest data collectors on the planet are Google, Apple, Facebook, Yahoo and Amazon. These high-tech companies have the ability, financial resources and potential desire to foray into the insurance industry. Keep in mind that in 2014 the world’s top 10 insurers received $1.2 trillion in revenue, yet surveys have shown that people around the world have grown to use and trust the products and services provided by the five biggest data collectors.

Accessibility and familiarity are allowing profitable new brands to replace old brands. Consumers also prefer and use third-party validation and independent comparisons found on websites.

What does this spell for the insurance industry? Sadly for insurers, consumers have grown less comfortable relying on and interacting with agents. John Maynard Keynes once said: “The difficulty lies not so much in developing new ideas, as in escaping from old ones.”

The second emerging threat to insurance is botsourcing — the replacement of human jobs by robotics. The robots haven’t just hatched in agriculture or auto assembly plants — they’re expanding in a variety of skills, moving up the corporate ladder, showing awesome productivity and retention rates and increasingly shoving aside their human counterparts.

Google won a patent recently to start building worker robots with personalities. Move over, Siri.

Author and entrepreneur Martin Ford, in his book Rise of the Robots, argues that artificial intelligence (AI) and robotics will soon overhaul our economy. Increasingly, machines will be able to take care of themselves, and fewer jobs will be necessary.

Reassessment of the way we employ our workforce is essential to cope with this new industrial revolution. The lucrative insurance realm of personal and product liability insurance lines and workers’ comp is being tempered as human risk factors — especially in high-risk areas — give way to robotics. The saying goes: “Management is doing things right, but leadership is doing the right things.”

How will the insurance industry react to the accelerating technology of botsourcing?

The third emerging threat to the insurance industry, and the one that has received enormous attention this past year, is autonomous vehicles. More than a half-dozen carmakers, as well as Google and Uber, predict that self-driving vehicles will be commonplace on our roads between 2017 and 2020. Tesla Motors CEO and general future-tech proponent Elon Musk has predicted that human drivers could someday be outlawed. Humans cannot outperform an autonomous vehicle, which can assess and react to more than 7,000 driving threats per second. With no human at the wheel, there are no incidents of driver impairment, reckless driving, DUIs, road rage, driver texting, speeding or inattention.

With a plethora of electronic distractions, increased safety can only be achieved when human drivers are removed from the equation. Automakers have taken an incremental approach to safety in their current models. These new technologies are clever and helpful but do not remove the risks. There’s a phenomenon called the Peltzman Effect, based on research by an economist at the University of Chicago who studied auto accidents. He found that, when you introduce more safety features like seatbelts into cars, the number of fatalities and injuries doesn’t drop. The reason is that people compensate: when a safety net is in place, people naturally take more risks. Today, roughly 35,000 vehicle occupants die in U.S. auto accidents each year. Autonomous vehicles are expected to cut auto-related deaths and injuries by 80% or more.

One of the biggest revenue sources for insurers is vehicle insurance. As autonomous vehicles take over our roads and highways, insurers will need to address the many unanswered questions about the risk playing field. Who will own the vehicles? How can you assess the potential liability of software failure or cyberattacks? Will insurers still have a role? Where will legal liabilities fall? Who will lead the call to sort these issues out?

Clearly, the lucrative auto insurance market will change drastically. Insurance and reinsurance company leadership will be an essential ingredient to address this disruptive technology.

As I told the conference: Count on Insurance Thought Leadership to play a significant role in addressing these and other disruptive technologies facing the insurance industry. A Chinese proverb says: “Not the cry, but the flight of a wild duck, leads the flock to fly and follow.”

Why Flood Is the New Fire (Insurance)

In our past few posts on ITL, we have been exploring how insurers can continue to bring more private capacity to U.S. flood. (Note: Everything we discuss for U.S. flood is also relevant for Canadian flood.) We have explored here how the technology, data and analytics exist to handle flood in an adequately sophisticated manner, and we have described here the market opportunity that exists. Now it’s worth looking at how a flood program could be introduced, starting from scratch, moving through cherry-picking mischaracterized risks and arriving at a full, mass-market solution.

What’s a FIRM? It’s not what you think

First, let’s take a quick look at how National Flood Insurance Program (NFIP) rates are determined: the Flood Insurance Rate Maps, or FIRMs. For the NFIP, FIRMs solve two core problems – identifying which properties must have flood insurance and deciding how much to charge for it. The first function is for banks, giving them an easy answer for whether a property to be lent against requires flood insurance – this is what the Special Flood Hazard Area (SFHA) is for. Anything within the SFHA is deemed to be in a 100-year flood zone (basically, the A and V zones) and requires flood insurance for a mortgage. The second function sets the pricing and conditions for the NFIP to sell the actual policies. The complexity of solving these two problems should not be underestimated for a country of this size. But it must be remembered that a FIRM is a mapping device and not a risk model.

Considering that FIRMs are a mapping device built on a huge scale, it makes perfect sense that some generalizations had to be made in delineating the various flood zones. The banks needed a general guideline for when flood insurance was needed, and the NFIP needed rates distributed in a way that could produce a risk pool broad enough to generate enough premium to be solvent. While the SFHA has served the banks well enough over the years, the rating of properties has not been so successful. There are plenty of reasons the NFIP is deep in debt (see page 6 of this report); suffice it to say that the rates set by FIRMs do not result in a solvent NFIP.

Cherry-picking

The fact that the FIRMs are a flawed rating device based on geographical generalizations means there are cherries to be picked. By applying location-based flood risk analytics to properties in the SFHA, a carrier can begin to find where the NFIP has overrated the risk. Using risk assessments based on geospatial analysis (such as measurements to water) and its own data (such as NFIP claims history), a carrier can undercut the NFIP on specific properties where the risk fits its own appetite. Note to cherry-pickers: Ensure you account for the height above ground of the building, because you won’t need elevation certificates for this type of underwriting. So far, cherry-picking has been focused on the SFHA for a couple of reasons – homeowners need to have coverage, and the NFIP rates are the highest. There is no reason, though, that cherry-picking can’t be done effectively in X zones and beyond.
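For illustration, here is a simplified sketch of the cherry-picking logic: compare a toy expected-loss estimate (driven by distance to water and height above ground) with the NFIP premium and flag properties worth quoting. The formula, factors and target loss ratio are assumptions, not a real flood model.

```python
# Hypothetical sketch: flag SFHA properties where a location-based estimate
# suggests the NFIP premium overstates the flood risk.
from dataclasses import dataclass

@dataclass
class Property:
    address: str
    nfip_annual_premium: float
    distance_to_water_m: float
    height_above_ground_m: float   # first-floor height; no elevation certificate used

def modeled_annual_loss(p: Property) -> float:
    """Toy expected loss: risk falls with distance to water and with elevation."""
    base = 2000.0
    distance_factor = max(0.05, 1.0 - p.distance_to_water_m / 1000.0)
    elevation_factor = max(0.10, 1.0 - 0.3 * p.height_above_ground_m)
    return base * distance_factor * elevation_factor

def is_cherry(p: Property, target_loss_ratio: float = 0.6) -> bool:
    """Quote only if our premium at the target loss ratio still undercuts the NFIP."""
    our_premium = modeled_annual_loss(p) / target_loss_ratio
    return our_premium < p.nfip_annual_premium

p = Property("12 River Rd", nfip_annual_premium=2500.0,
             distance_to_water_m=600.0, height_above_ground_m=1.5)
print(is_cherry(p))   # True in this toy example
```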

Mass-market solution

The same data and analytics used for cherry-picking can be used more broadly to create a mass-market solution. By adjusting the dials on the flood risk analytics – and flood risk analytics really should be configurable – you can calibrate them to quantify the flood risk at low-risk locations as well. In other words, flood risk can be parsed into however many bins are needed to underwrite flood risk on any property in the country. With the risk segmented, rates can be defined that can (and should) be applied as a standard peril on all homeowners policies. Flood risk can be underwritten like fire risk.
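As a sketch of the “dials and bins” idea, the snippet below maps a configurable 0-to-1 flood risk score into rating bins and prices flood as a standard peril, with an appetite dial to shade rates up or down. The bin edges, factors and base rate are invented for the example.

```python
# Illustrative sketch: segmenting a flood risk score into bins and rating
# flood as a standard peril on a homeowners policy.
BIN_EDGES = [0.2, 0.4, 0.6, 0.8]           # score thresholds (assumed)
RATE_FACTORS = [0.3, 0.6, 1.0, 1.8, 3.5]   # one relativity per bin (assumed)
BASE_FLOOD_RATE = 250.0                    # annual premium for an average risk (assumed)

def flood_bin(score: float) -> int:
    """Map a 0-1 flood risk score to a rating bin; 0 is the lowest-risk bin."""
    for i, edge in enumerate(BIN_EDGES):
        if score < edge:
            return i
    return len(BIN_EDGES)

def flood_premium(score: float, appetite_dial: float = 1.0) -> float:
    """appetite_dial lets the insurer shade rates up or down across the book."""
    return round(BASE_FLOOD_RATE * RATE_FACTORS[flood_bin(score)] * appetite_dial, 2)

print(flood_premium(0.15))   # low-risk home:  75.0
print(flood_premium(0.85))   # high-risk home: 875.0
```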

Insurers have traditionally been confident underwriting fire risk. But consider this: While fire risk is rated on construction type, distance to fire hydrants and distance to the fire station, flood risk can be assessed with parameters that can be measured with similar confidence but with greater correlation to a potential loss.

Flood will be the new fire

Insurers have been satisfied to leave flood risk to the Feds, and that was prudent for generations. But technology has evolved, and enterprising carriers can now craft an underwriting strategy to put flood risk on their books. Fire was once considered too high-risk to underwrite consistently, but as confidence grew in how to manage the risk, it became a staple product of property insurers. Now, insurers are dipping their toes into flood risk. As others follow, confidence will grow, and flood will become the new fire.