Unless you’ve been living under a rock for the last few years, you’ve heard a LOT about big data. But if you’re like most insurance professionals, you didn’t go to school for computer science, and even though it sounds very cool, you really haven’t gotten your head around a simple question:
What the heck is big data? And how will it affect insurance?
For the last several years, the world has been creating more data than it ever had in the past. Some call it the digital exhaust:
Everything we do leaves a digital trail, and with a smartphone in every pocket, a laptop in every backpack and near-universal access to giant clusters of computers in the cloud, the sheer amount of data we can collect on everyone and everything has grown exponentially. Data grew so large that it no longer fit in the memory computers use to process it, so whole new tools had to be designed to handle it. We started creating and saving so much data that a qualitative change occurred: suddenly we could extract new insights and create new value simply because of the scale of the data we can access. Things are now possible that simply could not have been done at a smaller scale.
One of the key changes that happened is that we started recording everything in a digital rather than analog way (computers instead of paper).
Why Is Big Data a Big Deal for Insurance?
We don’t make widgets. We help people and businesses manage their risks and help pay for the losses when they happen, and all of this is based on information, not on arranging physical atoms in any way. It’s literally a pure information business.
See also: What Comes After Big Data?
For centuries, when faced with very large numbers of data points, society has depended on using samples. This applies even more to the insurance industry. Think back to CPCU 500; our ENTIRE business is based on the law of large numbers and on making statistically valid predictions about risk. (If you haven’t done your CPCU, stop reading this article right here and go get started on it! Here’s why, here’s how.) This will have huge implications for our industry.
Historically, we had to work with samples because it was very difficult or impossible to collect all of the data, and because we didn’t have tools that could work with gigantic sets of data. Having ALL the data related to something, instead of a sample of it, allows us to see much more detail. For example: In the old analog world, our actuaries figured out that 16- to 19-year-old drivers were more likely to have an auto accident, and this became a key part of how we price auto insurance. In the new digital world of big data, we might be able to analyze every second a young person has ever driven and set a personalized price for his very own level of risk! That rate will be much more accurate because it is based not on some of the general data (the accidents experienced by insured 16- to 19-year-olds) but rather on ALL the specific data (every second of driving this person has ever done).
By its very definition, actuarial science, which our entire business is built on, is “the discipline that applies mathematics and statistical methods to assess risk,” and one of the aims of statistics is to “confirm the richest findings using the smallest amount of data.” In other words, our whole discipline was built to squeeze the most insight out of the least data, a constraint that big data removes.
The Why Doesn’t Matter, Only the What
In the old world of small data, society spent a lot of resources trying to figure out the why behind things. Scientific and statistical studies started with a hypothesis, a prediction of how things worked, and then tested the available sample of data to see if that hypothesis was correct. If it wasn’t, then the hypothesis was modified and tried again. Most data was collected for a specific purpose, and it was very difficult to use it for other purposes without collecting a new sample. Today, with so much data around and more to come, hypotheses are no longer crucial. All that is needed is analysis for correlations.
Before big data, because of the more limited amount of computer power we had, most analysis was for linear relationships (this causes that); with the new tools of big data analysis and the faster computers available today, we can find more complicated non-linear relationships (a, b, c, d, e, f, g independently predict x a little bit but together they predict x very accurately).
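As a rough illustration of that parenthetical, here is a minimal sketch with made-up data: seven variables that each predict an outcome only weakly on their own, yet predict it very accurately when combined in one model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Seven independent predictors (hypothetical rating variables).
X = rng.normal(size=(n, 7))
# The outcome is driven by all seven together, plus noise.
y = X.sum(axis=1) + rng.normal(scale=0.5, size=n)

# Individually, each variable is only weakly correlated with y.
individual = [abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(7)]

# Together, an ordinary least-squares fit predicts y very accurately.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Each individual correlation comes out around 0.38, while the combined model explains well over 90% of the variance, which is the “weak alone, strong together” effect the paragraph describes.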
It doesn’t matter that your system doesn’t know all the variables that go into a problem, only that it can predict the result. For example, Google has used big data to predict flu outbreaks faster than the Centers for Disease Control and Prevention (CDC) by letting the computer figure out which searches correlate with flu outbreaks. It doesn’t matter whether those people know that what they’re searching about is the flu, just that they’re searching on it and that when those hundreds of identified search terms happen in one area, there’s a very good chance that area is experiencing a flu outbreak. In the new world of big data, the why something happens doesn’t matter; what matters is that we are now able to find the hidden patterns and use them to detect or predict it. Society will need to shed some of its obsession with causality in exchange for simple correlations.
One example of how an insurance company is trying to use big data to improve its underwriting is Aviva, which studied the idea of using credit report and marketing data to underwrite some life insurance applicants instead of the traditional blood and urine lab analysis. The idea is to identify applicants with a higher risk of lifestyle diseases like high blood pressure, diabetes and even depression. The method uses lifestyle data that includes hundreds of variables, such as hobbies, the websites people visit and the amount of television they watch, as well as estimates of their income. The traditional lab tests cost $125 per person, while this new approach can be as cheap as $5. This is an example of a correlational relationship being more valuable and efficient than a causal relationship for predicting an outcome.
The More Data We Have, the Less Exact It Needs to Be
In the old world of small data, statisticians and data analysts were trained to clean out outliers and try to get data that was as clean as possible. With big data, we are looking at vastly more data, which means that we can get away with less precision. It’s a tradeoff: with less error from sampling, we can accept more measurement error. The old tools (spreadsheets, relational databases, SQL, business intelligence tools, etc.) were created to work on exact data; the new tools are designed to work with large quantities of imperfect data. The need for perfect data was a side effect of the limited tools we used to manage small data.
See also: Eating the Big Data Elephant
Here’s a great example of why we can now get away with less exact data: Suppose we need to measure the temperature in a vineyard. If we only have one temperature sensor for the whole plot of land, we must make sure it’s accurate and working at all times: no messiness allowed. In contrast, if we have sensors for every one of hundreds of vines, we can use cheaper, less sophisticated sensors (as long as they don’t introduce a systematic bias). Any particular reading may be incorrect, but the aggregate of many readings will provide a more comprehensive and accurate picture.
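A quick simulation with made-up numbers shows why the cheap-sensor approach works: individual readings can be wildly off, but their average lands very close to the truth, as long as the errors aren’t biased in one direction.

```python
import numpy as np

rng = np.random.default_rng(42)
true_temp = 21.5  # the actual temperature across the vineyard, in Celsius

# 500 cheap sensors: large individual error, but no systematic bias.
cheap = true_temp + rng.normal(scale=2.0, size=500)

# Any single reading can be far off...
worst_single = np.max(np.abs(cheap - true_temp))

# ...but the average of all readings is close to the true value.
aggregate_error = abs(cheap.mean() - true_temp)
```

The worst single sensor is off by several degrees, while the aggregate error is a small fraction of a degree. This is the law of large numbers again, applied to measurement instead of to claims.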
Data Is No Longer Stale After Its Original Use
One of the very limiting features of the old world of data is that once a dataset was built for a particular use, it was very difficult to use it for another, so you had to know what you were looking for before collecting the data. Because you were collecting a sample of data and inputting it into a very structured format for future analysis, getting the right pieces of information was of paramount importance.
In the new world of big data, all data becomes a new raw material to create value in new and creative ways, most of which were impossible in the old world. Because we are collecting data on everything, and our tools are more sophisticated in ability to arrange and rearrange that data, we are more able to use the information in a variety of ways.
Think about it: that telematics device on your car collects a TON of data. Think about the data your smartphone collects about your habits each day. Every time you search on Google, it's recording not only what you search for but even the exact amount of time your mouse spent at different parts of the screen. Soon, we’ll even be able to track your eyes through the webcam when you visit our website. There’s just a TON of data out there that we’ll now be able to analyze to learn about our customers.
Being Free of Sampling Will Allow Us to Know More
Sampling quickly stops being useful when you want to drill deeper, to take a close look at some intriguing subcategory of the data. One of the key benefits of being able to collect ALL of the data about something is that we can dig further into the data and ask it fresh questions that we hadn’t even thought of when we started collecting the data. In the old paradigm of sampling, one would collect only what was directly asked for. If you noticed a pattern in that sample but needed something to explain or verify the pattern that you had not thought to ask for ahead of time, you would need to re-sample and get additional data to confirm what you found.
Data No Longer Needs to Be Structured
Traditionally, the way data was stored in spreadsheets and databases was structured, meaning that each field could fit a very specific type of data; a phone number field, for example, could only hold a 10-digit number. The problem is that only around 5% of all digital data in the world is structured in a form that neatly fits into a spreadsheet or database. That means we had no easy way to analyze the other 95%! Pretty much all data had to be cleaned up before analysis, which made everything smaller and more expensive.
In the new world of big data, new tools such as Hadoop are able to analyze unstructured data in all shapes and sizes: 100% of data instead of just 5%. The tools can even analyze things like books, journals, metadata (data about data), audio, video and much more. Imagine being able to include every second of conversation digitally recorded from your call centers along with all your other data and analyze it all to find trends! This is one of the most powerful features of big data, and it is already being used in many call centers.
Messier Data Will Help Us Insure Messier Things
Big data’s ability to help us analyze messy data could help us insure harder-to-insure things. For example, ZestFinance, a company founded by a former chief information officer at Google, built technology that helps lenders underwrite small, short-term loans to people who have bad credit scores. It turns out traditional credit scoring is based on only a few factors, while ZestFinance analyzes a huge number of variables using big data, and it produces solid results; in 2012, the company's loan default rate was a third lower than the industry's.
Big data might allow us to better underwrite risks for which we don’t have very good data, such as people who can’t get a driver’s license or commercial risks that currently can only be insured in the surplus lines market. Imagine all of the services, microinsurance and other innovations we’ll be able to develop.
From Indemnification to Risk Prevention
One of the techniques used with big data is predictive analytics, and pretty much every carrier is experimenting with it.
The technique is being used to prevent big mechanical or structural failures: placing sensors on machinery, motors or infrastructure like bridges makes it possible to monitor the data patterns they give off, such as heat, vibration, stress and sound, and to detect changes that may indicate problems ahead.
The underlying concept is that when things break down, they generally don’t do so all at once, but gradually over time. If we have sensor data and correlational analysis, we can probably figure out that something is about to break before it actually does. This can allow us to prevent claims from ever happening, thus moving insurance from a loss-paying service toward a risk-prevention partnership.
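A minimal sketch of the idea, using hypothetical vibration readings and a simple threshold rule (real predictive-maintenance systems are far more sophisticated, but the core pattern is the same: learn what “healthy” looks like, then flag drift away from it):

```python
import statistics

# Hypothetical hourly vibration readings from a motor known to be healthy.
baseline = [1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00, 1.01, 0.99, 1.02]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(reading, threshold=3.0):
    """Flag a reading that drifts more than `threshold` standard
    deviations away from the healthy baseline."""
    return abs(reading - mean) / stdev > threshold

# A reading consistent with the baseline is not flagged...
normal_ok = is_anomalous(1.02)   # False
# ...but a sustained drift upward is, before outright failure.
drift_flag = is_anomalous(1.35)  # True
```

A carrier could use a flag like this to alert the insured and schedule maintenance, preventing the claim instead of paying it.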
See also: Forget Big Data — Focus on Small Data
Acknowledgement: Much of this article comes from Big Data: A Revolution That Will Transform How We Live, Work, and Think. Yes, you should read it! Yes, we get a small commission if you buy it using that link, and it helps us run and improve InsNerds.
Want to support InsNerds?
InsNerds is free, it's a labor of love, and it takes a lot of time. We have a ton of fun doing it, and we would really appreciate your support in keeping it running. If you've been helped by one of our articles, if we've helped you grow in your career, if you agree that our content is improving the insurance industry and that we are unique in what we do, please consider donating. We’ll use your donation to deliver even more career- and industry-changing content and to spread the word about that content far and wide in the insurance industry.
You can make a recurring donation as small as $1 or a one-time donation.
This article originally published on InsNerds.com.