Tag Archives: John Johansen

Is the Data Talking, or Your Biases?

In April, a large life insurer announced plans to use Fitbit data and other health data to award points to insureds, providing impressive life insurance discounts for those who participated in “wellness-like” behaviors. The assumption is that people who own a Fitbit and who walk should have lower mortality. That sounds logical. But we’re in insurance. In insurance, logic is less valuable than facts proven with data.

Biases can creep into the models we use to launch new products. Everyone comes to modeling with her own set of biases. In some conference room, there is probably something like this on a whiteboard: “If we can attract people who are 10% more active, in general, we will drive down our costs by 30%, allowing us to discount our product by 15%.”
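The whiteboard chain can be written out as a toy calculation. To be clear, every number in this sketch (the base cost, the 30% cost reduction, the 15% discount) is an assumed input copied from the supposition, not a figure derived from claims experience:

```python
# Toy arithmetic behind the whiteboard claim. Each figure is an
# assumption, not a tested result -- which is exactly the problem.
base_cost = 100.0        # hypothetical annual claims cost per insured
assumed_cost_cut = 0.30  # "10% more active -> 30% lower costs" (untested)
offered_discount = 0.15  # the discount funded by that assumed saving

projected_cost = base_cost * (1 - assumed_cost_cut)   # 70.0
discount_given_back = base_cost * offered_discount    # 15.0

# The projected margin gain exists only if the 30% assumption holds;
# nothing in this arithmetic tests whether activity actually cuts costs.
projected_margin_gain = base_cost * assumed_cost_cut - discount_given_back
print(projected_margin_gain)  # 15.0
```

The math is internally consistent, which is what makes it seductive: the spreadsheet balances even when the key assumption has never touched real data.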

That is a product model. But that model was likely not based on tested data. It was likely a biased supposition pretending to be a model. Someone thought he used data, when all he did was build a model to validate his assumptions.

Whoa.

That statement should make us all pause, because it is a common occurrence – not everything that appears to be valid data is necessarily portraying reality. Any data can be contorted to fit someone’s storyline and produce an impostor. The key is to know the difference between data cleansing/preparation and excessive manipulation. We continually have to ask if we are building models to fit a preconceived notion or if we are letting the data drive the business to where it leads us.

Biases hurt results. When I was a kid, my Superman costume didn’t make me Superman. It just let me collect some candy from the neighbors. Likewise, if insurers wish to enter into an alternate reality by using biased data, they shouldn’t expect results that match their expectations. Rose-colored glasses tend to make the world look rosy.

Here’s the exciting part, however. If we are careful with our assumptions, if we wisely use the new tools of predictive analytics and if we can restrain ourselves from leaping past our hypotheses and into the water too soon, objective data and analytics will transport us to new levels of reality! We will become hyper-knowledgeable instead of pseudo-hyper-knowledgeable.

Data, when it is used properly, is the key to new realms, the passport to new markets and to a secure source of future predictive understanding. First, however, we have to make it trustworthy.

Advocating good data stewardship and use

In general, it should be easy to see when we’re placing new products ahead of market testing and analysis. When it comes to insurance, real math knows best. We’ve spent many decades perfecting actuarial science. We don’t want to toss out fact-based decisions now that we have even more complete, accurate data and better tools to analyze the data.

When we don’t use or properly understand data, weak assumptions begin to form. As more accurate data accumulates and we are forced to compare that data with our pre-conceived notions, we may be faced with the reality that our assumptions took us down the wrong path. A great example of this was long-term care insurance. Many companies rushed products to market, only later realizing that their pricing assumptions were flawed because of larger-than-expected claims. Some had to exit the business. The companies remaining in LTC made major price increases.

Auto insurers run into the same dangers (and more) with untested assumptions. For example, who receives discounts, and who should? Recently, a popular auto insurer that had been giving discounts to drivers with installed telematics devices announced that it would begin increasing premiums on drivers who seemed to have risky driving habits. The company had assumed that those who chose to use telematics would be good drivers and that merely having the devices would cause them to drive more safely. The resulting data, however, proved that some discounts were unwarranted; just because someone was willing to be monitored didn’t mean she was a safe driver.

Now the company is basing pricing on actual data. It has also implemented a new pricing model by testing it in one state before rolling it out broadly – another step in the right direction.

When we either predict outcomes before analyzing the data or we use data improperly, we taint the model we’re trying to build. It’s easy to do. Biases and assumptions can be subtle, creeping silently into otherwise viable formulas.

Let’s say that I’m an auto insurer. Based on an analysis of the universe of auto claims, I decide to give the 20% of my U.S. drivers with the lowest claims a discount, assuming that my mix of drivers matches the mix throughout the universe of drivers. After a year of experience, I find that my claims are higher than I anticipated. When I apply my claims experience to my portfolio, I find that only the top 5% were a safe bet for a discount, based on a number of factors. Now I’ve given a discount to 15 percentage points more of my drivers than should have received it. Had I tested the product, I might have found that my top 20% of U.S. drivers were safe drivers but were also driving higher-priced vehicles – those with a generally higher cost per claim. The global experience didn’t match my regional reality.
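This scenario can be sketched in a few lines of simulation. The claim frequencies and severities below are invented for illustration, not real experience; the point is only that an untested assumption about the mix of vehicles quietly inflates claims:

```python
import random

# Hypothetical sketch of the discount example. All frequencies and
# severities are invented assumptions, not actual claims data.
N = 10_000  # drivers in my book; the "safest" first 20% got the discount

def avg_claim_cost(top20_severity):
    """Average claim cost per driver, given the cost per claim
    for the discounted top 20% of the book."""
    random.seed(0)  # identical claim draws, so only severity differs
    total = 0.0
    for i in range(N):
        in_top20 = i < N * 0.20
        freq = 0.03 if in_top20 else 0.08   # assumed claim frequencies
        severity = top20_severity if in_top20 else 5_000
        if random.random() < freq:
            total += severity
    return total / N

assumed = avg_claim_cost(5_000)  # global data: same cost per claim everywhere
actual = avg_claim_cost(9_000)   # my region: the top 20% drive pricier cars
# actual > assumed -- the untested mix assumption made the discount too generous
```

The "safest" drivers really are safest by frequency; the model still loses money because the global severity assumption never matched the regional book.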

Predictions based on actual historical experience, such as claims, will always give us a better picture than our “logical” forays into pricing and product development. In some ways, letting data drive your organization’s decisions is much like the coming surge of autonomous vehicles. There will be a lot of testing, a little letting go (of the steering wheel) and then a wave of creativity surrounding how the vehicle can be used effectively. The result of letting the real data talk will be the profitability and longevity of superior models and a tidal wave of new uses. Decisions based on reality will be worth the wait.

Where Is Home for Analytics? (Part 2)

Last week, we spoke about how analytics in the insurance organization has been growing up in different locations and that it will continue to be interesting to see how and where analytics grows into maturity. Click here if you missed that blog and you would like to catch up. Today, we’re going to step into the future and look at the most likely scenario in most insurance organizations, with the caveat that this will be highly dependent upon carrier size, type, unique features, etc.

To find the logical location of “Analytics Central” within the insurance organization, it helps to understand who will need data analytics and business intelligence (BI) reporting, and how frequently they will need it. In most organizations, this will require some sort of assessment, because data gathering and analytics are changing at a rapid rate; if there is no current oversight, a survey or report will be needed.

For our purposes here, I’m going to assume that, sooner or later, nearly every area of the insurance value chain is going to be a consumer of data analytics. When we discuss analytics with insurance organizations today, we operate under the notion that data and analytics systems should be built with the capability to plug into areas of the organization that aren’t clamoring for analytics yet. Operations and human resources are excellent examples: both may one day be full of analytics power-users but today are only flirting with the fringes of data analytics. Human resources may be making some data-driven decisions today, but that data often arrives pre-analyzed from health insurance payers or other outside sources. As analytic capabilities grow, staffing choices and HR communications will benefit from entirely new levels of observation and reporting.

Once we make the case that anyone in the insurance organization could be a candidate for using analytics, we can also assume that data sources and analytics may frequently overlap from department to department. To address efficiencies, security and tool use throughout the organization, it may make sense to create an analytics department that operates as a central hub serving all other areas.

Let’s use an analogy. We’re in the midst of summer, and tomatoes or cucumbers may be growing in some of our back yards. With most vining plants, one root produces multiple fruits, varying in their maturity dates. Provided they are pollinated properly (go, bees!), the one vine will give many good tomatoes at various locations along the vine.

This is roughly comparable to what may happen in many insurers. Functions under the chief data officer will be responsible for gathering, housing and securing reliable data. Imagine that function as the roots and the soil of the vining plant. The data organization will then deliver the data to the analytics organization…the main vine that will turn the data “food” into the analytics “fruit.” The fruit is the business intelligence every area needs to run its portion of the business. If we want to carry the analogy one step further, we can also consider that the fruit contains the seeds of the next generation’s growth. So the analytics organization is not only going to produce good fruit but will also offer to plant its intelligence in areas where the business wants to see new growth.

Instead of having data gathering and analytics strewn all over the insurance greenhouse, there will be one location for warehousing and one central source for analytics. It is going to require oversight by the chief actuary, the chief data officer and in all probability a chief analytics officer. The chief marketing officer and the entire C-level will be a part of determining how this new unit is built to ensure timely and effective service to the organization. The analytics team will represent a unified core that will need to balance business needs with departmental priorities. In some ways, it will look much like today’s management team, only with one goal – transforming the organization to be data-driven while keeping information secure and flowing through an ever-improving analytics infrastructure.

Where Is the Real Home for Analytics?

One of the fascinating aspects of technology consulting is having the opportunity to see how different organizations address the same issues. These days, analytics is a superb example. Even though every organization needs analytics, they are not all coming to the same conclusions about where “Analytics Central” lies within the company’s structure. In some carriers, marketing picked up the baton first. In others, actuaries have naturally been involved and still are. In a few cases, data science started in IT, with data managers and analytical types offering their services to the company as an internal partner, modeled after most other IT services.

In several situations that we’ve seen, there is no Analytics Central at all. A decentralized view of analytics has grown up in the void – so that every area needing analytics fends for itself. There are a host of reasons this becomes impractical, so often we find these organizations seeking assistance in developing an enterprise plan for data and analytics. This plan accounts for more than just technology modernization and nearly always requires some fresh sketches on the org chart.

Whichever situation may represent the analytics picture in your company, it’s important to note that no matter where analytics begins or where it currently resides, that location isn’t always where it is going to end up.

Ten years ago, if you had asked any senior executive where data analytics would reside within the organization, he or she would likely have said, “actuarial.” Actuaries are, after all, the original insurance analytics experts and providers. Operational reporting, statistical modeling, mortality on the life side and pricing and loss development on the P&C side – all of these functions are the lifeblood that keep insurers profitable with the proper level of risk and the correct assumptions for new business. Why wouldn’t actuaries also be the ones to carry the new data analytics forward with the right assumptions and the proper use of data?

Yet, when I was invited to speak at a big data and analytics conference with more than 100 insurance executives and interested parties recently, there was not one actuary in attendance. I don’t know why — maybe because it was quarter-end — but I can only assume that, even though actuaries may want to be involved, their day jobs get in the way. Quarterly reserve reviews, important loss development analysis and price adequacy studies can already consume more time than actuaries have. In many organizations, the actuarial teams are stretched so thin they simply don’t have the bandwidth to participate in modeling efforts with unclear benefits.

Then there is marketing. One could argue that marketing has the most to gain from housing the new corps of data scientists. If one looks at analytics from an organizational/financial perspective, marketing ROI could be the fuel for funding the new tools and resources that will grow top-line premium. Marketing also makes sense from a cultural perspective. It is the one area of the insurance organization that is already used to blending the creative with the analytical, understanding the value of testing methods and messages and even the ancillary need to provide feedback visually.

The list of possibilities can go on and on. One could make a case for placing analytics in the business, keeping it under IT, employing an out-of-house partner solution, etc. There are many good reasons for all of these, but I suspect that most analytics functions will end up in a structure all their own. That’s where we’ll begin “Where is the Real Home for Analytics, Part II” in two weeks.

3 Analytics Strategies for the Middle Market

As if there isn’t enough pressure on middle market carriers today, with the big players combining to get even bigger and with supply chains rolling up, these carriers now face a strategic imperative: Make sense of their data through analytics.

Meeting that imperative comes with a new competitive issue: fighting the war for talent to recruit and retain data scientists.

The demand for data scientists is spiking at a time when it can’t be met by supply. The largest organizations have enough scale to fund and attract a team of analysts, but what is the middle market insurer to do?

There are some straightforward strategies:

Count on partners

Many of the business demands for analytics will be met with software tools. The vendors for these solutions will be more than happy to have some data science types participate in your implementation and help to sort out your data. The same is true of marketing campaign vendors. They will have in their circles the experts needed to slice and segment targets, just like the large insurers can do on their own.

Services vendors

The services vendors are investing and building muscle in big data and analytics. Just as insurers augment their in-house actuarial talent when needed, we see the ecosystem of services vendors maturing nicely. You may pay more per hour than if you hired someone, but you only pay for what you need and you get a team that has “been there and done that.”

Decide not to decide

We talk to a lot of middle market companies that are looking at big data analytics. Some are saying that they aren’t seeing the demand for it from the business areas. They know this may mean that people aren’t doing enough to evangelize about analytics within the business, but analytics have no value if they don’t meet some kind of demand. If there’s no demand, push analytics out on the road map — but keep it on the road map. That allows you to revisit the subject when the labor market for data science talent is less frothy.

As is often the case, the reality is that most of the companies we see are pursuing some combination of these three strategies. They are engaging tool vendors for particular complementary needs, reaching into the service companies when that makes sense and putting off the investment in full-time staff until talent is more available.

At the end of the day, we see the middle market reacting creatively and nimbly to the challenge. But, hey, that’s what they do with all of the challenges they face, so why would this time be any different?

It’s Time to Discuss the Upside of Cyber

Based on what our clients are telling us, I can’t imagine that there are many boards of directors that haven’t recently talked about data. With everyone focusing on security issues and the risks inherent in not adequately plugging data vulnerabilities, every board has had its wake-up call.

Managing the downside is only one part of the issue. There is also great upside to be found in the data to drive strategic growth. Studies have shown that organizations that have already transformed themselves through data continue to get better at using data faster than others. The leaders are still increasing their leads. Where there is the opportunity for revolutionary data use, there is also the possibility of being left in the dust. For every WalMart and Uber, there’s a Sears and Yellow Cab company.

So, boards need to look at the upside and see data as the means to cross-enterprise improvement. For better market penetration, use your data. For greater operational efficiency, look to your data. For lower risk or reduced fraud or stronger service, create a data framework that will give you both utility and knowledge.

We are entering into an explosive period of data availability from outside the organization. If we use it well, it will yield insights that will make today’s decision-making look like the punch-card era.

Though these data conversations are started at the highest levels, they must be continued and fostered at every level. In coming weeks, we will be looking at the opportunities and consequences of data conversations — where to start, what to avoid, building a data culture and understanding data’s true value to your organization. I hope you’ll join the discussion.