
Best Practices for Predictive Models

There’s little doubt about the proven value of using predictive analytics for risk selection and pricing in P/C insurance. In fact, 56% of insurers at this year’s Valen Analytics Summit that do not currently use predictive analytics in underwriting plan to start within a year. However, many insurers haven’t spent enough energy planning exactly how to implement analytics to get the results they want. It’s a common misconception that competitive advantage is won simply by picking the right model.

In reality, the model itself is just a small part of a much larger process that touches nearly every part of the insurance organization. Embracing predictive analytics is like recruiting a star quarterback: alone, he’s not enough to guarantee a win. He requires both a solid team and a good playbook to achieve his full potential.

The economic crash of 2008 emphasized the importance of predictive modeling as a means to replace dwindling investment income with underwriting gains. However, insurance companies today are looking at a more diverse and segmented market than pre-2008, which makes the “old way of doing things” no longer applicable. The insurance industry is increasing in complexity, and with so many insurers successfully implementing predictive analytics, greater precision in underwriting is becoming the “new normal.” In fact, a recent A.M. Best study shows that P/C insurers are facing more aggressive pricing competition than any other insurance sector.

Additionally, new competitors like Google, which have deep reservoirs of data and an established rapport with the Millennial generation, mean that traditional insurers must react to new technologies faster than ever. Implementing predictive analytics is the logical place to start.

The most important first step in predictive modeling is making sure all relevant stakeholders understand the business goals and organizational commitment. The number one cause of failure in predictive modeling initiatives isn’t a technical or data problem; it’s a lack of clarity on the business objective, combined with a flawed implementation plan (or no plan at all).


ASSESSMENT OF ORGANIZATIONAL READINESS

If internal conversations are focused solely on the technical details of building and implementing a predictive model, it’s important to take a step back and make sure there’s support and awareness across the organization.

Senior-Level Commitment – Decide on the metrics that management will use to measure the impact of the model. What problems are you trying to solve, and how will you define success? Common choices include loss ratio improvement, pricing competitiveness and top-line premium growth (a sketch of tracking one such metric follows these readiness items). Consider the risk appetite for this initiative and the assumptions and sensitivities in your model that could affect projected results.

Organizational Buy-In – What kind of predictive model will work for your culture? Will this be a tool to aid in the underwriting process or part of a system to automate straight-through processing? Consider the level of transparency appropriate for the predictive model. It’s usually best to avoid making the model a “black box” if underwriters need to be able to interact with model scores on their way to making the final decisions on a policy.

Data Assets – Does your organization plan to build a predictive model internally, with a consultant or a vendor that builds predictive models on industry-wide data? How will you evaluate the amount of data you need to build a predictive model, and what external data sources do you plan to use in addition to your internal data? Are there resources available on the data side to provide support to the modeling team?
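To make the success metrics concrete, here is a minimal sketch of tracking loss ratio improvement across pre- and post-model books of business. The file name, column names and era labels are hypothetical, purely for illustration:

```python
import pandas as pd

# Hypothetical policy-level extract; the file and column names are illustrative.
policies = pd.read_csv("policies.csv")  # columns: model_era, earned_premium, incurred_losses

# Loss ratio = incurred losses / earned premium, aggregated per era.
by_era = policies.groupby("model_era")[["incurred_losses", "earned_premium"]].sum()
by_era["loss_ratio"] = by_era["incurred_losses"] / by_era["earned_premium"]

# Improvement in loss ratio points between the pre- and post-model books.
improvement = by_era.loc["pre_model", "loss_ratio"] - by_era.loc["post_model", "loss_ratio"]
print(f"Loss ratio improvement: {improvement:.1%}")
```

However success is defined, agreeing up front on a calculation this simple keeps management, actuarial and underwriting measuring the initiative the same way.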

MODEL IMPLEMENTATION

After getting buy-in from around the organization, the next step is to lay out how you intend to achieve your business goals. If it can be measured, it can be managed. This step is necessary to gauge the success or failure post-implementation. Once you’ve set the goals for assessment, business and IT executives should convene and detail a plan for implementation, including responsibilities and a rollout timeline.

Unless you’re lucky enough to work with an entire group of like-minded individuals, this step must involve all players, including underwriting, actuarial, training and executive roles. Once you’ve identified the business case and produced the model and implementation plan, make sure all expected results are matched to the planned deliverables. Once everything is up and running, it is imperative to monitor adoption in real time to ensure that results match the initial model goals.

UNDERWRITING TRAINING

A very important but often overlooked step is making sure that underwriters understand why the model is being implemented, what the desired outcomes are and what their role is in achieving them. Presented correctly, predictive modeling comes across as a tool that improves underwriters’ pricing and risk selection rather than one that undermines them. Still, some underwriters who rely solely on their own experience and knowledge may feel threatened by a data-driven underwriting process. In fact, nearly half of the attending carriers at the 2015 Valen Summit cited lack of underwriting adoption as one of the primary risks in a predictive analytics initiative.

Insurers that have found the most success with predictive modeling are those that create a specific set of underwriting rules and show how predictive analytics is another tool to enhance underwriters’ performance, rather than something that will replace them entirely. Failing to stress this point can breed resistance, and underwriter buy-in is essential. It is equally important to monitor how the underwriting guidelines are implemented, ensuring that they are followed appropriately.
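As an illustration of what such a rule and its monitoring might look like, here is a minimal sketch; the score thresholds, action names and adherence check are all invented assumptions, not any particular carrier’s guidelines:

```python
def recommended_action(model_score: float) -> str:
    """Map a model score (0-1, higher = riskier) to a guideline action.
    The thresholds are purely illustrative."""
    if model_score < 0.3:
        return "write"    # clearly acceptable risk
    if model_score < 0.7:
        return "refer"    # underwriter reviews and documents the decision
    return "decline"

def adherence_rate(decisions: list[tuple[float, str]]) -> float:
    """Share of decisions that followed the guideline, for monitoring adoption.
    `decisions` pairs each model score with the action the underwriter took."""
    followed = sum(1 for score, action in decisions if action == recommended_action(score))
    return followed / len(decisions)

# Example: three policies, one of which deviated from the guideline.
print(adherence_rate([(0.2, "write"), (0.5, "refer"), (0.8, "write")]))  # ≈ 0.67
```

Reviewing an adherence figure like this alongside loss results shows whether a disappointing outcome reflects the model or the way it was used.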

KEEPING THE END IN MIND

Many of the challenges and complexities in the P/C marketplace are out of an individual insurer’s control. One of the few things insurers can control is their use of predictive modeling to know what they insure. It’s one of the best ways an insurer is able to protect its business from new competitors and maintain consistent profit margins.

Using data and analytics to evaluate your options allows you to test and learn, select the best approach and deliver results that make the greatest strategic impact.

While beginning a predictive analytics journey can seem difficult and confusing at first, following these best practices will increase your chances of getting it right on the first try and of meeting your business goals.

Big Data in Insurance: A Glimpse Into 2015

Bernard Marr is one of the big voices to pay attention to on the subject of big data. His recent piece “Big Data: The Predictions for 2015” is bold and thought-provoking. As a P&C actuary, I tend to look at everything through my insurance-colored glasses. So, of course, I immediately started thinking about the impact on insurance if Marr’s predictions come to pass this year.

As I share my thoughts below, be aware that the section headers are taken from his article; the rest of the content is my own thoughts and interpretation of the impact on the insurance industry.

The value of the big data economy will reach $125 billion

That’s a really big number, Mr. Marr. I think I know how to answer my son the next time he comes to me looking for advice on a college major.

But what does this huge number mean for insurance? There’s a potential time bomb here for commercial lines because this $125 billion means we’re going to see new commerce (and new risks) that are not currently reflected in loss history – and therefore not reflected in rates.

Maybe premiums will go up as exposures increase with the new commerce – but that raises a new question: What’s the right exposure base for aggregating and analyzing big data? Is it revenue? Data observation count? Megaflops? We don’t know the answer to this yet. Unfortunately, it’s not until we start seeing losses that we’ll know for sure.
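To see why the choice of exposure base matters, here is a toy sketch comparing the implied loss rate under two candidate bases; every figure is invented for illustration:

```python
# Toy comparison of candidate exposure bases for a big-data book of business.
# All figures are invented; the point is that the implied rate per unit of
# exposure differs depending on which base you normalize by.
incurred_losses = 2_000_000.0   # dollars
revenue = 50_000_000.0          # candidate base 1: dollars of revenue
observations = 4_000_000_000    # candidate base 2: data observation count

print(f"Loss rate per $1,000 revenue: {incurred_losses / (revenue / 1_000):.2f}")
print(f"Loss rate per million observations: {incurred_losses / (observations / 1_000_000):.2f}")
```

Until losses emerge, there is no way to tell which base actually tracks the underlying risk.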

The Internet of Things will go mainstream

We already have some limited integration of “the Internet of Things” into our insurance world. Witness UBI (usage-based insurance), which can tie auto insurance premiums to not only miles driven, but also driving quality.
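To make the UBI mechanics concrete, here is a minimal sketch of a usage-based premium adjustment; the factors, breakpoints and the 0-to-1 quality score are invented for illustration, not any carrier’s actual rating plan:

```python
def ubi_premium(base_premium: float, annual_miles: float, quality_score: float) -> float:
    """Illustrative usage-based premium; all factors and breakpoints are invented.

    quality_score is assumed to run from 0.0 (poor driving) to 1.0 (excellent).
    """
    mileage_factor = 0.8 + 0.4 * min(annual_miles / 12_000, 2.0)  # more miles, more exposure
    quality_factor = 1.3 - 0.5 * quality_score                    # better driving, lower rate
    return base_premium * mileage_factor * quality_factor

# A 9,000-mile driver with a strong quality score vs. a high-mileage, low-score driver.
print(round(ubi_premium(1_000, 9_000, 0.9), 2))   # 935.0
print(round(ubi_premium(1_000, 24_000, 0.3), 2))  # 1840.0
```

The same multiplicative structure extends naturally to other sensor-fed lines, which is where the Internet of Things comes in.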

Google’s Nest thermostat keeps track of when you’re home and away, whether you’re heating or cooling, and communicates this information back to a data store. Could that data be used in more accurate pricing of homeowners insurance? If so, it would be like UBI for the house.

The Internet of Things can extend to healthcare and medical insurance, as well. We already have health plans offering a discount for going to the gym 12 times a month. We all have “a friend” who sometimes checks in at the gym just to meet the quota and get the discount. With the proliferation of wearable biometric devices (FitBit, Nike Fuel and so on), it would be trivial for the carrier to offer a UBI-style discount based on the quantity and quality of the workout. Of course, the insurer would need the policyholder’s permission to use that data, but, if the discount is big enough, we’ll buy it.

Machines will get better at making decisions

As I talk with carriers about predictive analytics, this concept is one of the most disruptive to underwriters and actuaries. There is a fundamental worry that the model is going to replace them.

Machines are getting better at making decisions, but within most of insurance, and certainly within commercial lines, the machines should be seen as an enabling technology that helps the underwriter to make better decisions, or the actuary to make more accurate rates. Expert systems can do well on risks that fit neatly into a standard underwriting box, but anything outside of that box is going to need some human intervention.

Textual analysis will become more widely used

A recurring theme I hear in talking to carriers is a desire to do claims analysis, fraud detection or claims triage using analysis of the text in claims adjusters’ files. There are early adopters in the industry doing this, and several consultants and vendors have emerged offering bespoke solutions. I think 2015 could be the year that standardized, off-the-shelf solutions emerge that offer predictive analytics using textual analysis.
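As a sketch of what such textual analysis might look like under the hood, here is a minimal claims-triage classifier using scikit-learn; the adjuster notes and labels are invented, and a production system would need far richer training data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented adjuster notes labeled for triage (1 = escalate, 0 = routine).
notes = [
    "claimant reports minor bumper damage, no injuries",
    "attorney retained, disputed liability, prior back injury",
    "windshield chip repaired at drive-in facility",
    "multiple treatments, inconsistent statements from claimant",
]
labels = [0, 1, 0, 1]

# TF-IDF turns free text into term weights; logistic regression scores severity.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

new_note = ["claimant retained an attorney and reports ongoing back pain"]
print(model.predict_proba(new_note)[0, 1])  # estimated probability of escalation
```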

Data visualization tools will dominate the market

This is spot-on in insurance, too. Data visualization and exploration tools are emerging quickly in the insurance space. The lines between “reporting tool” and “data analysis tool” are blurring. Companies are realizing that they can combine key performance indicators (KPIs) and metrics from multiple data streams into single dashboard views. This leads to insights that were never before possible using single-dimension, standard reporting.
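As a small illustration of combining streams, here is a sketch using pandas; the tables, column names and figures are all hypothetical:

```python
import pandas as pd

# Three hypothetical data streams keyed by line of business.
quotes = pd.DataFrame({"line": ["auto", "property"], "quotes": [10_000, 4_000]})
bound = pd.DataFrame({"line": ["auto", "property"], "bound": [2_500, 1_200]})
claims = pd.DataFrame({"line": ["auto", "property"], "open_claims": [310, 95]})

# Merge the streams into one dashboard view and derive cross-stream KPIs.
dashboard = quotes.merge(bound, on="line").merge(claims, on="line")
dashboard["bind_ratio"] = dashboard["bound"] / dashboard["quotes"]
dashboard["claims_per_100_bound"] = 100 * dashboard["open_claims"] / dashboard["bound"]
print(dashboard)
```

KPIs like a bind ratio sitting next to claims activity, per line of business, are exactly the cross-stream views that single-source reports could never show.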

There is so much data present in so many dimensions that it no longer makes sense to look at a fixed set of static exhibits when managing insurance operations. Good performance metrics don’t necessarily lead to answers, but instead to better questions – and answering these new questions demands a dynamic data visualization environment.

Matt Mosher, senior vice president of rating services at A.M. Best, will be talking to this point in March at the Valen Analytics Summit and exploring how companies embracing analytics are finding ways to leverage their data-driven approach across the entire enterprise. This ultimately leads to significant benefits for these firms, both in portfolio profitability and in overall financial strength.

There will be a big scare over privacy

Here we are back in the realm of new risks again. P&C underwriters have long been aware of “cyber” risks and control these through specialized forms and policy exclusions.

With big data, however, comes new levels of risk. What happens, for example, when the insurance company knows something about the policyholder that the policyholder hasn’t revealed? (As a thought experiment, imagine what Google knows of your political affiliations or marital status, even though you’ve probably never formally given Google this information.) If the insurance company uses that information in underwriting or pricing, does this raise privacy issues?

Companies and organizations will struggle to find data talent

If this is a huge issue for big data, in general, then it’s a really, really big deal for insurance.

I can understand that college freshmen aren’t necessarily dreaming of a career as a “data analyst” when they graduate. So now put “insurance data analyst” up as a career choice, and we’re even lower on the list. If we’re going to attract the right data talent in the coming decade, the insurance industry has to do something to make this stuff look sexy, starting right now.

Big data will provide the key to the mysteries of the universe

Now, it seems, Mr. Marr has the upper hand. For the life of me, I can’t figure out how to spin prognostication about the Large Hadron Collider into an insurance angle. Well played.

Those of us in the insurance industry have long joked that this industry is one of the last to adopt new methods and technology. I feel we’ve continued the trend with big data and predictive analytics – at least, we certainly weren’t the first to the party. However, there was a tremendous amount of movement in 2013, and again in 2014. Insurance is ready for big data. And just in time, because I agree with Mr. Marr – 2015 is going to be a big year.