The Promise of Predictive Models

An innovation strategy around big data and artificial intelligence will uncover insights that allow smart carriers to acquire the most profitable clients and avoid the worst. Companies that develop the best portfolios of risks will ultimately enjoy a flight to quality while those left behind will compete for the scraps of insurability.

Insurers are also trying to individualize underwriting rather than rely on traditional risk categories.

As such, the insurance industry finds itself in a data arms race. Insurance carriers are leveraging their datasets and engaging with insurtechs that can help.

For the underwriter, big data analytics promise better decisions on risk selection and pricing. Underwriters have often thought that, had they understood a particular area of risk better, they would have charged a lower price and won the business; or that, with one extra piece of information, they would not have written an account that turned out to be unprofitable. Nearly all would assert that, with better information, they would have priced each risk more appropriately and lost less money.

One solution has been developing predictive underwriting risk selection and pricing models. By leveraging datasets previously unavailable, or in formats too unstructured to use, algorithmic models can better categorize and rank risks, allowing an underwriter to select and assign the most appropriate price, one that rewards better risks and surcharges riskier ones. Better risks might be those that are simply less inherently risky than others (e.g., a widget manufacturer vs. an explosives manufacturer with respect to product liability or property coverage), or those whose behaviors and actions are more cautious. Through a predictive, data-driven model, underwriters can build profitable and sustainable portfolios of risks, allowing them to expand their writings to a broader customer base, pass along cost savings from automation to their clients, provide insights that help insureds reduce risk, identify new areas of coverage and product, and bring more value to customers.
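
To make the risk ranking concrete, here is a minimal sketch of such a selection-and-pricing model, using scikit-learn’s gradient boosting on synthetic data. The features, labels and tier thresholds are illustrative assumptions, not any carrier’s actual rating plan.

```python
# Minimal sketch: rank submissions by predicted loss propensity and map
# scores to pricing tiers. Features, labels and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in for historical policies: [years_in_business, prior_claim_count,
# safety_program_score]; label = 1 if the policy turned out unprofitable.
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 1] > 0.5).astype(int)  # synthetic label for the demo

model = GradientBoostingClassifier().fit(X_hist, y_hist)

def price_tier(features: np.ndarray) -> str:
    """Map a submission's predicted loss propensity to a rating tier."""
    p = model.predict_proba(features.reshape(1, -1))[0, 1]
    if p < 0.25:
        return "preferred (credit)"
    if p < 0.60:
        return "standard"
    return "surcharged"

print(price_tier(rng.normal(size=3)))
```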

With this win-win situation at hand, the insurance industry has charged forward, mining decades’ worth of its own internal information, accessing public databases, leveraging data brokers and partnering with insurtechs that have their own data lakes. Algorithmic models are then fine-tuned by actuaries, statisticians and behaviorists to find causal links and correlations between seemingly disparate data points, with the intention of divining future loss outcomes. In this digital frenzy, what gets lost, however, is that the methods by which all this data is used can carry social costs.

See also: 11 Keys to Predictive Analytics in 2021

Balancing Social Good With Social Cost

It is not false altruism to reward good risks, build resiliency in portfolios or discover insights that lead to new products and services. However, underwriters must recognize that they are inherently in the business of bias. While it is acceptable to discern between a safe driver and a reckless one, it is unacceptable to build into underwriting decisions a bias based on race or religion, and in many cases gender or health conditions. It is therefore essential that underwriters, and the actuaries and data scientists who support them, act responsibly and be accountable for any social failures of the algorithmic models they employ.

With our predictive risk selection model in mind, consider some of the available data that could be processed (a feature-assembly sketch follows the list):

–decades of workers’ compensation claims data

–policyholder names, addresses and other personally identifiable information (PII)

–DMV records

–Credit scores and reports

–Social media posts

–Telematics

–Wearable tech data

–Biometric data

–Genetic and genealogy information

–Credit card and purchasing history
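
As a rough illustration of what processing these sources might involve, the sketch below joins several of them into a single feature record per policy. Every field and source name here is hypothetical, invented for the example.

```python
# Illustrative only: joining disparate data sources into one feature record
# keyed by policy. Field names are hypothetical, not a real carrier schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskFeatures:
    prior_claim_count: int        # from internal claims history
    mvr_violations: int           # from DMV records
    credit_score: Optional[int]   # left None where use is statutorily barred
    hard_brakes_per_100mi: float  # from telematics feeds

def assemble(policy_id, claims, dmv, credit, telematics) -> RiskFeatures:
    """Merge per-source lookups (dicts keyed by policy_id) into one record."""
    return RiskFeatures(
        prior_claim_count=claims.get(policy_id, 0),
        mvr_violations=dmv.get(policy_id, 0),
        credit_score=credit.get(policy_id),  # may stay None by jurisdiction
        hard_brakes_per_100mi=telematics.get(policy_id, 0.0),
    )

print(assemble("P-001", {"P-001": 2}, {}, {}, {"P-001": 1.4}))
```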

Consult algorithmic accountability experts like law professor Frank Pasquale, and they will point you to additional datasets you might not even know existed. Professor Pasquale has described the availability of databases of everything from the seemingly innocuous (wine enthusiasts) to those that shock the conscience (victims of rape). With myriad data available, much of it highly personal in nature, underwriters must recognize they have a responsibility to a new set of stakeholders beyond their company, clients, shareholders and regulators — namely, digital identities.

The next risk of social harm is in how that data is used. Predictive models seek to identify correlations between new points of data to predict loss potential. If the correlations are wrong, not only could they jeopardize the underwriter’s ability to properly price a risk, they could also result in an illegal practice like red-lining. This could happen accidentally, but a dataset could also be used nefariously to circumvent a statute prohibiting use of certain information in decision-making.

In California, there is a prohibition on using credit scores in underwriting certain risks. Perhaps a modeler for a personal lines insurance product draws information from a database of locations of check cashing stores or pawn shops and codes into the algorithm that anyone with an address in the same ZIP code is assumed to have bad credit. You would hope this would not happen, but insurance companies use outsourced talent, over which they have less control. Maybe a modeler works outside the U.S. and is innocently unfamiliar with our social norms as well as our regulatory statutes.
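
An audit step can catch this kind of proxy before it reaches production. Below is a minimal sketch of such a check, flagging any candidate feature that correlates too strongly with a prohibited attribute; the 0.6 threshold and the synthetic data are assumptions made for illustration.

```python
# Sketch: flag candidate features that act as proxies for prohibited data.
# The 0.6 threshold is an illustrative assumption; a real audit would apply
# jurisdiction-specific tests and actuarial review.
import numpy as np

def is_proxy(candidate: np.ndarray, prohibited: np.ndarray,
             threshold: float = 0.6) -> bool:
    """True if the candidate feature is suspiciously correlated with a
    prohibited attribute and should be excluded pending review."""
    r = np.corrcoef(candidate, prohibited)[0, 1]
    return abs(r) >= threshold

rng = np.random.default_rng(1)
credit = rng.normal(size=1000)  # prohibited in this line of business
zip_flag = 0.8 * credit + 0.3 * rng.normal(size=1000)  # ZIP-derived stand-in
print(is_proxy(zip_flag, credit))  # True -> exclude the ZIP-derived feature
```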

There are also social risks related to the speed and complexity of predictive models. Dozens of datasets might be accessed; coded correlations and computations are processed, weighted and ranked until a final series of recommendations or decisions is presented to the user. Transparency is difficult to attain.

If there is something ethically or statutorily wrong with a model, the speed at which processing can occur and the opaqueness of the algorithms can prolong any social harm.

Don’t Throw the Baby Out With the Bathwater

While regulation of big data analytics is not well-established, there are governance steps insurance companies can take. They can start by aligning their predictive models with their corporate values. Senior leadership should insist that decision-making technology adhere to all laws and regulations and, more generally, be fair. Fairness should apply both to the process and to the rendered decisions. Standards should be established, customers treated with respect, professional obligations fulfilled and products represented accurately.

Insurance companies should audit their models and data to ensure a causation linkage to underwriting loss. Any data that does not support causation should be removed. Parallel processes employing traditional and artificial intelligence techniques should also be run to confirm that an appropriate confidence level of actuarial equivalence is met. Data should be scrubbed to anonymize personally identifiable information (PII) as much as necessary to support privacy expectations and statutes. To remove biases, audits should identify and require exclusion of information that acts as a proxy for statutorily disallowed data.

In essence, the models should be run through a filter of protected class categories to eliminate any illegal red-lining. Because models are developed by humans, who are inherently flawed, modelers should attempt to program their machine learning innovations to identify biases within code and self-correct for them.
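
One simple screen of this kind compares favorable-outcome rates across a protected class. The sketch below borrows the four-fifths ratio from employment-law practice purely as an illustrative threshold; it is not an insurance regulatory standard.

```python
# Sketch: screen model decisions for disparate impact across a protected
# class. The four-fifths (0.8) cutoff is borrowed from employment-law
# practice as an illustration, not an insurance regulatory standard.
def disparate_impact_ratio(rate_group_a: float, rate_group_b: float) -> float:
    """Ratio of the lower group's favorable-outcome rate to the higher's."""
    lo, hi = sorted([rate_group_a, rate_group_b])
    return lo / hi if hi > 0 else 1.0

ratio = disparate_impact_ratio(0.48, 0.70)  # e.g., preferred-tier rates
if ratio < 0.8:
    print(f"ratio {ratio:.2f}: investigate the model for proxy bias")
```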

From a base of fairness, carriers can take steps to promote transparency. By starting with an explanation of the model’s purpose, insurers can move toward outlining the decision-making logic, followed by subjecting the model to independent certification and finally by making the findings of the outside auditor available for review.

Insurers can look to trade associations and regulatory bodies for governance best practices, such as those the National Association of Insurance Commissioners (NAIC) announced in August 2020. The five tenets of the AI guidelines promote ethics, accountability, compliance, transparency and traceability.

See also: Our Big Problem With ‘Noise’

One regulation that could be developed is rate bands. Predictive engines would still reward superior risks and surcharge poorer-performing accounts, but rate bands would temper the extremes. This would balance the necessity of mutualizing risk against the kind of individualized pricing that could make coverage unaffordable in certain cases.
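
A minimal sketch of how such a band might sit on top of a model’s indicated rate appears below; the plus-or-minus 25% band is an invented figure used only to show the mechanics.

```python
# Sketch: temper model-indicated pricing with a regulatory-style rate band.
# The +/-25% band is an invented illustration of the idea in the text.
def banded_rate(manual_rate: float, model_relativity: float,
                band: float = 0.25) -> float:
    """Clamp the model's indicated relativity to [1 - band, 1 + band], so
    better risks are still rewarded and worse ones surcharged, within limits."""
    clamped = min(max(model_relativity, 1.0 - band), 1.0 + band)
    return manual_rate * clamped

print(banded_rate(1000.0, 0.60))  # model wants -40%; band holds it at 750.0
print(banded_rate(1000.0, 1.90))  # model wants +90%; band holds it at 1250.0
```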

Finally, insurance companies should recognize the importance of engaging with regulators early in the development of their AI strategies. A patchwork of regulation exists today, and insurance companies may find regulatory gaps they are tempted to exploit, but the law will catch up with the technology, and carriers should build trust with regulators from the outset, not after a market conduct exam identifies issues. Regulators do not wish to stifle innovation, but they do strive to protect consumers.

Once regulators are comfortable that models and rating plans will not unfairly discriminate or jeopardize the solvency of the carrier, they can help enable technology advancements, especially if AI initiatives expand the market through more capacity or new products, lower overall market costs or provide insights that help customers improve their risk profiles.

In the data arms race that carriers are waging with each other, better risk selection and more accurate pricing are without question competitive advantages. Another, often-overlooked competitive advantage is an effective risk management program. Robust management of a company’s AI risks will reduce volatility in a portfolio and promote resiliency, giving a carrier a foundation from which to outmaneuver the competition; prioritizing it should be part of the strategy.

Best Practices for Predictive Models

There’s little doubt about the proven value of using predictive analytics for risk selection and pricing in P/C insurance. In fact, of the insurers at this year’s Valen Analytics Summit not currently using predictive analytics in underwriting, 56% plan to start within a year. However, many insurers haven’t spent enough energy planning exactly how they can implement analytics to get the results they want. It’s a common misconception that competitive advantage is won by simply picking the right model.

In reality, the model itself is just a small part of a much larger process that touches nearly every part of the insurance organization. Embracing predictive analytics is like recruiting a star quarterback; alone, he’s not enough to guarantee a win. He requires both a solid team and a good playbook to achieve his full potential.

The economic crash of 2008 emphasized the importance of predictive modeling as a means to replace dwindling investment income with underwriting gains. However, insurance companies today are looking at a more diverse and segmented market than pre-2008, which makes the “old way of doing things” no longer applicable. The insurance industry is increasing in complexity, and with so many insurers successfully implementing predictive analytics, greater precision in underwriting is becoming the “new normal.” In fact, a recent A.M. Best study shows that P/C insurers are facing more aggressive pricing competition than any other insurance sector.

Additionally, new competitors like Google, with deep reservoirs of data and an established rapport with the Millennial generation, mean that traditional insurers must react to technologies faster than ever. Implementing predictive analytics is the logical place to start.

The most important first step in predictive modeling is making sure all relevant stakeholders understand the business goals and organizational commitment. The number one cause of failure in predictive modeling initiatives isn’t a technical or data problem, but instead a lack of clarity on the business objective combined with a defect in the implementation plan (or lack thereof).

ASSESSMENT OF ORGANIZATIONAL READINESS

If internal conversations are focused solely on the technical details of building and implementing a predictive model, it’s important to take a step back and make sure there’s support and awareness across the organization.

Senior-Level Commitment – Decide on the metrics that management will use to measure the impact of the model. What problems are you trying to solve, and how will you define success? Common choices include loss ratio improvement, pricing competitiveness and top-line premium growth. Consider the risk appetite for this initiative and the assumptions and sensitivities in your model that could affect projected results.

Organizational Buy-In – What kind of predictive model will work for your culture? Will this be a tool to aid in the underwriting process or part of a system to automate straight-through processing? Consider the level of transparency appropriate for the predictive model. It’s usually best to avoid making the model a “black box” if underwriters need to be able to interact with model scores on their way to making the final decisions on a policy.

Data Assets – Does your organization plan to build a predictive model internally, with a consultant or a vendor that builds predictive models on industry-wide data? How will you evaluate the amount of data you need to build a predictive model, and what external data sources do you plan to use in addition to your internal data? Are there resources available on the data side to provide support to the modeling team?

MODEL IMPLEMENTATION

After getting buy-in from around the organization, the next step is to lay out how you intend to achieve your business goals. If it can be measured, it can be managed. This step is necessary to gauge the success or failure post-implementation. Once you’ve set the goals for assessment, business and IT executives should convene and detail a plan for implementation, including responsibilities and a rollout timeline.

Unless you’re lucky enough to work with an entire group of like-minded individuals, this step must be taken with all players involved, including underwriting, actuarial, training and executive roles. Once you’ve identified the business case and produced the model and implementation plan, make sure all expected results are matched up with the planned deliverables. Once everything is up and running, it is imperative to monitor adoption in real time to ensure that results match the initial model goals.
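
A lightweight way to operationalize that monitoring is to compare actuals against the agreed goals on a fixed schedule. The sketch below is one possible shape for such a check; the metric names and targets are placeholders for whatever the business agreed to in the planning step.

```python
# Sketch: monitor post-implementation results against the goals set up front.
# Metric names and targets are placeholders, not recommended values.
GOALS = {"loss_ratio": 0.65, "quote_turnaround_hrs": 2.0, "adoption_rate": 0.85}

def off_plan_metrics(actuals: dict) -> list:
    """Return the metrics drifting from plan so leadership can intervene."""
    alerts = []
    for metric, target in GOALS.items():
        actual = actuals[metric]
        # Adoption should exceed its target; the other metrics should stay below.
        missed = actual < target if metric == "adoption_rate" else actual > target
        if missed:
            alerts.append(f"{metric}: actual {actual} vs. target {target}")
    return alerts

print(off_plan_metrics(
    {"loss_ratio": 0.71, "quote_turnaround_hrs": 1.5, "adoption_rate": 0.62}))
```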

UNDERWRITING TRAINING

A very important but often overlooked step is making sure that underwriters understand why the model is being implemented, what the desired outcomes are and what their role is in implementing it. If the information is presented correctly, underwriters understand that predictive modeling is a tool that can improve their pricing and risk selection rather than undermine them. But some who rely solely on their own experience and knowledge may still feel threatened by a data-driven underwriting process. In fact, nearly half of the attending carriers at the 2015 Valen Summit cited lack of underwriting adoption as one of the primary risks in a predictive analytics initiative.

Insurers that have found the most success with predictive modeling are those that create a specific set of underwriting rules and showcase how predictive analytics are another tool to enhance their performance, rather than something that will replace them entirely. Not stressing this point can result in resistance from underwriters, and it is essential to have their buy-in. At the same time, it is also important to monitor the implementation of underwriting guidelines, ensuring that they are being followed appropriately.

KEEPING THE END IN MIND

Many of the challenges and complexities in the P/C marketplace are out of an individual insurer’s control. One of the few things insurers can control is their use of predictive modeling to know what they insure. It’s one of the best ways an insurer is able to protect its business from new competitors and maintain consistent profit margins.

Using data and analytics to evaluate your options allows you to test and learn, select the best approach and deliver results that make the greatest strategic impact.

While beginning a predictive analytics journey can be difficult and confusing at first glance, following these best practices will increase your chances of getting it right on the first try and ensuring your business goals will be met.

2 Ways to Innovate in Life Insurance

Individual life insurance ownership in the U.S. has been decreasing over the past decade, and the picture is even more depressing over the past 50 years. Life insurance ownership (both group and individual) among U.S. adults dropped from 70% of individuals in 1960 to 59% in 2010, and the share of U.S. adults owning individual policies dropped from 59% in 1960 to 36% in 2010, according to the Life Insurance Marketing and Research Association (LIMRA). The world has seen accelerated change over the past several decades, and, as entire industries transform, even leading and innovative companies can get trampled. The life insurance industry is no exception. The figures clearly demonstrate the slowing demand for life insurance. Are we seeing the “death” of life insurance, or is this just a temporary “blip” as the industry redesigns itself for changing demographics? Are there innovative business models that can change the situation?

The Case for Big Data and Analytics

The life insurance industry needs to innovate and needs to innovate fast. Innovation has to come from understanding end consumer needs better, reducing distribution costs in addressing these needs and developing products that are less complex to purchase. By leveraging new technologies, particularly new sources of data and new analytics techniques, insurers will be able to foresee some of these changes and prepare for disruptive change.

There are at least two distinct ways in which new sources of data and analytics can help in the life insurance sector.

  • Underwriting: Identifying prospects who can be sold life insurance without medical underwriting (preferably instantaneously) and speeding up the process for those who do require medical underwriting
  • New non-standard classes: Identifying and pricing prospects who have certain types of pre-existing conditions, e.g., cancer, HIV and diabetes.

Predictive Modeling in Underwriting

A predictive model predicts a dependent variable from a number of independent variables, using historically available data and the correlations between the independent variables and the dependent variable. This type of modeling is not new to life insurance underwriters, who have always predicted mortality risk for an individual based on historical variables such as age, gender or blood pressure.
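
In code, that familiar exercise looks something like the sketch below: a logistic regression predicting a mortality outcome from age, gender and blood pressure. The data is synthetic, and the fitted coefficients carry no actuarial meaning.

```python
# Sketch of the idea in the text: predict a dependent variable (death within
# the observation period) from independent variables underwriters already
# use. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
age = rng.uniform(25, 75, n)
systolic_bp = rng.normal(125, 15, n)
is_male = rng.integers(0, 2, n)

# Synthetic dependent variable, loosely driven by the independent variables.
logit = -9.0 + 0.08 * age + 0.02 * systolic_bp + 0.3 * is_male
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, systolic_bp, is_male])
model = LogisticRegression().fit(X, died)

applicant = np.array([[45, 130, 1]])  # age 45, BP 130, male
print(f"predicted mortality risk: {model.predict_proba(applicant)[0, 1]:.2%}")
```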

With the availability of additional data about consumers, including pharmacy or prescription data, credit data, motor vehicle records (MVR), credit card purchase data and fitness monitoring device data, life insurers potentially have a lot of data that can be used in the new business process. Because of privacy and confidentiality considerations, most insurers are cautious about using personally identifiable data. However, there is plenty of non-personally identifiable data (e.g., a healthy living index computed by ZIP code) or household-level balance sheet data that can be used to accelerate or “jet-underwrite” certain classes of life insurance.
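
A triage rule built on such non-identifiable signals might look like the following sketch; the healthy living index, age cutoffs and face-amount limits are all hypothetical.

```python
# Sketch: route applicants to jet, accelerated or full underwriting using
# only non-identifiable signals. All thresholds are hypothetical.
def triage(age: int, face_amount: float, zip_health_index: float) -> str:
    """zip_health_index: a 0-1 healthy-living score computed by ZIP code."""
    if age <= 45 and face_amount <= 250_000 and zip_health_index >= 0.7:
        return "jet underwrite: instant issue, no medical exam"
    if age <= 60 and face_amount <= 500_000:
        return "accelerated: order prescription and MVR data only"
    return "full medical underwriting"

print(triage(38, 200_000, 0.82))
```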

Some insurance companies are already using new sources of sensor data and applying analytics to personalize the underwriting process and are reaping huge benefits. For example, an insurer in South Africa is using analytics to underwrite policies based on vitality age, which takes into account exercise, dietary and lifestyle behaviors, instead of calendar age. The insurer combines traditional health check-ups with diet and fitness checks, and exercise tracking devices to provide incentives for healthy behavior. Life insurance premiums change on a yearly basis. The company has successfully managed to change the value proposition of life insurance from death and living benefits to “well-being benefits,” attracting a relatively healthier and younger demographic. This new approach has helped this company progressively build significant market share over the past decade and exceed growth expectations in the last fiscal year, increasing profits by 18% and showing new-business increases of 13%.

Pricing Non-Standard Risk Classes

In the past, life insurers have excluded life insurance cover for certain types of conditions, like AIDS, cancer and stroke. With advances in medical care and in sensors that monitor the vital signs of people with these conditions around the clock, there is an opportunity to price non-standard risk classes. Websites that capture a variety of statistics on patients with specific ailments are emerging. Medical insurers and big pharmaceutical companies are leveraging this information to understand disease progression, drug interaction, drug delivery, patient drug compliance and a number of other factors bearing on morbidity and mortality risks. Life insurers can tap into these new sources of data to underwrite life insurance for narrower or more specialized pools of people.

For example, a life insurance company in South Africa is using this approach to underwrite life insurance for HIV or AIDS patients. It uses extensive data and research on its HIV patients to determine mortality and morbidity risks and combines its offering with other managed care programs to offer non-standard HIV life insurance policies. It has been operating for the past four years and is branching out into new classes of risk, including cancer, stroke and diabetes.

Surviving and Thriving in the World of Big Data

The examples we have provided are just scratching the surface of what is likely to come in the future. Insurers that want to leverage such opportunities should change their mindset and address the challenges facing the life insurance sector. Specifically, they should take the following actions:

  • Start from key business decisions or questions
  • Identify new sources of data that can better inform the decision-making process
  • Use new analytic techniques to generate insights
  • Demonstrate value through pilots before scaling
  • Fail forward — institute a culture of test-and-learn
  • Overcome gut instinct to become a truly data-driven culture

In summary, life insurance needs to innovate to be a relevant product category to the younger and healthier generation. Using new sources of big data and new analytic techniques, life insurers can innovate with both products and processes to bring down the cost of acquisition and also open up new growth opportunities.

What cycle-time improvements have you been able to achieve in the life new-business process? How well are you exploiting new data and analytic techniques to innovate in the life insurance space?

Predictive Analytics And Underwriting In Workers' Compensation

Insurance executives are grappling with increasing competition, declining return on equity, average combined ratios sitting at 115 percent and rising claims costs. According to a recent report from Moody’s, achieving profitability in workers’ compensation insurance will continue to be a challenge due to low interest rates and the decline in manufacturing and construction employment, which makes up 40% of workers’ comp premium.

Insurers are also facing significant changes to how they run underwriting. The industry is affected more than most by the aging baby boomer population. In the last 10 years, the number of insurance workers 55 or older has increased by 74%, compared with a 45% increase for the overall workforce. With 20% of the underwriter workforce nearing retirement, McKinsey noted in a May 2010 report that the industry will need 25,000 new underwriters by 2014. Where will they come from? And, more importantly, what will be the impact on underwriting accuracy?

Furthermore, there’s no question that technology has fundamentally changed the pace of business. Consider the example of FirstComp, reported by The Motley Fool in May 2011. FirstComp created an online interface for agents to request workers’ compensation quotes. What it found was remarkable: when it provided a quote within one minute of the agent’s request, it booked that policy 52% of the time, but the success rate declined with each passing hour. If FirstComp waited a full 24 hours to respond, its close rate plummeted to 30%. In October 2012, Zurich North America was nominated for the Novarica Research Council Impact Award for reducing the time it takes to quote policies. In one example, Zurich cut the time it took to quote a 110-vehicle fleet from 8 hours to 15 minutes.

In order to improve their companies’ performance and meet response time expectations from agents, underwriters need advanced tools and methodologies that provide access to information in real-time. More data is available to underwriters, but they need a way to synthesize “big data” to make accurate decisions more quickly. When you combine the impending workforce turnover with the need to produce quotes within minutes, workers’ comp carriers are increasingly turning toward the use of advanced data and predictive analytics.

Added to these new industry dynamics is the reality that both workers’ compensation and homeowners are highly unprofitable for carriers. According to Insurance Information Institute’s 2012 Workers’ Compensation Critical Issues and Outlook Report, profitable underwriting was the norm prior to the 1980s. Workers’ comp has not consistently made an underwriting profit for the last few decades for several reasons including increasing medical costs, high unemployment and soft market pressures.

What Is Predictive Analytics?
Predictive analytics uses statistical and analytical techniques to develop predictive models that enable accurate predictions about future outcomes. Predictive models can take various forms, with most models generating a score that indicates the likelihood a given future scenario will occur. For instance, a predictive model can identify the probability that a policy will have a claim. Predictive analytics enables powerful, and sometimes counterintuitive, relationships among data variables to emerge that otherwise may not be readily apparent, thus improving a carrier’s ability to predict the future outcome of a policy.
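
Underwriters typically see that output as a simple score rather than a raw probability. One common convention, sketched below with illustrative cut points, is to rank policies into deciles so that a 10 marks the riskiest tenth of the book.

```python
# Sketch: turn model probabilities into the 1-10 scores underwriters see.
# Decile scoring is one common convention; the example values are invented.
import numpy as np

def to_decile_scores(claim_probs: np.ndarray) -> np.ndarray:
    """Rank policies by predicted claim probability and bin into deciles,
    so a score of 10 marks the riskiest 10% of the book."""
    ranks = claim_probs.argsort().argsort()      # 0 = lowest predicted risk
    return ranks * 10 // len(claim_probs) + 1    # deciles 1..10

probs = np.array([0.02, 0.30, 0.07, 0.55, 0.11])
print(to_decile_scores(probs))  # -> [1 7 3 9 5]
```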

Predictive modeling has also led to the advent of robust workers’ compensation “industry risk models” — models built on contributory databases of carrier data that perform very well across multiple carrier book profiles.

There are several best practices that enable carriers to benefit from predictive analytics. Large datasets are required to build accurate predictive models and to avoid selection bias, so most carriers need to leverage third-party data and analytical resources. Predictive models allow carriers to make data-driven decisions consistently across their underwriting staff and to use evidence-based decision-making rather than relying solely on heuristics or human judgment to assess risk.

Finally, incorporating predictive analytics requires an evolution in people, process and technology, so executive-level support is important to facilitate adoption internally. Carriers that fully adopt predictive analytics are more competitive in gaining profitable market share and avoiding adverse selection.

Is Your Organization Ready For Predictive Analytics?
As with any new initiative, how predictive analytics is implemented will determine its success. Evidence-based decision-making provides consistency and improved accuracy in selecting and pricing risk in workers’ compensation. Recently, Dowling & Partners Securities, LLC, released a special report on predictive analytics, saying that the “use of predictive modeling is still in many cases a competitive advantage for insurers that use it, but it is beginning to be a disadvantage for those that don’t.” The question for many insurance executives remains: Is this right for my organization, and what do we need to do to use analytics successfully?

There are a few important criteria and best practices to consider when implementing predictive analytics to help drive underwriting profitability.

  • Define your organization’s distinct capability as it relates to implementing predictive analytics within underwriting.
  • Secure senior management commitment and passion for becoming an analytic competitor, and keep that level of commitment for the long term. It will be a trial and error process, especially in the beginning.
  • Dream big. Organizations that find the greatest success with analytics have big, important goals tied to core metrics for the performance of their business.