The Promise of Predictive Models

Big data and AI will uncover insights that allow smart carriers to acquire the most profitable clients and avoid the worst.


An innovation strategy around big data and artificial intelligence will uncover insights that allow smart carriers to acquire the most profitable clients and avoid the worst. Companies that develop the best portfolios of risks will ultimately enjoy a flight to quality while those left behind will compete for the scraps of insurability.

Insurers are also trying to individualize underwriting rather than rely on the traditional approach of grouping policyholders into broad risk categories.

As such, the insurance industry finds itself in a data arms race, with carriers leveraging their own datasets and engaging insurtechs that can augment them.

For the underwriter, big data analytics promise the ability to make better decisions with respect to risk selection and pricing. Underwriters have thought many times that, had they understood a particular area of risk better, they would have charged a lower price and won the business; or that, with one extra piece of information, they would not have written an account that turned out to be unprofitable. Nearly every underwriter would assert that, with better information, they would have charged a more appropriate price for a risk and would not have lost money.

One solution has been developing predictive underwriting risk-selection and pricing models. By leveraging datasets previously unavailable, or in formats too unstructured to use, algorithmic models can better categorize and rank risks, allowing an underwriter to assign the most appropriate price: one that rewards better risks and surcharges riskier ones. Better risks might be those that are simply less inherently risky than others (e.g., a widget manufacturer vs. an explosives manufacturer with respect to product liability or property coverage), or those whose behaviors and actions are more cautious. Through a predictive, data-driven model, underwriters will be able to build profitable and sustainable portfolios of risks, allowing them to expand their writings to a broader customer base, pass along cost savings from automation to their clients, provide insights that help insureds reduce risk, identify new areas of coverage and product, and bring more value to customers.
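To make the mechanics concrete, here is a minimal sketch of what such a risk-selection model might look like. The file name, feature columns and the gradient-boosting choice are illustrative assumptions, not an actual carrier's rating plan.

```python
# Minimal sketch of a predictive risk-selection model using scikit-learn.
# The data file, feature names and loss label are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical policy data: one row per expired policy.
policies = pd.read_csv("historical_policies.csv")  # assumed file
features = ["years_in_business", "prior_claim_count",
            "payroll", "industry_hazard_grade"]     # assumed columns
X, y = policies[features], policies["had_large_loss"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new submission: the predicted loss probability becomes a rank
# that rewards better risks and surcharges riskier ones.
submission = X_test.iloc[[0]]
loss_prob = model.predict_proba(submission)[0, 1]
base_rate = 1.00                                      # rate per $100 of exposure
indicated_rate = base_rate * (0.8 + 0.8 * loss_prob)  # toy relativity curve
print(f"loss probability {loss_prob:.2f}, indicated rate {indicated_rate:.2f}")
```

The held-out test split matters here: a model that merely memorizes decades of claims will rank risks no better than the traditional categories it is meant to replace.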

With this win-win situation at hand, the insurance industry has charged forward in data mining decades' worth of its own internal information, as well as accessing public databases, leveraging data brokers and partnering with insurtechs that have their own data lakes. Algorithmic models are then fine-tuned by actuaries, statisticians and behaviorists to find causal links and correlations between seemingly disparate data points, with the intention of divining future loss outcomes. What gets lost in this digital frenzy, however, is that the methods by which all this data is used can carry social costs.

See also: 11 Keys to Predictive Analytics in 2021

Balancing Social Good With Social Cost

It is not false altruism to reward good risks, build resiliency in portfolios or discover insights that lead to new products and services. However, underwriters must recognize that they are inherently in the business of bias. While it is acceptable to be discerning between a safe driver and a reckless one, it is unacceptable to build into underwriting decisions a bias based on race or religion and, in many cases, gender or health conditions. It is therefore essential that underwriters, and the actuaries and data scientists who support them, act responsibly and be accountable for any social failures of the algorithmic models they employ.

With our predictive risk-selection model in mind, consider some of the available data that could be processed (a sketch of how a few of these sources might be combined follows the list):

--Decades of workers' compensation claims data

--Policyholder names, addresses and other personally identifiable information (PII)

--DMV records

--Credit scores and reports

--Social media posts

--Telematics

--Wearable tech data

--Biometric data

--Genetic and genealogy information

--Credit card and purchasing history
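To illustrate just how disparate these sources are, the sketch below joins a few of them into a single feature table keyed on policyholder. Every file and column name is a hypothetical placeholder.

```python
# Sketch: merging disparate data sources into one feature table keyed
# on policyholder. All file and column names are hypothetical.
import pandas as pd

claims = pd.read_csv("wc_claims_history.csv")       # decades of claims
dmv = pd.read_csv("dmv_records.csv")                # violations per driver
telematics = pd.read_csv("telematics_summary.csv")  # e.g., hard-brake rate

features = (
    claims.groupby("policyholder_id")["incurred_loss"].sum()
    .rename("total_incurred")
    .to_frame()
    .join(dmv.set_index("policyholder_id")["violation_count"], how="left")
    .join(telematics.set_index("policyholder_id")["hard_brakes_per_100mi"],
          how="left")
    .fillna(0)  # policyholders absent from a source get a neutral value
)
print(features.head())
```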

Consult algorithmic accountability experts like law professor Frank Pasquale, and they will point you to additional data sets you might not even know existed. Professor Pasquale has described the availability of databases of everything from the seemingly innocuous (wine enthusiasts) to those that shock the conscience (victims of rape). With this myriad of data available, and so much of it highly personal in nature, underwriters must recognize they have a responsibility to a new set of stakeholders beyond their company, clients, shareholders and regulators -- namely, the individuals behind the digital identities this data describes.

The next risk of social harm lies in how that data is used. Predictive models seek to identify correlations between new points of data to predict loss potential. If the correlations are wrong, not only could they jeopardize the underwriter's ability to properly price a risk, but they could also result in an illegal practice like red-lining. This could happen accidentally, but a dataset could also be used nefariously, to circumvent a statute prohibiting the use of certain information in decision making.

In California, for example, there is a prohibition on using credit scores in underwriting certain risks. Perhaps a modeler for a personal lines insurance product draws information from a database of locations of check-cashing stores or pawn shops and codes into the algorithm an assumption that anyone with an address in the same ZIP code has bad credit. You would hope this would not happen, but insurance companies use outsourced talent over which they have less control. Maybe a modeler works outside the U.S. and is innocently unfamiliar with our social norms as well as our regulatory statutes.
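An audit can catch this kind of proxy before it does harm. The sketch below assumes the carrier retains a compliance-only extract that contains the prohibited attribute and simply tests whether the suspect feature reconstructs it; the file name, column names and threshold are all illustrative.

```python
# Sketch: flagging a feature that may act as a proxy for a statutorily
# prohibited attribute (here, credit score). Column names and the 0.6
# threshold are illustrative assumptions.
import pandas as pd

audit = pd.read_csv("compliance_audit_extract.csv")  # hypothetical extract

# Suspect feature: "address is in a ZIP shared with check-cashing stores"
corr = audit["zip_has_check_cashing"].corr(audit["credit_score"])

if abs(corr) > 0.6:  # threshold set by the compliance team
    print(f"WARNING: feature correlates with prohibited data (r={corr:.2f}); "
          "exclude it or document a causal link to loss.")
```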

There are also social risks related to the speed and complexity of predictive models. Dozens of datasets might be accessed, with different coded correlations and computations processed, weighted and ranked until a final series of recommendations or decisions is presented to the user. Transparency is difficult to attain.

If there is something ethically or statutorily wrong with a model, the speed at which processing occurs and the opacity of the algorithms can prolong any social harm.
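One mitigation is traceability: logging every automated decision with its inputs, model version and reason codes so it can be reconstructed after the fact. A minimal sketch follows; the record structure is an assumed convention, not a regulatory standard.

```python
# Sketch: an append-only decision log so every automated underwriting
# decision can be reconstructed and audited later. Field names are
# illustrative assumptions.
import json, hashlib, datetime

def log_decision(submission_id, model_version, inputs, score, decision,
                 reason_codes, logfile="decision_log.jsonl"):
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "submission_id": submission_id,
        "model_version": model_version,
        # Hash the raw inputs so the record is tamper-evident without
        # storing PII in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "score": score,
        "decision": decision,
        "reason_codes": reason_codes,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("SUB-001", "risk-model-v2.3",
             {"prior_claims": 2, "industry": "manufacturing"},
             score=0.31, decision="accept",
             reason_codes=["low prior loss frequency"])
```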

Don’t Throw the Baby Out With the Bathwater

While regulation of big data analytics is not well-established, there are governance steps that insurance companies can take, starting with aligning their predictive models with their corporate values. Senior leadership should insist that decision-making technology adhere to all laws and regulations and, more broadly, that it be fair. Fairness should apply both to the process and to the decisions rendered. Standards should be established, customers treated with respect, professional obligations fulfilled and products represented accurately.

Insurance companies should audit their models and data to ensure a causal linkage to underwriting loss; any data that does not support causation should be removed. Parallel processes employing traditional and artificial intelligence techniques should also be run to confirm that an appropriate confidence level of actuarial equivalence is met. Data should be scrubbed to anonymize PII as much as necessary to support privacy expectations and statutes. To remove biases, audits should identify, and require exclusion of, information that acts as a proxy for statutorily disallowed data.
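PII scrubbing can be as simple as replacing identifiers with a salted hash, which keeps records joinable across datasets without exposing identities. A minimal sketch, with hypothetical file and column names:

```python
# Sketch: anonymizing PII before data reaches a modeling team. A salted
# hash keeps records joinable across datasets without exposing identity.
# File and column names are illustrative assumptions.
import hashlib
import pandas as pd

SALT = "store-this-secret-outside-the-code"  # e.g., in a key vault

def pseudonymize(value: str) -> str:
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

policyholders = pd.read_csv("policyholders.csv")  # hypothetical file
policyholders["policyholder_key"] = policyholders["name"].map(pseudonymize)
scrubbed = policyholders.drop(columns=["name", "address", "ssn"])
scrubbed.to_csv("policyholders_scrubbed.csv", index=False)
```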

In essence, the models should be run through a filter of protected class categories to eliminate any illegal red-lining. Because models are developed by humans, who are inherently flawed, modelers should attempt to program their machine learning innovations to identify biases within code and self-correct for them.
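One concrete form such a filter could take is the "four-fifths" disparate-impact test borrowed from employment law; applying it to underwriting decisions is an illustrative assumption here, as are the file and column names below.

```python
# Sketch: a disparate-impact screen over model decisions, using the
# "four-fifths" rule as the trigger. Borrowing this employment-law test
# for underwriting is an illustrative assumption.
import pandas as pd

decisions = pd.read_csv("model_decisions_audit.csv")  # hypothetical

accept_rates = decisions.groupby("protected_class_group")["accepted"].mean()
impact_ratio = accept_rates.min() / accept_rates.max()

if impact_ratio < 0.8:
    print(f"Disparate impact flag: ratio {impact_ratio:.2f} < 0.80; "
          "investigate proxy variables before deployment.")
```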

From a base of fairness, carriers can take steps to promote transparency. By starting with an explanation of the model’s purpose, insurers can move toward outlining the decision-making logic, followed by subjecting the model to independent certification and finally by making the findings of the outside auditor available for review.
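A lightweight way to anchor that progression is a "model card" recording the purpose, decision logic and audit status in one place. The schema below is an illustrative assumption, not an industry standard:

```python
# Sketch: a simple "model card" capturing the transparency items
# described above. The schema is an illustrative assumption.
import json

model_card = {
    "name": "commercial-casualty-risk-selector",
    "version": "2.3",
    "purpose": "Rank submissions by expected loss to support pricing.",
    "decision_logic": "Gradient-boosted trees over claims, DMV and "
                      "telematics features; output mapped to rate tiers.",
    "prohibited_inputs": ["race", "religion", "credit score (CA)"],
    "independent_certification": {"auditor": None, "date": None},  # pending
    "audit_findings_url": None,  # published once the review is complete
}
print(json.dumps(model_card, indent=2))
```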

Insurers can look to trade associations and regulatory bodies for governance best practices, such as those the National Association of Insurance Commissioners (NAIC) announced in August 2020. The five tenets of the AI guidelines promote ethics, accountability, compliance, transparency and traceability.

See also: Our Big Problem With ‘Noise’

One regulation that could be developed is the imposition of rate bands. Predictive engines would still reward superior risks and surcharge poorer-performing accounts, but rate bands would temper the extremes. This would strike a balance between the necessity of mutualizing risk and an individualization of pricing that could otherwise make coverage unaffordable in certain cases.
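The mechanics of a rate band are simple to state precisely: the model-indicated rate is clamped to within a fixed percentage of the filed base rate. A minimal sketch, with the plus-or-minus 25% band as an assumed regulatory parameter:

```python
# Sketch: applying a regulatory rate band so predictive pricing still
# rewards better risks but cannot drift to extremes. The +/-25% band is
# an assumed parameter, not an actual regulation.
def banded_rate(indicated_rate: float, base_rate: float,
                band: float = 0.25) -> float:
    floor = base_rate * (1 - band)
    ceiling = base_rate * (1 + band)
    return min(max(indicated_rate, floor), ceiling)

print(banded_rate(0.60, 1.00))  # superior risk, clamped up to 0.75
print(banded_rate(1.10, 1.00))  # within the band, unchanged: 1.10
print(banded_rate(1.80, 1.00))  # poor risk, clamped down to 1.25
```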

Finally, insurance companies should recognize the importance of engaging with regulators early in the development of their AI strategies. A patchwork of regulation exists today, and insurance companies may find regulatory gaps they are tempted to exploit, but the law will catch up with the technology, and carriers should build trust with regulators from the outset, not after a market conduct exam identifies issues. Regulators do not wish to stifle innovation, but they do strive to protect consumers.

Once regulators are comfortable that models and rating plans will neither unfairly discriminate nor jeopardize the solvency of the carrier, they can help enable technology advancements, especially if AI initiatives expand the market through more capacity or new products, lower overall market costs or provide insights that help customers improve their risk profiles.

In the data arms race that carriers are engaged in with each other, better risk selection and more accurate pricing are without question competitive advantages. Another, often-overlooked competitive advantage is an effective risk management program. Robust management of a company's AI risks will reduce volatility in a portfolio and promote resiliency. With this foundation, a carrier can deftly outmaneuver the competition; building it is an additional strategy that should be prioritized.


Christopher McKeon

Christopher J. McKeon is senior vice president, head of commercial casualty and risk management for Everest Insurance. He has spent over 25 years in the insurance industry, holding a variety of underwriting and management roles of increasing responsibility.
