
Underwriting Small Business Post-COVID

Despite the impact of COVID-19, the commercial insurance marketplace must provide proper insurance coverage for small businesses that have survived, morphed, jump-started or stalled. Insurance carriers have needed to assemble a more complete snapshot of each small business’ individualized risks, and now that is more important than ever. To do so, carriers need to go beyond traditional data sources to minimize the information gap and help transform the underwriting of small businesses.

Assessing risks more accurately

One of the challenges in underwriting a small business is finding sufficient financial data about the business. According to a 2019 internal study conducted by LexisNexis Risk Solutions, approximately half of small businesses have a credit profile in only a single commercial credit bureau. When insurers rely exclusively on commercial credit for commercial rating, they are likely missing the true risk profile of the small businesses in their book.

To properly protect a small business customer, insurance carriers need to make sure they’re collecting and analyzing available data. Fortunately, there is an abundance of data and analysis available to overcome this problem; it just needs to be aggregated, analyzed and provided in a readily digestible way. 

Gain from a multi-source strategy 

A multiple-source approach can address the gaps, but identifying and evaluating the right data sources is critical for pricing a risk fairly, for both the customer and the insurer. Our internal analysis shows that when insurers use three financial data sets in their underwriting, the average scorable rate rises to 74%, compared with just 52% from a single source.

Leveraging small business credit data also provides insurance carriers with extended visibility and financial insights on small and micro businesses, and combining small business credit data with other available business data makes it even more powerful. Predictive modeling makes it easier for carriers to evaluate a business by its loss propensity at the point of quote, underwriting or renewal.

With financial data from millions of small businesses, carriers can benchmark a customer against the industry at large and gain financial insight that may not be found in commercial credit sources. This approach, an incremental model combining business data and small business credit data, can provide up to 88% scorable-rate coverage on small businesses, and coverage can reach up to 96% when combined with business owner financial data.
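
The coverage gain from layering sources is, at bottom, a union calculation: a business is scorable if at least one source can score it. Below is a minimal sketch of that arithmetic in Python, using an invented book of business and made-up per-source coverage rates rather than the study figures cited above:

```python
import random

# Illustrative only: the book of business and the per-source coverage rates
# are hypothetical, not the study figures cited in the article.
random.seed(42)
businesses = [f"biz_{i}" for i in range(1000)]

def sample_coverage(population, rate):
    """Return the subset of businesses a given data source can score."""
    return {b for b in population if random.random() < rate}

commercial_credit = sample_coverage(businesses, 0.52)  # single-source coverage
business_data = sample_coverage(businesses, 0.45)
owner_financials = sample_coverage(businesses, 0.40)

def scorable_rate(*sources):
    """A business is scorable if at least one source covers it (set union)."""
    covered = set().union(*sources)
    return len(covered) / len(businesses)

print(f"One source:    {scorable_rate(commercial_credit):.0%}")
print(f"Two sources:   {scorable_rate(commercial_credit, business_data):.0%}")
print(f"Three sources: {scorable_rate(commercial_credit, business_data, owner_financials):.0%}")
```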

See also: COVID-19 Trio Tops Global Business Risks

Create an inclusive approach

If you are a commercial carrier looking to improve your book of business, begin by understanding your current and future target market. How do these types of businesses compare with similar entities in your book of business, and what financial products do they use?

Next, select the right sources of data for a particular business. Credit bureaus, non-traditional financial sources and personal financial data can all be used to better align to your book of business.

Lastly, create an underwriting program that leverages these data sources to better segment small businesses based on a more precise view of the business’ or business owner’s financial profile. The result, a predictive model, is built specifically to help you assess risks more quickly and confidently. Taking advantage of segmentation can increase the effectiveness of your program and improve your loss ratio contingencies.
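
As an illustration of the segmentation step, the sketch below buckets accounts into pricing tiers by the quartile of a hypothetical predicted loss propensity; the scores, tier names and cut points are assumptions for illustration, not a recommended design:

```python
import statistics

# Hypothetical output of a predictive model: loss propensity per account.
predicted_loss_propensity = {
    "bakery_llc": 0.12, "roofing_co": 0.55, "cafe_inc": 0.18,
    "courier_llc": 0.41, "florist_co": 0.09, "welding_inc": 0.62,
}

# Quartile cut points over the scored book; a production program would
# calibrate tier boundaries against historical loss ratios instead.
cuts = statistics.quantiles(predicted_loss_propensity.values(), n=4)

def tier(score: float) -> str:
    if score <= cuts[0]:
        return "preferred"
    if score <= cuts[2]:
        return "standard"
    return "surcharged"

for name, score in predicted_loss_propensity.items():
    print(f"{name:12s} propensity={score:.2f} -> {tier(score)}")
```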

Insurers looking to remain competitive within the small business market need to evaluate the right mix of information on both the business and its owner to price the risks of each small business they insure more accurately. Embracing change and seeking predictive models with industry data can improve risk assessment and support more dependable decision-making.

The Promise of Predictive Models

An innovation strategy around big data and artificial intelligence will uncover insights that allow smart carriers to acquire the most profitable clients and avoid the worst. Companies that develop the best portfolios of risks will ultimately enjoy a flight to quality while those left behind will compete for the scraps of insurability.

Insurers are also trying to individualize underwriting rather than rely on traditional risk categories.

As such, the insurance industry finds itself in a data arms race. Insurance carriers are leveraging their datasets and engaging with insurtechs that can help.

For the underwriter, big data analytics promise better decisions on risk selection and pricing. Underwriters have thought too many times that, had they just understood a particular area of risk better, they would have charged a lower price and won the business; or that, with one extra piece of information, they would not have written an account that turned out to be unprofitable. Most certainly, underwriters would assert that with better information they would have charged a more appropriate price for a risk and would not have lost money.

One solution has been developing predictive underwriting risk selection and pricing models. By leveraging datasets previously unavailable, or in formats too unstructured to use, algorithmic models can better categorize and rank risks, allowing an underwriter to select and assign the most appropriate price, one that rewards better risks and surcharges those that are riskier. Better risks might be those that are simply less inherently risky than others (e.g., a widget manufacturer vs. an explosives manufacturer with respect to product liability or property coverage), or those whose behaviors and actions are more cautious. Through a predictive, data-driven model, underwriters will be able to build profitable and sustainable portfolios of risks, allowing them to expand their writings to a broader customer base, pass along cost savings from automation to their clients, provide insights that help insureds reduce risk, identify new areas of coverage and products, and bring more value to customers.
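
To make the "reward better risks, surcharge riskier ones" idea concrete, here is a toy sketch that combines a few hypothetical features into a relative risk score and converts it into a credit or debit against a base rate; the features, weights and base rate are invented for illustration and are not any carrier's actual model:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RiskProfile:
    # Hypothetical features; real models draw on far richer data.
    years_in_business: int
    prior_claims_3yr: int
    safety_program: bool

def risk_score(p: RiskProfile) -> float:
    """Higher score = riskier. Weights are arbitrary placeholders."""
    score = 1.0
    score -= 0.02 * min(p.years_in_business, 10)  # tenure reduces risk
    score += 0.15 * p.prior_claims_3yr            # claim history increases risk
    score -= 0.10 if p.safety_program else 0.0    # loss-control credit
    return score

BASE_RATE = 1_000.0  # illustrative annual premium for an average risk

portfolio = {
    "widget maker": RiskProfile(years_in_business=12, prior_claims_3yr=0, safety_program=True),
    "explosives maker": RiskProfile(years_in_business=3, prior_claims_3yr=2, safety_program=False),
    "cafe": RiskProfile(years_in_business=6, prior_claims_3yr=1, safety_program=False),
}

scores = {name: risk_score(p) for name, p in portfolio.items()}
portfolio_avg = mean(scores.values())

for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    modifier = score / portfolio_avg  # credit below 1.0, surcharge above it
    print(f"{name:16s} score={score:.2f} modifier={modifier:.2f} "
          f"indicated premium={BASE_RATE * modifier:,.0f}")
```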

With this win-win situation at hand, the insurance industry has charged forward in data mining decades’ worth of its own internal information, as well as accessing public databases, leveraging data brokers and partnering with insurtechs that have their own data lakes. Algorithmic models are then fine-tuned by actuaries, statisticians and behaviorists to find causal links and correlations between seemingly disparate data points, with the intention of divining future loss outcomes. In this digital frenzy, what gets lost, however, is that there can be social costs from the methods by which all this data is used.

See also: 11 Keys to Predictive Analytics in 2021

Balancing Social Good With Social Cost

It is not false altruism to reward good risks, build resiliency in portfolios or discover insights that lead to new products and services. However, underwriters must recognize that they are inherently in the business of bias. While it is acceptable to distinguish between a safe driver and a reckless one, it is unacceptable to build into underwriting decisions a bias based on race or religion and, in many cases, gender or health conditions. It is therefore essential that underwriters, and the actuaries and data scientists who support them, act responsibly and be accountable for any social failures of the algorithmic models they employ.

With our predictive risk selection model in mind, consider some of the available data that could be processed:

–Decades of workers’ compensation claims data

–Policyholder names, addresses and other personally identifiable information (PII)

–DMV records

–Credit scores and reports

–Social media posts

–Telematics

–Wearable tech data

–Biometric data

–Genetic and genealogy information

–Credit card and purchasing history

Consult algorithmic accountability experts like law professor Frank Pasquale, and they will provide you with additional data sets you might not even know existed. Professor Pasquale described the availability of databases of everything from the seemingly innocuous (wine enthusiasts) to those that shock the conscience (victims of rape). With the myriad of data available and so much of it highly personal in nature, underwriters must recognize they have a responsibility to a new set of stakeholders beyond their company, clients, shareholders and regulators — namely, digital identities.

The next risk of social harm is in how that data is used. Predictive models seek to identify correlations between new points of data to predict loss potential. If the correlations are wrong, not only could they jeopardize the underwriter’s ability to price a risk properly, but they could also result in an illegal practice like red-lining. This could occur accidentally, or a dataset could be used nefariously to circumvent a statute prohibiting the use of certain information in decision making.

In California, there is a prohibition on using credit scores in underwriting certain risks. Perhaps a modeler for a personal lines insurance product draws information from a database of locations of check cashing stores or pawn shops and codes into the algorithm that anyone with an address in the same ZIP code is assumed to have bad credit. You would hope this would not happen, but insurance companies use outsourced talent, over which they have less control. Maybe a modeler works outside the U.S. and is innocently unfamiliar with our social norms as well as our regulatory statutes.

There are also social risks related to the speed and complexity of predictive models. Dozens of datasets might be accessed, with different coded correlations and computations processed, weighted and ranked until a final series of recommendations or decisions is presented to the user. Transparency is difficult to attain.

If there is something ethically or statutorily wrong with a model, the speed at which processing can occur and the opaqueness of the algorithms can prolong any social harm.

Don’t Throw the Baby Out With the Bathwater

While regulation of big data analytics is not well-established, there are governance steps that insurance companies can take. They can start by aligning their predictive models with their corporate values. Senior leadership should insist that decision-making technology adhere to all laws and regulations and, more generally, be fair. Fairness should apply both to the process and to the rendered decisions. Standards should be established, customers treated with respect, professional obligations fulfilled and products represented accurately.

Insurance companies should audit their models and data to ensure a causation linkage to underwriting loss. Any data that does not support causation should be removed. Parallel processes employing traditional and artificial intelligence techniques should also be run to confirm that an appropriate confidence level of actuarial equivalence is met. Data should be scrubbed to anonymize personally identifiable information (PII) as much as necessary to support privacy expectations and statutes. To remove biases, audits should identify and require exclusion of information that acts as a proxy for statutorily disallowed data.

In essence, the models should be run through a filter of protected class categories to eliminate any illegal red-lining. Because models are developed by humans, who are inherently flawed, modelers should attempt to program their machine learning innovations to identify biases within code and self-correct for them.
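
One way to operationalize that filter is a pre-deployment audit that flags candidate features whose values track a protected attribute closely enough to act as a proxy. The sketch below runs a simple correlation check on synthetic data; the features, threshold and data are all hypothetical, and a real audit would pair statistical tests with legal and actuarial review:

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(7)

# Synthetic audit data: one row per applicant, with the protected attribute
# encoded 0/1 purely for this illustration.
rows = []
for _ in range(500):
    protected = random.random() < 0.3
    rows.append({
        "protected_class": 1 if protected else 0,
        # Constructed so it behaves like a proxy for the protected class.
        "zip_risk_flag": 1 if random.random() < (0.8 if protected else 0.2) else 0,
        # A feature with no built-in relationship to the protected class.
        "prior_claims": random.randint(0, 3),
    })

PROXY_THRESHOLD = 0.4  # arbitrary cutoff for this sketch

protected_col = [r["protected_class"] for r in rows]
for feature in ("zip_risk_flag", "prior_claims"):
    values = [r[feature] for r in rows]
    r = statistics.correlation(values, protected_col)  # Pearson r
    verdict = "EXCLUDE (possible proxy)" if abs(r) >= PROXY_THRESHOLD else "keep"
    print(f"{feature:15s} corr with protected class = {r:+.2f} -> {verdict}")
```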

From a base of fairness, carriers can take steps to promote transparency. By starting with an explanation of the model’s purpose, insurers can move toward outlining the decision-making logic, followed by subjecting the model to independent certification and finally by making the findings of the outside auditor available for review.

Insurers can look to trade associations and regulatory bodies for governance best practices, such as those the National Association of Insurance Commissioners (NAIC) announced in August 2020. The five tenets of the AI guidelines promote ethics, accountability, compliance, transparency and traceability.

See also: Our Big Problem With ‘Noise’

One regulation that could be developed is the imposition of rate bands. Predictive engines would still reward superior risks and surcharge poorer-performing accounts, but rate bands would temper the extremes. This would strike a balance between the necessity of mutualizing risk and an individualization of pricing that could otherwise make coverage unaffordable in certain cases.
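
Mechanically, a rate band is just a floor and a cap applied to the model's indicated rate relative to a base rate. A minimal sketch, with invented band limits:

```python
def apply_rate_band(indicated_rate: float, base_rate: float,
                    lower: float = 0.8, upper: float = 1.3) -> float:
    """Clamp the indicated rate to within [lower, upper] x base rate.
    The band limits here are illustrative, not regulatory values."""
    return max(base_rate * lower, min(base_rate * upper, indicated_rate))

# A model might indicate a steep surcharge for a poor risk...
print(apply_rate_band(indicated_rate=2_400, base_rate=1_000))  # 1300.0 (capped)
# ...or a deep discount for a superior one.
print(apply_rate_band(indicated_rate=600, base_rate=1_000))    # 800.0 (floored)
```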

Finally, insurance companies should recognize the importance of engaging with regulators early in the development of their AI strategies. A patchwork of regulation exists today, and insurance companies could find regulatory gaps they might be tempted to exploit. But the law will catch up with the technology, and carriers should build trust with regulators from the outset, not after a market conduct exam identifies issues. Regulators do not wish to stifle innovation, but they do strive to protect consumers.

Once regulators are comfortable that models and rating plans will not unfairly discriminate or jeopardize the solvency of the carrier, they can help enable technology advancements, especially if AI initiatives facilitate an expansion of the market through more capacity or new products, lower overall market costs or provide insights that help customers improve their risk profile.

In the data arms race that carriers are engaged in with each other, better risk selection and more accurate pricing are without question competitive advantages. Another, often-overlooked competitive advantage is an effective risk management program. Robust management of a company’s AI risks will reduce volatility in a portfolio and promote resiliency, and it is an additional strategy that should be prioritized. With this foundation, a carrier can deftly outmaneuver the competition.

Most-Needed Strategy for Insurers in 2021

The last year has proved to be one of the most difficult the insurance industry has ever faced, but the challenging times are not over yet. In addition to increased regulatory challenges, as well as competitive and customer disruptions, insurers continue to endure the unanticipated effects of a global pandemic. Insurers’ technologies are stretched to the max, and employees now work from home offices to accommodate social distancing guidelines.

Customers have also been confronted by this environment and expect even more from their insurance providers. With fluctuating stay-at-home orders, other restrictions and continuing concerns posed by the current climate, the need for insurance has never seemed so significant. However, for insurers to overcome the added pressures and deliver the exceptional service customers expect, they will need to make a fundamental shift in their operational model.

The key to resiliency is in operational changes and business reinvention

Insurers need to adapt to ensure that the effects of disruption – whether caused by an unexpected event, such as a pandemic, or a new regulation – do not hinder the experience their customers receive. Insurers must reinvent their business so that the services and products they provide are both appropriate for customers now and capable of withstanding future upheaval.

This might sound like a huge undertaking, but it is possible to achieve these goals through the use of technology, which will allow insurers to consolidate, analyze and use data-driven insights. Data is at the heart of the solution.

Better data improves visibility, which, in conjunction with accurate scenario-based modeling and planning, helps build a more agile organization. This is especially important at a time when some insurers have had to grapple with the added challenge of doing business with a lower headcount. Data can also be useful in anticipating when customer service functions might be affected by local lockdowns or increased restrictions.

Use data to better tailor products to customers’ needs

Better data can drive artificial intelligence that can be applied to how products and services evolve for customers. Customers want insurance that gives them confidence during difficult times, but insurers need to balance their coverage to avoid overexposing themselves to events that could appear out of nowhere.

See also: Free Insurance Data You’ll Need

You can’t predict the unpredictable, but you can plan for it

To survive this climate or any other challenging situation, insurers must be able to plan, model and predict the likelihood and impact of possible events. By using technology – and, more specifically, data – insurance providers can ensure that customer expectations are met and that businesses are prepared for the future.

Challenges Remain on Use of Data, Analytics

As insurance companies look to optimize performance, mitigate risk and meet rising consumer expectations, they still face a plethora of challenges when it comes to data and analytics. Companies continue to aggregate more and more data – but the manner in which they are doing so is not necessarily efficient. Some 40% to 50% of analysts spend their time wrangling the data, rather than finding meaningful insights.

To address these operational inefficiencies, TransUnion commissioned Aite Group to conduct a study of insurance and financial services professionals. The findings from this study outline how companies can stay competitive in the insurance industry while adapting to the evolving world of data and analytics.

Like most established financial institutions, insurance companies have multiple data repositories across the organization. Individual business units own their respective processes for capturing and managing data and, more often than not, manage at the product level rather than at the customer level. This often leads to inconsistencies, with no set definitions of key terms such as “customer.” As a result, information and insights are isolated to silos – by lines of business or by product – creating barriers toward seamless data integration.
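
A minimal sketch of the product-level vs. customer-level problem: each line of business keeps its own records, and a customer-level view has to be stitched together from them. The record layouts, keys and matching rule below are invented for illustration; real integration would require proper entity resolution rather than a single shared identifier:

```python
from collections import defaultdict

# Hypothetical product-level silos, each with its own record layout.
auto_policies = [
    {"policy_id": "A-100", "insured_name": "ACME LLC", "tax_id": "11-111"},
]
property_policies = [
    {"pol_no": "P-200", "named_insured": "Acme, LLC", "fein": "11-111"},
]
workers_comp = [
    {"wc_id": "W-300", "employer": "Acme LLC", "fein": "11-111"},
]

# Stitch the silos into a single customer-level view, keyed on tax ID here
# (a simplification; in practice many records lack a clean shared key).
customers = defaultdict(lambda: {"names_seen": set(), "policies": []})

def add(silo, key_field, name_field, id_field, product):
    for rec in silo:
        view = customers[rec[key_field]]
        view["names_seen"].add(rec[name_field])
        view["policies"].append((product, rec[id_field]))

add(auto_policies, "tax_id", "insured_name", "policy_id", "auto")
add(property_policies, "fein", "named_insured", "pol_no", "property")
add(workers_comp, "fein", "employer", "wc_id", "workers_comp")

for tax_id, view in customers.items():
    print(tax_id, sorted(view["names_seen"]), view["policies"])
```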

To maintain a competitive edge, insurance companies recognize the need for new data sources. More than half of the study’s respondents plan to increase spending on most types of data sources, especially newer ones, such as mobile. However, as big data gets even bigger, it becomes increasingly difficult for analytics executives to find valuable insights. Addressing the challenges that arise from big data volumes requires an enterprise data management strategy as well as an investment in the proper analytics tools and platforms for processing and analyzing the data for meaningful insights.  

The majority of these institutions are currently grappling with fractured data and legacy systems, which prevents these companies from extracting value and making the data actionable. 70% of those surveyed indicated that a single analytics platform, one that coordinates and connects internal and third-party systems, is a major differentiator. However, only about two in 10 respondents indicated that their current solutions have these capabilities.

This highlights the need for a coherent enterprise data and analytics strategy and a common platform to hold and integrate existing and new data sources, as well as analytical tools. The platform needs to be flexible to support different skill sets, react to changing market conditions and have the ability to integrate alternative sources of data.

See also: Why to Refocus on Data and Analytics  

In addition to leveraging the right tools, sourcing the right talent remains a key challenge for executives. Nearly half (45%) of insurance professionals indicate that having the right talent greatly improves their ability to underwrite profitable policies. However, due to a lack of bandwidth, insurance companies often do not have the resources to let their analytics teams stretch their creativity.

These operational challenges can result in a significant amount of time being dedicated to cleansing and prepping the data – preventing analytical teams from performing more valuable activities such as model development. They also create an obstacle to retaining talent, as these sought-after data scientists are instead assigned to trivial work. 42% of the insurance professionals surveyed indicated that it is also challenging to find qualified data scientists in the first place.

As the use of descriptive, prescriptive and predictive analytics gains traction, it is imperative that executives recognize the challenges and explore solutions. By overcoming these barriers, the industry will be better prepared to embark on the next frontier of data and analytics.

For more information about the TransUnion/Aite Group study, please visit the “Drowning in Data: Thirsty for Insights” landing page.

3 Big Challenges on the Way to Nirvana

We hear almost daily how insurtech is disrupting the once-staid insurance industry. The main ingredients are big data, artificial intelligence, social media, chatbots, the Internet of Things and wearables. The industry is responding to changing markets, technology, legislation and new insurance regulation.

I believe insurtech is more collaborative than disruptive. There are many ways insurance technology can streamline and improve current processes with digital transformation. Cognitive computing, a technology that is designed to mimic human intelligence, will have an immense impact. The 2016 IBM Institute for Business Value survey revealed that 90% of outperforming insurers say they believe cognitive technologies will have a big effect on their revenue models.

The ability of cognitive technologies, including artificial intelligence, to handle structured and unstructured data in meaningful ways will create entirely new business processes and operations. Already, chatbots like Alegeus’s “Emma,” a virtual assistant that can answer questions about FSAs, HSAs and HRAs, and USAA’s “Nina” are at work helping policyholders. These technologies aim to promote, not hamper, progress, but strategies for assimilating these new “employees” into operations will be essential to their success.

Managing the flood of data is another major challenge. Using all sorts of data in new, creative ways underlies insurtech, and the volume of that data grows every day. Wearables, for instance, are providing health insurers with valuable data. Insurers will need to adopt best practices to use data for quoting individual and group policies, setting premiums, reducing fraud and targeting key markets.

See also: Has a New Insurtech Theme Emerged?  

Innovative ways to use data are already transforming the way carriers are doing business. One example is how blocks of group insurance business are rated. Normally, census data for each employee group must be imported by the insurer to rate and quote, but that’s changing. Now, groups of clients can be blocked together based on shared business factors and then rated and quoted by the experience of the group for more accurate and flexible rating.
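
A sketch of that blocking idea: clients sharing business factors are grouped into a block, and each client's rate reflects a credibility-weighted blend of its own experience and the block's. The grouping key, credibility rule and numbers below are simplified assumptions, not an actuarial standard:

```python
from collections import defaultdict

# Hypothetical small-group clients with their own (thin) loss experience.
clients = [
    {"name": "cafe_a", "industry": "food", "region": "west", "lives": 12, "loss_ratio": 0.40},
    {"name": "cafe_b", "industry": "food", "region": "west", "lives": 8, "loss_ratio": 0.95},
    {"name": "diner_c", "industry": "food", "region": "west", "lives": 20, "loss_ratio": 0.62},
    {"name": "shop_d", "industry": "retail", "region": "west", "lives": 15, "loss_ratio": 0.55},
]

# Block clients together on shared business factors.
blocks = defaultdict(list)
for c in clients:
    blocks[(c["industry"], c["region"])].append(c)

FULL_CREDIBILITY_LIVES = 100  # assumed threshold for fully credible own experience

for (industry, region), members in blocks.items():
    total_lives = sum(m["lives"] for m in members)
    block_lr = sum(m["loss_ratio"] * m["lives"] for m in members) / total_lives
    for m in members:
        z = min(1.0, m["lives"] / FULL_CREDIBILITY_LIVES)  # simple credibility weight
        blended = z * m["loss_ratio"] + (1 - z) * block_lr
        print(f"{m['name']}: own LR={m['loss_ratio']:.2f}, "
              f"block ({industry}/{region}) LR={block_lr:.2f}, blended={blended:.2f}")
```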

Cognitive computing can also make big data manageable. Ensuring IT goals link back to business strategy will help keep projects focused. But simply getting started is probably the most important thing.

With cognitive computing, systems require time to build their capacity to handle scenarios and situations. In essence, systems will have to evolve through learning to a level of intelligence that will support more complex business functions.

Establishing effective data exchange standards also remains a big challenge. Data exchange standards should encompass data aggregation, format, translation and frequency of delivery.

Without standards, chaos can develop, and costs can ratchet up. Although there has been traction in the property and casualty industry with ACORD standards, data-exchange standards for group insurance have not become universal.

See also: Insurtech’s Approach to the Gig Economy  

The future is bright for insurers that place value on innovating with digital technologies and define best practices around their use. It’s no longer a matter of when insurance carriers will begin to use cognitive computing, big data and data standards, but how.