
August 7, 2017

Setting the Record Straight on Big Data

Summary:

Recent concerns about the accuracy of big data used in insurance applications reflect badly outdated thinking.


Recently, an article was written on ITL (and published in the Six Things newsletter) that cautioned against using big data to change the customer experience of applying for insurance. The article demonized eliminating, or even minimizing, the plethora of questions required by carriers in favor of using data from the public domain. In making his point, the author referred to a “startup called Aviva.” Aviva, in fact, is not a startup but a FTSE 100 company with revenue in excess of GBP 50 billion and 30,000 employees, and it has been around for more than 150 years through its Norwich Union and Commercial Union lineage.

The article stunned me. The author’s thinking seems to be of a different era.

In no way am I suggesting that the insurance community’s efforts to use data from the public domain to improve customers’ experience are perfect. But the premise of the article showed little understanding of the depth and complexity of the information insurers seek to evaluate and price risk, or of the burden on customers and their agents to provide that information. The article also tried to reduce a complex subject to good versus bad because of specific instances of incorrect information sourced from the public domain.

The evolution in this space is far more robust and advanced than the author seemed to understand.

See also: When Big Data Can Define Pricing  

As society has evolved, so have the sources and accessibility of information, and so has our decision making. We don’t rely on the first result a Google search returns, or settle for a single listing when looking for a product on Amazon. The same rule applies when humans make decisions: They seek input from multiple people. Insurtechs navigating the big data domain are addressing the challenge by applying this real-world behavior — reducing the demands for customer information by understanding the context and bringing data together from a variety of sources, often with a high degree of veracity.

Terrene Labs, a SaaS provider to the carrier, MGA and broker community, is among the most compelling examples. Terrene has managed to reduce the 150 to 200 questions required to place property and liability, workers’ comp and auto cover for small business customers (the $100 billion market of companies with as many as 100 employees and $10 million in revenue) to only four pieces of data. Terrene assembles data fragments from more than 900 sources (insurance-specific, non-insurance, private data sources, etc.) to generate all the information for a completed application (as well as additional relevant risk information not sought by carriers). Terrene does not rely on static rules for sourcing data (despite what the author suggested) but uses machine learning and artificial intelligence to dynamically source data based on algorithms that value veracity. The results are far more impressive, and the process to achieve them far more complex, than the author of the referenced article seems to understand.

A powerful example that illustrates the point is determination of the NAICS or SIC code, which is the basis for every carrier’s risk appetite selection and pricing. Terrene’s proprietary techniques are far more accurate than the process an agent CSR typically uses to determine class of business. A customer who identifies her business as a “cabinet store, maker and installer” could properly fall under NAICS subsector 337 (furniture and related product manufacturing) or NAICS code 444190 (kitchen cabinet stores). The Terrene engine can determine which category is appropriate with an extremely high degree of accuracy. This accuracy ensures that appropriate carriers for the risk can be identified without the risk of rejection further into the submission/quoting process — frequently a pain point and a significant source of inefficiency and yield loss.
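To see why this classification problem is genuinely hard, consider a deliberately naive sketch below. This is not Terrene’s method (which, per the above, uses machine learning over hundreds of sources); it simply ranks a hypothetical, hand-simplified subset of NAICS titles by token overlap with the customer’s free-text description, and shows how the same words can point at manufacturing or retail depending on context:

```python
# A naive illustration of NAICS classification from a free-text business
# description. NOT a real engine -- the code titles below are simplified
# assumptions, and real systems use ML over many data sources. The point
# is only that plain keyword matching leaves the category ambiguous.

# Hypothetical, simplified NAICS entries (assumption: titles abridged).
NAICS = {
    "337110": "wood kitchen cabinet and countertop manufacturing",
    "444190": "kitchen cabinet store retail",
    "238350": "finish carpentry cabinet installation contractors",
}

def score(description: str, title: str) -> float:
    """Fraction of the description's tokens that appear in a NAICS title."""
    d, t = set(description.lower().split()), set(title.lower().split())
    return len(d & t) / len(d) if d else 0.0

def rank(description: str) -> list[str]:
    """Return candidate NAICS codes, best token-overlap match first."""
    return sorted(NAICS, key=lambda c: score(description, NAICS[c]),
                  reverse=True)

# "cabinet maker and installer" leans manufacturing; "kitchen cabinet
# store" leans retail -- tiny wording changes flip the top candidate.
print(rank("cabinet maker and installer")[0])  # -> 337110
print(rank("kitchen cabinet store")[0])        # -> 444190
```

A production classifier would weight evidence from many external sources (licenses, web presence, filings) rather than the description alone, which is what makes the accuracy claims above notable.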

Big data, done well, can improve data quality compared with a customer’s self-reporting, which typically has an element of bias. For example, in a surety context, over a large sample set from one carrier, none of the customers reported prior bankruptcies; the Terrene solution determined that 16% had one. Similarly, powerful insights into the risk profile that carriers typically do not seek can now be generated. For example, Terrene profiles characteristics of a risk that are inconsistent with the self-reported profession or trade — one recent example was a home remodeler that carried an asbestos remediation license.

See also: What Industry Gets Wrong on Big Data  

The evolution of big data is a work in progress, so companies are taking different approaches in their journeys. One example is a company that uses the Terrene capability to pre-populate an application that can then be reviewed and affirmed by the customer before a submission is made — a process that customers report is far more effective than self-completing a 200-question set (which typically takes two-plus hours), not to mention the substantial improvement in information veracity. Unfortunately, as the article referenced at the outset shows, not enough positive attention is being paid to understanding the powerful advancements that leaders such as Terrene can deliver now.


About the Author

Andrew Robinson is an insurance industry executive and thought leader. He is an executive in residence at Oak HC/FT, a premier venture growth equity fund investing in healthcare information and services and financial services technology.

