February 3, 2016
How to Use All the New Data
by David Bassi
Imagine a time when much of insurance buying is inverted, beginning with an offer for coverage, rather than a lengthy application and quote request.
Most people who purchase an insurance policy are faced with the daunting task of filling out an extensive application. The insurance company – either directly or through an intermediary – asks a myriad of questions about the “risk” for which insurance is being sought. The data requested includes information about the entity seeking to purchase insurance, the nature of the risk, prior loss experience and the amount of coverage requested. Insurers may supplement that information with a limited amount of external data such as motor vehicle records and credit scores. The majority of information used to inform the valuation process, however, has been provided by the applicant. This approach is much like turning off your satellite and data-driven GPS navigation system to ask a local for directions.
According to the EMC Digital Universe with research and analysis by IDC in 2014, the digital universe is “doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes.” That explosion in the information ecosystem expands the data potentially available to insurers and the value they can provide to their clients. But it requires new analytical tools and approaches to unlock the value. The resulting benefits can be grouped generally into two categories:
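The "doubling every two years" figure implies simple exponential growth. A minimal sketch of that arithmetic, anchored to the 44-zettabyte projection for 2020 (the anchor choice and the assumption of perfectly steady doubling are mine, for illustration only):

```python
def digital_universe_zb(year: int, anchor_year: int = 2020, anchor_zb: float = 44.0) -> float:
    """Estimated size of the digital universe in zettabytes,
    assuming it doubles every two years (a simplification)."""
    return anchor_zb * 2 ** ((year - anchor_year) / 2)

for y in (2014, 2016, 2018, 2020):
    print(y, round(digital_universe_zb(y), 1))
# Back-casting from 44 ZB in 2020 gives roughly 5.5 ZB in 2014,
# consistent with the order of magnitude IDC reported.
```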
- Providing Risk Insights: Mining a wider variety of data sources yields valuable risk insights more quickly
- Improving Customer Experience: Improving the origination, policy service and claims processes through technology enhances client satisfaction
For each of these areas, I’ll highlight a vision for a better client value proposition, identify some of the foundational work that is used to deliver that value and flesh out some of the tools needed to realize this potential.
Insurance professionals have expertise that gives them insight into the core drivers of risk. From there, they have the opportunity to identify existing data that will help them understand the evolving risk landscape or identify data that could be captured with today’s technology. One can see the potential value of coupling an insurer’s own data with that from various currently available sources:
- Research findings from universities are almost universally available digitally, and these can provide deep insights into risk.
- Publicly available data on marine vessel position can be used to provide valuable insights to shippers regarding potentially hazardous routes and ports, from both a hull and cargo perspective.
- Satellite imagery can be used to assess everything from damage after a storm to the proximity of neighboring structures to groundwater levels, providing a wealth of insights into risk.
The list of potential sources is impressive, limited in some sense only by our imagination.
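To make the vessel-position example concrete, here is a toy sketch of how publicly available port-call data might be mined for route insights. The port names, call counts and incident figures are entirely invented; real AIS feeds are far richer and would require serious cleaning before any such analysis:

```python
from collections import Counter

# Hypothetical port-call and incident counts (invented for illustration).
port_calls = Counter({"Port A": 1200, "Port B": 800, "Port C": 300})
incidents = Counter({"Port A": 6, "Port B": 12, "Port C": 9})

# Incidents per port call, a crude hazard signal for shippers.
incident_rate = {port: incidents[port] / calls for port, calls in port_calls.items()}

# Ports ordered from highest to lowest incident rate.
riskiest = sorted(incident_rate, key=incident_rate.get, reverse=True)
print(riskiest)
```

Even this naive rate highlights that the busiest port is not necessarily the riskiest one, which is exactly the kind of insight raw call volumes hide.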
When using the broad digital landscape to understand risk — say, exposure to a potentially harmful chemical — we know that two important aspects to consider are scientific evidence and the legal landscape. Historically, insurers would have relied on expert judgment to assess these risks, but in a world where court proceedings and academic literature are both digitized, we can do better, using analytical approaches that move beyond those generally employed.
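As a crude illustration of mining digitized literature for risk evidence (this is not any particular company's method, and the documents and terms below are invented), one could track how often a substance is linked to harm across publication years; real systems use NLP well beyond substring matching:

```python
# Invented mini-corpus of digitized abstracts, keyed by year.
docs_by_year = {
    2012: ["study finds no link to chemical x", "chemical x in packaging"],
    2013: ["chemical x exposure linked to harm", "routine use of chemical x"],
    2014: ["chemical x harm confirmed", "litigation over chemical x harm"],
}

# Count documents per year that mention both the substance and harm.
harm_mentions = {
    year: sum("harm" in d and "chemical x" in d for d in docs)
    for year, docs in docs_by_year.items()
}
print(harm_mentions)  # a rising count hints at accumulating evidence
```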
Praedicat is a company doing pioneering work in this field, deriving deep insights by systematically and electronically evaluating evidence from various sources. According to CEO Dr. Robert Reville, “Our success did not come solely from our ability to mine databases and create metadata, which many companies today can do. While that work was complex, given the myriad of text-based data sources, others could have done it. What we do that is unique is overlay an underlying model of the evolution of science, the legal process and the dynamics of litigation, built from the domain expertise of our experts, to provide the context that converts the metadata into quantitative risk metrics ready to guide decisions.”
The key point is that if the insurance industry wants to generate insights of value to clients, identifying or creating valuable data sources is necessary, but making sense of it all requires a mental model that gives the data relevance. The work of Praedicat, and others like it, should not stop on the underwriter’s desktop. One underexploited value of the insurance industry is to provide insights into risk that give clients the ability to fundamentally change their own destiny. Accordingly, advances in analytics enable a deeper value proposition for those insurers willing to take the leap.
Requiring clients to provide copious amounts of application data in this information age is unnecessary and burdensome. I contrast the experience of many insurance purchasers with my own experience as a credit card customer. I, like thousands of other consumers, routinely receive “preapproved” offers in the mail from credit card companies soliciting my business. However appealing it may be to interpret these offers as a benevolent gesture of trust, I know I am simply on the receiving end of a lending process in which banks efficiently employ available data ecosystems to assess my risk without ever asking me a single question before extending an offer. As an insurance purchaser, by contrast, I fill out lengthy applications, providing information that could be gained from readily available government data, satellite imagery or a litany of other sources.
Imagine a time when much of the insurance buying process is inverted, beginning with an offer for coverage, rather than a lengthy application and quote request. In that future, an insurer provides an assessment of the risks faced, the mitigations that could be undertaken (and the associated savings) and the price it would charge.
While no doubt more client-friendly, is such a structure possible? As Louis Bode, former senior enterprise architect and solution architect manager at Great American Insurance Group and current CSO of a new startup in stealth mode, observes, “The insurance industry will be challenged to assimilate and digest the fire hose of big data needed to achieve ease of use and more powerful data analytics.”
According to Bode, “Two elements that will be most important for us as an industry will be to 1) ensure our data is good through a process of dynamic data scoring; and 2) utilize algorithmic risk determination to break down the large amounts of data into meaningful granular risk indexes.” Bode predicts “a future where insurers will be able to underwrite policies more easily, more quickly and with less human touch than ever imagined.”
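The two elements Bode names can be sketched in miniature: score incoming data for quality before trusting it, then collapse many normalized signals into a granular risk index. Everything below (field names, signals, weights) is invented for illustration; production systems would be vastly more sophisticated:

```python
def data_quality_score(record: dict, required: tuple) -> float:
    """Fraction of required fields that are present and non-empty:
    a toy version of 'dynamic data scoring'."""
    present = sum(1 for f in required if record.get(f) not in (None, ""))
    return present / len(required)

def risk_index(signals: dict, weights: dict) -> float:
    """Weighted average of normalized risk signals (each in [0, 1]):
    a toy 'granular risk index'."""
    total_weight = sum(weights.values())
    return sum(signals[k] * w for k, w in weights.items()) / total_weight

# Hypothetical property record with one missing field.
record = {"address": "1 Main St", "year_built": 1980, "roof_type": ""}
quality = data_quality_score(record, ("address", "year_built", "roof_type"))

signals = {"flood": 0.7, "fire": 0.2, "theft": 0.4}  # invented, normalized
weights = {"flood": 0.5, "fire": 0.3, "theft": 0.2}
print(quality, risk_index(signals, weights))
```

A low quality score would route the record for enrichment before any automated underwriting decision, which is the gating role Bode describes.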
The potential to use a broader array of data sources to improve customer experience extends well beyond the origination process. Imagine crowdsourcing, in real time, the analysis of images of an area affected by a natural disaster, gaining immediate insights into where to send adjusters before a claim is submitted. Tomnod is already crowdsourcing the kinds of analysis that would make this possible. Or imagine being able to settle an automobile claim by simply snapping a picture and getting an estimate in real time. Tractable is already enabling that enhanced level of customer experience.
The future for insurance clients is bright. Data and analytics will enable insurers to deliver more value to clients, not for additional fees, but as a fundamental part of the value they provide. Clients can, and should, demand more from their insurance experience. Current players will deliver or be replaced by those who can.
I’d like to finish with a brief, three-question poll to see how well readers think the industry is performing in its delivery of value through data and analytics to clients. Here is my Google Forms survey.