Are P&C Insurers Ready for Generative AI?

The potential for advanced technology in claims management is there, but first you need a solid foundation. Here's how to build one. 


Generative AI will have profound implications for P&C, but right now, the industry, like many others, is doing the hard work of evaluating where to apply the technology and how to do so safely. The promise generative AI holds for improving claims management is enormous, and its impact will eventually be transformative enough to live up to today's hype. But this will take time. 

It’s easy to see the potential of the technology. Looking at business processes, one could say, “We could really use some summarization here,” or “a chatbot could totally handle this.” But actually incorporating generative AI responsibly into workflows that, for example, support the treatment of injured employees or assist motorists after auto accidents is challenging given some of the inherent aspects of the technology. Addressing these characteristics requires thorough experimentation and care. 

In the short term, the P&C industry will spend more time preparing for this technology than it will implementing it. There’s no doubt, though, that generative AI is making its presence known in the industry. So as P&C companies evaluate and investigate how they can use this technology, they would also do well to begin laying the groundwork for success.

Clean up your data house

The first step is to get your company’s data house in order. Generative AI requires massive amounts of data to train the model, understand the company’s knowledge base and interact in a human-like manner. But simply having terabytes of data isn’t enough — it needs to be clean and accessible. Many insurers, larger ones in particular, have data stored in multiple locations across many different systems and environments. Accessing that data is difficult, and, in many cases, IT doesn’t even have a good idea of what data the organization has in the first place.
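To make the idea of "clean and accessible" concrete, a data audit often starts with simple checks for duplicates and missing fields across consolidated records. The sketch below is purely illustrative — the field names (`claim_id`, `loss_date`, `status`) are hypothetical placeholders, not a real carrier schema.

```python
def audit_claims(records, required_fields=("claim_id", "loss_date", "status")):
    """Flag duplicate claim IDs and records missing required fields."""
    seen, duplicates, incomplete = set(), [], []
    for rec in records:
        cid = rec.get("claim_id")
        if cid in seen:
            duplicates.append(cid)
        seen.add(cid)
        # A field counts as missing if it is absent or empty.
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            incomplete.append((cid, missing))
    return {"duplicates": duplicates, "incomplete": incomplete}
```

Checks like these won't clean the data by themselves, but they give IT the inventory visibility the paragraph above says many organizations lack.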

Additionally, it’s important to have a data scientist on board to oversee the data cleanup and AI training. Even if your data is clean, formatted and accessible, you need to make sure you have the right kind of data for the intended use, or you could introduce biases that cause the generative AI model to produce skewed or inaccurate output. 
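One way a data scientist might screen for the bias risk described above is to compare how categories are represented in a training sample versus the full book of business. This is a minimal sketch of that idea; the field name (`line`) and the comparison method are illustrative assumptions, not a prescribed methodology.

```python
from collections import Counter

def representation_gap(training_rows, population_rows, field):
    """Compare category shares in a training sample against the full book.

    Returns {category: training_share - population_share}; a large negative
    value means that category is under-represented in the training data.
    """
    def shares(rows):
        counts = Counter(r[field] for r in rows)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    train, pop = shares(training_rows), shares(population_rows)
    return {k: round(train.get(k, 0.0) - pop.get(k, 0.0), 3)
            for k in set(train) | set(pop)}
```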

See also: 5 Ways Generative AI Will Transform Claims

Work with technology providers who understand our industry

You’ll need to pay close attention to data privacy, because most P&C uses for generative AI are going to require customer data. You certainly don’t want the model to start spitting out personal information about your customers to people who have no right to see it, and you want to make sure that you’re following federal and state laws if you use such data to train your models. 
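One common precaution behind the privacy concern above is redacting personal identifiers from free-text records before they go anywhere near a training pipeline. The patterns below are hypothetical and deliberately simplistic — a production system would rely on a vetted PII-detection service, not hand-rolled regexes — but they illustrate the step.

```python
import re

# Illustrative patterns only; real PII detection is much broader than this.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text):
    """Replace recognized identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```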

Next, you’ll need to work with your technology providers to ensure that generative AI’s hallucination problem doesn’t cause harm. Generative AI’s superpower is its ability to create entirely new content, but occasionally, this content is completely made up. Sometimes, this is merely annoying or, for those who rely on it too heavily, embarrassing, such as when it cites publications that don’t exist. Hallucinations occur when the technology generates incorrect or misleading results, driven by factors such as insufficient training data, incorrect assumptions made by the model or biases in the data used to train it. 

But in a healthcare situation, hallucinated information could seriously harm or literally kill a patient. So it’s imperative P&C providers demand systems designed to monitor and detect when generative AI output is inaccurate or doesn’t make sense. For this reason, it’s important to partner with technology providers that deeply understand P&C. Claims professionals face specific challenges, business processes and regulatory requirements that, if not understood by the technology provider, will result in a generative AI deployment that, at best, adds no value and, at worst, causes serious, long-term harm to both the business and policyholders. It takes time to train these models, and these models need reliable data. 
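What might a system that "monitors and detects" inaccurate output look like at its simplest? One common idea is a grounding check: flag facts in a generated summary that never appear in the source document. The sketch below checks only numbers and is a toy illustration of the concept, not a real monitoring system.

```python
import re

def unsupported_numbers(source_text, generated_text):
    """Naive grounding check: return numbers in the generated text
    that do not appear anywhere in the source text."""
    number = re.compile(r"\d[\d,]*(?:\.\d+)?")
    source_nums = set(number.findall(source_text))
    return [n for n in number.findall(generated_text)
            if n not in source_nums]
```

A summary that claims figures absent from the underlying claim file would be flagged for human review rather than passed straight to an adjuster or policyholder.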

See also: A Reality Check for Generative AI

Start small

Once you have all of the above in place, you’re ready to deploy your first use case. And while it may feel anticlimactic to put in so much work preparing for only a small trial, if something goes awry, it’s easier to fix a small problem than a large one. Find a use case that will deliver concrete value to the business but also has a limited scope and exposes the organization to minimal risk. 

Success on a small scale will not only provide your team with important lessons about what works and what doesn’t, but it will also build confidence within the C-suite and the larger company, so when you do move on to bigger, more consequential uses, you’ll have earned a growing number of advocates supporting the project and pushing for its success.

Generative AI will become an increasingly important technology to P&C, and the benefits will be enormous. But they’re not going to arrive overnight, and carriers that get out over their skis will likely pay a high price for doing so. Right now, the best move a carrier can make is to lay the groundwork for generative AI and, once the foundation is in place, test it with small, low-risk uses and build from there.

Mike Bishop

Mike Bishop is executive vice president, product and technology, at Enlyte, where he is responsible for technical product strategy, direction and execution. 