Do You Have FOMO on Gen AI?

Tech leaders are feeling pressure to integrate generative artificial intelligence into their programs, but some caution is in order.


KEY TAKEAWAY:

Whether you're implementing AI algorithms or large language model (LLM) tools like ChatGPT, rushing to implement tools without expertise can hurt claim accuracy, data security and confidentiality. As with any new product or service, solutions need to be developed with those who have vast industry knowledge, specific to users' needs, and must meet the high standards our industry requires for data integrity, confidentiality and the trust our customers expect.

----------

Growing mainstream use of GenAI tools like ChatGPT has supercharged the desire to adopt this technology in every industry, including P&C. GenAI enables interfaces that allow users to engage with AI through natural language, dramatically improving usability.

GenAI has also prompted the technology community to invest significantly in more powerful computing infrastructure, creating an exciting, virtuous cycle. Tech leaders are now being pressed to deliver programs that integrate generative artificial intelligence into their claims workflows.

Innovation With Industry Expertise

While our company is equally excited and encouraged by the opportunities GenAI offers, as industry veterans we have the responsibility to make sure AI fear of missing out (FOMO) does not lead to technology implementation without proper due diligence.

With the influx of fintech startups promising to automate claims overnight, it's easy for companies to take shortcuts in implementing AI and risk damage to carriers and their customers. Often, these companies lack the intricate knowledge and experience in claims management to understand the complexity, or the long-standing partnerships needed for connectivity across the entire workers' comp or auto claims landscape. Whether you're implementing AI algorithms or large language model (LLM) tools like ChatGPT, rushing to adopt them without expertise can hurt claim accuracy, data security and confidentiality.

Cigna, for example, currently faces a class action lawsuit over allegations that it illegally used an AI algorithm to deny hundreds of thousands of claims without a physician's review. The case illustrates why giving AI too much authority right away may not be the best first step. New tech, we believe, shouldn't replace human judgment where it's needed; instead, it should be used to augment expertise and prioritize human experience and intervention.

This is the premise behind the development we have done in our auto physical damage team with the Mitchell Intelligent Solutions portfolio. Mitchell Intelligent Review, for example, combines cloud computing, Mitchell-generated vehicle data and the company's machine-learning and computer-vision models to scan photos of collision damage and automatically evaluate the labor operations entered on the estimate. The AI then flags problematic estimates that require a closer look by a trained appraiser. Automating this traditionally manual, time-consuming and resource-intensive task is intended to help carriers increase estimate accuracy, ensure quality and pinpoint workflows or areas of the business in need of improvement. The new approach also gives insurers the ability to review every estimate written and then assists them in identifying the specific appraisals they should focus on to accelerate settlement times for policyholders.
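The flag-for-review pattern described above, in which automation screens every estimate but routes exceptions to a human appraiser, can be sketched in a few lines of Python. This is a minimal illustration, not Mitchell's actual implementation: the class, field names, tolerance threshold and the idea of comparing entered labor hours against a model's prediction are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    estimate_id: str
    labor_hours: float       # labor hours entered on the estimate
    predicted_hours: float   # hours a (hypothetical) vision model predicts from damage photos
    tolerance: float = 0.25  # allowed relative deviation before flagging

def needs_appraiser_review(est: Estimate) -> bool:
    """Flag estimates whose entered labor deviates too far from the model's prediction."""
    if est.predicted_hours == 0:
        return est.labor_hours > 0
    deviation = abs(est.labor_hours - est.predicted_hours) / est.predicted_hours
    return deviation > est.tolerance

def triage(estimates):
    """Split estimates into a straight-through queue and a human-review queue."""
    flagged = [e for e in estimates if needs_appraiser_review(e)]
    passed = [e for e in estimates if not needs_appraiser_review(e)]
    return passed, flagged

passed, flagged = triage([
    Estimate("EST-1", labor_hours=10.0, predicted_hours=10.5),  # within tolerance
    Estimate("EST-2", labor_hours=18.0, predicted_hours=10.5),  # large deviation
])
```

The key design point, consistent with the article's argument, is that the automation never approves or denies anything on its own; it only decides which estimates a trained appraiser sees first.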

When it comes to the hype around LLMs and GenAI, casualty industry professionals need to be even more diligent in using this technology, especially regarding privacy in claims processes. You wouldn't want to feed a claimant's medical data or personally identifiable information (PII) into a public system like ChatGPT without knowing where the information is going and who is securing it. Ethical questions about how and where to implement these technologies can only be answered by those with sufficient industry experience and expertise to know where the opportunities lie, and proper usage must be entrusted to those with the appropriate security and technology infrastructure.
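One concrete safeguard behind the caution above is to redact PII before any claim text leaves your environment. The sketch below uses only regular expressions and covers just three illustrative patterns; a production de-identification pipeline would need far broader coverage (names, addresses, medical identifiers) and is typically handled by a vetted service rather than hand-rolled rules.

```python
import re

# Illustrative patterns only -- real de-identification needs much more coverage.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # U.S. Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # phone numbers
]

def redact_pii(text: str) -> str:
    """Replace common PII patterns before text is sent to any external service."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Claimant SSN 123-45-6789, reachable at jane.doe@example.com or 555-867-5309."
print(redact_pii(note))
# -> Claimant SSN [SSN], reachable at [EMAIL] or [PHONE].
```

Even with redaction in place, the article's larger point stands: deciding what may be sent to a public model at all is a policy question for people with the right security infrastructure, not something a filter alone resolves.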

See also: 5 Ways Generative AI Will Transform Claims

Opportunities Abound

The good news is there are many practical application opportunities for AI in our industry. These include customer service, claims triage, fraud detection, and property damage and bodily injury assessment, to name a few. My company is looking at these areas and others, using our experience in auto and workers' comp claims and our proficiency in advanced technologies to provide guidance on where these technologies make the most sense across auto and casualty claims.

AI continues to provide a powerful opportunity to leverage data (be it medical billing data, repair information, photos of damaged vehicles or images from litigation demands) to improve task automation and enable advanced decision support to claims professionals as they seek to help individuals return to work, achieve optimal health or get back on the road.

As technology leaders, we’re excited about the potential of LLMs and GenAI technology. As with any new product or service, however, solutions need to be developed with those who have vast industry knowledge, specific to users’ needs, and must meet the high standards our industry requires for data integrity, confidentiality and the trust our customers expect. Meeting these demands won’t be easy, but I believe, with the right mix of experience and innovation, the opportunity of GenAI is even better than the buzz.


Alex Sun


Alex Sun took the helm as CEO of Enlyte in 2021, when it was formed through the merger of three companies in the workers' compensation sector: Coventry Workers Comp Services, Genex Services and Mitchell International.

Sun was formerly CEO of Mitchell, which he joined in 2001.
