When I was asked in late 2022 and early 2023 about the implications of generative AI for insurance, my reply was always two-fold. First, I advised that it is absolutely a technology space to monitor closely, given its rapid advancement, broad applicability and enormous potential. Second, I believed that the near-term use cases for P&C insurance were limited to horizontal functions rather than insurance-specific applications.
There is no question that even now there is value in automating and enhancing interactions. ChatGPT and similar tools are, at their root, designed for conversational AI – driving informed, automated chatbot-style interactions. Many types of communications can benefit from this technology – agency help desks, policyholder inquiries, claims status, internal conversations and many more. In addition, any area of the insurance enterprise that requires summarizing information, creating digital material or extracting data is now a candidate for AI automation. In fact, this is more than a possibility – insurers are already deploying ChatGPT across many use cases.
Recently, I started cataloging the various interesting use cases of ChatGPT and, more broadly, generative AI across industries. However, I abandoned that effort as a hopeless task. Every day brings new articles on how someone has used a generative AI tool to write code, pass exams, write papers, create art, images or videos, drive database queries, power conversations and more.
Within just a few months, I have seen the tone of these articles shift from describing AI output as amazing but not always accurate or high-quality to acknowledging that significant progress has been made. It was only a few months ago that it was often easy to tell the difference between human- and AI-generated content; today, the task is far more difficult.
Now the dominant questions are not about whether the technologies are viable for real-world use cases. Rather, the discussion is about how much analytical work and task automation are possible – and whether they are feasible beyond simple repetitive tasks (we already have RPA for those). The use cases are rapidly expanding into more complex, industry-specific areas.
This naturally heightens the concern about AI’s potential to replace humans and eliminate jobs. My fundamental view for many years has been that the AI family of technologies will augment humans and elevate the roles of industry professionals. Agents, underwriters, adjusters and others will focus on activities that require deep expertise, experience and empathy. I still believe that is true… but not as strongly as I did in the past.
The other main question that arises is about the challenges of determining authorship. If it becomes impossible to tell if a person has created an image, email, letter or video, then the potential for fraud skyrockets, and determining liability in claims becomes more difficult.
The net of this blog is that generative AI in all its forms must be closely monitored by P&C insurers, and that governments and the business world must develop the right regulatory and governance frameworks for AI. Experimentation with the technologies is mandatory. Now is not the time to sit on the sidelines and watch – things are moving too fast for that.