Generative AI: Coming Faster Than You Think

In the decades I've been immersed in technology and innovation, I've never seen anything adopted nearly as fast as generative AI.

While you've been hit in the face for months now with articles about the importance of generative AI models like ChatGPT, and while those articles aren't always that helpful — "Generative AI is really important"; "Actually, it's really, really important"; "No, it's really, really, really important" — I'm going to risk another one.

I won't get into the broad significance, which will play out over many years and which others are speculating on at great length. But I do think it's worth noting some ways that financial services firms are already taking advantage of the leap forward in AI.

Venture capital firm Andreessen Horowitz has laid out the most provocative set of examples I've yet seen. The article is aimed broadly at financial services but, with just a bit of thinking, can be applied specifically to insurance.

The article summarizes the opportunities by saying:

"We believe that the financial services sector is poised to use generative AI for five goals: personalized consumer experiences, cost-efficient operations, better compliance, improved risk management and dynamic forecasting and reporting." 

The article describes ways that conversational AI could help provide advice on credit, wealth management and taxes — advice that requires much the same sort of counseling that insurance agents provide and that can now be enhanced. For starters, a large language model (LLM) "trained on a company’s customer chats and some additional product specification data should be able to instantly answer all questions about the company’s products," the article says.

It argues that the world should head toward continuous underwriting (specifically for mortgages, but the same thinking could apply to insurance policies). The holdups are that data is siloed, that emotions complicate many financial decisions and that financial services are highly regulated. But Andreessen Horowitz says, "Generative AI will make the labor-intensive functions of pulling data from multiple locations, and understanding unstructured personalized situations and unstructured compliance laws, 1000x more efficient."

For instance, the article notes that, "at every bank [or insurance carrier], thousands of customer service agents must be painstakingly trained on the bank’s products and related compliance requirements to be able to answer customer questions. Now imagine a new customer service representative starts, and they have the benefit of having access to an LLM that’s been trained on the last 10 years of customer service calls across all departments of the bank. The rep could use the model to quickly generate the correct answer to any question and help them speak more intelligently about a wider range of products while simultaneously reducing the amount of time needed to train them."

Or, "loan officers [read, underwriters] currently pull data from nearly a dozen different systems to generate a loan file. A generative AI model could be trained on data from all of these systems, so that a loan officer could simply provide a customer name and the loan file would be instantly generated for them. A loan officer would likely still be required to ensure 100% accuracy, but their data-gathering process would be much more efficient and accurate."

Combating fraud is another near-term opportunity. The article says, "Today, the billions of dollars currently spent on compliance is only 3% effective in stopping criminal money laundering. Compliance software is built on mostly 'hard-coded' rules. For instance, anti-money laundering systems enable compliance officers to run rules like 'flag any transactions over $10K' or scan for other predefined suspicious activity.... Now imagine a model trained on the last 10 years of Suspicious Activity Reports (SARs). Without needing to tell the model specifically what a launderer is, AI could be used to detect new patterns in the reports and create its own definitions of what constitutes a money launderer."

There's more in the article, too, on potential operational efficiencies and on how AI can draw on unstructured data such as news reports to automatically generate important insights on risks. So I recommend reading and pondering the whole piece.

I also recommend thinking hard about the long-term implications. The rule of thumb about breakthrough technologies is that they're overestimated in the short term but underestimated in the long term.

But, for now, I want to be sure we don't miss the potential short-term wins.

As I wrote back in February, I think the efforts by ChatGPT and other generative AIs in the near term should be viewed as rough drafts, not finished products. (Here is a remarkable story from last week, showing that not everyone has gotten the memo about how generative AIs can "hallucinate." A lawyer had ChatGPT do some research for him and submitted it as part of a brief, only to find that the AI had invented numerous court cases and citations. The judge is not amused.)

But I also think those rough drafts can be extremely valuable — and they'll improve remarkably fast.

In general, even a genuine technology breakthrough faces a chicken-and-egg problem initially and takes time to shake the world. The internet traces back to the 1960s but needed three decades to develop enough reach to change commerce. As revolutionary as the iPhone launch was in 2007, apps needed to be developed and telecom carriers needed to adapt before we all became addicted to our phones.

But AI, having incubated since the 1950s, is just software. Its latest breakthrough form doesn't require you to wait for a network to develop or for thousands of companies to build apps.

It's there, waiting to be exploited, not just in the long run but even in the short run.

Cheers,

Paul