Chief information officers, chief digital officers and others running digital innovation initiatives usually have to scrounge around for every bit of funding they can find. Not so with generative AI.
It has caught the public imagination so quickly that the world of innovation funding has flipped upside-down.
Every board knows it needs something intelligent to say about a generative AI strategy, which means every CEO knows they need something intelligent to say about a generative AI strategy. Those CEOs are turning to the in-house digital innovators and saying: "Help me figure out something intelligent to say about a generative AI strategy."
So every CIO, CDO, etc. finds themselves with money being thrown at them so they can set up a slush fund and experiment to help define a strategy -- or at least a placeholder that buys time for a strategy to be developed.
The CEO of an AI vendor I spoke with at last week's InsureTech Connect in Las Vegas said his sales cycle with major carriers used to be 12 months, or even 18, but the demand for generative AI is so feverish that he now may go from initial contact to contract in four to six weeks. He says one prospect saw a demo and asked for a contract on his way out the door.
Now, throwing money at a problem in hopes of finding a strategy tends not to end well. But while no one I met at ITC -- or anywhere else, for that matter -- has a clear answer about how generative AI will play out, some do have smart advice on how to get started on working out what that future could look like.
In its simplest form, that advice amounts to: Dig in and play around. That doesn't mean just as a company; that means as individuals. There are endless possibilities -- and pitfalls -- associated with generative AI, and there's no time like the present to start acquainting yourself with them.
Put in a more rigorous way, "dig in and play around" means "think big, start small, learn fast," which has been my mantra for the nearly 30 years that I've been writing about corporate innovation. My frequent co-author Chunka Mui describes our approach in detail in this piece from May, "Six Words to Focus Your AI Innovation Strategy," about how to approach generative AI.
On the theory that nobody is as smart as everybody -- the founding principle for Insurance Thought Leadership -- it likely will also be useful to find fellow experimenters with whom you can share experiences as you learn what does and doesn't work. Along those lines, a longtime colleague, John Sviokla, already knows a ton about generative AI, as he showed in an interview I did with him recently, and through his newly formed GAI Insights group he is convening lots of other smart people to share what they're learning.
My personal approach when dealing with something as big and daunting as generative AI is to try to make it real by looking for examples. I get the basic theory and see the potential, but I've also seen people gloss over a lot of problems with a lot of technologies over the years, so I look for tangible results to guide me.
At ITC, I found a few new ones. Some were modest -- having the AI listen to a phone call for a claim and fill out a first notice of loss, then figure out where to place it in the queue, based on an assessment of the severity of the accident. Some were more intricate and potentially important. For instance, I was shown a live underwriting assessment of a restaurant in Washington, DC, where the generative AI found a mechanical bull (because of a picture on the website) and a deep fryer (based on the menu). Those are the sorts of things a thorough underwriter would have found, but having the AI find them in seconds, rather than minutes or tens of minutes, could help insurers with a tricky problem: making sure the intensity of the underwriting effort is justified by the potential size of the business.
Recent conversations, such as this one with Megan Pilcher, the insurance go-to-market leader at IntellectAI, for this month's ITL Focus also show that we're making progress in identifying opportunities. For instance, she says:
"When an underwriter prioritizes their work, documenting the accounts they did not write is a less than desirable task. We can start using AI to do that documentation and provide a summary. When the risk comes back the following year and a different underwriter picks it up, they can get a rundown."
"With today’s manual processes, someone only pulls [loss run] information if a decision has been made that at least they want to quote the risk. But would there be value in doing it at the beginning of the process, extracting loss information on risks that you would have weeded out? What could your actuaries do with that data? Could their predictive modeling be different if we were able to provide them loss data on every submission that comes to the door?... You start thinking about getting into a particular class of business, or a particular line of business, and you wonder, how many submissions would you get? What would the losses be? How would you need to price it? Now you have historical data to use for evaluation."
So, yes, my takeaway from ITC was that nobody has figured out what comes next for generative AI. But there are at least some ways to figure out how to figure out what that future could look like.
For me, that means: "think big, start small, learn fast," à la Chunka's piece; convene as many smart fellow experimenters as possible, à la John; and surface as many solid examples as you can, à la Megan and others.
P.S. As long as I'm highlighting smart pieces we've published recently at Insurance Thought Leadership, here's one more. I always enjoy the quarterly conversations I have with Dr. Michel Leonard, the chief economist at the Insurance Information Institute, and his latest economic forecast is especially interesting. He says the Fed may be signaling that it could keep raising interest rates into 2025, which would have major implications for the economy and, thus, for the insurance industry.