I often tell people I've been watching the same movie for decades — it will be 40 years this fall since I started covering IBM as a young pup of a reporter at the Wall Street Journal. I've watched the disruption that hit IBM spread to the rest of the computer industry, then to commerce in general, thanks to the personal computer, internet, search engines, smartphones and now AI.
Having watched the movie so often, I have a pretty good sense of how today's story lines will play out.
Today, I'll start even earlier than 1986 and offer a quick history of computing because I think the long view provides useful perspective on where insurance is — and where it's going. Some insurance processes are firmly stuck in the 1950s and 1960s, when batch processing was the only game in town. Others have made it to the 1980s and 1990s, with their PCs and networking. Still others are becoming fully modern, as they take advantage of mobile devices and generative AI.
On the theory that every industry is becoming a technology industry, insurers will eventually catch up on all fronts. Understanding where we lag the most and imagining a world where insurance can operate at the speed of Amazon will, I hope, provide a road map that will help us get to that future faster.
So, yes, I've set myself a rather ambitious goal this week.
To understand the starting point for computing (and insurance), think of my college roommate Mike. He was a computer science major, so he was wedded to the campus mainframe. He'd type out a program on a stack of punch cards, hand them in at the window in the computer center... and wait. When his turn finally came on the mainframe, he'd get a printout with the results. Given the complexity of what he was doing, and that even a typo would derail things, he inevitably had errors. So he'd debug the program, type out some more punch cards, turn them in at the window... and wait some more.
Because turnaround times were shorter at night, after most students had gone back to their rooms, Mike typically stayed out into the wee hours of the morning, napping on a table while waiting for his latest printout. (The way our habits meshed led to a comical relationship, where we sometimes didn't see each other while both awake for weeks at a time. I'd leave in the morning while he was asleep and, after working a job, not get back until he'd left for the computer center in the evening. He went home on weekends to see his girlfriend, so I'd sometimes find myself asking mutual friends, "Hey, how's Mike? I haven't talked to him in ages.")
Mike's travails were a holdover from the era of batch processing, when a computer could do only one thing at a time. Big efforts, such as processing payroll or reconciling accounting records, were done in a single batch at a time reserved on the mainframe. Mike's programs obviously weren't on anything like accounting's scale, but he still had to run a program in a single batch of cards and wait his turn.
Even though computing technology has improved by orders of magnitude since Mike and I were in college, a lot of business still operates at the speed of batch processing. You have a meeting on some issue, and a question comes up. Someone is assigned to do some analysis and comes back a week or two or three later with an answer. The issue is discussed again, and another question arises. More analysis over more weeks ensues. The batch processing influence is even stronger in insurance than in most industries because there is so very much data to analyze.
Computer scientists saw early how much better interactive computing would be and spent decades getting us there. By the '60s and '70s, time-sharing became possible. The setup was awkward: You had a keyboard and printer but had to type out a program on special tape that you fed into the machine, and turnaround times were painfully slow because you were queueing up behind all the programs running on a distant mainframe or minicomputer. But time-sharing spread the power of computing far beyond the walls of the data center. (Bill Gates got his career started on a time-sharing terminal at his high school. I, too, had access to a terminal in high school but somehow didn't do as much with it as he did.)
By the late 1970s and into the 1980s, Xerox PARC had worked its magic, and the Apple II and then the IBM PC were putting real power on individuals' desktops. The computers delivered big benefits to business because of the electronic spreadsheet but otherwise proved to be rather limited when used in isolation. Fortunately, Xerox took care of that issue, too, with the Ethernet networking standard that let businesses link their in-house computers. Then the internet took networking into the stratosphere thanks to the World Wide Web's invention in 1989 and the Mosaic browser in 1993. By the late 1990s, search engines were doing a good job of fulfilling Google's goal "to organize the world's information and make it universally accessible and useful." Then smartphones, led by the iPhone debut in 2007, put all the computing power and information in our hands. Generative AI is now letting us gather, process and use far more of the world's data than we humans could ever do on our own.
Big tech has taken advantage of the remarkable progression of technology to gather all sorts of signals about individuals (many of which I wish they didn't have) and target us with ads, with memes that keep us engaged, with dynamic pricing that maximizes their clients' revenue. Progress in other spheres is more uneven, but you can look at big retailers like Amazon and Walmart and see how they sense demand and respond to it in real time.
I'd say insurance has done a so-so job of taking advantage — acknowledging that our situation is complicated by heavy regulation and by the confusion of state-by-state oversight in the U.S. A lot of insurance work is still in a sort of batch mode — the analysis of loss runs, actuarial tables, and so on. While insurers have taken advantage of all the power on the desktop that PCs provide, I'm not sure we've done the best job of internal networking — why, for instance, isn't claims data always fed in real time to underwriters to inform future decisions? Insurers certainly haven't been great about taking advantage of all the information that's out there beyond their four walls; they're starting to figure out what data to trust and how to absorb it, but they've been slow. Insurers are also still figuring out what to do about smartphones. Yes, every company has an app these days, but my impression is that customers still want to be able to do a lot more self-service via phones than is possible today.
I'll withhold judgment on how insurance is doing on gen AI. We're headed in some good directions by having it gather information and do initial processing for those in claims, underwriting and agencies, but we clearly haven't figured gen AI out yet. Then again, nobody has, so we're in good company.
The nice thing is that, whatever our inadequacies to this point, our version of the technology movie can have a happy ending for two reasons. One is that any new computer technology builds on everything that's come before in an exponential way. We're not just adding a gen AI capability alongside an information or networking capability. Gen AI multiplies what was ushered in by smartphones, which multiplied what came before them, which multiplied everything that came before that. The second reason is that we don't have to build the capability. The tech giants have done that over the past 75 years; we just have to take advantage. They're not done yet, either: The latest figure I saw is that the five biggest AI companies are investing $700 billion in infrastructure this year alone.
To me, the happy ending will come in a decade or so, when insurance can fully switch from batch processing to what I think of as conversational computing. You don't have a question in a meeting and send someone off to study the issue for weeks. You ask a question, and your AI uses all the internal and external information available to provide an answer. Loss runs and actuarial tables don't require massive studies. You converse with your computer and get the answers you need.
You can see glimmers of this sort of conversational future in some things going on today. Continuous underwriting is one great example. Why wait for an annual review of a policy when aerial imaging can tell you that a homeowner has added a pool, when an AI monitoring the internet can tell you that a restaurant has added a drinks menu or delivery options, etc.? Why not take advantage of the ability to sense what's going on among clients and prospects and respond?
Embedded insurance is another example. Why should selling an insurance policy always be a formal project? Why not just use the ability to sense when a customer might want coverage and respond?
Technology never stops moving. Moore's law made sure of that for decades, with what became a sort of mandate for semiconductor makers to double the power of a chip every year and a half to two years at no increase in cost, and other forces, such as AI, are now amplifying those gains in capability by orders of magnitude. I figure I've gone through six tech revolutions since I debuted on the computer beat in 1986, and we could be in the middle of the next one, with agentic AI.
For insurers, I hope a look at the history of computing identifies some spots where we can and should improve. But I mostly hope the history shows us that we're headed toward a conversational future, where we ask questions and get answers in real time. Just imagine what insurance could look like at the speed of Amazon.
Cheers,
Paul
