The insurance industry can take pride in the fact that innovation can't happen without it. Until innovators and their insurers figure out how to manage the risk from driverless cars, commercial space flight, etc., those innovations can't go to market. But innovation also can't happen without lawyers. While we non-lawyers complain about how they slow things down, innovations can't scale until the legal system develops a framework for adjudicating the inevitable problems.
Generative AI is moving into its early legal phase, according to a report from Gartner. The report predicts that by the end of the year there will be more than 2,000 legal claims worldwide related to "death by AI," claims alleging that mistakes by the software, or by those implementing it, were the root cause of fatalities.
The implications will be most immediate for health insurers but will be felt soon enough in just about every corner of the insurance industry, especially where AI is being used to try to anticipate and prevent losses.
Let's have a look.
Gartner frames the "death by AI" issue as a broad one for companies in all industries, suggesting that general counsels need to be aware of the risks and work with insurers to purchase coverage. Gartner predicts that by 2030 corporate spending on AI-related security and governance will increase 60%. From that standpoint, AI looks like a big, new opportunity for insurers.
I'm more concerned about the potential surprises that may be waiting for insurers.
Those insuring medical practices, for instance, may be caught by surprise if caregivers turn tasks over to AI that then go awry. Human doctors are still very much in the loop at the moment, but there's a real push toward combining telemedicine with automated AI advice, especially to reach people who live in remote areas or other "healthcare deserts." So decisions with real consequences may quickly start shifting to the AI.
The theory is great. You outfit people with wearables that monitor their health, alerting doctors to any warning signs. You coach people on eating, sleeping, exercise and so on. Doctors are reachable by Zoom for consultation and diagnosis.
But what happens when the AI misses the signs of an impending stroke? What happens when it misdiagnoses a diabetic?
A columnist in the Washington Post recently wrote about an experiment in Utah that raises all of these questions. It's a very responsible test, limited to having AI refill prescriptions, and could have major benefits. The columnist, an MD and former health commissioner in Baltimore, writes:
"Right now, getting a prescription refilled can be challenging. Many patients call a doctor’s office and struggle to reach the right person or are told it’s not possible without an in-person visit, which requires time and travel. Some end up putting off that visit and go without medications, which can be dangerous for those with chronic diseases such as hypertension, diabetes and cardiovascular issues."
But she also quotes a professor at Harvard Medical School who cautions that, while some drugs might appear to be low-risk on paper, prescribing them is often complicated and patient-specific. He noted that many drugs require ongoing monitoring, including regular lab tests, attention to side effects and careful, nuanced discussions with patients. "It's not clear that AI is fully able to replicate that," he said.
And I believe that people -- including those on juries -- hold machines to higher standards than they do humans. Humans can make errors in the heat of the moment. We know we aren't perfect. But software is written by very smart people who aren't under instant time pressure, and it is vetted by large, responsible organizations (with deep pockets). So AI can't just be good. It has to be perfect.
The potential for legal surprises won't just relate to "death by AI," either. There will also be "injury by AI," at a far greater rate. (While more than 40,000 people die in car accidents in the U.S. each year, for instance, some 2.5 million are injured.)
And the claims won't just hit healthcare providers that may have misdiagnosed or mistreated someone. I worry about the companies that use AI to detect dangerous situations in workplaces. What happens when they miss one and someone is hurt or killed? What happens when sensors don't detect the electrical problem in a home that leads to a fire, or the leak that's about to become a flood? When the forward-looking dashcam doesn't spot the deer that has jumped into the road?
As I've written, consumer advocates are already blaming the big, bad algorithm for any underwriting and claims decisions they don't like. Those legal issues are about to broaden, especially for insurers promising prevention via AI.
We'll get through this. The legal framework will gradually develop, and we'll learn what the rules are going to be. But we need to brace ourselves for complications like the coming wave of "death by AI" claims.
Cheers,
Paul
