'My Watch Thinks I'm Dead'

Glitches with the Apple Watch demonstrate a problem with false positives that can cause trouble for innovators everywhere.

That's the headline on a recent New York Times article: "My Watch Thinks I'm Dead." The Apple Watch recently added a feature designed to detect car crashes and summon help if the wearer is incapacitated, but it interprets lots of other events as collisions and automatically dials 911. The problem is especially acute among skiers, who get bounced around on the slopes and stop suddenly, and who may be wearing so much clothing that they don't hear the alarm from their watches in time to head off the emergency calls.

The calls not only create obvious problems for emergency services, many of which report being overwhelmed, but also point to a broader issue with false positives, one I frequently see distorting thinking about innovation.

There can be something a bit amusing about technological screwups. All that brain power behind this fancy technology, and they did what?

The Times article does have plenty of "huh?" moments. But it also points to more serious issues with false positives. We often see something described as a breakthrough because it's 90% accurate at some task, but that means it's 10% inaccurate. And many "breakthroughs" are more like 65% accurate.

The issue with false positives is especially serious in healthcare, where they can lead to overdiagnosis and overtreatment that is not only expensive but can endanger patients. Al Lewis, co-founder and CEO of Quizzify, which offers employers programs that educate their employees on healthcare issues, calculates that testing an entire employee population to spot someone at near-term risk of a heart attack would cost at least $1,000 per employee, even under the highly optimistic assumption that the test is 90% accurate. The reason is all the false positives.

If you have 1,000 employees, a 90%-accurate test will likely find the one person at high risk of a heart attack this year who wouldn't otherwise be found, but it will also flag some 99 other people, sending them to their doctors for extensive testing and, for many, unnecessary treatment, perhaps including stents. Lewis figures the total cost of what wellness vendors may pitch as an inexpensive test to be at least $1 million for the whole 1,000-employee population. (He goes into much more detail here about the expense and dangers of overtesting because of all the false positives that result.)
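To see how quickly the numbers compound, here's a back-of-the-envelope sketch in Python. The $10,000 in downstream costs per flagged employee is my illustrative assumption, not Lewis's figure; everything else comes straight from the example above.

```python
# Back-of-the-envelope math on false positives in employee heart screening.
# The per-person follow-up cost is an illustrative assumption, not Lewis's figure.

employees = 1_000
at_risk = 1                   # one person genuinely at near-term risk
accuracy = 0.90               # "90% accurate": assume 10% of healthy people get flagged

healthy = employees - at_risk
false_positives = healthy * (1 - accuracy)      # ~100 people flagged in error

follow_up_cost = 10_000       # assumed downstream testing/treatment per flagged person

total_cost = (at_risk + false_positives) * follow_up_cost
print(f"Flagged in error: {false_positives:.0f}")            # ~100
print(f"Total cost: ${total_cost:,.0f}")                     # ~$1 million
print(f"Cost per employee: ${total_cost / employees:,.0f}")  # ~$1,000
```

The false positives outnumber the true positive roughly 100 to 1, so the economics are driven almost entirely by the people the test gets wrong.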

You see this error all the time in the enthusiasm that tech companies express for "agents" of all sorts. Remember the "internet refrigerator"? It would track the food you had inside and reorder as needed — but what about the false positives? What happens when your football player son, who drinks a gallon of whole milk a day, goes back to college in the fall? What are you supposed to do with those gallons that show up before you intervene? 

Those agents that were supposed to sort through all the news in the world and prepare a personalized newspaper for me each morning were a great idea, but only if they got everything right. What about all the material I didn't want yet had to wade through? I remember when phones first started to have GPS; companies waxed poetic about the possibility of spotting me outside a Starbucks and sending me a coupon for a latte. What if I wasn't in the mood for one? Then you're just pestering me.

While the insurance industry doesn't indulge in techno-euphoria the way Silicon Valley does, there can still be blind spots about false positives. I see lots of optimism about the accuracy of AI in spotting claims that are likely to head to expensive litigation or clients who are seriously contemplating leaving for another carrier. Lots of life insurers talk about the high accuracy they can achieve in estimating life expectancy based on just a few questions. Yet I don't see much consideration of what happens when the AI returns a false positive.

In some cases, there's no particular downside. You do the best you can with the new technology and figure that whatever you don't spot in the way of, say, claims headed to litigation would have been missed anyway. Still, every innovation should be viewed with an eye toward unintended consequences: maybe the action you take when you worry that a claim may become litigious will set off someone who never considered hiring a lawyer.

The need to watch for unintended consequences will increase as the industry continues to move toward what we're calling "predict and prevent" and away from the traditional "repair and replace" model of indemnifying people after a loss. If we're asking people to take action, we have to be sure we aren't steering them into danger. A colleague shared his wife's story about a telematics-based system that was trying to turn her into a bad driver. We can't have that.

As I said, some failures of technology can be darkly amusing. I sometimes think back to a piece a colleague at the Wall Street Journal, the late, great Jeff Zaslow, wrote 20 years ago about the foibles of technology. He included an anecdote about early versions of TiVo, which tracked what you recorded and then recorded other shows its algorithms decided you might like. He quoted someone as saying, "My TiVo thinks I'm gay" (a line the person mentioned to a TV writer friend, who turned the idea into an episode of "The King of Queens").

So, I hope you got a chuckle out of the Times piece and out of Jeff's. Still, the problems they chronicled in such droll fashion can create serious issues if we aren't careful. I hope we're careful.

If false positives can trip up the legendary designers at Apple, they can surely ensnare the rest of us. 

Cheers,

Paul