Innovation, standards and risk management


The accident in which an Uber driverless car hit and killed a pedestrian Sunday night in Tempe, AZ, got us talking about how innovation should progress in a risky world. We worry that Uber hasn’t always been as careful as, say, Google/Waymo and GM/Cruise in testing its autonomous technology. An investigation will tell the tale soon enough in this particular instance (one of the advantages of all those sensors on driverless cars), but we’ll still be left wrestling with a world in which innovating can place people at risk, but in which not innovating might be even more dangerous. We can’t just sit still in a world where 1.25 million people die in traffic accidents each year.

Our CEO, Wayne Allen, took up the pen to address the role we intend to play in innovating amid risk:

"I want to be super clear. We at ITL and Innovator’s Edge are going to increasingly take positions about what 'should be.' The rate at which certain technologies are being introduced into the marketplace outpaces the ability of lawmakers and even regulators to keep up, including those in the insurance world. Governmental authorities have jurisdiction over bits and pieces of the technology ecosystem, but it is rare for a single body to have complete authority over all the concerns about privacy, discrimination, consumer safety, etc.

"It is, therefore, incumbent on our industry, the folks who are supposed to manage risks of all kinds, to step forward and lead. Let’s provide innovation at the speed of reason.

"We will never advocate tempering innovation for the sake of antiquated processes. No. But we do advocate for everyone to cinch up their chin straps and do what’s right.

"Consider chatbots. If chatbots are as much a commodity as many tell me they are, then anybody could roll them out to a carrier, right? Well, what if it turns out that the data transmitted to a chatbot is personal financial information or personal information about a health condition or relates to an accident and includes health information? Shouldn’t we be asking questions about how secure these chatbots are? How safe is the data being transmitted? Shouldn’t any company integrating chatbot technology in its claims process—or any customer communication, for that matter—verify that the bot and the path through which the information passes meet a standard for privacy? 

"What about autonomous vehicles? Should we be surprised if not all autonomous technology is created equal? Could one company’s technology be more secure, more heavily tested, just plain better than another? Are there standards for testing and product rollout that should be implemented? Should we as an industry simply take the position that we will not insure any vehicle (irrespective of the creative ways insurance can and will be built into the experience) that does not meet certain standards? If we do not, I am certain some enterprising politician will roll out a solution that will be the worst possible result, will likely be in line with what some lobbyist representing some corner of some industry wants, will stymie innovation and will be nearly impossible to overturn. 

"If you agree with me and want to help provide innovation at the speed of reason, for the benefit of us all, please email me. The table stakes need to be known, and they need to be raised."

Have a great week.

Paul Carroll


Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He was later a finalist for a National Magazine Award.