Make Lemonade Out of Lemonade

Lemonade's recent glitch sheds light on public fears about AI -- and about what must be done to keep AI innovation from slowing.


Being a disruptor is hard. It requires taking disproportionate risks, pushing the status quo and — more often than not — hitting speed bumps.

Recently, Lemonade hit a speed bump in its journey as a visible disruptor and innovator in the insurance industry. I am not privy to any details about the case or what Lemonade is or isn’t doing, but the Twitter event and the public dialogue that built up to this moment bring forward some reflections and opportunities every carrier should pause to consider.

Let’s take a moment to make lemonade out of the Lemonade event.

We should be talking about and demonstrating how we’re moving thoughtfully, safely and cautiously with new technologies. That’s how we’ll build confidence in the general public, regulators, legislators and other vital stakeholders.

Fear and Scrutiny Are Mounting

Pay attention, AI innovators: if we don’t more intentionally engage the public and regulators on the risks of algorithmic systems and our intended uses of consumer data, we are going to hit a massive innovation speed bump. If all we do is talk about “black boxes,” facial recognition, phrenology and complex neural networks without also clearly investing in and celebrating efforts in AI governance and risk management, the public and regulators will push pause.

Media coverage and dialogue about AI’s risks are getting louder. Consumers are concerned, and in the absence of aggressive industry messaging about responsible AI efforts and consumer-friendly visibility into how data is being used, regulators are reacting to protect individuals.

In July, Colorado passed SB-169. As a fast follow-up to the NAIC AI principles adopted last year, Colorado’s law represents the most direct scrutiny yet of insurance algorithmic fairness, management of disparate impact against protected classes and expectations for evidence of broad risk management across algorithmic systems. We will see how many states follow this lead, but the insurance industry should watch for state legislation and DOI activity. The FTC and U.S. Congress are also developing policy and laws aimed at creating greater oversight of AI and data.

Responsible Is Not Perfect – That’s OK

Regulators are trying to find the balance between enabling innovation and protecting consumers from harm. Their goal is not a perfect and fault-free AI world but establishing standards and methods of enforcement that reduce the likelihood or scope of incidents when they happen. And they will happen.

Regulators across the U.S. are realistic. They know they will never be able to afford or attract the level of data science or engineering talent to deeply and technically interrogate an AI system, so they will need to lean on controls-based processes and corporate evidence of sound governance. They are hungry for industry to demonstrate increased methods of organizational and cross-functional risk management.

I find a lot of regulatory inspiration in two other U.S. agencies. The Food and Drug Administration (FDA) offers the concept of Good Machine Learning Practices (GMLP). The Office of the Comptroller of the Currency (OCC) recently updated its model risk management handbook, which emphasizes a life cycle approach to mitigating the risks of models and AI. Both recognize that minimizing AI risk is not simply about the models or the data but, much more broadly, about the organization, people and processes involved.

Slow Down the 'Black Box' Talk

Talking about “black boxes” everywhere is not only inaccurate but also counterproductive.

I’ve talked to and collaborated with hundreds of executives and innovation leaders across major regulated industries, and I am hard-pressed to identify a single example of an ungovernable AI system making consequential decisions about customers’ health, finances, employment or safety. The risk would simply be too great to accept.

The most common form of the broad technologies we colloquially call AI today is machine learning. These systems can be built with documentation of governance controls and business decisions made through the development process. Companies can evidence the work performed to evaluate data, test models and verify actual performance of systems. Models can be set up to be recorded, reproduced, audited, monitored and continuously validated. Objective verification can be performed by internal or external parties.
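
To make that concrete, here is a minimal, hypothetical sketch in Python of what “recording” a single model decision can look like in practice. The model name, fields and score are invented for illustration and are not drawn from Lemonade’s or any other carrier’s actual system.

```python
# Hypothetical sketch: log every model decision with enough context that a
# reviewer can later reconstruct, audit and verify what the system did.

import hashlib
import json
from datetime import datetime, timezone


def log_decision(model_version: str, features: dict, prediction: float,
                 audit_log: list) -> None:
    """Append an audit record for a single model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,           # which model produced the decision
        "input_hash": hashlib.sha256(             # fingerprint of the exact inputs
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "features": features,                     # the inputs themselves
        "prediction": prediction,                 # what the model decided
    }
    audit_log.append(record)


# Example: record one made-up underwriting score for later review.
audit_log = []
log_decision("pricing-model-v2.3", {"age": 41, "prior_claims": 0}, 0.82, audit_log)
print(json.dumps(audit_log[0], indent=2))
```

A real governance platform would add access controls, retention policies and monitoring on top of records like these, but the underlying idea is just this straightforward: capture what the model saw and what it decided, every time.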

These machine learning systems are not impossibly opaque black boxes, and they are absolutely improving our lives. They are creating vaccines for COVID-19, new insurance products, new medical devices, better financial instruments, safer transportation, and greater equity in compensation and hiring.

We are doing great things without black boxes, and, in time, we will also turn black boxes into more governable and transparent systems, so those, too, will have great impact.

See also: 5 Risk Management Mistakes to Avoid

Risk Management, Not Risk Elimination

Risk management starts from a foundation of building controls that minimize the likelihood or severity of an understood risk. Risk management accepts that issues will arise.

AI will have issues. Humans build AI. We have biases and make mistakes, so our systems will have biases and make mistakes. Models are often deployed into situations that are not ideal fits. We are relatively early in understanding how to build and operationalize ML systems. But we are learning fast.

We need more companies to acknowledge these risks, own them and then proudly show their employees, customers and investors that they are committed to managing them. Is there a simple fix for these challenges? No, but humans and markets are generally forgiving of unintentional mistakes. We are not forgiving of willful ignorance, lack of disclosures or lack of effort.

Let’s Make Lemonade Out of Lemonade

Returning to where we started, the Lemonade event has provided an object lesson in the challenge of balancing visible demonstrations of innovation against public fears about how companies are using AI.

Companies building high-stakes AI systems should establish assurances by bringing together people, process, data and technology into a life cycle governance approach. Incorporate AI governance into your environmental, social and governance (ESG) initiatives. Prepare for the opportunity to talk publicly with your internal and external stakeholders about your efforts. Celebrate your efforts to build better and more responsible technology, not just the technology.

We have not done enough to help the broader public understand that AI can be fair, safe, responsible and accountable, perhaps even more so than the traditional human processes that AI often replaces. If companies do not implement assurances and fundamental governance around their systems — which are not nearly as complex as many regulators and members of the public believe they are — we’re going to have a slowdown in the rate of AI innovation.

As first published in PropertyCasualty360.


Anthony Habayeb

Anthony Habayeb is founding CEO of Monitaur, an AI governance software company that serves highly regulated enterprises, including flagship customer Progressive Insurance.
