Applying Cyber Lessons to Regulating AI

As we formulate a path toward regulating AI innovation appropriately, we can look to the work regulators accomplished regarding cybersecurity.

In February 2015, Anthem disclosed that criminal hackers had breached the company’s servers and potentially stolen 37.5 million records containing confidential personal information (CPI). The breach became a catalyst for insurance regulators and ultimately led to the creation of the Insurance Data Security Model Law, which is now being adopted by states across the U.S.

Similarly, in the summer of 2020, the discussion by regulators regarding race and its role in the design and pricing of insurance became the catalyst to move forward on defining the regulatory expectations for using artificial intelligence (AI) in the insurance industry. As regulators and insurers work to understand the level of regulatory oversight that will be needed for AI innovation, we can find a path forward by looking to the work regulators accomplished regarding cybersecurity.

The Making of a Model Law

Although state insurance regulators were already discussing the protection of consumers’ CPI, the Anthem breach placed a laser focus on data security. Just two months later, in April 2015, the National Association of Insurance Commissioners (NAIC) adopted the “Principles for Effective Cybersecurity: Insurance Regulatory Guidance.” These principles included establishing a minimum set of risk-based cybersecurity standards, establishing appropriate regulatory oversight, requiring incident response by insurers, requiring insurer accountability for third parties and service providers, incorporating cybersecurity risks into insurers’ enterprise risk management processes and identifying material risks for insurers’ boards of directors.

Over the next 18 months, the NAIC used these principles to draft a model law establishing standards for data security and for the investigation of cybersecurity incidents and notification of those incidents to state insurance regulators. During this process, the drafters quickly recognized that insurers came in different shapes and sizes, used data differently and had different levels of systems and expertise.

The same was true of regulators. Because cybersecurity is not an insurance-only issue, an expert in insurance regulation was not necessarily an expert in cybersecurity, and departments of insurance were not uniformly staffed with cyber experts. The new law needed to strike a balance to ensure appropriate regulatory oversight while adapting to limitations on both the insurer and regulator sides of the equation.

In October 2017, the NAIC adopted the Insurance Data Security Model Law, which tackles a highly technical domain comparable in complexity to what we will soon face with AI. In the law, I see five actionable areas of regulation:

  1. Proactive identification and mitigation of risks
  2. Continual monitoring and reporting of potential risks
  3. Accountability for third parties
  4. Compliance certification to regulators
  5. Transparency on significant events to regulators and opportunity to remediate

Additionally, the model law provides the insurance regulator the power to examine and investigate insurers while at the same time providing confidentiality protections for the information provided by insurers.

In adopting this model law, regulators successfully balanced maintaining significant regulatory oversight with placing the responsibility of compliance and notification of non-compliance on the insurers, which employ the necessary expertise in cybersecurity. The result was a model law that allows regulators and insurers to prioritize the protection of consumers’ CPI through an appropriate allocation of resources and expertise.

A Parallel Path for AI

Just five years later, regulators once again find themselves addressing a quickly growing, high-impact technology that is not inherently an insurance-only issue: the use of AI. This brings the familiar challenge of insurers at different levels of engagement with AI, with different systems and levels of expertise. It also highlights the challenge for regulators with strong expertise in insurance regulation but not necessarily in the nuances and risks of AI. As regulators look at creating model regulation, they will once again need to strike that balance of ensuring appropriate regulatory oversight while recognizing limitations on both sides of the equation.

As they did with cybersecurity, regulators have adopted high-level guiding principles regarding AI. The NAIC's Principles on Artificial Intelligence are intended to establish guidance for AI use and assist regulators in addressing regulatory oversight of insurance-specific AI applications. This time, though, the regulators also have the benefit of a potential road map to help navigate the development of a well-defined regulatory approach.

When overlaying the NAIC principles on the five regulatory areas I outlined above, a path forward quickly develops that emphasizes the importance of the key principles of accountability, compliance, transparency and safe, secure, fair and robust outputs.

1. Proactive identification and mitigation of risks

A company should have systems and resources in place to proactively comply with all applicable insurance laws and to safeguard against AI outcomes that are unfairly discriminatory or that otherwise violate legal standards.

2. Continual monitoring and reporting of potential risks

A company must have a systematic and continuous risk management approach to AI. This includes a system to analyze AI outcomes, responses and other insurance-related inquiries. Risk management should include reporting to the board of directors any material risks and mitigation plans.
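To make this concrete, a continuous monitoring program could include automated checks on AI outcomes that feed into risk reporting. The following is a minimal sketch in Python, not a prescribed implementation; the approval-rate metric, outcome labels and drift threshold are illustrative assumptions, not drawn from any regulatory standard:

```python
from collections import Counter

# Hypothetical outcome monitor: compare the current approval rate for an AI
# model against a baseline period and flag material drift for escalation.
DRIFT_THRESHOLD = 0.05  # assumed tolerance; a real program would set this by policy


def approval_rate(outcomes: list[str]) -> float:
    """Share of decisions in a period that were approvals."""
    counts = Counter(outcomes)
    return counts["approve"] / len(outcomes)


def check_drift(baseline: list[str], current: list[str]) -> dict:
    """Return a small report comparing two periods of AI outcomes."""
    baseline_rate = approval_rate(baseline)
    current_rate = approval_rate(current)
    drift = abs(current_rate - baseline_rate)
    return {
        "baseline_rate": round(baseline_rate, 3),
        "current_rate": round(current_rate, 3),
        "drift": round(drift, 3),
        # Material risks and mitigation plans get reported to the board.
        "escalate_to_board": drift > DRIFT_THRESHOLD,
    }


# Example: last quarter's outcomes vs. this quarter's.
print(check_drift(
    baseline=["approve"] * 90 + ["refer"] * 10,
    current=["approve"] * 80 + ["refer"] * 20,
))
```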

3. Accountability for third parties

A company must ensure that any third parties it engages to facilitate the business of insurance are also promoting, monitoring and upholding the principles.

4. Compliance certification to regulators

A company should annually certify to the applicable regulators that it has systems in place for proactive risk identification, mitigation, monitoring and reporting, as well as for compliance with legal requirements.

5. Transparency on significant events to regulators and opportunity to remediate

A company should have systems in place to record the data supporting final AI outcomes and should be able to produce that data to ensure a level of traceability. Any unintended consequences should be remediated when identified.
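As one rough illustration of such record-keeping, each final AI outcome could be captured as an immutable, fingerprinted record that can later be produced for a regulator. This is a minimal sketch; the schema and field names are hypothetical:

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """One traceable record of a final AI-assisted outcome (hypothetical schema)."""
    model_id: str       # which model produced the outcome
    model_version: str  # exact version, so the outcome can be reproduced later
    inputs: dict        # the data supporting the final outcome
    outcome: str        # the final decision, e.g. "approve" or "refer"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Stable hash of the record, to show the data was not altered later."""
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


# Example: record an underwriting decision so it can later be produced
# for a regulator during an examination.
record = DecisionRecord(
    model_id="auto-underwriting",
    model_version="2.3.1",
    inputs={"age": 42, "prior_claims": 0, "vehicle_class": "sedan"},
    outcome="approve",
)
audit_entry = {**asdict(record), "fingerprint": record.fingerprint()}
print(json.dumps(audit_entry, indent=2))
```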

And, as was done in the data security model law, a similar AI model law can provide the insurance regulator the power to examine and investigate insurers while at the same time providing confidentiality protections for insurers’ proprietary algorithms.

While regulating and managing the risks of AI can at times feel overwhelming and unfamiliar, these are not completely uncharted waters. By adopting this model framework for AI, both regulators and insurers could embrace a comprehensive approach that would allow consumers to benefit from innovation in AI while establishing important consumer protections and trust.

As first published in Digital Insurance.


Jillian Froment

Jillian Froment is a highly respected strategic adviser on insurance regulatory issues and an advisory board member for Monitaur, which provides AI governance and ML assurance software for regulated industries. As a former insurance commissioner, Froment has shaped national and international regulatory models and standards on issues such as cybersecurity, cyber insurance, big data, accelerated underwriting, artificial intelligence, rebating, pandemic impacts and annuity suitability.

She is a certified NAMIC mutual director and earned a juris doctor from Capital University and a B.S. in engineering from Ohio State University.
