Rethinking Risk in the Age of Generative AI

As AI-driven deepfakes pose mounting threats, insurers grapple with coverage solutions for this emerging risk.


Early forms of artificial intelligence (AI) have played a role in shaping our technological landscape since the mid-20th century, from Grace Hopper's early programming breakthroughs to the World War II codebreaking efforts against the Enigma machine. Innovations like ELIZA, an early natural language processing program from the mid-1960s designed to simulate human conversation, paved the way for today's AI-powered tools. Over the decades, AI has been quietly integrated into everyday life, from generating entertainment content and powering virtual assistant chatbots in banking apps to recommending shows based on our streaming habits. That quiet presence changed dramatically when generative AI tools such as OpenAI's ChatGPT, launched in late 2022, disrupted the market and brought AI to the mainstream.

Alongside these advances comes a troubling counterpart: deepfakes, hyper-realistic synthetic videos, audio, and images that can be weaponized to impersonate executives, manipulate markets, and erode public trust.

This article explores the cybersecurity and reputational risks posed by AI, particularly deepfakes, and considers whether existing insurance products are equipped to handle them. How will the response to generative AI incidents differ from the response to traditional cyber incidents? As generative AI technologies grow more sophisticated and are adopted at scale, insurance providers face the challenge of determining whether AI risk should be treated within the scope of existing insurance products or whether it warrants its own distinct insurance product.

The Threat of Deepfakes to Businesses

Deepfake threats can take many forms. The types of threats discussed in this article are illustrative; they are just a small sample of the possibilities AI opens up to cybercriminals. Like "traditional" cybersecurity threats, AI threats evolve hand in hand with the underlying technology.

Blackmail & Extortion: Threat actors could use deepfake videos to manipulate or blackmail a company. By creating fake footage of executives or key employees in compromising situations, cybercriminals can pressure organizations to comply with demands or face reputational damage.

Social Engineering: Imagine a deepfake impersonating a C-suite executive to authorize fraudulent wire transfers or gain access to sensitive information. This scenario is no longer hypothetical: in one notable case, a finance worker at a multinational company was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company's CFO. The ability of deepfakes to mimic the voices, likeness, and even the mannerisms of company leaders makes them a powerful tool for cybercriminals.

Market Manipulation: Competitors or even nation-states could deploy deepfakes to damage a company's reputation, manipulate stock prices, or undermine public trust. Fake announcements, altered earnings reports, or fabricated speeches from top executives could quickly erode investor confidence and cause significant financial losses. And once information is out, even if false, it is hard to contain. For example, on April 7, 2025, a misleading post on X regarding President Donald Trump's tariff policy caused turmoil in the U.S. stock market.

Reputational Damage & Liability Exposure: Reputational harm was a major concern in the early days of cybersecurity, and evolving public perception has since made such risks feel more commonplace, though that may change as sophisticated AI-driven deepfake attacks push the boundaries of what is believable and trustworthy. Deepfake attacks can cause significant reputational harm, especially for high-profile leaders of publicly traded organizations. A CEO's image and trustworthiness are critical to stock performance and investor confidence, and deepfake technology has the potential to erode that trust almost instantly. Even if the content is later proven to be fake, the damage to a company's public image can linger, and the financial impact can be substantial.

Beyond public image, these incidents may lead to allegations that company directors and officers breached their fiduciary duties through inadequate financial reporting or failure to implement prudent AI policies or safeguards. Professional liability exposure may also arise if AI adversely affects the rendering or performance of professional services.

The creation of fake content, such as a deepfake video of an executive making damaging statements, could also lead to an immediate loss of consumer trust, stock price volatility, and lasting damage to the brand. This kind of damage is not only hard to quantify but also hard to recover from in a traditional sense, as rebuilding a reputation takes far longer than applying technical fixes or absorbing financial losses.

How Should AI Risk Be Covered by Insurers?

AI-driven incidents present unique challenges that may not be fully addressed or appreciated by traditional insurance policies.

From a policy language perspective, defining what constitutes an "AI incident" could be difficult. While deepfakes are a clear example, AI is also being used in various other ways, such as in decision-making processes, automation, and data analysis. Will all AI-driven incidents fall under this coverage, or will they need to be explicitly defined?

Furthermore, the complexity of claims associated with AI incidents, such as fraud or misinformation, may require new expertise and claims handling processes. For example, it could be difficult to assign liability in a deepfake scenario: will the board of a publicly traded company be found at fault for failing to implement adequate AI safeguards if a deepfake impersonating the CEO causes a stock price drop that harms investors?

These challenges have created a debate over whether AI-driven incidents are sufficiently covered under existing insurance products or whether an AI-specific insurance product should be created to address these risks.

There are two schools of thought on how to approach coverage:

1. Traditional Coverage Perspective: Some argue that AI does not inherently change the covered risk, but rather changes its magnitude. For instance, traditional cyber insurance generally covers the financial losses an insured incurs from a cybersecurity incident, be it business interruption, crisis management costs, reputational harm, or damages arising out of third-party liability claims or regulatory investigations. If a threat actor group uses AI to infiltrate an insured's systems and then deploys ransomware, the use of AI does not change the covered risk (loss due to a network intrusion); it simply makes the intrusion easier to carry out. The same can be said of other lines of insurance whose insureds interact with AI. On this view, AI risk should not be covered under a standalone insurance product because it is already sufficiently covered under existing products. Even so, carriers should actively consider AI risk in the underwriting process and amend pricing and modeling operations accordingly.

2. Standalone AI Coverage Perspective: Given the unique nature of AI-driven incidents, others argue that this risk warrants its own standalone product. Traditional insurance products were not designed with AI in mind, which could leave gaps in coverage for AI-related losses. There is also a rising trend of AI-specific exclusions in existing products. Without a dedicated product, businesses may find themselves unprotected against AI risks.

While this is far from a settled matter, it will be interesting to see how the industry reacts and adapts to AI risk in the near future.

Final Reflections

The rise of AI-driven risks poses a significant challenge for businesses and insurers alike. Whether AI-driven risks are adequately covered under existing insurance products or whether they should have their own distinct coverage category is a nuanced debate that requires careful consideration of the risks involved.

On one hand, AI-specific coverage could offer more tailored protection for financial, reputational, and operational risks. On the other hand, integrating AI-related incidents into traditional coverages might offer businesses more streamlined protection.

Ultimately, insurers must stay ahead of the curve by adapting their policies, training claims teams, and rethinking risk modeling. Businesses, too, must reevaluate their coverage and internal controls to ensure they are not caught off guard by AI-driven incidents.
