AI Creates Insurance Exposures Beyond Cyber

While cybersecurity dominates AI discussions, emerging intellectual property and professional indemnity exposures require immediate insurer attention.

So often, the AI conversation in insurance circles centers on cyber risk - how generative AI tools are helping bad actors automate and scale attacks. While this is certainly a concern, there are also a number of less-discussed areas where AI is quietly creating exposures for businesses.

It's a shift that is creating opportunities to innovate in the insurance market and requiring brokers to adapt quickly.

Where is AI creating exposures for businesses?

In general, the answer to this question falls into two categories: the companies developing AI tools, and the organizations using them, sometimes without even realizing it.

Let's look first at the businesses behind AI innovation and the exposures linked to how their systems are built and trained. Here, intellectual property (IP) infringement remains one of the biggest areas of concern, particularly where training data may include copyrighted or otherwise protected content obtained without explicit permission. Even when data is sourced from the open web, its ownership and licensing status can be ambiguous.

Developers may also face issues with algorithmic bias or discrimination if the training data they use reflects narrow or unrepresentative demographics, as well as reputational and financial damage if their models produce hallucinations, prediction errors, or flawed logic stemming from poor training or model design.

On the other side of this, we have businesses that use AI - whether through generative platforms or via off-the-shelf tools and services that integrate AI functions. Here, we're seeing new professional indemnity (PI) exposures emerging. Using tools such as ChatGPT or image generators to produce media, whether blogs, visuals, or marketing content, could lead to copyright or trademark infringement if outputs reproduce protected material. There's also the risk of generated content containing false or misleading claims, which is a concern across industries, but is particularly sensitive in regulated sectors like finance or healthcare.

Real-life examples

Right now, we're seeing several high-profile disputes and public allegations that highlight how quickly new AI-related IP and consent issues are surfacing.

Their outcomes - whether through courts, settlements, or new regulation - are expected to set important precedents that could reshape how insurers, brokers, and businesses assess and manage technology-related risks, and may ultimately prompt a recalibration of how exposure is understood and priced across the market.

One example is the Lothian Buses voice-clone controversy, in which voiceover artist Diane Brooks has publicly alleged that an AI-generated version of her voice was used without her consent. Cases like this underline the growing legal uncertainty surrounding AI-generated content and have fueled calls for stronger legislative protections.

Other disputes have already reached the courts, most notably the parallel U.S. and U.K. Getty Images vs Stability AI cases, which center on the use of Getty's photographs (many bearing watermarks) to train Stability's generative AI model, Stable Diffusion, without the required licenses.

In November, the U.K. High Court largely ruled in favor of Stability, finding no copyright infringement in the training or outputs of Stable Diffusion (because the AI model did not store or reproduce the works), though it did uphold limited trademark breaches involving Getty's watermarked images.

Although this sheds some light on the issue, the litigation as a whole is not fully resolved, with the U.S. case still open. It also leaves a broader question unanswered: whether training generative AI models on copyrighted material without permission constitutes infringement under U.K. law. The court did not rule definitively on this point because the allegedly infringing acts - the training itself - took place outside U.K. jurisdiction.

Whatever happens with these cases, it's clear there's work to do in clarifying the rules around dataset licensing for AI training and transparency of sources. But, from a tech PI perspective, they also highlight several early lessons for businesses, insurers, and brokers to take note of going forward.

What we can learn from the ongoing legal disputes so far
  • For AI tool developers: Put simply, IP due diligence matters more than ever. The Getty vs Stability AI case shows the importance of understanding exactly what data goes into training AI models, and the risks of assuming "open web" content is free to use. With this in mind, clear data provenance will likely become an increasingly important part of demonstrating responsible AI practices and mitigating technology PI risk. Considering any cross-border exposure - where the AI tool is developed, where it's trained, where the data comes from, who its customers are - will become just as important, as differing legal frameworks could compound liability.
  • For organizations adopting or integrating AI: It's not just training data that creates liability - AI-generated outputs themselves can, especially if they reproduce copyrighted or trademarked material. So for any firm using AI, strong verification and oversight processes are essential. Companies should have controls in place to confirm that AI-generated content doesn't infringe copyright or trademarks, and that outputs are factually accurate and appropriate for their intended use.
  • For insurers: The entire insurance industry is monitoring these cases as they unfold. Right now, many insurers are helping businesses safeguard against risks at every stage of the AI lifecycle, with new PI and cyber coverage products emerging for tech companies. But what constitutes infringement in the AI-training context is still unclear, so underwriters and risk managers must treat this as a fast-developing area of exposure and watch it closely. That said, the uncertainty also presents an opportunity: as technology and regulation advance, there is significant scope to innovate through new coverage models and refined underwriting approaches.
  • For brokers: While insurers will always communicate a new or refined product in light of market changes, I'd advise brokers to talk to insurers to understand their stance on AI-related exposures and stay ahead of emerging developments. And as always, the other lesson is to keep building an understanding of clients' operations - particularly how and where they use AI, even indirectly. If brokers can help their clients recognize the less obvious risks, from content liability to algorithmic bias or model failure, it will be easier to guide them towards suitable protection as the AI landscape matures.

Whether you're a business, AI developer, insurer, or broker, the bottom line is that we must all broaden our understanding of AI-related liabilities beyond cybercrime - because AI isn't going away. Its use is accelerating across every industry, and as it does, a new layer of risk is emerging that won't be solved by traditional cyber protections alone.