Gen AI Fuels Insurance Fraud Arms Race

AI-enhanced fraud cases quadrupled in three years as fraudsters weaponize generative AI to overwhelm traditional carrier defenses.


The insurance industry has long treated a certain level of fraud as the cost of doing business—much like a grocery store plans for produce that never makes it off the floor. But generative AI is changing that equation.

The Coalition Against Insurance Fraud estimates that fraud costs the U.S. more than $300 billion annually, with property and casualty fraud accounting for roughly $45 billion of that total.

The defenses that carriers have spent decades building—special investigation units, predictive modeling, contributory databases—are struggling to keep pace with the rapid increase in AI-generated fraud. According to Gen Re, the estimated number of AI-enhanced insurance fraud cases in the U.S. jumped from fewer than 20,000 in 2022 to more than 80,000 in 2025. And Verisk's State of Insurance Fraud Study found that 99% of insurers have encountered manipulated or AI-altered documentation.

AI's evolution has put powerful tools in nearly everyone's hands, making fraud far more scalable. Fraudsters aren't just submitting a single doctored photo and hoping it slips through—they're generating entire claim packages: fake damage photos, repair invoices, contractor assessments, and supporting documentation, all internally consistent and built to pass automated checks from intake through adjudication.

What makes these claims harder to catch

Photo fraud used to be easy to detect—borrowed images, mismatched metadata, or inconsistent lighting that an experienced adjuster could quickly flag. What we're seeing now is fundamentally different. Today's image models can generate damage photos tailored to a specific property, with realistic lighting, weather, and perspective. The images align with the claim. The invoices support the images. Everything appears to belong to the policyholder's home.

Lower-quality fraudulent submissions still give themselves away. A roofing claim might mention window damage but show no window in the photos. AI is sophisticated, but these errors still happen when details aren't carefully cross-checked. Close scrutiny can surface these inconsistencies—but only if you're looking for them.
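That kind of cross-check can be automated. The sketch below is a minimal, hypothetical version: it assumes an upstream image classifier has already produced labels for each photo, and the vocabulary of loss terms is purely illustrative.

```python
# Hypothetical sketch: flag loss types the claim narrative mentions but
# no submitted photo depicts. LOSS_TERMS and the photo labels are
# illustrative assumptions, not a real carrier taxonomy.

LOSS_TERMS = {"roof", "window", "siding", "gutter", "fence"}

def unsupported_mentions(claim_text, photo_labels):
    """Return loss types mentioned in the narrative but absent from photos."""
    mentioned = {term for term in LOSS_TERMS if term in claim_text.lower()}
    return mentioned - {label.lower() for label in photo_labels}

text = "Hail damaged the roof and broke a rear window."
labels = ["roof"]  # e.g., output of an image classifier; no window pictured
print(sorted(unsupported_mentions(text, labels)))  # → ['window']
```

A real system would use entity extraction rather than keyword matching, but the principle is the same: the narrative and the evidence have to agree.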

There's also a pattern in how these claims are priced. Lower-value submissions often move straight through processing with limited human review, and fraudsters know where those thresholds sit. When a claim comes in at $4,999 on a policy capped at $5,000, it's worth asking questions.
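A rule like that is trivial to encode. The following is a simplified sketch; the 2% margin and the claim fields are assumptions chosen for illustration, not industry thresholds.

```python
# Hypothetical sketch: flag claims priced suspiciously close to the
# policy cap (an auto-approval or limited-review boundary). The margin
# value is an illustrative assumption.

def near_limit_flags(claims, margin=0.02):
    """Return IDs of claims whose amount sits within `margin` of the cap."""
    flagged = []
    for claim in claims:
        cap = claim["policy_cap"]
        if cap > 0 and claim["amount"] <= cap and (cap - claim["amount"]) / cap < margin:
            flagged.append(claim["id"])
    return flagged

claims = [
    {"id": "C-101", "amount": 4999.00, "policy_cap": 5000.00},  # just under cap
    {"id": "C-102", "amount": 3200.00, "policy_cap": 5000.00},
]
print(near_limit_flags(claims))  # → ['C-101']
```

In practice this would be one feature among many in a scoring model, not a standalone rule, since legitimate claims also cluster near limits.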

How carriers are detecting AI-generated fraud

Detection is layered, with each layer building on the last. It starts with metadata: timestamps that don't match the loss date, or geolocation data that places a photo far from the insured property, are immediate red flags.
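These metadata checks can be sketched in a few lines. This version assumes the EXIF fields have already been extracted into a dictionary; the field names, the seven-day window, and the one-kilometre radius are all illustrative assumptions.

```python
import math
from datetime import date

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def metadata_red_flags(photo, claim, max_days=7, max_km=1.0):
    """Hypothetical check: compare extracted photo metadata to the claim."""
    flags = []
    if abs((photo["taken"] - claim["loss_date"]).days) > max_days:
        flags.append("timestamp far from loss date")
    if haversine_km(photo["lat"], photo["lon"], claim["lat"], claim["lon"]) > max_km:
        flags.append("geolocation far from insured property")
    return flags

claim = {"loss_date": date(2025, 6, 1), "lat": 41.8781, "lon": -87.6298}
photo = {"taken": date(2025, 7, 15), "lat": 40.7128, "lon": -74.0060}  # weeks late, wrong city
print(metadata_red_flags(photo, claim))  # both flags fire
```

Note the obvious limitation: AI-generated images can carry fabricated metadata, which is why this is the first layer rather than the last.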

Contributory databases add another layer. By pooling data, carriers help surface emerging fraud patterns quickly—much like antivirus software matching known signatures. Even well-constructed claims leave patterns, and these systems are built to detect them.
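In its simplest form, signature matching is a set-membership test against pooled data. The sketch below uses exact SHA-256 digests purely for illustration; production systems rely on perceptual hashes and richer behavioral features that survive re-encoding and small edits.

```python
import hashlib

# Hypothetical sketch of matching a submission against a contributory
# database of known-fraud signatures, analogous to antivirus signature
# matching. Exact hashing here is an illustrative simplification.

known_fraud_signatures = {
    hashlib.sha256(b"previously flagged image bytes").hexdigest(),
}

def matches_known_signature(image_bytes):
    """True if this exact image was previously flagged by any carrier."""
    return hashlib.sha256(image_bytes).hexdigest() in known_fraud_signatures

print(matches_known_signature(b"previously flagged image bytes"))  # True
print(matches_known_signature(b"a brand-new image"))               # False
```

The value of pooling is in the set itself: a signature contributed by one carrier protects every other participant the moment it lands in the shared database.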

Experienced adjusters remain irreplaceable. A claims professional with 20 years on the job has reviewed thousands of legitimate claims and can pick up on subtle details that automated systems miss, like a medical member ID number formatted incorrectly. That institutional knowledge doesn't live in a model.

Carriers are also tightening the intake process itself. Requiring policyholders to submit photos through dedicated apps—ones that establish a verified chain of custody for the image, with embedded metadata—makes it far harder to substitute AI-generated photos after the fact. Video evidence requirements add another layer; high-quality video remains significantly harder to fabricate convincingly than a still photo.
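One way a capture app can establish that chain of custody is to sign the image bytes and capture metadata together with a device-held key at the moment the photo is taken. The sketch below shows the idea using Python's standard `hmac` module; the key name, provisioning, and metadata fields are simplified assumptions, not a description of any specific app.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: the capture app signs image bytes plus metadata
# with a per-device key, so the carrier can detect substitution after
# the fact. Key provisioning/storage is simplified for illustration.

DEVICE_KEY = b"secret-provisioned-at-enrollment"  # illustrative only

def sign_capture(image_bytes, metadata):
    """Bind the image digest and its capture metadata into one signature."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_capture(image_bytes, metadata, signature):
    """True only if neither the image nor its metadata changed since capture."""
    return hmac.compare_digest(sign_capture(image_bytes, metadata), signature)

img = b"raw photo bytes"
meta = {"taken": "2025-06-01T14:03:00Z", "lat": 41.8781, "lon": -87.6298}
sig = sign_capture(img, meta)
print(verify_capture(img, meta, sig))             # True
print(verify_capture(b"substituted image", meta, sig))  # False
```

Swapping in an AI-generated photo after capture would invalidate the signature, which is exactly the substitution attack the dedicated-app requirement is meant to close off.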

Staying ahead of the threat actors

There's an arms race quality to all of this, and the industry needs to be honest about what that means. The tools for generating fraud are becoming more sophisticated on a faster timeline than most carriers' detection capabilities are improving. Contributory databases and human expertise are necessary but not sufficient to combat this enhanced fraud. The feedback loop between detection and response has to shorten.

Regulators are paying attention. The National Association of Insurance Commissioners launched a 12-state pilot to examine how insurers use AI in claims decisions, with a nationwide rollout targeted for later this year. The same AI capabilities that enable fraud can also enable carriers to flag legitimate claims incorrectly, and the industry needs to be able to demonstrate where those boundaries are.

The volume may also be larger than headline fraud cases suggest. According to Verisk, 55% of Gen Z consumers and 49% of millennials say they'd be at least somewhat likely to make a small, rule-bending edit to a claim photo or document. Most of them probably don't think of that as fraud. They think of it as clarifying. But as AI editing tools become more accessible, the line between a touched-up photo and a fabricated one is collapsing—and the volume of altered photos will grow with it.

The most effective response keeps experienced humans in the loop, invests in shared detection infrastructure across carriers, and shortens the feedback cycle so new fraud signatures are captured and shared faster. None of this is a permanent fix. But in a contest where the other side is constantly iterating, the carriers that move fastest will absorb the least damage.
