Using NLP to Detect Fraud in Insurance Claims

Natural language processing can sift through massive data sets in a fraction of the time humans need, flagging anomalies that may indicate fraud.


Fraud is a significant problem for the insurance industry, and companies are starting to take advantage of natural language processing (NLP) to expose it. 


How Insurers Can Implement NLP for Fraud Detection

Experts say the NLP market will reach about $29 billion in 2024 and nearly double that by 2029. NLP has a clear place in insurance because it can detect fraud during claims processing. 

NLP systems ingest mostly text-based client information, such as the claim description, police reports, medical records and phone transcripts. The software extracts the relevant data points and uses them to fill out claims. Without human help, it can then compare that data with the client's past claims, criminal records and other factors that could play into fraud detection. 
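As a minimal sketch of that extraction step, the snippet below pulls dates, names, dollar amounts and organizations out of a claim description using spaCy's pretrained English pipeline. The claim text and field mapping are hypothetical; a production system would use a model tuned to insurance documents.

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Hypothetical claim description; real input would come from the claims system.
claim_text = (
    "On March 3, 2024, John Doe reported a rear-end collision on I-80 "
    "and is claiming $4,500 in repairs at Ace Auto Body."
)

doc = nlp(claim_text)

# Map spaCy entity labels to the fields a claims workflow might care about.
label_map = {"DATE": "dates", "PERSON": "people", "MONEY": "amounts", "ORG": "organizations"}
extracted = {field: [] for field in label_map.values()}

for ent in doc.ents:
    if ent.label_ in label_map:
        extracted[label_map[ent.label_]].append(ent.text)

print(extracted)
# Expect something like: {'dates': ['March 3, 2024'], 'people': ['John Doe'],
#                         'amounts': ['4,500'], 'organizations': ['Ace Auto Body']}
```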

When comparing a client's claim against similar filings at the same company, an NLP system can quickly measure how strongly the claims correlate and flag suspicious activity. 
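One simple way to score that correlation is TF-IDF vectorization with cosine similarity, sketched below with scikit-learn. The claim texts and the review threshold are hypothetical; a real pipeline would also compare structured fields such as dates and amounts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical prior filings and a newly submitted claim.
past_claims = [
    "Rear-end collision on I-80, bumper and trunk damage, $4,500 repair.",
    "Hail damage to roof shingles after the April storm.",
    "Water damage in basement from burst pipe.",
]
new_claim = "Rear-end collision on I-80, trunk and bumper damage, repair cost $4,500."

# Vectorize everything in one vocabulary, then compare the new claim
# (last row) against every prior filing.
matrix = TfidfVectorizer(stop_words="english").fit_transform(past_claims + [new_claim])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

SUSPICION_THRESHOLD = 0.8  # hypothetical cutoff for routing to manual review

for claim, score in zip(past_claims, scores):
    flag = "REVIEW" if score >= SUSPICION_THRESHOLD else "ok"
    print(f"{score:.2f}  {flag}  {claim[:50]}")
```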

The Coalition Against Insurance Fraud estimates fraud costs Americans more than $300 billion annually, or just under $1,000 for every person in the country. 

Other Advantages

NLP systems analyze large datasets far faster than humans can, reducing paperwork for claims professionals and raising client satisfaction by cutting wait times. 

NLP also helps insurance underwriters save time on routine work so they can evaluate risk more precisely. 

In addition, NLP can help offset the projected loss of about 400,000 industry workers by 2026. 

NLP systems provide particular value when there is a backlog of claims, such as after a natural disaster. AI gives companies 24/7 support by conversing with clients even when the customer service department is unavailable. In fact, conversational AI is expected to cut contact center labor costs by $80 billion by 2026. Insurers can use NLP as their support desk and reduce the need to outsource this function. 

Legal and Ethical Issues of NLP for Insurers

Insurers must be aware of the various legal and ethical issues surrounding NLP and their ramifications for the company’s future.

Bias

Bias is one of the most pressing issues because NLP can discriminate against particular demographics if precautions aren't in place. NLP programs can be inaccurate if the data fed into them is imprecise. Developers train these models on historical data, and because that data can reflect biased outcomes, the models' outputs can exacerbate existing biases.

A 2021 Language and Linguistics Compass study found bias occurs most often in data, models, research design, input representations and the annotation process. Some bias is unavoidable, but an excess leads to adverse outcomes for insurance companies. 

One example of NLP bias involves language input. People speak English differently, and the software might not be trained on specific dialects and accents, producing inaccurate transcriptions or interpretations. That inadequacy could lead an insurer to wrongly deny a claim. 

A 2023 study found ChatGPT's responses largely invariant to race, ethnicity and insurance type, yet it still detected statistically significant correlations in word frequency across race and ethnicity, as well as differences in subjectivity across insurance types. 
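Audits like that often come down to simple frequency tests. The sketch below uses a chi-squared test (via SciPy) to check whether a given word appears at different rates in model outputs for two demographic groups. The counts and the word are hypothetical illustrations, not figures from the study.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: how often the word "denied" appears in 1,000
# model outputs for each group. Rows are groups; columns are
# (outputs containing the word, outputs without it).
contingency = [
    [120, 880],  # group A
    [180, 820],  # group B
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")

if p_value < 0.05:
    print("Word frequency differs significantly across groups; audit further.")
```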


Client Privacy

NLP software's reliance on personal data raises questions about client privacy. Any insurer needs a claimant's physical address, email address, telephone number and other essential details when a claim is filed, and it's up to the company to keep these records confidential. Unauthorized use of a client's information, such as selling it to third parties, could breach privacy and invite lawsuits, depending on the jurisdiction.

Insurance companies must be aware of recent privacy laws that protect consumer data. Virginia, California, Colorado, Utah and Connecticut are five states with comprehensive privacy laws on the books. For instance, the Utah Consumer Privacy Act, which took effect Dec. 31, 2023, gives consumers the right to know what data a company collects, how it uses that information and whether it sells the data. 

A privacy breach can also be unintentional, as when a cyberattack occurs. These attacks cause millions of dollars in damage and destroy reputations when an insurer doesn't adequately protect its clients. NLP systems draw personal and financial details from claims, so exposing them could have devastating consequences. 

The 2023 MOVEit cyberattack affected 94.2 million people and more than 2,730 organizations worldwide, underscoring the importance of cybersecurity for insurers using NLP. Those that fail to protect clients face damaged reputations and possible fines from regulators. 

Evolving Regulations

There is little federal guidance on NLP and what insurers can and can't use it for. Most of these determinations rest with state governments, and many have not yet taken action. However, evolving regulations will determine how much insurance companies can lean on this technology and the penalties for misuse. Following news and developments on NLP laws is crucial for insurance professionals.

Some states have taken action on AI, including laws protecting consumers from profiling. Insurers may use AI profiling to determine whether a client is eligible for coverage, but only if the customer consents to sharing their information. Virginia, Connecticut, California and Colorado are four states that have implemented this policy. 

The most significant regulation yet comes from the European Union (EU), so insurers with an international presence should be mindful of how they use NLP in their operations. In December 2023, EU regulators agreed to the AI Act, a set of rules governing AI as developers continue to advance the technology. The law will ban AI systems deemed an unacceptable risk, including those used for social scoring and certain forms of biometric identification and categorization. 

 


Jack Shaw is the editor of Modded. His insights on innovation have been published on Safeopedia, Packaging Digest, Plastics Today and USCCG, among others.

 
