Ethical Framework Is Needed for AI

Today’s AI algorithms are often built on training data that is outdated and inaccurate, which can unfairly inflate insurance rates for some customers.

Artificial intelligence (AI) has immediate potential to make the insurance industry more profitable. It can cut down on inaccurate claims, reduce operating costs, help insurers underwrite more accurately and improve the customer experience. Yet there are legitimate concerns about how the technology may affect the industry. This blog explores some of the most common concerns and how an ethical AI framework can address them.

People are scared they will lose their jobs

As with all major digital transformations over the last 20 to 30 years, employees fear that the technology will replace them. In insurance, employees often spend 80% of their time doing administrative tasks like manual data entry and reviewing documents. Allowing AI systems to automate low-value administrative work frees employees to be far more productive and valuable. This in turn reduces operating costs, increases profit, delivers better customer engagement and increases the value of the employees themselves.

And that’s just the tip of the iceberg when it comes to the added value of AI. In the commercial property sector, using satellite- and IoT-enabled AI technology to build near real-time digital twins of risks from over 300 datasets helps insurers and customers measure, manage and mitigate risks. They can reduce claims and losses, reduce business interruption and write more profitable business.

When I talk to insurers and show them how AI platforms can work, they grasp the technology's potential right away. So do their employees. While many people in the industry may worry that AI could take away their job, the reality is almost exactly the opposite.

In the U.K., from 2016 to 2020, the insurance sector underwrote more than £50 billion of commercial insurance policies yet lost £4.7 billion on that underwriting. AI and digital twins can help insurers deliver profitable underwriting.

Inaccurate or outdated training data leads to ethical concerns

But today’s AI models and algorithms are built using training data that is often old and inaccurate. For instance, densely populated areas often report more crime simply because more people live there. An AI model could therefore predict more crime in these areas in the future, even though crime per capita is often no higher in densely populated areas than in less populated ones.
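
To make the point concrete, here is a minimal sketch with invented numbers, showing how raw crime counts can mislead a model while per-capita rates tell a different story:

    # Illustrative sketch with hypothetical numbers: raw counts make the
    # dense area look far riskier, while the per-capita rate is identical.
    areas = {
        # area: (reported_crimes_per_year, population)
        "dense_city_centre": (1200, 100_000),
        "quiet_suburb": (120, 10_000),
    }

    for name, (crimes, population) in areas.items():
        rate = crimes / population * 1000  # crimes per 1,000 residents
        print(f"{name}: {crimes} crimes/year, {rate:.1f} per 1,000 residents")

    # A model trained on raw counts would rate the city centre 10x riskier,
    # yet both areas have 12.0 crimes per 1,000 residents.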

In addition, most reported crime does not include an exact incident location, so the police station where it was reported is often recorded as the crime location. If you live close to a police station, your home may be flagged as at higher risk of crime, even though properties close to a police station are actually far less likely to be burgled.
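
One practical mitigation, sketched below with hypothetical coordinates, is to flag or drop records that are geocoded to a police station before training, since those coordinates mark where the crime was reported rather than where it happened:

    # Minimal sketch (hypothetical data): exclude crime records geocoded
    # to a police station, as they mark the reporting site, not the scene.
    POLICE_STATIONS = {(51.5074, -0.1278), (51.4545, -2.5879)}

    records = [
        {"id": 1, "location": (51.5074, -0.1278)},  # geocoded to a station
        {"id": 2, "location": (51.5200, -0.1000)},  # genuine incident site
    ]

    usable = [r for r in records if r["location"] not in POLICE_STATIONS]
    print(usable)  # only record 2 remains; record 1 would penalize nearby homes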

In both of these cases, AI models built on this data could discriminate, rating a property as higher-risk than it is in reality and unfairly inflating insurance costs.

These examples show how important it is that data providers and insurers understand the biases already present in their data, so that we do not accentuate them in future AI models.

Lack of transparency

Some people don’t trust AI because it’s new and they don’t understand it. AI is seen, to some degree, as Big Brother. When I attend conferences on ethics in AI, people invariably talk about how social media is using AI in potentially harmful ways.

However, when I work with insurers, local government and businesses, they see that, as long as they start with an ethical framework, AI can help them serve customers and the wider community far better while also doing right by their employees.

Communication is key here: about what an ethical framework entails and about how decisions are made. Citizens must be able to understand the AI-enabled decisions that affect them, and the industry must stand ready to give them access to that information. The more people understand, the better for all of us.
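
As a purely illustrative sketch of what that access could look like, the snippet below turns a model's factor contributions into a plain-language explanation a customer could request; the factor names and weights are invented, not drawn from any real underwriting model:

    # Hypothetical sketch: convert factor contributions into an explanation
    # a customer can read. Factor names and values are invented.
    def explain_decision(contributions, top_n=3):
        top = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = [f"- {factor}: {'raised' if weight > 0 else 'lowered'} your premium"
                 for factor, weight in top[:top_n]]
        return "The main factors in this decision were:\n" + "\n".join(lines)

    print(explain_decision({
        "flood_zone": 0.42,         # property sits in a mapped flood zone
        "building_age": 0.15,       # older construction
        "sprinkler_system": -0.20,  # fitted sprinklers lower the risk
    }))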

The building blocks of a new ethical AI framework

An ethical AI framework benefits us all, from customers to insurers to data providers. That’s why Intelligent AI has been working for the last year with the U.K. government’s Digital Catapult to develop an ethical AI framework specifically for our insurance platform. 

With proper education, acknowledgment of the potential flaws in existing data and a transparent way for customers and communities to request details of how the AI decisions that affect them are made, AI will be understood and embraced far more quickly.

The sooner customers accept AI, the sooner they and the insurance industry can reap the rewards of the far more accurate data, pricing and claims information that AI brings. 

Insurance should be about helping customers manage and mitigate risk. Today, however, too much time is spent on administration, leaving too little time to reduce risk and help clients with business continuity (especially as we recover from the COVID-19 pandemic). AI has huge potential to lower costs and improve customer service, as long as we implement it with an ethical framework.


Anthony Peake

Anthony Peake is founder and CEO of Intelligent AI. He launched Intelligent AI to help insurers more accurately predict the risk on commercial properties by using a cloud-based intelligent risk platform that draws in over 300 data points.
