AI, Cybersecurity and Insurance Risk

Having multiple reviewers with varied socioeconomic, ethical and individual backgrounds can lower the risks of biases being placed into AI programs.


Physical assets that we can touch and see make sense to protect. Windows get broken by accident, thefts occur, pipes can burst and anything and everything can happen in between. But what about our assets that we can’t see? What about cybersecurity? The information that we store online can be extremely valuable to those looking to do harm. 

Roughly 2.5 quintillion bytes of data are created every single day, and phishing, ransomware and distributed denial-of-service attacks are so common that some 23% of small business owners have experienced an attack on their business in the last 12 months, according to a survey by Hiscox.

Here are examples of how AI can be used to fight against specific types of cyber threats:

Privacy 

Whether the dataset used to train an algorithm belongs to a federal government organization, a local law enforcement agency or even a personal home network, the identities it contains can be compromised. To keep those identities from being exposed through the training data, organizations and individuals can use techniques such as federated learning: separate models are trained locally at each source and then federated at a more global level, so the personal data stays secured at its site of origin. Identifying outlier samples and excluding them from training is also good practice, because rare, distinctive records are the easiest to re-identify.
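As a rough illustration, a federated averaging loop of the kind described above might look like the following sketch. The synthetic data, the logistic-regression model and the three-client split are assumptions made for this example, not details from any particular deployment.

```python
# Minimal federated-averaging (FedAvg) sketch with a linear model and NumPy.
# The data, feature count and number of clients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, w, lr=0.1, epochs=50):
    """Train a logistic-regression weight vector on one site's local data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # local predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on local data only
    return w

# Three "sites" each keep their own data; only model weights ever leave a site.
clients = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200).astype(float))
           for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):
    # Each site starts from the current global model and trains locally.
    local_ws = [local_train(X, y, global_w.copy()) for X, y in clients]
    # The server federates the sites by averaging their weights.
    global_w = np.mean(local_ws, axis=0)

print("federated model weights:", global_w)
```

The key property is that the raw records never leave their site of origin; only model parameters are shared and averaged.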

Bias Bounties

As with other software, sharing the intricate details of an AI algorithm can become a liability, because those details provide insight into the model's structure and operation. A good countermeasure, described by Forrester as a trend for 2022 (North American Predictions 2022 Guide), is the bias bounty, which helps AI software companies further improve the robustness and reliability of their algorithms.

Bias bounties are becoming the go-to tool for ensuring ethical and responsible AI because they help verify that the algorithm in place is as unbiased and as functional as possible: over the course of a campaign, more sets of eyes and more varied thought processes review the data and the model's behavior.
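To make this concrete, the kind of finding a bias-bounty participant might submit can be as simple as a gap in outcomes between groups. The synthetic predictions, the group labels and the 0.8 threshold below are assumptions made for this sketch, not part of any specific bounty program.

```python
# Sketch of a check a bias-bounty reviewer might run: compare a model's
# positive-prediction rate across two groups. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)

predictions = rng.integers(0, 2, 1000)   # hypothetical model decisions (1 = approved)
group = rng.integers(0, 2, 1000)         # hypothetical demographic group label

rate_a = predictions[group == 0].mean()
rate_b = predictions[group == 1].mean()
disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rate, group A: {rate_a:.2f}; group B: {rate_b:.2f}")
print(f"disparate impact ratio: {disparate_impact:.2f}")

# A common (but not universal) rule of thumb flags ratios below 0.8.
if disparate_impact < 0.8:
    print("potential bias finding worth reporting")
```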

See also: Quest for Reliable Cyber Security

Data Poisoning

Data poisoning is the manipulation of data for ill intent: samples used to train an algorithm are altered so that the model produces a malicious output or prediction when triggered by specific inputs.

Data poisoning happens before the model training step occurs, so the defense has to start there. Zelros has an ethical report standard under which a dataset signature is collected at each successive step of modeling, to ensure that the data has not been compromised along the way.
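The exact mechanics of that standard are not detailed here, but the general idea of a dataset signature can be sketched as follows: hash the data when it is validated, then recompute and compare the hash just before training. The SHA-256 choice and the synthetic data are assumptions made for this illustration.

```python
# Generic dataset-signature sketch: hash the training data at one step of the
# pipeline and verify the hash at a later step to detect tampering.
import hashlib
import numpy as np

def dataset_signature(X, y):
    """Return a deterministic hash of the features and labels."""
    h = hashlib.sha256()
    h.update(np.ascontiguousarray(X).tobytes())
    h.update(np.ascontiguousarray(y).tobytes())
    return h.hexdigest()

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, 100)

# Signature recorded when the dataset is collected and validated.
expected = dataset_signature(X, y)

# Later, a poisoning attempt flips a few labels before training.
y_poisoned = y.copy()
y_poisoned[:5] = 1 - y_poisoned[:5]

# Recomputing the signature just before training exposes the tampering.
print("clean data matches signature:   ", dataset_signature(X, y) == expected)
print("poisoned data matches signature:", dataset_signature(X, y_poisoned) == expected)
```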

Human Behavior

Data and AI manipulation typically involve malicious activity, but the personal data we often willingly share can also be used against us.

Cybersecurity's most prominent weakness is our own ability to broadcast knowledge of our identity and activities to millions of people across the globe in seconds. Artificial intelligence, and even basic tools that collect data, have exacerbated the problem.

For instance, geolocation data that is openly shared on social networks can be leveraged by AI systems to sort potential customers into target categories and to drive specific outputs or recommendations. The "attention economy" has been built on personal data that is fairly easy to access. Cultural and scientific awareness is going to be one of our best bets for countering the problem (as detailed in the first topic of this article).
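As a toy illustration, openly shared coordinates can be grouped into geographic segments with an off-the-shelf clustering algorithm. The coordinates, the cluster count and the use of scikit-learn's KMeans are assumptions made for this sketch.

```python
# Toy example: cluster publicly shared (latitude, longitude) points into
# segments that could then feed targeted recommendations. Data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

coords = np.vstack([
    rng.normal(loc=(48.85, 2.35), scale=0.05, size=(50, 2)),   # around Paris
    rng.normal(loc=(45.76, 4.84), scale=0.05, size=(50, 2)),   # around Lyon
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print("points per inferred segment:", np.bincount(labels))
```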

A machine learning model may learn much more than we anticipate. For example, even when gender is not present in customer data, the algorithm can learn to infer it through proxy features, in a way a human could not, at least not from the same amount of data in such a limited time. For this reason, analyzing and monitoring the ML model is crucial.
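One way to monitor for this kind of proxy learning is to probe whether the withheld attribute can be predicted back from the remaining features. The synthetic dataset and the logistic-regression probe below are assumptions made for this illustration, not a prescribed method.

```python
# Proxy-feature probe: if a withheld attribute (here, gender) can be predicted
# from the remaining features, the model can effectively still "see" it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

n = 2000
gender = rng.integers(0, 2, n)                 # withheld from the production model
proxy = gender + rng.normal(scale=0.5, size=n) # feature that correlates with gender
noise = rng.normal(size=(n, 3))                # unrelated features
X = np.column_stack([proxy, noise])

X_train, X_test, y_train, y_test = train_test_split(X, gender, random_state=0)
probe = LogisticRegression().fit(X_train, y_train)

# Accuracy well above 0.5 means the "removed" attribute is still recoverable.
print(f"gender recoverable with accuracy: {probe.score(X_test, y_test):.2f}")
```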

If we are to anticipate algorithm and model behavior, and prevent discrimination from occurring through proxies, a key element is diversity. Having multiple reviewers who can provide input based on their socioeconomic, ethical and individual backgrounds lowers the risk of biases being built into AI programs in the first place. Organizations can also request algorithmic audits by third parties if they lack the knowledge and diversity to complete the task themselves.


Antoine de Langlois

Antoine de Langlois is Zelros' data science leader for responsible AI.

De Langlois has built a career in IT governance, data and security and now ethical AI. Prior to Zelros, he held multiple technology roles at Total Energies and Canon Communications. He is a member of Impact AI and HUB France AI.

De Langlois graduated from CentraleSupelec University, France.
