Ethical Framework Is Needed for AI

Artificial intelligence (AI) has immediate potential to make the insurance industry more profitable. It can cut down on inaccurate claims, reduce operating costs, help insurers underwrite more accurately and improve the customer experience. Yet there are legitimate concerns about how the technology may affect the industry. This blog explores some of the most common concerns and how an ethical AI framework can help address them.

People are scared they will lose their jobs

As with all major digital transformations over the last 20 to 30 years, employees fear that the technology will replace them. In insurance, employees often spend 80% of their time doing administrative tasks like manual data entry and reviewing documents. Allowing AI systems to automate low-value administrative work frees employees to be far more productive and valuable. This in turn reduces operating costs, increases profit, delivers better customer engagement and increases the value of the employees themselves.

And that’s just the tip of the iceberg when it comes to the added value of AI. In the commercial property sector, using satellite- and IoT-enabled AI technology to build near real-time digital twins of risks from over 300 datasets helps insurers and customers measure, manage and mitigate risks. They can reduce claims and losses, reduce business interruption and write more profitable business.

When I talk to insurers and show them how AI platforms can work, they understand its potential right away. So do their employees. While many people in the industry may worry that AI could take away their job, the reality is almost exactly the opposite. 

In the U.K., from 2016 to 2020, the insurance sector underwrote over £50 billion of commercial insurance policies yet lost £4.7 billion on this underwriting. AI and digital twins can help insurers deliver profitable underwriting.

Inaccurate or outdated training data leads to ethical concerns

But today’s AI models and algorithms are built using training data that is often old and inaccurate. For instance, more densely populated areas often report more crime due to the number of people in the area. Therefore, AI models could predict these areas to have more crimes in the future, even though the crime per capita is often no higher in densely populated areas than in less populated ones.

In addition, most reported crime does not have an exact location of the incident, so the police station where it was reported is often put down as the crime location. If you live close to a police station, your home may be seen as being at higher risk of crime even though properties close to a police station are actually far less likely to be burgled.

In both of these cases, AI models built on this data could discriminate and say a property is at higher risk of crime than it is in reality, unfairly boosting insurance costs.

These examples show how important it is that data providers and insurers understand the existing bias in data. That way we can ensure we do not accentuate these biases in future AI models.
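To make this concrete, here is a minimal sketch in Python, using invented area names and figures, of how raw crime counts and per-capita rates can rank the same areas differently. A model trained on raw counts would flag the dense district as riskier, even though the smaller town has the higher rate per resident.

```python
# Hypothetical example: raw crime counts vs. per-capita crime rates.
# All names and numbers are invented for illustration only.

areas = {
    # area: (reported_crimes_per_year, population)
    "dense_urban_district": (1_200, 120_000),
    "suburban_town":        (150,   10_000),
}

# Ranking by raw counts makes the urban district look ~8x riskier...
by_raw_count = sorted(areas, key=lambda a: areas[a][0], reverse=True)
print(by_raw_count)   # ['dense_urban_district', 'suburban_town']

# ...but normalizing by population reverses the ranking.
per_capita = {a: crimes / pop for a, (crimes, pop) in areas.items()}
print(per_capita)     # urban: 0.010, suburban: 0.015

by_per_capita = sorted(per_capita, key=per_capita.get, reverse=True)
print(by_per_capita)  # ['suburban_town', 'dense_urban_district']
```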

See also: Designing a Digital Insurance Ecosystem

Lack of transparency

Some people don’t trust AI because it’s new and they don’t understand it. AI is seen, to some degree, as Big Brother. When I attend conferences on ethics in AI, people invariably talk about how social media is using AI in potentially harmful ways.

However, when I work with insurers, local government and businesses, they see that, as long as they start with an ethical framework, AI can help them to much better serve the wider community and customers as well as doing right by their employees. 

Communication is key here, both about what an ethical framework entails and about how decisions are made. Citizens must be able to understand the AI-enabled decisions that affect them, and the industry must stand ready to give them access to that information. The more people understand, the better for all of us.

The building blocks of a new ethical AI framework

An ethical AI framework benefits us all, from customers to insurers to data providers. That’s why Intelligent AI has been working for the last year with the U.K. government’s Digital Catapult to develop an ethical AI framework specifically for our insurance platform. 

With proper education, acknowledgment of the potential flaws in existing data and a transparent way for customers and communities to request details of how AI decisions are made that affect them, AI will be understood and embraced far more quickly. 

The sooner customers accept AI, the sooner they and the insurance industry can reap the rewards of the far more accurate data, pricing and claims information that AI brings. 

Insurance should be about helping customers manage and mitigate risk. However, today, too much time is spent on administration, and not enough time is left to reduce risk and help clients with business continuity (especially as we recover from the COVID pandemic). AI has huge potential to lower costs and improve customer service, as long as we implement it with an ethical framework.

Diversity and Respect: Best Insurance Policy

The sins of fathers, including the Founding Fathers, visit their iniquities upon the sons of multiple generations. The sins of the past endure throughout industries large and small, including the insurance industry. The sins exempt no one, yet they offer everyone a chance to repair the breach: to learn from the past and earn the trust of African-Americans.

The history of racial discrimination is too long to summarize in a column and too indescribable, except to say healing starts when hearing begins; when insurers take the time to listen to African-Americans; when listening translates into action — by and for African-Americans — so communication can flourish and insurers can succeed.

That insurers have a duty to listen, that African-Americans also have a right to a hearing, that the two intersect is reason to proceed with the hard work of reconciliation. Hard though it may be, and difficult though it will be to hear of hardships borne by innocents, insurers cannot overcome the sins of the past unless they understand how innocents continue to bear the burdens of other people’s sins.

According to Dennis Ross of StoryConnex.com:

“Very little is monolithic in the African-American community, with one exception. The memories of abuse by insurance agents who barged into the homes of elderly grandmothers to sell policies nearly by force. Today, while homes receive a knock, ZIP codes signal higher interest rates and premiums. Insurers must not only diversify their agent base but create and market plans that reward those living in areas they once punished.” 

Ross speaks of what he knows, not because he opposes insurers, but because he supports those insurers with a commitment to diversity and respect. He invites the insurance industry to lead by example, so other industries may act without delay.

Ross speaks of the need to speak truth not only to power but through the empowerment of African-Americans. He also speaks to a need — an inchoate sense among the decent and just — to do better; to expect better; to receive (and reciprocate) acts of betterment.

Insurers should follow Ross’s advice, so the industry may communicate with greater respect toward African-Americans. The diversity of communication, from marketing to advertising to recruiting to hiring, can change a relationship for the better.

See also: State of Diversity, Inclusion in Insurance

For insurers and African-Americans to come together is a chance to right the wrongs of the past. Together, the two can work to undo attempts to erase the past. Together, the two can bring some modicum of justice to the past. Together, the two can improve the present and work to make the future better than the present.

Insurers must lead with acts, not intentions.

Insurers must show that what is necessary is also doable.

Insurers must pursue excellence, so unity may thrive where diversity lives; so the lives of African-Americans may advance in harmony with liberty and justice; so all Americans may live in freedom.

Honoring these goals will bring honor to insurers.

Big Data Can Solve Discrimination

Big data has the opportunity to end discrimination.

Everyone creates data. Whether it is your bank account information, credit card transactions or cell phone usage, data exists about anyone who is participating in society and the economy.

At Root, we use data for car insurance, an industry where rating variables such as education level or occupation are used directly to price the product. For a product that is legally mandated in nearly every state, the consumer’s options are limited: give up driving, and likely your ability to earn a living, or pay a price based on factors out of your control.

Removing unfair factors such as education and occupation from pricing leaves room for variables within an individual’s control — namely, driving habits. In this way, data can level the playing field for all consumers and provide an affordable option for good drivers whom other companies are painting with a broad brush. In the long term, everyone wins as roads become safer and driving becomes prohibitively expensive for irresponsible drivers.

This is just one example where understanding the consumer’s individual situation deeply allows for more precise — and more rational — decision making.
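As a rough illustration of that point, the hypothetical Python sketch below prices a policy using only behavior variables a driver controls. The factor names, weights and base rate are invented for this example; they do not represent Root’s, or any insurer’s, actual rating model.

```python
# Hypothetical behavior-based pricing: the premium depends only on
# variables within the driver's control, not on demographic proxies
# such as education or occupation. All weights are invented.

BASE_PREMIUM = 1_000.00  # hypothetical annual base rate

def behavior_factor(hard_brakes_per_100mi: float,
                    night_driving_share: float,
                    avg_mph_over_limit: float) -> float:
    """Multiplier that grows with risky driving behavior."""
    factor = 1.0
    factor += 0.02 * hard_brakes_per_100mi  # frequent hard braking
    factor += 0.30 * night_driving_share    # share of miles driven at night
    factor += 0.03 * avg_mph_over_limit     # habitual speeding
    return max(factor, 0.7)  # cap the discount for very smooth drivers

def quote(hard_brakes_per_100mi: float,
          night_driving_share: float,
          avg_mph_over_limit: float) -> float:
    return BASE_PREMIUM * behavior_factor(
        hard_brakes_per_100mi, night_driving_share, avg_mph_over_limit)

print(quote(0.5, 0.05, 0.0))  # smooth daytime driver -> ~1025
print(quote(8.0, 0.40, 6.0))  # risky driver -> ~1460
```

Safe habits lower the price and risky habits raise it, so the driver, not a demographic category, controls the premium.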

But we know that the opportunity of big data goes beyond the individual. For example, the unfair practice of naively blanketing entire countries, religions or races as “dangerous” is a major topic in the news. What happens if you apply the lens of big data to this policy?

See also: Industry’s Biggest Data Blind Spot

Causal Paths vs. Assumption-Based Decisions

With the increased availability of data, we are able to better understand the causal paths between data generation and an event. The more direct the causal path, the better predictions of future events (based on data) will perform.

Imagine having something as trivial as GPS location data from a smartphone on a suspected terrorist. Variables such as having frequent cell phone conversations with known terrorists or being located within five miles of the last 10 known terrorist attacks will allow us to move away from crude, unjust and discriminatory practices and toward a more just and rational future.

Ahmad Khan Rahami, who placed bombs in New York and New Jersey, was flagged in the FBI’s Guardian system two years earlier. The agency found there weren’t grounds to pursue an investigation — a failure that might have been averted if the FBI had better data capture and analysis capabilities. Rahami purchased bomb-making materials on eBay and had linked to terrorist-related videos online before his attempted attack. Dylann Roof’s activities showed similar patterns in the months leading up to his attack on the Emanuel AME Church in Charleston, SC.

The causal path between a hate-crime or terrorist attack and the actions of Dylann Roof and Ahmad Khan Rahami is much more direct than factors such as religion, race or skin color. Yet we naturally gravitate toward making blanket assumptions, particularly if we don’t understand how data provides a better, more just approach.

Today, this problem is more acute than ever. Discrimination is rampant — and the Trump administration’s ban on travel is unacceptable and unnecessary in the era of big data. Those unmoved by the moral argument should also know that policies like the ban are hopelessly outdated. If we don’t begin to use data to make informed, intelligent decisions, we will not only continue to see backlash from discriminatory policies, but our decision making will be systematically compromised.

The Privacy Red Herring

Of course, if data falls into the wrong hands, harm could be done. However, modern techniques for analyzing and protecting data mitigate most of this risk. In our terrorism example, there is no need for a human to ever view GPS data. Instead, this data is collected, passed to a database and assessed using a machine learning algorithm. The output of the algorithm would then direct an individual’s screening process, all without the interference of a human. In this manner, we remove biased decision making from the process and the need for a “spy” to review the data.
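As a sketch of what such a pipeline might look like, the hypothetical Python example below reduces raw data to anonymized features, scores them with a simple stand-in model and exposes only the resulting decision. The feature names, weights and threshold are all invented; a real system would use a trained model rather than fixed weights.

```python
# Hypothetical automated-screening pipeline: raw GPS records stay in the
# database; only derived, anonymized features reach the model, and only
# the final decision label is ever shown to a human screener.

from dataclasses import dataclass

@dataclass
class AnonymizedFeatures:
    contacts_with_flagged_numbers: int  # derived upstream from call records
    visits_near_prior_incidents: int    # derived upstream from location data

def risk_score(f: AnonymizedFeatures) -> float:
    """Stand-in linear score; invented weights for illustration."""
    return (0.4 * f.contacts_with_flagged_numbers
            + 0.2 * f.visits_near_prior_incidents)

def screening_decision(f: AnonymizedFeatures) -> str:
    # The only output a human ever sees.
    return "additional screening" if risk_score(f) >= 1.0 else "standard screening"

print(screening_decision(AnonymizedFeatures(0, 1)))  # standard screening
print(screening_decision(AnonymizedFeatures(3, 2)))  # additional screening
```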

See also: Why Data Analytics Are Like Interest  

This certainly poses a challenge for the U.S. intelligence community, but it is one the community must meet. If used responsibly, analytics can provide insights based on controllable and causal variables. The privacy risk is no longer a valid excuse to delay the implementation of technologies that can solve these problems in a manner that is consistent with our values.

This world can be made a much better and safer place through data. And we don’t have to sacrifice our privacy; we can have a fair world, a safe world and a world that preserves individual liberties. Let’s not make the mistake of believing we are stuck with an outdated and unjust choice.

Your Device Is Private? Ask Tom Brady

However you feel about Tom Brady, the Patriots and football air pressure, today is a learning moment about cell phones and evidence. If you think the NFL had no business demanding the quarterback’s personal cell phone—and, by extension, that your company has no business demanding to see your cell phone—you’re probably wrong. In fact, your company may very well find itself legally obligated to take data from your private cell phone.

New Norm

Welcome to the wacky world of BYOD—bring your own device. The intermingling of personal and work data on devices has created a legal mess for corporations that won’t be cleared up soon. BYOD is a really big deal—nearly three-quarters of all companies now allow workers to connect with private devices, or plan to soon. For now, you should presume that if you use a personal computer or cell phone to access company files or email, that gadget may very well be subject to discovery requirements.


First, let’s get this out of the way: Anyone who thinks Tom Brady’s alleged destruction of his personal cell phone represents obstruction of justice is falling for the NFL’s misdirection play. That news was obviously leaked on purpose to make folks think Brady is a bad guy. But even he couldn’t be dumb enough to think destruction of a handset was tantamount to destruction of text message evidence. That’s not how things work in the connected world. The messages might persist on the recipients’ phones and on the carriers’ servers, easily accessible with a court order. The leak was just designed to distract people. (And I’m a Giants fan with a fan’s dislike of the Patriots).

But back to the main point: I’ve heard folks say that the NFL had no right to ask Brady to turn over his personal cell phone. “Right” is a vague term here, because we are still really talking about an employment dispute, and I don’t know all the terms of NFL players’ employment contracts. But here’s what you need to know:

Technology and the Law

There’s a pretty well-established set of court rulings that hold that employers facing a civil or criminal case must produce data on employees’ personal computers and gadgets if the employer has good reason to believe there might be relevant work data on them.

Practically speaking, that can mean taking a phone or a computer away from a worker and making an image of it to preserve any evidence that might exist. That doesn’t give the employer carte blanche to examine everything on the phone, but it does create pretty wide latitude to examine anything that might be relevant to a case. For example: In a workplace discrimination case, lawyers might examine (and surrender) text messages, photos, websites visited and so on.

It’s not a right, it’s a duty. In fact, when I first examined this issue for NBCNews, Michael R. Overly, a technology law expert in Los Angeles, told me he knew of a case where a company actually was sanctioned by a court for failing to search devices during discovery.

Work Gets Personal

“People’s lives revolve around their phone, and they are going to become more and more of a target in litigation,” Overly said then. “Employees really do need to understand that.”

There is really only one way to avoid this perilous state of affairs—use two cell phones, and never mix business with personal. Even that is a challenge, as the temptation to check work email with a personal phone is great, particularly when cell phone batteries die so frequently.

The moral of the story: The definition of “personal” is shrinking all the time, even if you don’t believe Tom Brady shrank those footballs.

For further reading: here’s a nice summary of case law.

When Are Background Checks Not Allowed?

The Equal Employment Opportunity Commission (EEOC) has been quite active in challenging employers’ use of criminal background and credit history checks during hiring. There is still significant uncertainty about the current standards and law governing criminal and credit history checks. The lack of solid guidance makes it difficult for employers to determine how to evaluate their current use of this information, as well as to understand the legal pitfalls and hurdles that the EEOC has placed in front of them.

EEOC Directives

The recent activity emanates from the EEOC’s directive and key priority (as per its December 2012 Strategic Enforcement Plan (SEP)) to eliminate hiring barriers. This priority includes challenges to policies and practices that exclude applicants based on criminal history or credit checks. The EEOC has a keen interest in this area, as it believes that criminal/credit checks have a disparate impact on African-American and Hispanic applicants. As the EEOC pursues the directive, expect it to scrutinize failure-to-hire claims where a criminal history or background check was conducted. Even if the background check was “facially neutral” and was uniformly given to all applicants, the EEOC may investigate to determine if the check had a “discriminatory effect” on certain applicants.

The EEOC asserts that criminal background checks must be “job-related” and “consistent with business necessity.” Employers are advised to consider: (1) the nature and gravity of the offense or conduct; (2) the time that has passed since the offense, conduct or completion of the sentence; and (3) the nature of the job held or sought. The EEOC stresses the need for an “individualized assessment” before excluding an applicant based on a criminal or credit record.

Local/State/Federal Laws

Employers face additional legal hurdles regarding hiring practices because of recent local and state legislative developments. These laws are commonly referred to as “ban the box” laws (i.e., restrictions on the use of criminal history in hiring and employment decisions). Making matters even more difficult, employers have also been subject to a surge in class action litigation under the Fair Credit Reporting Act (FCRA). The FCRA regulates the gathering and use of criminal histories by third-party consumer reporting agencies that conduct background checks on applicants or employees.

Legal Actions

In pursuit of its directive, the EEOC has filed several large-scale lawsuits against employers. We expect that the EEOC will continue to file similar lawsuits throughout 2015 and beyond. Most have been brought as failure-to-hire claims. For example, an African-American woman brought a claim alleging that she was discriminated against based on her credit history. This claim started out as a single plaintiff action, but, after the EEOC conducted its initial investigation, the EEOC dramatically expanded the scope of the initial charge, alleging that the employer was engaging in a “pattern and practice of unlawful discrimination” against: (1) African-American applicants by using poor credit history as a hiring criterion and (2) African-American, Hispanic and white male applicants by using criminal history as a hiring criterion.

Reasonable employers complain that the EEOC has placed them in a Catch-22. Employers must choose between ignoring criminal history and credit background, exposing themselves to potential liability for criminal and fraudulent acts committed by employees, or using this information and facing an EEOC lawsuit for having used it in a discriminatory way.

Takeaway for Employers

Claims involving criminal background checks and credit checks are an EEOC priority. At this time, employers have little guidance from the courts or the EEOC as to exactly what “job-related” and “consistent with business necessity” mean and just how closely a past criminal conviction has to correspond with the duties of a particular job for an employer to legally deny employment to an applicant. Moreover, employers continue to witness expanding restrictions dealing with criminal history at the state and local level based on ban-the-box legislation, as well as with an increasing number of class action lawsuits involving background checks as required under the Fair Credit Reporting Act.

Employers are encouraged to work closely with legal counsel as to what they should and should not ask of applicants, as well as how and when they can use background information they obtain. Given this evolving area of the law, we additionally recommend that employers purchase a robust EPL (employment practices liability) policy that will defend them in the event that the EEOC or a well-skilled plaintiff’s counsel pursues a claim against them for discrimination or for failure to hire based on criminal or credit background checks.