What’s the Deal with Biases in AI?

Karena Lai
Jul 27, 2020 · 4 min read

In our daily lives, the term "bias" is thrown around carelessly, often with contempt. Many now label artificial intelligence systems as biased, a claim that sets off alarm. However, as we will see, artificial intelligence systems would be absolutely useless without bias, and so would humans.


What is Bias?

We hear the term "bias" often and in many different contexts, including artificial intelligence, probability and statistics, and, most commonly, as a negative personality trait. Unsurprisingly, the word's true meaning has been muddled in recent years as it has taken on a negative connotation.

Bias: "A particular tendency, trend, inclination, feeling, or opinion."

From this, we can see that "bias" does not automatically refer to a negative belief of some sort. In fact, biases are everywhere. You eat because you have a particular inclination (a bias) toward satiation and survival. Without biases, we wouldn't do anything. Put simply, biases are the core drivers of action.

Bias in AI Systems

Headlines such as "Artificial Intelligence Can Be Biased. Here's What You Should Know" and "How to Avoid Bias in Artificial Intelligence" become laughable once we take the true definition of bias into account. Of course artificial intelligence can be biased; how else would it be of any use? And avoiding bias in AI would render it useless, defeating its purpose. Titles like these play on the term's negative connotation, drawing in readers through upfront alarm.

However, the concerns they raise are valid: there are many documented cases of harmful biases in artificial intelligence systems. An AI system that estimates how likely an offender is to re-offend gave African American offenders a medium-to-high risk score more often than Caucasian offenders (58% versus 33%, respectively). An AI recruiting tool created by Amazon rated female candidates significantly lower than male candidates (Amazon says the tool was scrapped). Face-detection systems have been shown to have error rates of up to 34.7% on darker-skinned females versus a maximum error rate of 0.8% on lighter-skinned males, a disparity far too large to ignore.

A rash conclusion one may jump to is that AI systems are explicitly prejudiced against minority groups. But, like most things, it's not that simple. In the first case, the AI system wasn't even given information about the defendant's ethnicity. In the second case, Amazon later reprogrammed the tool to ignore explicitly gender-identifying words, such as "she" and "woman," yet the issue persisted: the bias wasn't induced by those words at all. It was driven by the verbs candidates used to describe themselves, favoring words such as "executed" and "captured," which appear more commonly on male profiles. In the third situation, the crux of the bias stemmed from training sets composed predominantly of Caucasian males, a bias on our part, not the AI's.
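To make the proxy-word effect concrete, here is a toy sketch in Python. It is not Amazon's actual model; the data and the word-scoring scheme are invented for illustration. A naive scorer learns a per-word "hire rate" from past decisions, and even with every gendered word removed, a word like "executed" that happens to correlate with the historically favored group still shifts the score between two otherwise identical candidates:

```python
# Toy illustration (hypothetical data): a naive scorer learns word weights
# from past hiring decisions. All gendered words have been stripped, yet a
# proxy word ("executed") that correlates with the historically favored
# group still drives a gap in the scores.
from collections import Counter

# Past resumes (bags of words) and whether the candidate was hired.
# "executed" appears only on historically hired resumes.
training = [
    (["executed", "led", "built"], 1),
    (["executed", "managed"], 1),
    (["organized", "mentored"], 0),
    (["mentored", "built"], 0),
]

# "Learn" one weight per word: the hire rate among resumes containing it.
counts, hires = Counter(), Counter()
for words, hired in training:
    for w in set(words):
        counts[w] += 1
        hires[w] += hired

def score(words):
    """Average learned hire rate of the resume's known words."""
    rates = [hires[w] / counts[w] for w in words if counts[w]]
    return sum(rates) / len(rates) if rates else 0.0

# Two equally qualified candidates, differing only in one word choice:
print(score(["executed", "built"]))   # 0.75 — proxy word present
print(score(["organized", "built"]))  # 0.25 — proxy word absent
```

The protected attribute never appears in the features, yet the model reproduces the historical disparity through the correlated word: the bias lives in the data, not in any explicit rule.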


From this, we can conclude that AI systems aren't explicitly programmed to perpetuate the negative biases in our society. But something is causing these biases, and it's us. In the re-offense scenario, African Americans did, in fact, re-offend at a higher rate (a 52% chance of re-offending versus 39% for those of Caucasian descent), because police are more inclined to arrest an individual of color than a Caucasian individual, and that inclination is passed through the data to the AI. So where does the bias ultimately come from? Us.

Conclusion

Negative biases in AI systems have close to nothing to do with the systems themselves: they didn't create the data. We gave it to them, expecting them to ignore the subtle prejudice buried within it. It's not the AI's fault; it's ours. If we have any hope of fixing our AI, we must fix ourselves, and we can do so by making conscious efforts to promote equality for people of all backgrounds.

Sources

The Washington Post — Sam Corbett-Davies, Emma Pierson, Avi Feller, and Sharad Goel, 17 Oct. 2016.

Reuters — Jeffrey Dastin, 9 Oct. 2018.

Joy Buolamwini, 2018.

The Black Box

A publication covering fairness and ethics in AI.


Written by Karena Lai

The Black Box — The Youth for Ethical Technology Institute's blog.

