What’s the Deal with Biases in AI?
In our daily lives, the term “bias” is thrown around carelessly and contemptuously. Lately, many have been labeling artificial intelligence systems as biased, and eyes light up with alarm. However, as we will see, artificial intelligence systems would be useless without bias, and so would humans.
What is Bias?
We hear the term “bias” often and in many different contexts: in artificial intelligence, in probability and statistics, and, most commonly, as a negative personality trait. Unsurprisingly, the true definition of the term has been muddled in recent years as the word has taken on a negative connotation.
Bias: “a particular tendency, trend, inclination, feeling, or opinion” (Dictionary.com).
From this, we can determine that “bias” doesn’t automatically refer to a negative belief of some sort. In fact, we see biases everywhere. You eat because you have a particular inclination (a bias) toward satiation and survival. Without biases, we wouldn’t do anything. Put simply, biases are the core drivers of action.
Bias in AI Systems
Headlines such as “Artificial Intelligence Can Be Biased. Here’s What You Should Know” and “How to avoid bias in Artificial Intelligence” become laughable once we take the true definition of bias into account. Of course artificial intelligence can be biased; how else would it be of any use? Avoiding bias in AI would render it useless and defeat its purpose. Titles like these play on the term’s negative connotation, drawing in readers through upfront alarm.
However, the concerns they raise are valid; there are many cases of negative biases in artificial intelligence systems. A system that estimates how likely an offender is to re-offend gave African American offenders a medium-to-high-risk score more often than Caucasian offenders (58% versus 33%). An AI recruiting tool built by Amazon systematically downgraded female candidates relative to male ones (Amazon says the tool was scrapped). Facial analysis systems have shown error rates of up to 34.7% on darker-skinned females against a maximum error rate of 0.8% on lighter-skinned males, a disparity far too large to ignore.
A rash conclusion one might jump to is that AI systems are explicitly prejudiced against minority groups. But, like most things, it isn’t that clear. In the first case, the system wasn’t even given information about the defendant’s ethnicity. In the second case, Amazon reprogrammed the tool to ignore explicitly gendered words such as “she” and “woman.” The issue persisted, and engineers realized the bias wasn’t induced through those words at all: it came from the verbs candidates used to describe themselves, favoring words such as “executed” and “captured,” which appear more often on male résumés. In the third case, the crux of the bias was training sets composed predominantly of Caucasian males, a bias on our part, not the AI’s.
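To see how a model can learn a proxy like this without ever seeing a gender word, here is a minimal sketch in Python using scikit-learn. The résumés, labels, and verb patterns are entirely hypothetical, invented for illustration, and this is not Amazon’s actual data or system:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: résumé snippets and whether the
# candidate was hired. No gender words appear anywhere, but the
# "executed"/"captured" phrasing happens to correlate with the
# historically favored (male) group.
resumes = [
    "executed product launch and captured market share",   # hired
    "executed migration plan and captured key accounts",   # hired
    "led mentoring program and coordinated team events",   # not hired
    "coordinated outreach and organized community work",   # not hired
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two new résumés with identical substance but different verb styles:
tests = ["executed and captured project goals",
         "coordinated and organized project goals"]
print(model.predict_proba(vectorizer.transform(tests))[:, 1])
# The "executed/captured" résumé scores higher: the gendered proxy
# survives even though no explicit gender word exists in the data.

Scrubbing words like “she” cannot fix this, because the model never needed them; any feature correlated with the historical outcome will do.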
From this, we can conclude that AI systems aren’t explicitly programmed to perpetuate the negative biases in our society. Something is causing these biases, though, and it’s us. In the reoffending scenario, the recorded data did show African Americans re-offending more often (a 52% chance of reoffending versus 39% for those of Caucasian descent), in part because police are more inclined to arrest a person of color than a Caucasian individual, and those inclinations pass through the data to the AI. So ultimately, where does the bias come from? Us.
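A small simulation makes this mechanism concrete. In the sketch below (all numbers are invented for illustration), two groups re-offend at exactly the same true rate, but one group is policed more heavily, so its reoffenses are recorded more often. A model trained on the recorded labels, without ever seeing group membership, still assigns that group higher risk through a correlated feature:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
true_reoffend = rng.random(n) < 0.45     # identical base rate for both

# A reoffense only enters the data if it leads to an arrest, and
# group A is policed more heavily (80% vs. 50% detection, assumed).
detection = np.where(group == 0, 0.8, 0.5)
recorded = true_reoffend & (rng.random(n) < detection)

# Train WITHOUT the group feature, using only a correlated feature
# (say, neighborhood, which tracks group membership imperfectly).
neighborhood = (group + (rng.random(n) < 0.2)) % 2
X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, recorded)

for g, name in ((0, "A"), (1, "B")):
    risk = model.predict_proba(X[group == g])[:, 1].mean()
    print(f"group {name}: mean predicted risk = {risk:.3f}")
# Group A receives higher risk scores despite identical true behavior:
# the enforcement bias in the labels, not the model, creates the gap.

Dropping the ethnicity column, like dropping “she” from the résumés, doesn’t remove the bias; the skew lives in the labels themselves.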
Conclusion
Negative biases in AI systems have almost nothing to do with the systems themselves. They didn’t create the data; we gave it to them, expecting them to ignore the subtle prejudice buried within it. It’s not the AI’s fault; it’s ours. If we have any hope of fixing our AI, we must fix ourselves, and we can do so by making conscious efforts to promote equality for people of all backgrounds.
Sources
Corbett-Davies, Sam, Emma Pierson, Avi Feller, and Sharad Goel. “A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear.” The Washington Post, 17 Oct. 2016.
Dastin, Jeffrey. “Amazon scraps secret AI recruiting tool that showed bias against women.” Reuters, 9 Oct. 2018.
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research, 2018.