The Coded Gaze: Algorithmic Bias? What is it and Why Should I Care?

Pazia Bermudez-Silverman
Published in Africana Feminisms
May 16, 2018
Joy Buolamwini, “Poet of Code,” at her TEDx talk

In this age of technology, machine-learning algorithms are used every day by people around the world, influencing interactions, categorizations and opportunities at workplaces, in homes and throughout the criminal justice system. However, individuals, organizations and institutions are increasingly uncovering the bias of these algorithms, including racial, gendered, class-based and regional discrimination. Our society relies more heavily on these algorithms each day to determine whether someone is guilty of a crime, whom to hire for a job, which ads to show specific users and how much jail time someone should serve, among many other decisions. Unfortunately, this means that the discriminatory patterns in the data these algorithms are fed in order to “learn” will only be perpetuated throughout systems in the United States and beyond. Those suffering the most from this algorithmic bias tend to be people with darker skin and people gendered as female. When looking at bias from an intersectional perspective, it is clear that Black women are most affected by this issue, which is why those at the forefront of writing about and fighting against it tend to be Black women.

“Poet of Code” Joy Buolamwini, graduate student researcher at the MIT Media Lab and Black female computer scientist, has coined the term “The Coded Gaze” to refer to algorithmic bias. She describes “The Coded Gaze” as the “embedded views that are propagated by those who have the power to code systems” (Buolamwini 2016).

What is machine learning?

Machine learning is an area of computer science in which an algorithm is “trained” on a set of “training data” so that it performs well on new data not included in that set. Examples include facial recognition software, auto-labeling of photographs, autocorrect and other forms of artificial intelligence. Essentially, machine learning is training a machine, such as a computer or a piece of software, to act like a human, and with that comes some human-like bias. For facial recognition software, a training dataset might be a set of photographs, some containing faces and some not, used to make the algorithm “learn” what is a face and what is not. That algorithm would then be used on photographs not from the dataset, in the hope that, after “learning” from the training set, it will be able to recognize whether or not there is a face in these new photographs.
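To make the train-then-generalize idea concrete, here is a minimal sketch, not any real face detector. The “photos” are random numbers standing in for pixel values, the labels are made up, and the model is a simple scikit-learn classifier chosen only for illustration.

```python
# A minimal sketch of the train/evaluate split described above.
# The data is synthetic (random arrays standing in for photographs);
# a real face detector would use labeled photos and a far more capable model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# 200 fake "photos", each flattened to 64 pixel values,
# labeled 1 = contains a face, 0 = does not.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "learn" from the training set

# The real question: does it generalize to new photographs?
print("accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```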

So, what is the danger of machine learning in our society? Unregulated machine-learning algorithms trained on non-diverse data will be skewed toward their creators’ own biases. And if such an algorithm is then spread throughout the country, or even the world, that bias spreads with it, usually without users’ knowledge.
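Here is a small, purely illustrative sketch of that skew. The two synthetic “groups,” their features and the model are all assumptions invented for this example, not anything from the article or a real system; the point is only that a model trained almost entirely on one group can perform far worse on another.

```python
# A minimal sketch of how non-diverse training data skews results.
# Two synthetic groups follow opposite patterns; the training set is
# dominated by group A, so the fitted model fails badly on group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Group A (flip=False) and group B (flip=True) relate features to
    labels in opposite ways, standing in for under-represented variation."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y if flip else y)

# Training data: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, flip=False)
Xb, yb = make_group(50, flip=True)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, balanced data from each group.
for name, flip in [("group A", False), ("group B", True)]:
    X, y = make_group(1000, flip)
    print(name, "accuracy:", (model.predict(X) == y).mean())
```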

How are algorithms biased?

Algorithms can be biased in ways similar to those in which our society can be exclusionary, prejudiced and discriminatory, including racial, regional, gender-based and class-based bias.

Examples of racial and gender-based algorithmic bias

Google Photos labeled Black people as “gorillas”

The Google Photos app’s photo recognition software labeled photos of Black people as “gorillas” (Timberg 2016). Flickr’s auto-tagging system labeled images of Black people as “ape” and “animal,” and labeled concentration camp locations on a map as “sport” or “jungle gym” (Kasperkevic 2015). Hewlett-Packard’s web camera software had difficulty recognizing people with darker skin tones, and Nikon’s camera software mislabeled images of East Asian people as blinking (Crawford 2016). The winners of a beauty contest judged by AI were almost entirely white, with one darker-skinned/Black-identified winner and a few light-skinned/Asian-identifying winners (Beauty.AI 2016, Levin 2016, Pearson 2016).

Why are algorithms biased?

Capitalistic greed, non-diverse training data and a lack of diversity among computer programmers can all lead to algorithmic bias.

Why should we care?

One application of machine learning in which algorithmic bias has an urgent impact on the lives of everyone, but especially on those with marginalized identities, is the use of facial recognition software by law enforcement around the country. “The Perpetual Line-Up,” a research study by Georgetown Law’s Center on Privacy and Technology, explains why this facial recognition software encroaches on our civil rights and liberties and is extremely prejudiced against Black people (Garvie, Bedoya, Frankle 2016).

The reasoning behind this is three-fold.

(1) Data collected for facial recognition databases can come from driver’s licenses, passport photos and IDs, but most commonly comes from mugshots (not necessarily of people charged with crimes, but of anyone arrested). And because of systematic anti-Black and racist bias in policing, Black people are more likely to be arrested and thus to have mugshots taken.

(2) Additionally, people are almost never deleted from the database, even if the charges against them were dropped, which means that facial recognition databases are disproportionately made up of Black/darker-skinned people.

(3) Finally, facial recognition algorithms are rarely tested for accuracy and bias, and facial recognition software has been shown to be biased against dark-skinned/Black folks, most significantly by failing to tell them apart. This is made worse by the fact that these algorithms only report how likely a person is to be a match, not a direct yes or no (see the sketch after the next paragraph).

These three factors in combination make it significantly more likely that Black people will be surveilled, policed and suspected of crimes, including crimes they did not commit.
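The “more or less likely a match” point can be illustrated with a short sketch. Real systems compare learned face embeddings; here the embedding vectors and the threshold value are made up purely for illustration.

```python
# A minimal sketch of scoring a possible match instead of answering yes/no.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

probe = np.array([0.21, 0.80, 0.55])    # face captured by a camera
mugshot = np.array([0.25, 0.77, 0.50])  # face stored in the database

score = cosine_similarity(probe, mugshot)
THRESHOLD = 0.9  # chosen by the system's operator, not by the math

# The algorithm never answers "yes" or "no"; it reports a score,
# and a human-chosen cutoff turns that score into a "match".
print(f"similarity score: {score:.3f}")
print("flagged as a match" if score >= THRESHOLD else "not flagged")
```

Where that cutoff is set, and who decides it, determines how many innocent people get flagged, which is part of why untested systems are so dangerous.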

What can we do?

Unfortunately, as Safiya Umoja Noble says in her recent book Algorithms of Oppression, “algorithmic oppression is not just a glitch in the system but, rather, is fundamental to the operating system of the web.” So, as Noble articulates well for us, the bias of the algorithms used constantly in our society, and the oppression that comes with it, is (1) not uncommon and (2) might be the only reason that our capitalist, hetero-patriarchal, white supremacist society continues to function in this way. Algorithmic oppression and bias, as Toni Morrison would put it, is essentially a “ghost” in the “machine” that is our informational, technological, hetero-patriarchal, white supremacist society. The machine thrives on our relative ignorance of the ghost and on the suppression of activism that its invisibility enables. (Noble 2018, Morrison 1988)

So, I am calling you to action. We need to not only spread awareness of this topic, but also fight back.

If you’d like to learn more, listen to Joy Buolamwini’s TED Talk, “How I’m fighting bias in algorithms,” and get involved with her organization, the Algorithmic Justice League.
