Evils of Computing — Algorithmic bias

Vivian Kiniga
9 min read · Apr 13, 2019


When you pick up your iPhone X or XS, or any other phone fitted with facial recognition, and the screen magically unlocks, do you think about what goes into the technology that identifies your face? Or when you scroll through Facebook and it eerily suggests, "Tag John Doe in this photo?", how does Facebook know who John Doe is? Facial recognition technologies have infiltrated our lives and become part of the norm. Technology has evolved from simply aiding the way we live to dictating the way we live. One example of this evolution is facial recognition, which is used not only to identify and tag people in photos but also by law enforcement and in courts of law to identify suspects.

In small ways, facial recognition makes decisions about people. Kevin Slavin, in his TED talk, asks, "If we depend on complex algorithms to manage our daily decisions — when do we start to lose control?" (Slavin, 2011). The growing interdependence between people and algorithms has many implications, and it calls into question the systems that empower these algorithms to exercise control and make decisions. The advent of computers using data to make decisions on our behalf brings with it a wave of algorithmic errors. As such, the era of automation and computing has been accompanied by human bias and discrimination creeping into algorithms and, through them, into the larger society.

In this paper, I present and analyze the Google Photos facial identification error, in which two black people were tagged as gorillas by Google's algorithm. I explore this occurrence and raise questions about data, classification and the politics of algorithms. I begin by providing a detailed account of the Google Photos case, then explore the policies and structures that led to this discrimination, the implications that accompany algorithmic bias and, lastly, the solution space. The main aim of this paper is to highlight the racial and gender discrimination prevalent in facial recognition algorithms. I focus on two main research questions about the Google Photos case. First, what led to this gross mislabelling of individuals as gorillas? Second, what concerns arise from Google's proposed solution to the error? In tackling these questions, I raise further questions and ethical considerations concerning discrimination in computing systems and its effect on the functioning of society.

Google has long been at the forefront of releasing new technologies. In 2015, it released the Google Photos app, which uses a combination of advanced computer vision and machine learning techniques to help users collect, search and categorize photos (Dougherty, 2015). On Sunday, June 28, 2015, Jacky Alciné, a software engineer from Brooklyn who is black, posted a screenshot on Twitter showing Google Photos tagging him and a friend, who is also black, as gorillas. The facial recognition and photo tagging algorithm had an error that labeled black individuals as gorillas. Alciné expressed his shock and explained that he raised the issue publicly so as to get a faster response and to prompt a sense of accountability from Google. He exposed the new app's failings online, and almost immediately a Google engineer, the chief architect of Google+, responded to his post and assured him that Google was looking into correcting the error. The correction entailed removing the word gorilla from the app's categorizations and a promise to prevent such mistakes in the future. As reported by The New York Times, Google's action included temporarily removing nearly everything having to do with gorillas, including the ability to search for gorillas and the entire gorilla label.

It is easy to blame the algorithm and then design a quick fix to make algorithms less biased. It is important, however, to realize that for such errors to occur, biases in the real world must be seeping into algorithms. A.I. systems are shaped by the priorities and prejudices, conscious and unconscious, of the people who design them, a phenomenon Buolamwini refers to as "the coded gaze" (Buolamwini, 2018). Various structures in place therefore led to this form of algorithmic discrimination. Most facial recognition technologies depend on machine learning algorithms, which automate decision making and are trained on sets of data. These algorithms are subject to algorithmic discrimination if trained with biased data, and the bias more often than not results from how the data is selected. In the case of Google's facial recognition technology, the error resulted from inadequate coverage of different skin tones, because the training data was predominantly white.
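
To make the mechanism concrete, the sketch below (in Python, with entirely synthetic "embeddings" and made-up group names standing in for demographic groups) trains a simple classifier on data that over-represents one group and then measures accuracy for each group separately. It is only an illustration of how skewed training data produces skewed error rates, not a reconstruction of Google's actual system.

```python
# A minimal sketch, not Google's actual pipeline: train a classifier on data
# that over-represents "group A" and check accuracy per group. All names,
# feature counts, and sample sizes are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center, label_flip=0.1):
    """Generate synthetic 'face embeddings' and labels for one group."""
    X = rng.normal(loc=center, scale=1.0, size=(n, 8))
    # Each group's true decision rule sits at a different place in feature space.
    y = (X[:, 0] + X[:, 1] > 2 * center).astype(int)
    flip = rng.random(n) < label_flip          # a little label noise
    y[flip] = 1 - y[flip]
    return X, y

# Training data heavily skewed toward group A; group B is underrepresented.
Xa, ya = make_group(5000, center=0.0)
Xb, yb = make_group(200, center=1.5)
model = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                              np.concatenate([ya, yb]))

# Evaluate on balanced test sets so the disparity is visible.
for name, center in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, center=center)
    print(f"{name}: accuracy = {model.score(X_test, y_test):.3f}")
# Accuracy for group B comes out far lower: the model fit the majority
# group's distribution and generalizes poorly to the minority group.
```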

Unfortunately, such occurrences in the facial recognition space are prevalent. According to The New York Times, one of the most widely used facial recognition data sets was estimated to be more than 75 percent male and more than 80 percent white (Lohr, 2018). In research carried out at the MIT Media Lab to investigate the performance of facial recognition software from companies such as IBM, Microsoft and Face++, Joy Buolamwini found that every company's technology performed better on male faces than on female faces. Sadly, the technology was as much as thirty-four percent less accurate for dark-skinned African women than it was for white men (Buolamwini, 2018).
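
The kind of audit Buolamwini performed can be approximated with a disaggregated evaluation: compute accuracy separately for each intersectional group instead of reporting a single overall number. The sketch below uses a tiny, invented results table (the column names and values are mine, not the Gender Shades benchmark) to show how an overall accuracy figure can hide a large per-group gap.

```python
# A minimal sketch of a disaggregated audit in the spirit of Gender Shades:
# given per-image predictions and annotated demographic attributes, report
# accuracy for each intersectional group. All values here are invented.
import pandas as pd

results = pd.DataFrame({
    "true_label": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "predicted":  [1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0],
    "gender":     ["m", "m", "f", "f", "m", "m", "f", "f", "f", "m", "f", "f"],
    "skin_type":  ["light", "light", "dark", "dark", "light", "dark",
                   "dark", "light", "dark", "light", "light", "dark"],
})

results["correct"] = results["true_label"] == results["predicted"]

# Overall accuracy hides the disparity...
print("overall accuracy:", results["correct"].mean())

# ...while a per-group breakdown exposes it.
by_group = results.groupby(["gender", "skin_type"])["correct"].mean()
print(by_group)
print("largest gap between groups:", by_group.max() - by_group.min())
```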

It is important to recognize that software is only as smart as the data set used to train it. Rosenberg, tracing the history of the concept of data, concludes that data has no inherent truth (Rosenberg, 2013). The ideal fix for algorithmic bias would be to eliminate bias from all data sets; but "raw data is an oxymoron," and the methods of capturing, cleaning, analyzing and categorizing data all inevitably introduce bias into the algorithms trained on it. The data that produced the Google Photos error is a testament to this oxymoron. Additionally, much of the data used for facial recognition technology is subject to path dependency: reusing the same (biased) data is encouraged so that improvements in the technology can be measured against a consistent benchmark. All of these contributors lead to algorithmic bias and help explain how discriminatory errors arise in algorithms.

The risks of facial recognition errors extend far beyond the tagging of photos. It is easy to dismiss this error and trust that facial recognition algorithms will simply learn their way out of it, but the need for algorithmic accountability grows as facial recognition is deployed in critical sectors such as healthcare and law enforcement. Before delving into the implications of algorithmic bias in facial recognition software, it is important to step back and ask who this bias mostly affects and how. A lack of training data diversity has meant facial recognition models overfit and build in racial biases (Open Data Science, 2018). People of color, especially black people and specifically black women, are therefore the most prone to facial recognition errors, and with the increased use of facial recognition in law enforcement, border control, hiring and surveillance, they bear the brunt of the consequences.

False positives in facial recognition technology used by law enforcement to identify suspects lead to the targeting of minority communities: facial recognition systems are more likely either to misidentify or to fail to identify African Americans than other races, errors that could result in innocent citizens being marked as suspects in crimes (Garvie & Frankle, 2016). Furthermore, the use of facial recognition AI in hiring tends to favor white male candidates, further promoting discrimination against black candidates, whose mannerisms the software deems unfavorable. If more white males with generally homogeneous mannerisms have been hired in the past, it is possible that algorithms will be trained to favorably rate predominantly fair-skinned, male candidates while penalizing women and people of color who do not exhibit the same verbal and nonverbal cues (Buolamwini, 2018).
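
The law enforcement risk comes largely from false matches: an identification system compares a probe photo against a gallery and reports a match when a similarity score exceeds a threshold. The sketch below uses invented score distributions to show how a single, seemingly neutral threshold can yield a much higher false match rate for a group whose faces the model separates poorly.

```python
# A minimal sketch, with invented numbers, of why one "neutral" match
# threshold can flag far more innocent people from one group than another.
import numpy as np

rng = np.random.default_rng(1)

# Simulated similarity scores for pairs of DIFFERENT people ("impostor" pairs).
# Assumption for illustration: the model separates group A's faces better,
# so its impostor scores run lower on average than group B's.
impostor_scores = {
    "group A": rng.normal(loc=0.30, scale=0.10, size=10_000),
    "group B": rng.normal(loc=0.45, scale=0.10, size=10_000),
}

THRESHOLD = 0.60  # pairs scoring above this are reported as a "match"

for group, scores in impostor_scores.items():
    false_match_rate = float(np.mean(scores > THRESHOLD))
    print(f"{group}: false match rate at {THRESHOLD} = {false_match_rate:.4f}")
# Group B ends up with many times group A's false match rate, i.e. far more
# people wrongly flagged as a possible suspect, despite the shared threshold.
```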

Facial recognition technology is far from perfect, as the numerous implications of its use show. How, then, can these biases and forms of discrimination be alleviated? Google's solution of removing gorilla as a category raises numerous red flags. As humans, we are prone to making distinctions and assigning consequences to those distinctions, and this propensity carries over into algorithms. Analyzing the system of categorization used by Google raises many of the questions discussed in class. What infrastructure had to be in place for the various categorizations to work? How were people classified? And what problems arose from the categorization used? For categorization to occur, there had to be a set of data used to classify people and objects. People were classified using existing data which, as discussed, was quite biased, and the resulting classification system led to black people being tagged as gorillas. Bowker and Star describe the effects of categorization as witnessed under apartheid, effects that can also be seen in the Google Photos case: "In the process of making people and categories converge, there can be tremendous torque of individual biographies. The advantaged are those whose place in a set of classification systems is a powerful one and for whom powerful sets of classifications of knowledge appear natural. For these people the infrastructures that together support and construct their identities operate particularly smoothly (though never fully so). For others, the fitting process of being able to use the infrastructures takes a terrible toll. To 'act naturally,' they have to reclassify and be reclassified socially" (Bowker & Star, 1999). Consequently, it is clear that Google's proposed solution did not address the underlying concerns of biased data and a flawed categorization system.

Perhaps, then, the question we should be asking is not how to eliminate bias from facial recognition technologies; instead, we should strive to understand that acts of classification, and the algorithms built on them, are inevitably discriminatory in themselves. As discussed in class, translating social processes into formal models not only reflects those processes but also complicates them. Furthermore, it is difficult to keep these algorithms in check and hold them accountable, since algorithms are innately recursive, as evidenced by the algorithmic walk activity. We therefore need to assign responsibility to the right people and shift the question to what counts as discrimination, whom it affects, and which rights, if any, these forms of discrimination violate. The Google Photos incident highlights the importance of using non-homogeneous data sets. It also brings into question the politics of algorithms. The notion that algorithms must be kept secret in order to assure quality should be constantly challenged rather than accepted as a cultural given. When new software is released, perhaps the data sets and the assumptions engineers embed in these systems should be disclosed, both for quality assurance and for accountability; the sketch below suggests one form such disclosure could take.
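
One concrete, if modest, form of disclosure would be a machine-readable "model card" published alongside the software, loosely in the spirit of the datasheet and model-card proposals in the fairness literature. Everything in the sketch below, from the model name to the numbers, is hypothetical; the point is only what kinds of facts would be made known.

```python
# A minimal, entirely hypothetical sketch of a machine-readable model card
# disclosing a system's training data composition, evaluation and limits.
import json

model_card = {
    "model": "face-tagger-v2 (hypothetical)",
    "intended_use": "suggesting name tags for personal photo libraries",
    "not_intended_for": ["law enforcement identification", "hiring decisions"],
    "training_data": {
        "source": "licensed, user-contributed photos",
        "size": 1_200_000,
        "demographic_composition": {          # disclosed, not hidden
            "male": 0.74, "female": 0.26,
            "lighter_skin": 0.81, "darker_skin": 0.19,
        },
    },
    "evaluation": {
        "disaggregated_accuracy": {
            "lighter_male": 0.97, "lighter_female": 0.93,
            "darker_male": 0.88, "darker_female": 0.79,
        },
        "known_limitations": "error rates are substantially higher for "
                             "darker-skinned women; not suitable where "
                             "misidentification carries legal consequences",
    },
}

print(json.dumps(model_card, indent=2))
```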

In summary, racial and gender bias exists in facial recognition technologies. These biases negatively impact people of color, specifically women of color, and can be traced back to a lack of diverse data sets and to flawed categorization techniques. Researchers should therefore use diverse data sets and thoroughly question how categories are formulated, what their implications are, and what consequences are assigned to them.

Bibliography

Bowker, G. C., & Star, S. L. (1999). The Case of Race Classification and Reclassification under Apartheid. In Sorting Things Out: Classification and Its Consequences (pp. 195–225). Cambridge, MA: MIT Press.

Buolamwini, J. (2018, June 21). When the Robot Doesn’t See Dark Skin. The New York Times. Retrieved from https://www.nytimes.com/2018/06/21/opinion/facial-analysis-technology-bias.html

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15.

Dougherty, C. (2015, July 1). Google Photos Mistakenly Labels Black People “Gorillas.” The New York Times Bits Blog. Retrieved April 12, 2019, from https://bits.blogs.nytimes.com/2015/07/01/google-photos-mistakenly-labels-black-people-gorillas/

Garvie, C., & Frankle, J. (2016, April 7). Facial-Recognition Software Might Have a Racial Bias Problem. The Atlantic. Retrieved April 12, 2019, from https://www.theatlantic.com/technology/archive/2016/04/the-underlying-bias-of-facial-recognition-systems/476991/

INFO 3561 lecture slides and discussions

Lohr, S. (2018, February 9). Facial Recognition Is Accurate, if You’re a White Guy. The New York Times. Retrieved from https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html

Rosenberg, D. (2013). Data Before the Fact. In L. Gitelman (Ed.), Raw Data Is an Oxymoron (pp. 15–40). Cambridge, MA: MIT Press.

Open Data Science (ODSC). (2018, October 15). The Impact of Racial Bias in Facial Recognition Software. Medium. Retrieved April 12, 2019, from https://medium.com/@ODSC/the-impact-of-racial-bias-in-facial-recognition-software-36f37113604c

Slavin, K. (2011). How Algorithms Shape Our World [Video]. TEDGlobal.
