CSCI240 — Bias in Computer Science

Christina Yang
6 min read · Apr 11, 2023


Photo from The New York Times

Every field has bias because humans are inherently biased. We all have our own experiences, values, and beliefs, which shape how we think, make decisions, and interact with others. Because our biases are often unconscious, we may not realize we are making biased decisions, and we might not notice certain biases until someone points them out to us.

In computer science, bias can appear in several forms, such as preexisting bias, technical bias, and emergent bias[5]. One example of bias emerging through interaction was Microsoft’s Tay, an AI chatbot[6]. Tay was designed to learn from its interactions with users on Twitter[6]. However, Tay learned from some of the more negative interactions it had with users and began making racist and sexist comments[6]. Although Tay was eventually taken down, the incident shows why it is important to be aware of these biases and to take steps to reduce them.

I have always been interested in learning about gender and racial bias, which led me to come up with the following research questions:

When creating new technologies, how can we make sure they are accessible to everyone and free of bias? If bias exists, what actions can individuals or communities take to promote greater equality?

As stated above, this post analyzes different types of bias in computer science, provides examples with explanations, identifies the groups impacted by each type of bias, and discusses possible solutions.

Case Study 1 — Preexisting Bias

“Preexisting bias has its roots in social institutions, practices, and attitudes”[5].

Preexisting bias refers to the presence of biases and discriminatory practices that exist in society and are inadvertently perpetuated through technology and computer systems.

Dating apps and websites are clear examples of preexisting bias[4]. When a user creates an account, many of these apps and websites ask the user to pick a gender: either male or female[4].

At the micro level, there is an ethical concern about the potential harm to individuals who identify outside of the binary gender categories and are excluded by this choice[4]. This exclusion can lead to feelings of marginalization and discrimination, and ultimately harm the mental health and well-being of people who are not represented in these apps’ and websites’ gender options.

At the macro level, there is a broader ethical concern about the societal impact of reinforcing traditional gender norms and binary categorization. Systems built this way fail to reflect the diversity of gender identities and support the notion that gender is a binary, fixed characteristic.

To prevent preexisting bias, it is important to recognize that the bias exists and to take steps to mitigate it. On the one hand, ensure that diverse perspectives are represented in the development of algorithms[3]. On the other hand, use frameworks that make it possible to identify and eliminate bias based on race, gender, or other factors[7]. A minimal sketch of what such a bias check could look like follows.
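The Python snippet below is a small, hypothetical illustration of this kind of framework-style check: it measures how often profiles from each gender identity are recommended to other users and flags groups that are served noticeably less often (a simple demographic-parity audit). All names, data, and thresholds are invented for illustration; a real platform would plug in its own logs and fairness criteria.

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates: a simple demographic-parity check.

    `records` is an iterable of (group, was_selected) pairs, e.g. whether
    a profile from that gender identity was recommended to other users.
    """
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / total[group] for group in total}

# Hypothetical audit log: (gender identity, profile was recommended?)
audit_log = [
    ("woman", True), ("woman", True), ("woman", False),
    ("man", True), ("man", False), ("man", True),
    ("nonbinary", False), ("nonbinary", False), ("nonbinary", True),
]

rates = selection_rates(audit_log)
print(rates)  # roughly {'woman': 0.67, 'man': 0.67, 'nonbinary': 0.33}

# Flag any group served well below the best-served group.
best = max(rates.values())
print("Potentially disadvantaged:", [g for g, r in rates.items() if r < 0.8 * best])
```

A check like this does not remove bias by itself, but running it routinely makes disparities visible so that designers can act on them.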

Case Study 2 — Technical Bias

Technical bias emerges from technical limitations or design considerations[5]. As facial recognition technology has become popular and widely used, a range of issues and biases has surfaced.

Photo from ephotozine

In 2010, an Asian-American family found that their Nikon camera repeatedly warned that someone had blinked when they took photos[1]. The reason was that Nikon’s face-detection software had been trained on skewed example datasets that did not adequately include Asian faces[1]. As a result, the camera’s detection worked far less reliably for people who are not Caucasian[1].

From the micro-ethical perspective, Nikon’s camera can cause frustration and exclusion for the individuals affected by the error messages[8]. The Asian-American family who discovered the bias may feel discriminated against and excluded, which can undermine their trust in Nikon’s products[9].

From the macro-ethical perspective, Nikon’s camera can perpetuate stereotypes and discrimination toward people of color and reinforce the idea that their facial features are less valuable or important[8]. This bias can contribute to the further marginalization of Asian people in the technology industry, where their needs and perspectives may be overlooked when new products and technologies are developed. Moreover, the case raises ethical concerns about using biased datasets to train machine learning algorithms: it highlights the importance of ensuring that training datasets are diverse and representative of the population, so that machine learning applications do not perpetuate bias and discrimination. Additionally, developers should rely on trustworthy sources such as first-party data, conduct frequent assessments of their artificial intelligence (AI) and machine learning (ML) systems, distinguish useful information from irrelevant or misleading data, and allocate resources toward real-time analytics[9]. A minimal sketch of what such a per-group assessment could look like is shown below.
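As a hypothetical illustration of such an assessment, the sketch below computes how often a face detector wrongly reports a blink for each demographic group in a labeled evaluation set. The detector and evaluation data here are toy stand-ins, not Nikon’s actual software; the point is only that a routine per-group check like this could surface the skew before release.

```python
from collections import defaultdict

def false_blink_rates(eval_set, detector):
    """Rate at which the detector wrongly reports a blink, per group.

    `eval_set` yields (image, group_label, eyes_actually_open) triples and
    `detector(image)` returns True when it believes the eyes are closed.
    Both are stand-ins for whatever evaluation harness a team actually uses.
    """
    errors = defaultdict(int)
    total = defaultdict(int)
    for image, group, eyes_open in eval_set:
        if not eyes_open:
            continue  # only count images where a blink warning would be wrong
        total[group] += 1
        errors[group] += int(detector(image))
    return {group: errors[group] / total[group] for group in total}

# Toy detector that, like a model trained on skewed data, over-reports
# blinks for one group; "images" here are just (group, id) placeholders.
def toy_detector(image):
    group, _ = image
    return group == "asian"

eval_set = [((g, i), g, True) for g in ("asian", "caucasian") for i in range(5)]
print(false_blink_rates(eval_set, toy_detector))
# -> {'asian': 1.0, 'caucasian': 0.0}: a gap this large points to skewed training data
```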

Case Study 3 — Emergent Bias

Screenshot from Med School Insiders’s YouTube video

Emergent bias occurs when an algorithm is used in a context it was not designed for, and the shift in context creates new issues and biases[5]. In 1990, the National Resident Matching Program (NRMP) used software to assign US medical students to residencies. When the software was designed, only a few married couples sought residencies together. After the software was deployed, more women entered the medical field and more couples wanted to be placed in the same location. Instead of finding compromises in placement, the algorithm frequently assigned a highly preferred program to the first-listed partner and a much lower-ranked one to the second, which resulted in an emergent bias[2]. A toy sketch of this effect follows.
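The Python sketch below illustrates the effect described above. It is not the real NRMP matching algorithm; it simply mimics a strategy that satisfies the first-listed partner’s preferences before considering the second partner’s, using invented program names and capacities.

```python
def naive_couple_match(first_prefs, second_prefs, open_slots):
    """Toy illustration (not the actual NRMP algorithm) of the bias above:
    fill the first-listed partner's top available choice before even
    looking at the second partner's list."""
    slots = dict(open_slots)  # program -> remaining positions

    def take_top(prefs):
        for program in prefs:
            if slots.get(program, 0) > 0:
                slots[program] -= 1
                return program
        return None

    return take_top(first_prefs), take_top(second_prefs)

# Hypothetical programs and preferences; both partners rank the same
# competitive program first.
slots = {"Downtown General": 1, "Rural Clinic": 2}
first = ["Downtown General", "Rural Clinic"]
second = ["Downtown General", "Rural Clinic"]

print(naive_couple_match(first, second, slots))
# -> ('Downtown General', 'Rural Clinic'): the first-listed partner gets the
# preferred program and the second gets whatever is left, even when a joint
# compromise (e.g. both at Rural Clinic) might serve the couple better.
```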

At the micro-ethical level, the algorithm’s emergent bias negatively impacted individual medical students who sought residencies with their partners. Many qualified applicants were placed in less desirable locations, which hurt their career prospects and personal lives[5]. The algorithm’s bias may also have perpetuated gender disparities in medical education by prioritizing the location preferences of the first-listed partner.

From the macro-ethical perspective, the emergent bias in the NRMP algorithm reflects the broader issue of gender inequality in the medical field. It highlights the need for greater diversity and inclusivity in healthcare professions[5]. The algorithm’s bias perpetuated systemic disparities in access to career opportunities, as highly-preferred residency locations tend to be concentrated in wealthier areas or urban centers[5].

To minimize such bias, designers must carefully consider not just the design requirements but also relevant biases in the outside environment and how the context of use may change[5]. If bias emerges due to changes in context, system designers and administrators should take appropriate action to address it[5].

Conclusion

Bias in computer science is a complex and multifaceted issue that requires attention and action from individuals, communities, and technology companies. This post examined preexisting, technical, and emergent biases, all of which can lead to exclusion, discrimination, and the perpetuation of stereotypes. However, solutions can be implemented to mitigate bias, such as ensuring diverse perspectives are represented in algorithm development[9], using frameworks that identify and eliminate bias[7], and relying on trustworthy data sources[9]. By taking action to address bias in computer science, we can create technologies that are accessible, equitable, and beneficial for everyone.

[1] AdminEticas. “Nikon’s Face Detection Bias.” Eticas Foundation, 14 Sept. 2021, https://eticasfoundation.org/nikons-face-detection-bias/

[2] “Algorithmic Bias.” Wikipedia, Wikimedia Foundation, 10 Apr. 2023, https://en.wikipedia.org/wiki/Algorithmic_bias

[3] Cala, Christina, et al. “Comic: How a Computer Scientist Fights Bias in Algorithms.” NPR, 14 Mar. 2022, https://www.npr.org/2022/03/14/1085160422/computer-science-inequality-bias-algorithms-technology

[4] Ciota, Rebecca. “Pre-Existing Bias.” Rebecca Ciota, 17 Nov. 2017, https://ciotarebecca.wordpress.com/2017/12/11/pre-existing-bias/comment-page-1/

[5] Friedman, Batya, and Helen Nissenbaum. “Bias in Computer Systems.” July 1996, https://nissenbaum.tech.cornell.edu/papers/Bias%20in%20Computer%20Systems.pdf

[6] Hammond, Kristian. “5 Unexpected Sources of Bias in Artificial Intelligence.” TechCrunch, 10 Dec. 2016, https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/

[7] Khan, Amina. “Even Computer Algorithms Can Be Biased. Scientists Have Different Ideas of How to Prevent That.” Los Angeles Times, 23 Nov. 2019, https://www.latimes.com/science/story/2019-11-23/fighting-bias-in-computer-algorithms

[8] Vallor, Shannon, and William J. Rewak. “An Introduction to Data Ethics.” Santa Clara University, https://www.scu.edu/media/ethics-center/technology-ethics/IntroToDataEthics.pdf

[9] Wells, Megan. “Data Bias: Why It Matters, and How to Avoid It.” Scuba, https://www.scuba.io/blog/data-bias-why-it-matters-and-how-to-avoid-it
