What is Algorithmic Bias?

Ashley Kim
Published in ACM at UCLA
Apr 16, 2022

This blog post does not necessarily reflect the opinions of ACM at UCLA or the UCLA CS department.

Algorithmic Bias (image source: https://www.internetandtechnologylaw.com/algorithm-bias-ai/)

With the growing presence of AI in everyday life, automated systems have been deployed to facilitate daily activities in fields such as education, labor, and technology. AI is one of the major forces driving the Fourth Industrial Revolution and is poised to reshape the technological world. What really requires more attention, however, is the discrimination by sex, race, and disability, known as algorithmic bias, that can enter as early as the step where data is collected.

Now, What is Algorithmic Bias?

Here is a personal testimony from Christina Kim, a high school student in New York: “I was on a school bus with my friend and was taking a selfie. After taking a selfie, I posted it on Twitter, using the automatic cropping system to crop out the unnecessary background. When I checked my post, Twitter decided to crop out my friend and post my face only; my friend was African American.”

Here is another anecdote from Elizabeth Kim, an undergraduate student at Boston University: “When my friend from Bahrain and I used the hotel bathroom with an automatic soap dispenser, I could wash my hands without any problem, yet the dispenser did not work for my friend.”

More examples include:

  • Apple’s credit card algorithm discriminating against women by giving them lower credit limits than men
  • COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an AI system used in court decisions to predict how likely a defendant is to reoffend, discriminating against Black defendants
  • Amazon’s automated resume screener discriminating against female applicants

These automated systems harm minority groups without any malicious intent, since the systems themselves have no capacity for feeling or intention. The harms they cause include:

  • Allocative harm: these biased systems may withhold opportunities from minority groups, especially in areas such as mortgages, health insurance, trials, and jobs. Here is an example from “The Trouble with Bias” by Kate Crawford: “Automated eligibility systems, ranking algorithms, and predictive risk models control which neighborhoods get policed, which families attain needed resources, who is short-listed for employment, and who is investigated for fraud.”
  • Representational harm: this type of harm occurs when the use of these systems reinforces the subordination of minority groups by race, class, gender, etc. A clear example is Google Image Search. When certain words are searched on the image tab, Google produces a results page that may contain bias; its ranking, although unintentional, may perpetuate a stereotype connected to that keyword.

Then why does this discrimination occur?

Image source: https://medium.com/swlh/responsible-ai-how-we-do-we-build-systems-that-dont-discriminate-2592c896fb89

To explore how AI operates, we should first consider the term “machine learning,” which is a type of artificial intelligence. When developers create a model, the model is essentially ‘trained’ on data. Once trained, the model gains the ability to make its own decisions about the information it processes. The developers then test the model to improve its accuracy, and the model keeps learning how to make judgements. While these steps seem relatively simple, we need to consider when the bias enters. The bias mainly enters at the step where developers feed data to the model, because that data often crucially lacks diversity. For instance, when pictures of human faces were collected to train the automatic cropping system, the majority were of white people’s faces while only a few were of Black people’s faces. Another example is Amazon’s resume-screening model, which was trained mostly on men’s resumes and therefore based its decision-making on the patterns in men’s applications.
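The effect is easy to reproduce in a toy setting. The following sketch is purely illustrative: the data, group sizes, and model are made up and are not taken from Twitter’s or Amazon’s actual systems. It trains a simple classifier on a dataset where one group supplies 95% of the examples, then measures accuracy separately for each group; the under-represented group typically ends up with noticeably worse accuracy.

```python
# A minimal sketch (invented data, not any company's real system) showing how
# an imbalanced training set can produce a model that performs worse for the
# under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples whose features are centered at `shift`,
    with labels that depend on the features."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# 95% of the training data comes from group A, 5% from group B,
# and the two groups have different feature distributions.
X_a, y_a = make_group(1900, shift=0.0)   # over-represented group
X_b, y_b = make_group(100,  shift=3.0)   # under-represented group

X_train = np.vstack([X_a, X_b])
y_train = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group: the model typically does
# noticeably worse on group B, the group it rarely saw during training.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=3.0)
print("accuracy on group A:", accuracy_score(y_a_test, model.predict(X_a_test)))
print("accuracy on group B:", accuracy_score(y_b_test, model.predict(X_b_test)))
```

The point is not the specific numbers but the mechanism: a model optimized on whatever data it is given will quietly fit the majority of that data and neglect the groups it rarely sees.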

An important thing to note is that these companies did not recognize these problems until users’ complaints drew public attention. This also reflects that, because the data was collected by humans, the biases ultimately enter from the inequality that already exists in society and in human behavior. In the case of Twitter’s cropping system, it took three years of complaints before Twitter deactivated the feature. What makes these biases more serious is that there is currently no explicit regulation requiring each step of development to be checked: ultimately, the product is simply released for the world to use.

With these biases so plainly present, what should be done?

As caution rises against algorithmic bias, companies may argue that they are already taking precautions by using more representative data and by considering users from minority groups. Yet these actions do not guarantee that a model will perform fairly for all of its users.

One important suggestion is to enact detailed regulation that scrutinizes what kind of data is used, how a model is trained, and whether the model took into account the minority groups who may be discriminated against. Such regulation could simply require a document in which the company reports each production step in detail, listing the data types and the training and testing procedures. There must then be an organization that reviews these documents and permits the product’s use. Another suggestion is to assign a “reviewer” to each model, who would regularly check the model’s performance and statistically measure any discrimination that occurs (a minimal sketch of such a check appears at the end of this post). If discrimination is detected, the developers would retrain the model before publishing it.

Algorithmic bias is not only for developers to fix; the public, programmers, and technologists can help as well. One important way to reduce algorithmic bias is to raise public awareness of AI’s inherent vulnerability to producing discrimination. The public can learn about artificial intelligence through public education and projects that explain how AI is developed, so that people know beforehand how a product is built and how it works. There are numerous YouTube videos that explain the basics of machine learning and algorithms, even for elementary school students. Moreover, the government could offer more educational opportunities for students to learn about algorithmic bias in AI products, so that these students can devise more strategies for reducing bias in the future.

As for individual effort, people should pay more attention to how and why these biases occur, and if they notice any bias, they can support retraining the model through public action or even simple social media posts. Public attention matters because it is a crucial channel through which companies and organizations hear about and recognize possible bias. Through public feedback, these companies can decide what to prioritize when training their models and avoid presenting bias to the public. Whether you are a student, an engineer, or someone with no background in this field, algorithmic bias will ultimately touch your life, which is why this attention is invaluable.
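To make the “reviewer” suggestion above more concrete, here is a minimal sketch of the kind of statistical check such a reviewer might run: compare the rate of favorable model decisions across demographic groups and flag the model for retraining when the gap is too large. The decisions, group labels, and threshold below are invented for illustration; a real audit would choose fairness metrics and thresholds appropriate to its domain.

```python
# A minimal, illustrative fairness check: per-group selection rates and a
# disparity flag. The data and the 0.2 threshold are made up, not a legal
# or regulatory standard.
import numpy as np

def selection_rates(predictions, groups):
    """Return the fraction of positive (favorable) predictions per group."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = predictions[mask].mean()
    return rates

def flag_disparity(rates, max_gap=0.2):
    """Flag the model if any two groups' selection rates differ by more
    than `max_gap` (an illustrative threshold)."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

# Hypothetical approval decisions (1 = approved) for two groups.
preds = np.array([1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

rates = selection_rates(preds, groups)
flagged, gap = flag_disparity(rates)
print("selection rates:", rates)   # group A ~0.83, group B ~0.17
print("gap:", round(gap, 2), "| flag for retraining:", flagged)
```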
