Op-ed: Tackling biases in natural language processing

By Tung-Ling (Tony) Li, MEng ’22 (IEOR)

--

This op-ed is part of a series from E295: Communications for Engineering Leaders. In this course, Master of Engineering students were challenged to communicate a topic they found interesting to a broad audience of technical and non-technical readers. As an opinion piece, the views shared here are neither an expression of nor endorsed by UC Berkeley or the Fung Institute.

Photo by Markus Spiske on Unsplash

From Siri to Google Translate, natural language processing has helped make our lives easier. However, after Amazon's sexist AI hiring tool and Netflix's documentary "Coded Bias" became widely known to the public, people began to realize that threats lurk behind this remarkable technology. As prospective engineers and AI developers, we are obligated to be aware of these issues and to know how to avoid them while developing these technologies.

What is Natural Language Processing (NLP)?

Natural Language Processing (NLP) emerged in the 1950s at the intersection of artificial intelligence and linguistics. NLP leverages the power of computer science and artificial intelligence to let machines understand and analyze text in order to solve a wide range of problems. Today, NLP is applied in many fields, including business, medicine, and education. For example, companies use NLP to build customer service chatbots that offer clients immediate, 24/7 problem-solving, and email providers employ NLP to detect and filter spam and fraudulent messages, giving internet users a safer communication environment. Yet even as NLP has helped society solve plenty of problems, the problems hiding behind it have gradually come to the surface.
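
As a toy illustration of the spam-filtering use case mentioned above, the sketch below trains a tiny bag-of-words classifier with scikit-learn. The handful of example emails are made up; a real spam filter would be trained on far more data.

```python
# A minimal, illustrative spam classifier (hypothetical toy data),
# assuming scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Congratulations, you won a free prize, click now",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, see the agenda attached",
    "Can you review the draft report before Friday",
]
labels = ["spam", "spam", "ham", "ham"]

# Vectorize the text with TF-IDF, then fit a Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(emails, labels)

print(clf.predict(["Claim your free reward now"]))  # likely 'spam'
```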

Gender Bias

Gender bias is one of the most prominent issues in NLP. Researchers have found that many advanced word-embedding and language models, such as GPT-3, tend to associate men with occupations that require higher levels of education. When such a model is asked "What is the gender of a doctor?", it is far more likely to respond "male"; asked "What is the gender of a nurse?", it is far more likely to respond "female." Other researchers noticed that when they translated "He is a nurse. She is a doctor." into Hungarian and then back into English, the sentence became "She is a nurse. He is a doctor." This evidence clearly demonstrates how gender bias has crept into our NLP systems.
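
These associations can be probed directly in a pretrained word-embedding model. The sketch below is a minimal illustration, assuming the gensim library and the small GloVe vectors it distributes through its downloader; the exact similarity scores vary by model, but occupation words often sit measurably closer to one gendered pronoun than the other.

```python
# A minimal probe of occupation-gender associations in pretrained word embeddings,
# assuming gensim and its downloadable GloVe vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # downloads ~66 MB on first use

for job in ["doctor", "engineer", "nurse", "receptionist"]:
    sim_he = vectors.similarity(job, "he")
    sim_she = vectors.similarity(job, "she")
    leaning = "male-leaning" if sim_he > sim_she else "female-leaning"
    print(f"{job:>12}: he={sim_he:.3f}  she={sim_she:.3f}  -> {leaning}")
```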

Racial Bias

Racial bias has also been identified in NLP systems. Several sentiment analysis systems, for instance, tend to assign more negative sentiment scores to African-American English. If a sentence is written in non-Standard American English, such as "Bored af den my phone finna die!!!", these systems are more likely to label it as negative.
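
A simple way to see this kind of disparity for yourself is to run paired sentences, one in African-American English and one paraphrased into Standard American English, through an off-the-shelf sentiment analyzer. The sketch below uses NLTK's VADER analyzer purely as an illustration (it is not one of the systems from the study), and the exact scores will depend on the tool used.

```python
# An illustrative sentiment probe of paired AAE / Standard American English sentences,
# assuming NLTK is installed (the VADER lexicon is downloaded on first run).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

pairs = [
    ("Bored af den my phone finna die!!!",
     "I am bored and my phone is about to die!!!"),
]
for aae, sae in pairs:
    # 'compound' is VADER's overall score in [-1, 1]; compare the two phrasings.
    print("AAE:", sia.polarity_scores(aae)["compound"])
    print("SAE:", sia.polarity_scores(sae)["compound"])
```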

Why Are Biases in NLP Problematic?

According to a study by the Brookings Institution, biased NLP algorithms have an immediate adverse effect on the world we live in: they discriminate against certain groups of people and make people's perspectives more discriminatory through the online media they are exposed to daily. Additionally, the Harvard Business Review has reported that biases in NLP can hurt people by preventing them from gaining opportunities and participating in the economy and society. For example, Amazon's old resume-filtering algorithm displayed a strong preference for words such as "executed" or "captured" that were used more often by male applicants.

Solution 1: Data Manipulation

One of the main reasons NLP algorithms are biased is that the dataset used to train the model is imbalanced. For example, if more of the training data associates "doctors" with "male," the resulting model will be more likely to predict that a "doctor" is "male."

Therefore, one of the best ways to reduce bias in NLP is to address the imbalance in the data, and there are several ways to do so. For instance, one can use data augmentation algorithms such as SMOTE to synthesize additional examples for the minority group in the dataset. Alternatively, if the dataset is very large, one can remove some data from the majority group to make it more balanced, as sketched below.
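
The following sketch illustrates both ideas on a tiny, hypothetical dataset, assuming scikit-learn and the imbalanced-learn library. Note that SMOTE operates on numeric feature vectors, so the text has to be vectorized (here with TF-IDF) before it can be resampled.

```python
# A minimal sketch of rebalancing an imbalanced text dataset before training,
# assuming scikit-learn and imbalanced-learn; the toy corpus and labels are made up.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

texts = [
    "the service was fine", "no complaints here", "worked as expected",
    "arrived on time", "does the job", "happy with the result",
    "quality was acceptable", "setup was straightforward",
    "this was a terrible experience", "completely useless", "would not recommend",
]
labels = ["ok"] * 8 + ["bad"] * 3  # 'bad' is the under-represented minority class

# SMOTE needs numeric features, so vectorize the text first.
X = TfidfVectorizer().fit_transform(texts)

# Oversample: synthesize new minority examples between existing neighbors.
X_up, y_up = SMOTE(k_neighbors=2, random_state=0).fit_resample(X, labels)
print(Counter(labels), "->", Counter(y_up))

# Undersample: when data is plentiful, drop examples from the majority class instead.
X_down, y_down = RandomUnderSampler(random_state=0).fit_resample(X, labels)
print(Counter(labels), "->", Counter(y_down))
```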

Solution 2: Bias Fine-Tuning

Another useful method for addressing bias in NLP is bias fine-tuning. The method uses transfer learning: a model is first trained on a relatively unbiased dataset and then fine-tuned on the more biased target dataset. This lets the model avoid absorbing the biases in the target training data while still being sufficiently trained for the target task. The approach has been shown to be effective: Park et al. (2018) trained a Convolutional Neural Network (CNN) by transferring from a gender-unbiased Twitter dataset and fine-tuning on a gender-biased Twitter dataset, and the resulting accuracy was almost equivalent to that of a model trained directly on an unbiased dataset.
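
The two-stage idea can be sketched as follows, here with Keras and placeholder arrays standing in for the real unbiased and biased datasets. The data, architecture, and hyperparameters are illustrative only, not those used by Park et al.

```python
# A minimal sketch of the two-stage "bias fine-tuning" idea, assuming TensorFlow/Keras.
# The random arrays below are placeholders for real tokenized datasets.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN = 5000, 40

# A small text CNN classifier (binary output).
model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.Conv1D(64, 3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stage 1: train on the larger, less biased corpus (placeholder data here).
X_unbiased = np.random.randint(0, VOCAB, size=(256, MAXLEN))
y_unbiased = np.random.randint(0, 2, size=(256,))
model.fit(X_unbiased, y_unbiased, epochs=2, verbose=0)

# Stage 2: fine-tune on the smaller, biased target dataset with a lower learning rate,
# adapting to the task without re-learning that dataset's skew from scratch.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
X_target = np.random.randint(0, VOCAB, size=(64, MAXLEN))
y_target = np.random.randint(0, 2, size=(64,))
model.fit(X_target, y_target, epochs=2, verbose=0)
```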

Solution 3: Form Diverse AI Development & Audit Teams

Beyond the dataset used to train the model, the team developing the model can also be a crucial factor in terms of bias. In a study accepted to the Navigating the Broader Impacts of AI Research workshop at the 2020 NeurIPS machine learning conference, researchers suggest that biased models are caused not only by imbalanced data but also by the people on the development team. The study presents evidence that the level of bias in a model is negatively correlated with the diversity of the team that built it.

According to a study from the Brookings Institution, a diverse AI and ethics audit team can be a crucial part of developing machine learning technologies that benefit society. When a diverse audit group reviews trained NLP models, participants from different backgrounds can examine the models from multiple perspectives and help the development team spot potential biases against minority groups. A diverse development team can likewise draw on its members' lived experiences to suggest how a model should be modified.

Conclusion

As NLP algorithms grow more influential in our lives, concerns about bias are growing with them. It is crucial that all AI development companies take these issues into account to prevent discrimination from spreading further through advanced technologies. This article has explored several causes of bias in NLP and outlined a few demonstrably effective approaches to addressing them, in the hope of helping prospective NLP developers create technologies that are truly beneficial to society.


References:

Allen, J. F. (2003, January). Natural language processing. Encyclopedia of Computer Science, pp. 1218–1222.

Blodgett, S. L., & O’Connor, B. (2017, January). Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English. University of Massachusetts Amherst.

Caliskan, A. (2021, March). Detecting and mitigating bias in natural language processing. Retrieved from Brookings Institution: https://www.brookings.edu/research/detecting-and-mitigating-bias-in-natural-language-processing/#cancel

Cowgill, B., Dell’Acqua, F., Deng, S., Hsu, D., Verma, N., & Chaintreau, A. (2020, December). Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics. Columbia University.

Majumder, P. (2021). Interesting NLP Use Cases Every Data Science Enthusiast should know! Retrieved from Analytics Vidhya: https://www.analyticsvidhya.com/blog/2021/05/interesting-nlp-use-cases-every-data-science-enthusiast-should-know/

Manyika, J., Silberg, J., & Presten, B. (2019, October). What Do We Do About the Biases in AI? Retrieved from Harvard Business Review: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

Nadkarni, P. M., Ohno-Machado, L., & Chapman, W. W. (2011). Natural language processing: an introduction. Journal of the American Medical Informatics Association.

Park, J. H., & Shin, J. (2018). Reducing Gender Bias in Abusive Language Detection. Empirical Methods in Natural Language Processing (EMNLP).

Sun, T., Gaut, A., Tang, S., Huang, Y., ElSherief, M., Zhao, J., & Chang, K.-W. (2019). Mitigating Gender Bias in Natural Language Processing: Literature Review. Association for Computational Linguistics (ACL).

