Source: Bias in NLP Processing [8]

A Survey of ‘Bias’ in Natural Language Processing Systems

This article is a brief review of the paper “Language (Technology) is Power: A Critical Survey of ‘Bias’ in NLP” by Blodgett et al. The authors surveyed 146 papers that analyze “bias” in NLP systems. They point out several shortcomings of these studies and classify the papers into six categories, and they then offer three recommendations for overcoming the gaps in the existing literature by analyzing the relationships between language and social hierarchies. Their focus is not only on how a particular bias can be harmful, but also on whom it harms and why.

  1. Selection of Papers: The method for selecting the 146 relevant papers is summarized below (a minimal sketch of this kind of filtering follows the list):
    a) First, the authors excluded all papers involving speech, limiting the discussion to text only.
    b) Then the ACL Anthology (which currently contains 63,756 papers on computational linguistics and natural language processing) was searched for papers published before May 2020 containing the keywords “bias” or “fairness”.
    c) Papers using “bias” in a social context were considered relevant; papers discussing “bias” in any other sense were excluded.
    d) The citation graph of the resulting set was then explored, retaining any paper addressing bias in NLP systems that either cited, or was cited by, a paper already in the set.
    e) Finally, leading conferences and workshops in machine learning, natural language processing, and human-computer interaction were inspected manually for any remaining papers discussing bias in NLP systems; the authors found that all relevant papers had already been included.
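
Purely as an illustration of this kind of selection pipeline (my own sketch, not the authors’ actual code), a minimal Python version of the keyword search and citation-graph expansion might look like the following; the `Paper` records and toy corpus are hypothetical stand-ins for ACL Anthology metadata:

```python
# Illustrative sketch only: a keyword-plus-citation-graph selection pipeline
# loosely following the steps above. The Paper records and toy corpus are
# hypothetical stand-ins, not the ACL Anthology or the authors' actual code.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Paper:
    paper_id: str
    title: str
    abstract: str
    cites: frozenset = field(default_factory=frozenset)  # ids of papers this one cites

KEYWORDS = ("bias", "fairness")

def keyword_match(paper: Paper) -> bool:
    # Step (b): keep papers mentioning "bias" or "fairness".
    text = f"{paper.title} {paper.abstract}".lower()
    return any(kw in text for kw in KEYWORDS)

def expand_by_citations(seed: set, corpus: dict) -> set:
    # Step (d): repeatedly add papers that cite, or are cited by, the current set.
    selected = set(seed)
    changed = True
    while changed:
        changed = False
        for pid, paper in corpus.items():
            if pid in selected:
                continue
            cites_selected = bool(paper.cites & selected)
            cited_by_selected = any(pid in corpus[s].cites for s in selected)
            if cites_selected or cited_by_selected:
                selected.add(pid)
                changed = True
    return selected

# Toy usage. Note that step (c) -- judging whether "bias" is used in a social
# sense -- is a manual relevance decision and is not automated here.
corpus = {
    "p1": Paper("p1", "Measuring gender bias in word embeddings", ""),
    "p2": Paper("p2", "Fairness in coreference resolution", "", frozenset({"p1"})),
    "p3": Paper("p3", "Racial disparities in language identification", "", frozenset({"p2"})),
}
seed = {pid for pid, p in corpus.items() if keyword_match(p)}   # {"p1", "p2"}
print(sorted(expand_by_citations(seed, corpus)))                # ['p1', 'p2', 'p3']
```

Here p3 never mentions the keywords, but it cites a selected paper, so the citation-graph step pulls it in; that is exactly the gap step (d) is meant to close.
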
  2. Classification of Papers: The categorization of the papers is based on a previously proposed taxonomy of harms that distinguishes between allocational and representational harms (Barocas et al., 2017; Crawford, 2017). Allocational harms arise when resources or opportunities are allocated disproportionately across social groups. Representational harms arise when a system represents some social groups less favorably than others, demeans them, or fails to recognize them altogether. The authors extend this classification into the following six categories:

a) Allocational harms

b) Stereotyping that reiterates negative generalizations about social groups

c) Other representational harms such as misrepresentation of different social groups

d) Questionable correlations between system behavior and features of language specific to certain social groups

e) Vague or unstated descriptions of “bias” in the research

f) Surveys, frameworks, and meta-analyses

The authors present a table giving the number of papers in each of these categories, broken down by motivations and techniques.

Fig 1: A categorization of the 146 papers [Blodgett et al. 2020]

3. Findings: Problems in the analysis of Bias in NLP

3.1 Problems originating from the motivations of the papers

The authors found that some papers state no motivation or only a vague one, while others give multiple motivations. Papers also frequently fail to define terms like “discrimination”, “bias”, and “injustice” precisely. Nearly one-third of the papers’ motivations center on system performance rather than on normative concerns. Even when papers give concrete motivations, it is not always clear to what extent, to whom, and in what ways the behaviors described as “bias” are harmful.

The authors also discussed how different groups of researchers define “bias” inconsistently, and how this inconsistency carries over into the techniques proposed to mitigate it. They further noted that work focused mainly on the downstream effects of bias tends to neglect representational harms in their own right, e.g. being rendered invisible in search results because of dominant language norms.

3.2 Problems originating from techniques

Many papers propose quantitative techniques without engaging with the literature on bias outside NLP, the papers on stereotyping being the main exception. Moreover, the authors found mismatches between motivations and techniques: for example, only 4 of the 30 papers that mention allocational harms in their motivations actually propose techniques for addressing them.

In addition, most existing work treats datasets or system predictions as the source of bias, but fails to consider the development and deployment of NLP systems as potential sources of bias as well.

4. Recommendations for the analysis of bias in NLP systems: The authors propose the following recommendations to the NLP community for addressing bias in NLP systems:

4.1 Language and social hierarchies: Researchers should attend to the wider social contexts in which language is used, and to how language ideologies shape, and are shaped by, social hierarchies. For example, toxicity detection systems often incorrectly flag African-American English as more toxic than mainstream English because of society’s anti-Black stigmas; a simple per-dialect error analysis, sketched below, can make such disparities visible. Similarly, “gender-fair” language has been proposed to reduce asymmetries between groups by making language less dehumanizing and more inclusive. The NLP community should be aware of how ideologies about language, social hierarchies, and NLP systems are co-produced, so that existing inequalities are not reiterated and re-established by technology. NLP researchers should investigate linguistic norms, including standard vs. non-standard and native vs. non-native language use (and the reasons behind such distinctions); the intended audience of NLP systems; how data are collected and annotated; how NLP systems are evaluated; how language ideologies are transformed by these systems (does calling language “bad” simply mean the system could not handle it?); and which representational harms done by an NLP system are most critical.
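
As a concrete illustration of how such a disparity could be surfaced (my own sketch, not a technique from the surveyed papers), here is a minimal per-dialect false-positive-rate check, assuming a hypothetical `classify_toxicity` function and a small labeled sample:

```python
# Minimal sketch (not from the surveyed papers): compare a toxicity
# classifier's false positive rate on gold non-toxic text across dialect
# groups, e.g. African-American English (AAE) vs. mainstream American
# English (MAE). `classify_toxicity` and the labeled samples are
# hypothetical placeholders, not a real model or dataset.

from collections import defaultdict

def classify_toxicity(text: str) -> bool:
    """Placeholder for a real toxicity model's prediction (True = flagged)."""
    raise NotImplementedError

def false_positive_rate_by_group(samples):
    """samples: iterable of (text, dialect, is_toxic_gold) tuples."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for text, dialect, is_toxic in samples:
        if is_toxic:              # FPR is computed over non-toxic examples only
            continue
        total[dialect] += 1
        if classify_toxicity(text):
            flagged[dialect] += 1
    return {d: flagged[d] / total[d] for d in total if total[d]}

# Usage (once a real model and labeled data are plugged in):
# rates = false_positive_rate_by_group(labeled_samples)
# A large gap between rates["AAE"] and rates["MAE"] is exactly the kind of
# disparity described above.
```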

4.2 Conceptualizations of “bias”: Papers often treat “bias” as if it were simple and self-evident, but unstated or inconsistent definitions leave readers to guess what is actually being measured. Researchers should state precisely what they mean by “bias”: what its sources are, which groups of people it harms, in what ways, and why those behaviors are harmful. They should also make explicit the social values and normative assumptions underlying that conceptualization.

4.3 Language use in practice: NLP researchers should draw more on work from human-computer interaction, social computing, and sociolinguistics to understand how NLP systems affect different communities in different ways. They should examine how communities perceive and respond to NLP systems, the effects of malfunctioning NLP systems on them, the improper allocation of linguistic resources caused by censorship or surveillance, and how decision-making in the development of NLP systems can be reformed to dismantle the existing power imbalances between technologists and affected communities.

5. The recommendations applied to an example case

The authors present a case study: work analyzing bias in the context of African American English (AAE). Existing work shows how poorly language identification systems, dependency parsers, and toxicity detection systems handle AAE (Jorgensen et al., 2015, 2016; Blodgett et al., 2016, 2018; Davidson et al., 2019; Sap et al., 2019). However, none of these works engages deeply with the history of racial hierarchies and language ideologies, and they therefore fail to connect bias against AAE to its speakers, who experience racism and linguistic discrimination rooted in those hierarchies. The authors also cite work documenting how AAE is penalized in education, the judiciary, and housing. If racial bias in NLP systems is treated only as a performance issue, the way these systems reinforce the existing stigmatization of AAE will be overlooked.

Finally, my own takeaway from this paper is that the NLP community should consider the history of racial hierarchies in the USA in order to understand how AAE-speaking communities may be harmed by interacting with NLP systems that were not designed for AAE. NLP systems should not accommodate only dominant language practices; to be a more inclusive and humane technology, they should give different language varieties equal standing.

Key References:

  1. Artem Abzaliev. 2019. On GAP coreference resolution shared task: insights from the 3rd place solution. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 107–112, Florence, Italy.
  2. ADA. 2018. Guidelines for Writing About People With Disabilities. ADA National Network. https://bit.ly/2KREbkB.
  3. Oshin Agarwal, Funda Durupinar, Norman I. Badler, and Ani Nenkova. 2019. Word embeddings (also) encode human personality stereotypes. In Proceedings of the Joint Conference on Lexical and Computational Semantics, pages 205–211, Minneapolis, MN.
  4. H. Samy Alim. 2004. You Know My Steez: An Ethnographic and Sociolinguistic Study of Styleshifting in a Black American Speech Community. American Dialect Society.
  5. H. Samy Alim, John R. Rickford, and Arnetha F. Ball, editors. 2016. Raciolinguistics: How Language Shapes Our Ideas About Race. Oxford University Press.
  6. Sandeep Attree. 2019. Gendered ambiguous pronouns shared task: Boosting model confidence by evidence pooling. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, Florence, Italy.
  7. Pinkesh Badjatiya, Manish Gupta, and Vasudev Varma. 2019. Stereotypical bias removal for hate speech detection task using knowledge-based generalizations. In Proceedings of the International WorldWideWeb Conference, pages 49–59, San Francisco, CA.
  8. https://www.loffler.com/blog/what-is-natural-language-processing-and-why-does-it-matter
