De-biasing is hard!

On bias and the illusion of objectivity in NLP

Deep Kayal
Prosus AI Tech Blog
12 min read · Jul 7, 2021

The general notion of bias

Bias, as a machine learning concept, is age-old. It refers to providing a learning model with prior information about the task at hand, in the hope that this information is helpful. In early machine learning systems, models were biased using statistical priors that encoded the statistician’s expert knowledge about the behaviour of a certain system. With the advent of deep learning and the availability of enormous amounts of data, the focus has shifted from injecting explicit priors to concepts such as transfer learning. Transfer learning is conceptually simple: take a lot of data and a complex-enough model, and let the model capture the nuances in the data; now that the model has captured the knowledge in this vast amount of data, it can transfer that knowledge to other tasks, just as a statistician would transfer their expert knowledge through a prior. As these new deep learning models became popular and transfer learning gained widespread adoption, a new kind of bias was spotted in machine learning systems: one which is prejudicial.

Real-life data is riddled with the imprints of human preconceptions and social biases. Let’s say we are training a large BERT model on a lot of data, and incidentally, terms like “gay” or “Islam” appear so frequently in abusive comments that our language model learns to disproportionately associate those terms with negativity and abuse. Now, if we use this BERT model to transfer its knowledge to a sentiment classification task, we are met with the situation of our model classifying a sentence like “I am gay, and I follow the path of Islam” as highly negative and toxic, even though the sentence is neither.

Unlike our fellow statistician, who carefully selected the prior knowledge to inject, we weren’t careful when we trained our BERT model on this large corpus, and now we have a problem of false positive bias [1]. This illustrates the problem of bias in current state-of-the-art NLP models and how they can be very unfair to already marginalised communities.

Gay, queer, homosexual, black, lesbian, Islam are some of the words that have been found to suffer from false positive bias [1]

Consequences in NLP

The consequences of bias in NLP models can be felt far and wide, and have been the subject of increasing scrutiny in recent times. The body of research on bias and fairness can be studied through many lenses. We choose one that looks at a few examples of the effects of biased models on various marginalised groups and communities.

Early works that hinted at the presence of bias in NLP models studied word associations, measured through vector similarities, in pre-trained models such as Word2Vec.

As a recap, the word embeddings learned by Word2Vec-like models have been found to roughly obey the rules of linear algebra; thus, if we take four words, “Rome”, “Italy”, “Paris” and “France”, and their embeddings, then vec(“Rome”) − vec(“Italy”) ≈ vec(“Paris”) − vec(“France”):

Word associations learned by word embeddings
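
This is easy to verify yourself. Below is a minimal sketch using gensim and its downloader module to fetch the pre-trained Google News Word2Vec vectors (a sizeable download); any other set of pre-trained word vectors would work just as well.

import gensim.downloader as api

# Download and load pre-trained Word2Vec vectors.
w2v = api.load("word2vec-google-news-300")

# "Rome" - "Italy" + "France" should land close to "Paris".
print(w2v.most_similar(positive=["Rome", "France"], negative=["Italy"], topn=3))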

Bolukbasi et al. [2] use this idea to discover spurious associations between gender and occupation.

Spurious associations between gender and occupation

In fact, they take many of these word analogies and aggregate them to assess whether there is a systematic bias in terms of occupations and gender. And there is!

While a woman is a “nanny” or a “housekeeper”, a man is a “financier” or an “architect”
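
We can probe the same effect more directly by checking how much closer occupation words sit to “she” than to “he”. This is only an illustrative probe, reusing the w2v vectors loaded above; the word lists below are ours, not the curated lists of Bolukbasi et al.

# Positive values: the occupation sits closer to "she"; negative: closer to "he".
occupations = ["nanny", "housekeeper", "nurse", "financier", "architect"]

for occ in occupations:
    gap = w2v.similarity(occ, "she") - w2v.similarity(occ, "he")
    print(f"{occ:12s} she-vs-he similarity gap: {gap:+.3f}")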

Similar behaviour has also been observed in widely used transformer-based language models such as BERT [3]. It is easy to test this ourselves!

Let’s recall that BERT is trained by masking random words in a sentence and making the model predict them. So, if we want to see our biased BERT model in action, all we need to do is have it fill in the [MASK] token in the context of an occupation. To do this, we can load BERT:

from transformers import pipeline

# Load a fill-mask pipeline backed by the pre-trained (uncased) BERT model.
fill_mask = pipeline(
    "fill-mask",
    model="bert-base-uncased",
    tokenizer="bert-base-uncased"
)

And use it to predict masks such as:

s = "[MASK] is a computer scientist"
fill_mask(s)

or

s = "[MASK] is a babysitter"
fill_mask(s)

Here are a few such results, which reflect how gender-biased BERT really is.

BERT predictions are highly gender-biased

The ripple effect of these gender-biased models and embeddings can be seen in the downstream tasks where they are used. Take, for example, coreference resolution: the task of identifying which mentions in a text document (of persons, places, organisations etc.) refer to the same entity. The result of bias is that coreference resolution systems cannot correctly link female pronouns to certain occupation roles at all.

The system resolves a male and neutral pronoun as coreferent with “The surgeon,” but not the female pronoun [4].

Similar problematic behaviour can also be found in the context of race. For example, Sheng et al. found peculiar behaviour in the generative capabilities of GPT-2 [5]:

The powerful GPT-2 model generates abusive language in the context of Black men, gay persons and women.

Models often relate Black men to theft and pimping, Latin Americans to drugs, and women to menial tasks.
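
The prompts used by Sheng et al. are easy to replay with the Hugging Face text-generation pipeline. The sketch below is only illustrative: generation is sampled, so the continuations will differ from run to run, and they may well be offensive, which is exactly what the study documents.

from transformers import pipeline, set_seed

# Load GPT-2 for open-ended text generation.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # for repeatability of the sampled continuations

for prompt in ["The man worked as", "The woman worked as",
               "The Black man worked as"]:
    out = generator(prompt, max_length=20, num_return_sequences=1)
    print(out[0]["generated_text"])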

It gets even more surprising when we delve into abuse and hate speech detection. Toxicity detection has become an important problem in NLP since the advent of social media. To automatically prevent toxic comments from cropping up on social websites, companies like Google have dedicated teams training models to classify a piece of text as toxic or benign. The result of one such endeavour is the Perspective API, which evaluates whether a comment contains abusive language. We tried it on statements that are not at all abusive, as below, and the results were grim.

It’s easy to spot nasty examples of bias in real-life systems. In the short video above, where we tested the Perspective API from Google’s Counter Abuse Technology team, we see how it wrongly treats the mere presence of certain words as “toxic”.

In essence, a model that was trained to recognise toxic comments likely formed a spurious connection with words that refer to race or disability, rather than with the real context of the toxicity. When such a model is given a benign sentence containing one of these false-positive-biased words, it fires off an alarm and predicts the benign sentence as toxic.
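
For the curious, a check like the one in the video can be reproduced with a short script. The sketch below assumes you have requested a Perspective API key and that the request format has not changed since the time of writing; consult the official documentation for the details.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: request a real key from the Perspective API team
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text):
    # Ask the API to score a single attribute: TOXICITY.
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body).json()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A benign sentence that a model with false positive bias tends to flag anyway.
print(toxicity("I am a gay black woman."))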

Fighting bias

The fight against bias starts with identification. To that end, one of the earliest methods proposed is the Word Embedding Association Test (WEAT) [6]. WEAT measures the degree to which a model associates sets of target words (e.g., African American names, European American names, flowers, insects) with sets of attribute words (e.g., “stable”, “pleasant” or “unpleasant”). The association between two given words is defined as the cosine similarity between their embedding vectors.

For example, the target lists may be types of flowers and insects, and the attributes may be pleasant words (e.g., “love”, “peace”) or unpleasant words (e.g., “hatred”, “ugly”). The overall test score is the degree to which flowers are more associated with the pleasant words, relative to insects. The score can range between 2.0 and -2.0: a high positive score means that flowers are more associated with pleasant words, and a high negative score means that insects are more associated with pleasant words.
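
Given the definitions above, WEAT’s effect size is straightforward to compute. Below is a minimal numpy sketch; the word lists are illustrative placeholders rather than the curated lists from the paper, and emb stands for any word-to-vector mapping (for example, the gensim vectors loaded earlier).

import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): how much more strongly w associates with attributes A than B.
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Effect size of the association of targets X vs Y with attributes A vs B;
    # the value lies roughly between -2.0 and 2.0.
    x_assoc = [association(x, A, B, emb) for x in X]
    y_assoc = [association(y, A, B, emb) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

flowers = ["rose", "tulip", "daisy", "lily"]
insects = ["ant", "wasp", "moth", "beetle"]
pleasant = ["love", "peace", "friend", "happy"]
unpleasant = ["hatred", "ugly", "war", "filth"]

# print(weat_effect_size(flowers, insects, pleasant, unpleasant, w2v))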

When WEAT was applied to popular, heavily used pre-trained models such as Word2Vec and GloVe, the results showed that human biases were rampant in the embeddings those models generate.

Results of WEAT on several pre-trained models. The higher the number in a box, the stronger the positive bias of a target toward an attribute. For example, “Male vs Female names” against “Career vs Family” has a statistic of 1.81 for GloVe, meaning that male names are highly biased towards career-oriented attributes in the embeddings generated by GloVe.

WEAT was a seminal paper in the area of recognising systematic bias in NLP models, and several follow-up papers have been proposed which build upon or slightly modify the test.

After acknowledgement comes the need to remedy or resolve the bias in such models. Depending upon the learning task, the model at hand and the type of bias being addressed, the remedial treatment proposed in the literature varies greatly. For example, Lu et al. [7] suggest a mechanism they call counterfactual data augmentation (CDA) to augment biased real-world training data and make it gender-insensitive. In essence, it involves pairing every training example that contains male pronouns like “he” or “him” with a replica containing the corresponding female pronouns like “she” or “her”. This encourages learning algorithms not to pick up on the distinction. They find that such a simple approach is enough to alleviate, for example, occupation bias towards male and female pronouns in data.

Occupation biases are greatly reduced. For example, after applying CDA, the bias of the word “banker” towards male pronouns is less than half of what it was before.
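
To make the idea concrete, here is a toy sketch of counterfactual data augmentation. Real CDA as described by Lu et al. also handles names, grammatical agreement and the ambiguity of “her” (which can map to “him” or “his”); this sketch only swaps a small hand-picked pronoun list.

import re

# Naive pronoun swaps; mapping "her" -> "him" ignores its possessive reading.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him", "his": "her"}

def counterfactual(sentence):
    def swap(match):
        token = match.group(0)
        swapped = SWAPS[token.lower()]
        return swapped.capitalize() if token[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

corpus = ["He is a banker.", "She works as a nanny."]
augmented = corpus + [counterfactual(s) for s in corpus]
print(augmented)
# ['He is a banker.', 'She works as a nanny.',
#  'She is a banker.', 'He works as a nanny.']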

While the above mechanism may work for simpler models, it does involve costly retraining. For much larger and more involved language models such as BERT, retraining from scratch may be infeasible. In such cases, approaches that battle bias post hoc are more useful. One such approach, proposed by Liang et al. [8], aims to remove bias from the sentence representations obtained from BERT-like models (for tasks such as sentiment classification) by intervening after the model has been pre-trained.

Their approach relies on first identifying a so-called bias subspace: an abstract latent vector space in which the bias in the data resides. Put simply, the representation of every biased sentence in English would reside somewhere in this bias subspace. To construct it, Liang et al. use a variety of open-source datasets and select all sentences that contain, for example, a gendered pronoun (e.g. “he”, “she”, “him”, “her”). They then use a language model (like BERT) to obtain a fixed-length sentence representation for each of these sentences. Finally, they perform PCA on the sentence representations and keep the top-k principal components, just as we would when performing dimensionality reduction on data.

Now, to make sure that any text representation for a downstream task is bias-free, all we need to do is make that representation orthogonal to the principal components, and therefore to the bias subspace. The overall intuition is simple:

1. Choose a type of bias to remove, let’s say gender bias.
2. Collect sentences with gendered pronouns from the web.
3. Get sentence representations for these sentences using any sentence encoder model.
4. Perform PCA on these vectors and store the top-k components.
5. For each sentence in the downstream task, get the sentence representation from the same sentence encoder model.
6. Finally, orthogonalise each representation from the previous step with respect to the principal components from step 4.

After debiasing using the mechanism described in [8], contextual embeddings from BERT for “man” and “woman” are equally far from occupation word representations.
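
Below is a minimal sketch of steps 4 to 6 under the interpretation above: estimate the bias subspace with PCA over representations of gendered sentences, then project it out of each downstream representation. The encode function is a stand-in for whatever sentence encoder you use (for example, mean-pooled BERT outputs); it is not part of the original method’s code.

import numpy as np
from sklearn.decomposition import PCA

def estimate_bias_subspace(gendered_sentences, encode, k=3):
    # Step 4: top-k principal components of the gendered-sentence representations.
    reps = np.stack([encode(s) for s in gendered_sentences])
    return PCA(n_components=k).fit(reps).components_  # shape (k, hidden_dim)

def debias(representation, bias_subspace):
    # Step 6: remove the projection of a representation onto each bias direction.
    v = representation.astype(float)
    for direction in bias_subspace:
        direction = direction / np.linalg.norm(direction)
        v = v - np.dot(v, direction) * direction
    return v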

Are we really de-biasing?

Granted, researchers are well on their way to identifying and quantifying bias (as we’ve illustrated through some examples above), but doubt looms over whether the successful eradication of bias is even a remote possibility.

In the work of Blodgett et al. [9], the authors survey a large number of papers from leading machine learning conferences that study systematic bias in NLP. They find that the motivations of most of the papers are vague, inconsistent and lacking in normative reasoning, that nearly all of the surveyed papers tackle a very narrow range of potential sources of bias, and that they do not measure the implications of their de-biasing on other parts of the NLP development and deployment life cycle.

Tackling a narrow range of biases without measuring the implications is a potential hazard that may cause more harm than good. Consider, for example, a dataset with two genders and two races: “male” or “female” and “white” or “black”. If we were to follow the work in some of the papers exemplified above, we would first have to choose which group to protect, and most of these narrow-focus papers do not provide guidelines on how to extend their methods to multiple groups. So, if we choose to protect the “female” group, an effect of that may be an increase in bias against the “male” group, to level off the bias against women, which in turn might increase bias against the “black + male” group as an unintended effect. The same thing would happen to the “white + female” group if we were to tackle bias against the “black” group. Such negative effects of bias-cancellation have not been extensively studied, but it is easy to argue that these undesirable effects might be present.

The preceding paragraphs suggest that bias removal in existing research may have been faulty, but that there is still hope: we simply have not found the right way yet! However, Waseem et al. [10] describe a much more fundamental problem in bias research.

While the bias in data is undoubtedly the cause of problems downstream, it is not the only source of bias in an NLP or ML system. The choice of dataset, the model to be trained, and the steps an individual follows to engage in an NLP task are all subject to choice. The authors in [10] argue that these choices are themselves subjective, and that bias and subjectivity in ML are therefore inescapable and cannot simply be removed. By treating de-biasing as an optimization problem, as most papers do, researchers cast their own work as objective, elevating its status above the more subjective notions of bias in data, modelling etc. However, this is an inherent fallacy, as these so-called objective choices were based on observations from those very subjective notions. And this, precisely, is the big problem with bias research in NLP!

Bias research in NLP is narrow and incomplete; bias and subjectivity in ML are inescapable!

Not all is lost

Thus far, we’ve discussed what bias is, how it affects NLP models and how researchers currently aim to fix it. We stopped on the utterly disappointing note that it may all be in vain!

But not all is lost, if we can acknowledge that fixing bias in systems goes beyond numerical optimization. Blodgett et al. [9] delineate a few important paths forward to fight bias holistically.

We must acknowledge that fixing bias in systems goes beyond numerical optimization!

First is the need to explore the relationships between language and social hierarchies. Language is the means through which social groups are labeled and beliefs about social groups are transmitted, and these group labels can promote inequalities and stereotypes. An important question is how NLP systems mitigate, uphold or exaggerate such pre-existing social hierarchies.

A question in the same vein is how the evolution of language and text affects NLP models. For example, in the news we often read the phrase “illegal immigrants”. Whether a sentence reads “Illegal immigrant causes harm to neighbour” or “Illegal immigrant rescues child”, it reinforces the association between “illegal” and “immigrant”, which will inevitably result in biased language models. There have been suggestions to make written language fairer and more inclusive [11]. Can we claim that making the language fairer would fix the problems of bias in NLP models?

Another important topic is to further sharpen our understanding of bias in NLP by stating explicitly why the system behaviours described as bias are harmful, in what ways, and to whom. It’s not enough to say that there is bias against women in a language model; we must explore how much it affects downstream tasks and how this cycle reinforces even more bias (if at all).

Finally, it’s also important to understand how certain communities become aware of NLP systems and whether they resist them. If, let’s say, a chatbot in an application is underused by women, then the bot will become less effective when chatting with women, which in turn will worsen their experience with the bot, and a vicious cycle starts. In such cases, it would be important to understand the deeper issue of why the application wasn’t used by women to begin with, before trying to de-bias the system.

Final words

In this post, we aimed to show how bias in NLP is more nuanced than it seems and why we might not be looking at it the right way. The current strand of papers tends to propose de-biasing mechanisms by looking at very narrow areas of the bias landscape, which limits their practicality. Bias is systemic, and in this blog we hope to have provided ample rationale for why we need to move beyond mathematical optimization to deal with it!

[1] “Measuring and Mitigating Unintended Bias in Text Classification”, Dixon et al. (https://storage.googleapis.com/pub-tools-public-publication-data/pdf/ab50a4205513d19233233dbdbb4d1035d7c8c6c2.pdf)

[2] “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings”, Bolukbasi et al. (https://arxiv.org/pdf/1607.06520.pdf)

[3] “Measuring Bias in Contextualized Word Representations”, Kurita et al. (https://www.aclweb.org/anthology/W19-3823.pdf)

[4] “Gender Bias in Coreference Resolution”, Rudinger et al. (https://www.aclweb.org/anthology/N18-2002.pdf)

[5] “The Woman Worked as a Babysitter: On Biases in Language Generation”, Sheng et al. (https://www.aclweb.org/anthology/D19-1339.pdf)

[6] “Semantics derived automatically from language corpora necessarily contain human biases”, Caliskan et al. (https://arxiv.org/pdf/1608.07187.pdf)

[7] “Gender Bias in Neural Natural Language Processing”, Lu et al. (https://arxiv.org/pdf/1807.11714.pdf)

[8] “Towards Debiasing Sentence Representations”, Liang et al. (https://www.aclweb.org/anthology/2020.acl-main.488.pdf)

[9] “Language (Technology) is Power: A Critical Survey of “Bias” in NLP”, Blodgett et al. (http://users.umiacs.umd.edu/~hal/docs/daume20power.pdf)

[10] “Disembodied Machine Learning: On the Illusion of Objectivity in NLP”, Waseem et al. (https://arxiv.org/pdf/2101.11974.pdf)

[11] “Contesting Representations of Migrant “Illegality” through the Drop the I-Word Campaign: Rethinking Language Change and Social Change”, Jonathan Rosa (https://www.taylorfrancis.com/chapters/contesting-representations-migrant-illegality-drop-word-campaign-jonathan-rosa/e/10.4324/9781315115702-5)
