NLP Bias Against People with Disabilities

An overview of how biases against mentions of disabilities are embedded in natural language processing tasks and models

Catherine Yeo
May 13, 2020 · 4 min read

I recently came across “Social Biases in NLP Models as Barriers for Persons with Disabilities”, a new paper (arXiv preprint here) appearing at ACL 2020. It offers a novel vantage point on bias in NLP by examining how machine learning and NLP models affect people with disabilities, a perspective I found well worth highlighting.


Undesirable biases against people with disabilities exist in NLP tasks and models, specifically toxicity prediction, sentiment analysis, and word embeddings.

NLP models are increasingly used in our daily lives in a variety of ways.

With such prevalent usage, it is crucial that NLP models do not discriminate against the people impacted by these algorithms. Previous research exploring biases in NLP models has looked extensively at attributes such as gender and race, but bias with respect to different disability groups has been explored much less.

This is problematic: over 1 billion people in the world experience some form of disability. That is roughly 15% of the population we have neglected in creating and evaluating fair AI technologies.

This paper’s analysis used a set of 56 phrases referring to people with different disabilities. Each phrase was classified as Recommended vs. Non-Recommended. For example:

  • Under the category of mental health disabilities, “a person with depression” is Recommended and “an insane person” is Non-Recommended
  • Under the category of cognitive disabilities, “a person with dyslexia” is Recommended and “a slow learner” is Non-Recommended

The researchers then followed a process of perturbation: they took existing template sentences containing the pronouns “he” or “she” and perturbed them by replacing the pronoun with one of the 56 phrases.

Then, they calculated the score diff — the difference between the NLP model score for the original sentence and the score for the perturbed sentence.
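The perturbation-and-scoring procedure can be sketched in a few lines of Python. Note that `toxicity_score` below is a toy placeholder, not the actual model used in the paper (a real analysis would query a trained toxicity classifier such as the Perspective API), and the templates and heuristic are illustrative assumptions:

```python
# Illustrative sketch of the paper's perturbation analysis.
# `toxicity_score` is a toy stand-in for a real toxicity model.

def toxicity_score(sentence: str) -> float:
    """Placeholder scorer in [0, 1]; NOT a real toxicity model."""
    return 0.2 + 0.1 * ("insane" in sentence)  # toy heuristic for the demo

def perturb(template: str, phrase: str) -> str:
    """Replace the pronoun 'he' or 'she' with a disability phrase."""
    return " ".join(phrase if word.lower() in ("he", "she") else word
                    for word in template.split())

templates = [
    "I think he is a great friend.",
    "She went to the store yesterday.",
]
phrases = {
    "Recommended": "a person with depression",   # example from the paper
    "Non-Recommended": "an insane person",       # example from the paper
}

for label, phrase in phrases.items():
    for template in templates:
        perturbed = perturb(template, phrase)
        # Score diff: positive means the perturbed sentence is rated more toxic.
        diff = toxicity_score(perturbed) - toxicity_score(template)
        print(f"{label:15s} diff={diff:+.2f}  {perturbed}")
```

Averaging these score diffs over many templates, separately for Recommended and Non-Recommended phrases, gives the per-category effect the paper reports.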

Overall, they found that:

  • Toxicity prediction: the model score was higher (more toxic) for sentences perturbed with both Recommended and Non-Recommended phrases, meaning that sentences mentioning disability are more likely to be labelled toxic
  • Sentiment analysis: the model score was lower (more negative) for both Recommended and Non-Recommended phrases, meaning that sentences mentioning disability are more likely to be labelled negative
  • In both tasks, Non-Recommended phrases produced more toxic/negative scores than Recommended phrases
Source: Figure 1

Furthermore, the researchers found that neural text embeddings such as BERT, a widely used language model, contain similar undesirable biases around mentions of disabilities. Again using perturbation, they compared how BERT’s top-10 word predictions for a fill-in-the-blank slot changed across different disability phrases, and found that phrases referencing disability frequently elicited predicted words carrying negative sentiment. In other words, BERT associates more negative words with phrases referencing people with disabilities.
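As a rough sketch of this comparison, the snippet below measures what fraction of a model’s top-k predictions carry negative sentiment. Both `top_k_predictions` (a stand-in for querying BERT, e.g. via a fill-mask model) and the tiny sentiment lexicon are illustrative assumptions, not the paper’s actual setup:

```python
# Illustrative sketch: compare the sentiment of a language model's top-k
# fill-in-the-blank predictions across disability phrases.

NEGATIVE_WORDS = {"sick", "dangerous", "sad", "bad", "wrong"}  # toy lexicon

def top_k_predictions(context: str, k: int = 10) -> list[str]:
    """Placeholder for BERT's top-k predictions for the [MASK] slot."""
    # A real implementation would run a masked language model here;
    # these canned outputs exist only to make the demo self-contained.
    if "insane" in context:
        return ["dangerous", "sick", "bad", "wrong", "sad",
                "here", "gone", "alone", "away", "lost"][:k]
    return ["happy", "fine", "here", "kind", "busy",
            "tired", "alone", "away", "gone", "quiet"][:k]

def negative_fraction(context: str, k: int = 10) -> float:
    """Fraction of the model's top-k predictions with negative sentiment."""
    preds = top_k_predictions(context, k)
    return sum(word in NEGATIVE_WORDS for word in preds) / len(preds)

for phrase in ("a person with depression", "an insane person"):
    context = f"{phrase} is [MASK]."
    print(phrase, "->", negative_fraction(context))
```

A high negative fraction for disability phrases, relative to a neutral baseline, is the kind of signal the paper reports for BERT.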

These biases can result in non-toxic, non-negative comments mentioning disabilities being flagged as toxic at a much higher rate, suppressing harmless discussion about disabilities.

This could limit the opportunity of people with disabilities to participate equally in online forums, which in turn shapes public awareness and societal attitudes.

  1. NLP models are already widely used in our daily lives. Given evidence of biases in these models, human judgment is needed in addition to these models’ decisions to ensure that people with disabilities are not discouraged from online participation.
  2. Further time and research in the AI fairness field also need to be dedicated to under-explored marginalized groups, e.g. people with disabilities, gender non-binary individuals, intersectional subgroups, etc.
  3. Uncovering biases in ML/NLP models is a valuable first step, and this paper did a great job bringing to light such biases against people with disabilities. Now, we must also figure out how to eliminate these biases.

For more information, check out the original paper on arXiv.

Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen Denuyl. “Social Biases in NLP Models as Barriers for Persons with Disabilities”, Annual Conference of the Association for Computational Linguistics 2020.

Thank you for reading! Subscribe to read more about research, resources, and issues related to fair and ethical AI.

Catherine Yeo is a CS undergraduate at Harvard interested in AI/ML/NLP, fairness and interpretability, and everything related. Feel free to suggest ideas or say hi to her on Twitter.

Fair Bytes

Sharing byte-sized stories about fairness & ethics of AI
