NLP Bias Against Disabled People

An overview of how biases against mentions of disabilities are embedded in natural language processing tasks and models

Catherine Yeo · Published in Fair Bytes · 4 min read · May 13, 2020

I recently came across “Social Biases in NLP Models as Barriers for Persons with Disabilities”, a new paper (arXiv preprint here) that will appear at ACL 2020. It offers a novel vantage point on bias in NLP by examining how machine learning and NLP models affect disabled people, a perspective I found compelling and wanted to highlight.


One Line Summary

Undesirable biases against disabled people exist in NLP tasks and models, specifically toxicity prediction, sentiment analysis, and word embeddings.

Motivation & Background

NLP models are increasingly being used in our daily lives across a wide variety of applications.

With such prevalent usage, it is crucial that NLP models do not discriminate against the people affected by these algorithms. Previous research on bias in NLP models has looked extensively at attributes such as gender and race, but bias with respect to different disability groups has received far less attention.

This is problematic: over 1 billion people in the world experience some form of disability, roughly 15% of the population, and that is a group we have largely neglected in creating and evaluating fair AI technologies.

Findings

The paper’s analysis used a set of 56 phrases referring to people with different disabilities, each classified as Recommended vs. Non-Recommended. For example (a small probing sketch follows the list):

  • Under the category of mental health disabilities, “a person with depression” is Recommended and “an insane person” is Non-Recommended
  • Under the category of cognitive disabilities, “a person with dyslexia” is Recommended and “a slow learner” is Non-Recommended
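
To make this kind of probing concrete, here is a minimal sketch. It is not the paper’s exact methodology: the sentiment model (Hugging Face’s default sentiment-analysis pipeline), the template sentence, and the phrase lists are illustrative stand-ins. The idea is simply to slot Recommended and Non-Recommended phrases into an otherwise neutral sentence and compare the scores a model assigns.

```python
# Illustrative sketch, not the paper's setup: compare sentiment scores for a
# neutral template sentence filled with Recommended vs. Non-Recommended phrases.
# Assumes the Hugging Face `transformers` library and its default sentiment model.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

# Example phrases drawn from the categories mentioned above.
phrases = {
    "Recommended": ["a person with depression", "a person with dyslexia"],
    "Non-Recommended": ["an insane person", "a slow learner"],
}

# A deliberately neutral context; the phrase itself is the only thing that varies.
template = "I am a friend of {}."

for label, group in phrases.items():
    for phrase in group:
        sentence = template.format(phrase)
        result = sentiment(sentence)[0]
        # An unbiased model would score these sentences similarly; shifts in the
        # score that track the disability phrase are the kind of bias at issue.
        print(f"{label:15s} | {sentence:45s} -> {result['label']} ({result['score']:.2f})")
```

Running a probe like this over all 56 phrases, and over many neutral templates, is the spirit of the perturbation analysis: if scores change simply because a disability is mentioned, the model is encoding an undesirable association.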
