NLP Bias Against Disabled People
An overview of how biases against mentions of disability are embedded in natural language processing tasks and models
I recently came across “Social Biases in NLP Models as Barriers for Persons with Disabilities”, a new paper (arXiv preprint here) that will appear at ACL 2020. It offers a novel vantage point on bias in NLP by examining how machine learning and NLP models affect people with disabilities, a perspective I wanted to highlight.
One-Line Summary
Undesirable biases against disabled people exist in NLP tasks and models, specifically in toxicity prediction, sentiment analysis, and word embeddings.
Motivation & Background
NLP models are increasingly being used in our daily lives in a variety of ways:
- To detect and moderate toxic comments in online forums (toxicity prediction)
- To measure consumers’ feelings towards well-known brands (sentiment analysis)
- To match candidates to job opportunities
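The first two tasks are straightforward to probe with a perturbation-style check of the kind the paper builds on: score sentences that are identical except for a mention of disability, then compare the outputs. Below is a minimal sketch using the Hugging Face transformers library; the model and example sentences are my own assumptions, not the paper’s exact setup:

```python
# Minimal perturbation-style probe (illustrative sketch only; not the
# models or templates used in the paper): score sentences that differ
# only in whether they mention a disability, then compare the outputs.
from transformers import pipeline

# Off-the-shelf sentiment classifier; any comparable model works here.
sentiment = pipeline("sentiment-analysis")

sentences = [
    "I am a person.",
    "I am a deaf person.",
    "I am a blind person.",
    "I am a person with a mental illness.",
]

for text in sentences:
    result = sentiment(text)[0]
    print(f"{text:40} -> {result['label']} (score={result['score']:.3f})")
```

If the predicted label or its confidence shifts markedly when the only change is a mention of disability, that shift is exactly the kind of undesirable bias the paper documents.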
With such prevalent usage, it is crucial that NLP models do not discriminate against the people these algorithms affect. Previous research exploring…