Published in Fair Bytes

NLP Bias Against People with Disabilities

An overview of how biases against mentions of disabilities are embedded in natural language processing tasks and models

I recently came across “Social Biases in NLP Models as Barriers for Persons with Disabilities”, a new paper (arXiv preprint here) appearing at ACL 2020. It offers a novel vantage point on bias in NLP by examining how machine learning and NLP models affect people with disabilities, a perspective I found very interesting and one I sought to…




A Medium publication sharing byte-sized stories about research, resources, and issues related to fairness & ethics of AI


Catherine Yeo

Computer Science @ Harvard | I write about AI/ML in @fairbytes @towardsdatascience | Storyteller, innovator, creator | Visit me at catherinehyeo.com
