Fair Bytes: A Deeper Lens into Fairness in AI
Understanding algorithmic fairness and ethics is more important than ever
In 2019, OpenAI released a language model called GPT-2. Given an initial text prompt, the model could generate strikingly realistic continuations, from news articles to imaginative fiction.
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved. …
Given how realistic algorithmically generated text now is, we could use language-generation AI to assist us in a variety of ways: drafting breaking news articles from basic data, replying to emails, summarizing data and text… or even writing jokes to cheer us up!
As a writer myself, I thought I could use such a tool to help me brainstorm fiction ideas. So I decided to play around with GPT-2 — first with an online demo (Talk To Transformer), then with the source code. After experimenting with a few different prompts, my friend and I saw a disturbing pattern.
We noticed that the generated sentences differed depending on the gender of the subject. For example, “the man works as” was completed with “salesman”, “doctor”, “journalist”, “scientist”, “lawyer”, and so on, most of them reasonable occupations for any individual. “The woman works as”, on the other hand, was completed with “stripper”, “prostitute”, “nanny”, “teacher”, and “secretary”.
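A simple way to probe for this kind of bias is to fill gendered prompt templates, sample completions, and tally the occupations that appear. Here is a minimal sketch of such a harness; the `fake_generate` function is a canned stand-in used only to show the interface, and in practice you would swap in a real model such as GPT-2 (e.g., via a Hugging Face `transformers` text-generation pipeline):

```python
from collections import Counter

# Small illustrative vocabulary; a real probe would use a longer list.
OCCUPATIONS = {"doctor", "lawyer", "scientist", "teacher",
               "secretary", "nanny", "salesman", "journalist"}

def probe(template, subjects, generate_fn, n_samples=20):
    """For each subject, fill the template, sample completions,
    and tally any occupation words that appear."""
    counts = {}
    for subject in subjects:
        prompt = template.format(subject=subject)
        tally = Counter()
        for completion in generate_fn(prompt, n_samples):
            for word in completion.lower().split():
                word = word.strip(".,!?\"'")
                if word in OCCUPATIONS:
                    tally[word] += 1
        counts[subject] = tally
    return counts

# Canned stand-in for a real language model (returns the same
# completion every time); replace with actual model sampling.
def fake_generate(prompt, n_samples):
    return [prompt + " a doctor."] * n_samples

results = probe("The {subject} works as", ["man", "woman"], fake_generate)
```

Comparing the resulting per-subject tallies side by side makes skewed associations, like the ones above, easy to spot and to quantify.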
This is not okay.
Issues of ethics and bias in AI have only grown more apparent as machine learning research expands. Joy Buolamwini and Timnit Gebru (2018) found that commercial facial analysis systems perform far better on lighter-skinned men than on any other population subgroup. Researchers have likewise documented racial discrimination and gender bias in the ads presented to users on Google.
Today, I am launching Fair Bytes as a medium to dive deeper into the fairness and ethics of AI and algorithms, from both technical and societal perspectives. Fair Bytes will illuminate research on quantitative frameworks of algorithmic fairness, discuss critical issues of AI in our world today, and share insights, projects, and resources in and related to this field.
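To give a taste of what a quantitative fairness framework looks like, one of the simplest metrics is demographic (statistical) parity: the gap in positive-prediction rates between groups. A minimal sketch on invented toy data (the predictions and group labels below are made up purely for illustration):

```python
def positive_rate(predictions, groups, group):
    """Fraction of individuals in `group` receiving a positive prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    g0, g1 = sorted(set(groups))
    return abs(positive_rate(predictions, groups, g0) -
               positive_rate(predictions, groups, g1))

# Toy example: 1 = approved, 0 = denied (made-up data).
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # |3/4 - 1/4| = 0.5
```

A gap of zero means both groups receive positive outcomes at the same rate; demographic parity is only one of several competing fairness criteria, which is precisely why this research area is so rich.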
As Kleinberg et al. (2018) wrote in “Discrimination in the Age of Algorithms”,
The Achilles’ heel of all algorithms is the humans who build them and the choices they make about outcomes, candidate predictors for the algorithm to consider, and the training sample. A critical element of regulating algorithms is regulating humans. Algorithms change the landscape — they do not eliminate the problem.
As we watch AI continue to evolve and change the landscape, we must ask ourselves:
Who is affected by these algorithms?
Who designed and created these algorithms?
How do these algorithms impact all populations and subgroups?
How do we teach future generations, who will use these algorithms, to think about these ethical considerations?
How can we work together to make AI more transparent, accountable, and fair?
Together, we can deepen the dialogue on these issues, one fair byte at a time.
Thank you for reading! Subscribe to read more about research, resources, and issues related to fair and ethical AI.
NLP Bias Against People with Disabilities
An overview of how biases against mentions of disabilities are embedded in natural language processing tasks and models
Best Resources to Teach AI Ethics in the K-12 Classroom
Curricula, projects, and even fiction books to empower students to learn about AI ethics
How Biased is GPT-3?
Despite its impressive performance, the world’s newest language model reflects societal biases in gender, race, and…
Catherine Yeo is a CS undergraduate at Harvard interested in AI/ML/NLP, fairness and ethics, and everything related. Feel free to suggest ideas or say hi to her on Twitter.