Bias in Facial Recognition Algorithms: A Primer
In this post I’ll provide a brief guide to bias in facial recognition algorithms and the two concepts underlying it: facial recognition technologies and algorithmic bias. I’ll also point you to resources for further reading and exploration.
Who is this for?
This post is written for beginners. What you learn here will help you tackle use cases involving facial recognition and processing technologies, and image classification algorithms more broadly.
The following five resources are, in my opinion, the best starting points for understanding the social biases embedded in facial recognition technologies: why they arise, how they affect people, and what can be done to address them.
(1) Use The Myth of the Impartial Machine in order to answer questions like:
- What do we mean when we say that algorithmic systems are “biased”?
- Where does the bias in algorithmic systems come from?
Read and interact with this article to understand the basics of where bias in algorithmic systems comes from.
(2) Use Facial Recognition Technologies: A Primer in order to answer questions like:
- What are facial recognition technologies?
- How and where are facial recognition technologies used?
- How does a machine recognize an individual face?
- How accurate are facial recognition technologies?
Read this short guide to understand the basics of how facial recognition technologies work.
(3) Use Joy Buolamwini: “How I’m fighting bias in algorithms” in order to answer questions like:
- What does bias in facial recognition algorithms look like?
- What can we do about bias in facial recognition algorithms?
This TEDx talk gives a quick and punchy overview of what bias in facial recognition algorithms looks like, and what researcher and activist Joy Buolamwini is doing about it.
(4) Use Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification in order to answer questions like:
- How do we know that facial recognition algorithms can discriminate on the basis of race and gender?
This landmark paper rigorously establishes the racial and gender biases embedded in existing facial recognition algorithms.
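The core of the Gender Shades methodology is simple to state: measure a classifier’s accuracy separately for each demographic subgroup, then compare. Here is a minimal sketch of that idea in Python; the function and the tiny audit dataset are illustrative, not the paper’s actual benchmark or numbers:

```python
# A minimal sketch of a disaggregated accuracy audit: compute a
# classifier's accuracy per demographic subgroup and compare the results.
# The subgroup labels and records below are hypothetical examples.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples.
    Returns a dict mapping each subgroup to its classification accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data: (subgroup, true label, predicted label)
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassification
    ("darker-skinned female", "female", "female"),
]
print(subgroup_accuracy(records))
```

A large gap between subgroup accuracies, as in this toy output, is exactly the kind of disparity the paper documents in commercial gender classifiers.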
(5) Use Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing in order to answer questions like:
- Can we audit facial recognition algorithms for bias?
This paper highlights how attempts to audit facial recognition algorithms for bias can themselves harm the very populations these measures are meant to protect.
Want to dig even deeper? Use “Wrongfully Accused by an Algorithm” in order to answer questions like:
- How can facial recognition technologies lead to actual harm?
This New York Times article presents the real-life case of a faulty facial recognition match that led to a Michigan man’s arrest for a crime he did not commit.