Bias in Facial Recognition Algorithms: A Primer

Willie Costello
Aggregate Intellect
3 min read · Aug 13, 2021

This post was originally published as a recipe on ai.science. See the recipe for an interactive version of this post and to comment or collaborate.

What’s covered?

In this post I’ll provide a brief guide to bias in facial recognition algorithms, along with its related concepts: facial recognition technologies and algorithmic bias. I’ll also point you to resources for further reading and exploration.

Who is this for?

This post is aimed at beginners, who can apply what they learn here to use cases involving facial recognition and processing technologies, and image classification algorithms more broadly.

The resources

The following five resources are, in my opinion, the best way to understand the social biases embedded in facial recognition technologies: why they arise, how they affect people, and what can be done to address them.

(1) Use The Myth of the Impartial Machine in order to answer questions like:

  • What do we mean when we say that algorithmic systems are “biased”?
  • Where does the bias in algorithmic systems come from?

Read and interact with this interactive article to understand the basics of where bias in algorithmic systems comes from.

(2) Use Facial Recognition Technologies: A Primer in order to answer questions like:

  • What are facial recognition technologies?
  • How and where are facial recognition technologies used?
  • How does a machine recognize an individual face?
  • How accurate are facial recognition technologies?

Read this short guide to understand the basics of how facial recognition technologies work.
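To give a concrete sense of what the guide covers, here is a minimal sketch of the core idea behind most modern face recognition: each face image is mapped by a model to a numeric embedding, and two images are judged to show the same person when their embeddings are close. The embeddings below are made-up stand-ins for a real model’s output, so treat this as an illustration rather than a working system.

```python
import numpy as np

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Face verification: the two embeddings are judged to come from the
    same person if they lie closer together than a distance threshold."""
    return float(np.linalg.norm(emb_a - emb_b)) < threshold

# Usage (with made-up embeddings standing in for a real model's output):
probe = np.array([0.12, 0.80, -0.33])
gallery = np.array([0.10, 0.78, -0.30])
print(is_same_person(probe, gallery))  # True: the vectors are close
```

The choice of threshold is exactly where accuracy questions arise: a threshold that works well for faces like those the model was trained on may work poorly for others, which is what the remaining resources examine.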

(3) Use Joy Buolamwini: “How I’m fighting bias in algorithms” in order to answer questions like:

  • What does bias in facial recognition algorithms look like?
  • What can we do about bias in facial recognition algorithms?

This TEDx talk gives a quick and punchy overview of what bias in facial recognition algorithms looks like, and what researcher and activist Joy Buolamwini is doing about it.

(4) Use Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification in order to answer questions like:

  • How do we know that facial recognition algorithms can discriminate on the basis of race and gender?

This landmark paper rigorously establishes the racial and gender biases embedded in existing facial recognition algorithms.
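To make “intersectional accuracy disparities” concrete, here is a minimal sketch of the kind of disaggregated evaluation the paper performs: instead of reporting a single overall accuracy, you compute accuracy separately for each intersection of gender and skin-type groups and compare. The data and column names below are invented for illustration only.

```python
import pandas as pd

# Hypothetical audit results: one row per test image, with the classifier's
# prediction and demographic annotations (labels assumed for illustration).
results = pd.DataFrame({
    "true_gender":      ["F", "F", "M", "M", "F", "M", "F", "M"],
    "predicted_gender": ["M", "F", "M", "M", "M", "M", "F", "M"],
    "skin_type":        ["darker", "darker", "lighter", "lighter",
                         "darker", "darker", "lighter", "lighter"],
})

results["correct"] = results["true_gender"] == results["predicted_gender"]

# Accuracy disaggregated by the intersection of skin type and gender,
# rather than a single aggregate number.
by_subgroup = results.groupby(["skin_type", "true_gender"])["correct"].mean()
print(by_subgroup)
print("disparity:", by_subgroup.max() - by_subgroup.min())
```

In this toy example the overall accuracy looks reasonable, but the disaggregated view shows the errors are concentrated in one subgroup, which is the pattern the paper documents in commercial gender classifiers.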

(5) Use Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing in order to answer questions like:

  • Can we audit facial recognition algorithms for bias?

This paper highlights how attempts to audit facial recognition algorithms for bias can have effects that may harm the very populations these measures are meant to protect.

Additional resources

Want to dig even deeper? Use “Wrongfully Accused by an Algorithm” in order to answer questions like:

  • How can facial recognition technologies lead to actual harm?

This New York Times article presents the real-life case of a faulty facial recognition match that led to a Michigan man’s arrest for a crime he did not commit.

This recipe was created by Willie Costello. For more recipes, visit ai.science and create a free account. For more information about recipes and their logic, see RECIPE 0.

Aggregate Intellect

Aggregate Intellect is a Global Marketplace where ML Developers Connect, Collaborate, and Build. Connect with peers & experts at https://ai.science or Join our Slack Community.

  • Check out the user-generated Recipes that provide step-by-step, bite-sized guides on how to do various tasks
  • Join our ML Product Challenges to build AI-based products for a chance to win cash prizes
  • Connect with peers & experts through the ML Discussion Groups or Expert Office Hours
