The Algorithmic Justice League

Joy Buolamwini
MIT MEDIA LAB
Dec 15, 2016

UNMASKING BIAS

What does a student coding in a white mask have in common with a New Zealand man struggling with a passport photo? Both individuals found themselves on the wrong side of computational decisions.

Photo credit: Reuters/Richard Lee

The New Zealand man, who is of Asian descent, had his eyes erroneously registered as closed by the facial recognition software that analyzed his photo during an automated passport renewal process.

In my case, I wore a mask because facial recognition software did not consistently detect my bare face. We both experienced exclusion from seemingly neutral machines programmed with algorithms, which are codified processes.

We are not alone. Our individual encounters with bias embedded into coded systems — a phenomenon I call the “coded gaze” — are only shadows of persistent problems with inclusion in tech and in machine learning.

In this post, I share the motivation for starting the Algorithmic Justice League to fight the coded gaze, along with progress made in highlighting algorithmic bias at venues like TEDx and the Museum of Fine Arts, Boston. I also discuss next steps for identifying and mitigating bias in facial recognition.

~30-Second Coded Gaze Clip

WHY SHOULD WE CARE?

Algorithmic bias, like human bias, can lead to unfairness. In her book Weapons of Math Destruction, data scientist Cathy O’Neil writes about the rise of the new WMDs: widespread, mysterious, and destructive algorithms that are infiltrating more facets of society.


Machine learning algorithms are being used to make decisions about employment, access to loans or insurance, college admissions, and even jail time.

For example, law enforcement is embracing machine learning for predictive policing. Some judges are using machine-generated risk scores to determine the length of prison sentences.

Police departments across the US are expanding their crime-fighting arsenals with facial recognition software that uses machine learning. Georgetown Law published a report showing that 1 in 2 adults in the US, that is, 117 million people, have their images in a facial recognition network. Currently, police departments can search these faces without regulation, using algorithms that have not been audited for accuracy.

Because algorithms can have real world consequences, we must demand fairness.

To fight the coded gaze, I invite you to join the Algorithmic Justice League, a collective of concerned citizens, artists, researchers, and activists. We work to:

Identify Algorithmic Bias: We are developing tools to rigorously test for bias in machine learning, starting with facial recognition software (see the sketch after this list).

Mitigate Algorithmic Bias: We aim to develop methods for full spectrum inclusion during the design, development, testing, and deployment of coded systems where appropriate.

Highlight Algorithmic Bias: We create media and convene learning experiences to show the need for algorithmic justice.
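As a rough illustration of the first goal, here is a minimal sketch, in Python, of what such a bias test could look like. It assumes a hypothetical labeled benchmark described by a manifest.csv file with image_path and group columns, and it uses OpenCV's stock Haar cascade as a stand-in for whatever detector is under audit; it is not the AJL's actual tooling.

```python
# Illustrative sketch only: measure how often a face detector finds a face,
# broken down by demographic group in a labeled benchmark.
# Assumes a hypothetical manifest.csv with columns: image_path, group.
import csv
from collections import defaultdict

import cv2  # OpenCV's stock Haar cascade stands in for any detector under audit

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

detected = defaultdict(int)
total = defaultdict(int)

with open("manifest.csv", newline="") as f:
    for row in csv.DictReader(f):
        image = cv2.imread(row["image_path"], cv2.IMREAD_GRAYSCALE)
        if image is None:
            continue  # skip unreadable files
        faces = detector.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5)
        total[row["group"]] += 1
        detected[row["group"]] += int(len(faces) > 0)

# A large gap in detection rates between groups is one concrete signal of bias.
for group in sorted(total):
    rate = detected[group] / total[group]
    print(f"{group}: {rate:.1%} of {total[group]} images had a face detected")
```

Comparing per-group detection rates like this is the simplest form of the disparity such a tool would surface; a fuller audit would also look at recognition accuracy, not just detection.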

PROGRESS

In May, I made a call for an InCoding movement focused on creating a world with a culture of inclusion, where technology serves all of us and centers social change.

HIGHLIGHTING BIAS

Machine Neutrality?

To highlight the need for inclusive code, I developed the InCoding Manifesto video. This video is part of a larger effort to create accessible media that introduces a wide audience to the coded gaze, the concept of algorithmic bias.

On Nov 4, the Coded Gaze exhibition debuted at the Museum of Fine Arts, Boston, where the InCoding Manifesto was screened. Shortly after, I gave a talk at TEDxBeaconStreet, which is available for viewing below.

TEDx Talk

NEXT STEPS

Acknowledging algorithmic bias in public forums is necessary but not sufficient for addressing persistent issues. As a graduate researcher at the MIT Media Lab, I am working to create tools to identify bias in facial recognition software and developing a scorecard for full spectrum inclusion. To my knowledge, there is no service consistently monitoring the accuracy of facial recognition software or the diversity of the training data used by this software.
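As a rough sketch of what monitoring the diversity of training data might involve, the snippet below tallies how a training set is distributed across demographic groups and flags any group that falls below an arbitrary floor. The labels.csv file, its group column, and the 10% threshold are all hypothetical, and a real full spectrum inclusion scorecard would need far richer criteria.

```python
# Illustrative sketch only: summarize the demographic composition of a
# training set, the kind of check a "full spectrum inclusion" scorecard
# might run. Assumes a hypothetical labels.csv with one "group" label
# per training image; the threshold is made up for illustration.
import csv
from collections import Counter

MIN_SHARE = 0.10  # arbitrary floor for illustration

counts = Counter()
with open("labels.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["group"]] += 1

n = sum(counts.values())
for group, count in counts.most_common():
    share = count / n
    flag = "UNDERREPRESENTED" if share < MIN_SHARE else "ok"
    print(f"{group}: {count} images ({share:.1%}) {flag}")
```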

A coalition of 52 civil rights groups led by the American Civil Liberties Union sent a letter to the US Department of Justice stating:

Face recognition systems are powerful — but they can also be biased. Thus, we urge the Department of Justice (DOJ) Civil Rights Division (CRT) to:

1. Expand ongoing investigations of police practices, and include in future investigations an examination of whether the use of surveillance technologies, including face recognition technology, has had a disparate impact on communities of color; and

2. Consult with and advise the Federal Bureau of Investigation (FBI) to examine whether the use of face recognition technologies has had a disparate impact on communities of color.

Hopefully, the Department of Justice will heed this call. In the meantime, the Algorithmic Justice League is developing tools to monitor bias in existing facial recognition software. Alongside tools to rigorously identify bias, there also need to be ways to mitigate it. Mitigating bias is not just a technical challenge: how and when machine learning should be used is a matter of ongoing discussion, and questions of appropriate mitigation approaches remain.

The Costs of Inclusion

If bias is identified, should we stop using the software altogether or work toward minimizing the bias? If the AJL launched a #selfiesforinclusion campaign that improved facial recognition, would we inadvertently be subjecting more vulnerable populations to unfair scrutiny? How can such risks be addressed? Who should address them?

The Costs of Exclusion

If we do not improve the systems and they continue to be used, what are the implications of having innocent people identified as criminal suspects? Considering the advent of self-driving cars, can we afford to have pedestrian detection systems that fail to consistently detect a particular portion of the population? Who has a voice in deciding how we move forward?

At this point, I have more questions than answers. However, being able to audit existing facial recognition systems for bias will help make a case for full spectrum testing for evolving technology.

COLLABORATIONS

Ultimately, fighting the coded gaze will be an ongoing team effort. Facial recognition is just one area to address. In building tools for full spectrum inclusion, the Algorithmic Justice League can develop systems for monitoring bias in other areas where machine learning is used.

Anyone who cares about fairness can participate by helping raise awareness, reporting instances of suspected bias, or requesting software audits.

We are currently exploring collaborations with organizations like Bocoup, an open source tech company, that have expressed interest in helping build full spectrum inclusion tools. If you are interested in learning more or participating, do not hesitate to contact the Algorithmic Justice League. We are based at the MIT Media Lab and can be reached at www.ajlunited.org.

The ending to this story is not yet determined…

Share your reactions at http://screening.codedgaze.com

Joy Buolamwini is a poet of code on a mission to show compassion through computation. She writes about incoding and creates learning experiences to develop social impact technology. Find her on Twitter at @jovialjoy and @ajlunited, or connect on LinkedIn or Facebook.


Founder Algorithmic Justice League. www.ajl.org | www.poetofcode.com | Telling stories that make daughters of diasporas dream and sons of privilege pause