InCoding — In The Beginning Was The Coded Gaze

Joy Buolamwini
MIT MEDIA LAB
May 16, 2016

Whoever codes the system embeds her views. A call for inclusive code.

I am writing a series of articles to explore the bias embedded in code, bias that unintentionally limits who can use products or participate in research. By sharing the ongoing need for inclusive coding, or “InCoding,” and providing practical steps to make products more inclusive, I want to move closer to a world where technology reflects the diversity of its users and creators.

AT FIRST GLANCE ALL IS WELL

On the surface, my journey from Georgia Tech Computer Science undergraduate to Zambia as a Fulbright Fellow, Oxford as a Rhodes Scholar, and now MIT as a graduate student shows a possible path for diversity in tech.

As a graduate researcher at the Media Lab, I have the opportunity to push forward projects like www.code4rights.org, which engages people in creating impact apps, and www.slayplay.com, which creates music through movement. I live in a world of possibility, passion, and play.

Tonight, I am being awarded the NCWIT Collegiate Award for my work on the BloomerTech SmartBra to help women monitor heart health.

While I am happy to accept this recognition alongside five other women doing creative and impactful technical work, I want to use this honor to highlight the need to expand the conversation around inclusion in tech.

ONGOING PROBLEM — CODED GAZE

Calls for tech inclusion often miss the bias that is embedded in written code. Frustrating experiences using computer vision code on diverse faces remind me that not all eyes or skin tones are easily recognized by existing code libraries.

What are code libraries?

Programmers use libraries, which are collections of code that can be reused to save time and add pre-written functionality to apps and other kinds of software. For example, if I want to create a program that detects a face and then displays your inspiration… (why not?), instead of writing everything from scratch, I can borrow code from a face detection library to get started.
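
To make this concrete, here is a minimal sketch of what borrowing from a face detection library looks like, assuming the widely used OpenCV library and its bundled pre-trained detector (the image name is just a placeholder):

```python
# A minimal sketch (not the author's code) of borrowing face detection
# from a library instead of writing it from scratch. It assumes the
# opencv-python package; the image filename is a placeholder.
import cv2

# Load a pre-trained face detector that ships with OpenCV. Everything it
# "knows" about faces comes from the training set it was built on.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("portrait.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Ask the borrowed code where the faces are.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")

# Draw a box around each detected face, ready for an inspiration overlay.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

In a handful of lines, I inherit detection behavior that took someone else a large collection of example faces to build, along with whatever assumptions those examples baked in.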

The Aspire Mirror is a Media Lab project that enables you to look at yourself and see a reflection on your face based on what inspires you. Developed in 2015 by Joy Buolamwini.

Since my undergraduate days at Georgia Tech, I have been working with code libraries on a variety of projects. My work with robotics required the use of computer vision libraries to detect the faces of people who would interact with the robot I programmed.

So far so good. Reusable code adds efficiency to programming.

Given their convenience and reusability, over time many programmers turn to popular code libraries for common tasks. If the code being reused has bias, that bias will be reproduced by whoever uses it in a new project or research endeavor.

Unfortunately, reused code at times reflects the lack of inclusion in the tech space in non-obvious but important ways. Commonly used face detection code works in part by using training sets: collections of example images labeled as human faces.

The faces chosen for the training set shape what the code recognizes as a face. A lack of diversity in the training set leads to an inability to easily detect faces that deviate from the “normal” face the training set defines.
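
One way to surface this is to run a borrowed detector over portraits you already know contain faces and note which ones it misses. Here is a sketch of that kind of check, assuming the same OpenCV detector as above and a hypothetical folder of test images:

```python
# A sketch (again, not the author's code) that runs the same borrowed
# detector over a folder of portraits known to contain one face each.
# "test_portraits" is a hypothetical folder name.
import os
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

missed = []
for name in sorted(os.listdir("test_portraits")):
    image = cv2.imread(os.path.join("test_portraits", name))
    if image is None:
        continue  # skip anything that is not a readable image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        missed.append(name)  # a real face the detector failed to see

# If the misses cluster around particular skin tones, hairstyles, or
# lighting conditions, the detector's training set is a likely cause.
print(f"Missed {len(missed)} of the portraits: {missed}")
```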

So what?

As a result, when I work on projects like the Aspire Mirror (pictured above), I am reminded that the training sets were not tuned for faces like mine. To test the code I created for the Aspire Mirror and subsequent projects, I wore a white mask so that my face could be detected in a variety of lighting conditions.

My friend, who has bangs, also had trouble having her face detected because of her hair. By pulling her hair back, as you see below, she was able to be detected.

Fitting the Norm

While this is a temporary solution, we can do better than asking people to change themselves to fit our code. Our task is to create code that can work for people of all types.

Isn’t this just an isolated incident?

The mirror experience brings back memories from 2009. While I was working on my robotics project as an undergraduate, I “borrowed” my roommate’s face so that I could test the code I was writing. I assumed someone would fix the problem, so I completed my research assignment and moved on.

Several years later, in 2011, I was in Hong Kong taking a tour of a start-up, where I was introduced to a social robot. The robot worked well with everyone on the tour except for me. My face could not be recognized. I asked the creators which libraries they used and soon discovered that they had used the same code libraries I used as an undergraduate. I assumed someone would fix the problem, so I completed the tour and moved on.

Seven years after my first encounter with this problem, I realize that I cannot simply move on, because the problems with inclusion persist. While I cannot fix coded bias in every system by myself, I can raise awareness, create pathways for more diverse training sets, and challenge us to examine the Coded Gaze — the embedded views propagated by those who have the power to code systems.

Whoever codes the system embeds her views. Limited views create limited systems. Let’s code with a more expansive gaze.

How?

Learn about the formation of the Algorithmic Justice League.

Joy Buolamwini is a poet of code on a mission to show compassion through computation. She writes about incoding and creates learning experiences to develop social impact technology. Find her on Twitter @jovialjoy and @ajlunited, or connect on LinkedIn or Facebook.
