When AI Fails on Oprah, Serena Williams, and Michelle Obama, It’s Time to Face Truth.

Joy Buolamwini
Jul 4, 2018 · 8 min read

--

During the 1851 Women’s Rights Convention, master orator Sojourner Truth gave her iconic “Ain’t I A Woman?” speech. She made an impassioned plea for the recognition of her humanity.

Today the struggle for respect and dignity for historically marginalized groups — women, people of color, and especially women of color — continues.

Respect isn’t just about being recognized or not recognized. It is also about having agency regarding the processes that govern our lives. As companies, governments, and law enforcement agencies use AI to make decisions about our opportunities and freedoms, we must demand that we are respected as people.

Sometimes respecting people means making sure your systems are inclusive, as in the case of using AI for precision medicine; at times it means respecting people’s privacy by not collecting any data; and it always means respecting the dignity of an individual.

With the increased adoption of artificial intelligence for analyzing humans, I decided to pose Truth’s 19th century question to 21st century algorithms from five technology companies with facial analysis services: Microsoft, Amazon, Face++, IBM, and Google. To these companies I ask, “AI, Ain’t I a Woman?”

This article explores the origins of the AIAIAW poem, commonly asked questions about the work, concerns around abuse of facial analysis technology, and action steps that can be taken to keep companies more transparent and accountable.

Learn more at www.notflawless.ai — Results from May–June 2018

After seeing the failure results on the faces of some of the most iconic black women of the present day, including Oprah, Michelle Obama, and Serena Williams, as well as the results on historic figures like Ida B. Wells, Shirley Chisholm, and Sojourner Truth, I felt compelled to write the poem “AI, Ain’t I A Woman?”. I decided to share the results in the form of spoken word poetry because art can illuminate what research cannot capture. With the help of the Ford Foundation, I also made a poetic video to highlight the failures that inspired the words in the AIAIAW poem I wrote in March 2018. You can read the full text of the poem at www.notflawless.ai.

Motivation for using poetry to highlight AI failures

What does it mean when those whom you admire are somehow denigrated by the technology that is said to be the most advanced in the world?

WHY THE FOCUS ON BLACK WOMEN?

Truth’s question “Ain’t I a woman?” is fundamentally about affirming the humanity of an individual. Another way to phrase the question could be:

“Am I not a person worthy of respect?”

Respect isn’t just about being recognized or not recognized. It is also about having agency regarding the processes that govern our lives. As companies, governments, and law enforcement agencies use AI to make decisions about our opportunities and freedoms, we must demand that we are respected as people. Sometimes respecting people means making sure your systems are inclusive, as in the case of using AI for precision medicine; at times it means respecting people’s privacy by not collecting any data; and it always means respecting the dignity of an individual — regardless of how that person identifies.

Issues concerning algorithmic bias can impact anybody who encounters AI, knowingly or unknowingly. The failures on these black women are not isolated to celebrity faces, as shown in the Black Panther scorecard above, or to my own experiences, as highlighted in the original Coded Gaze scorecard that sparked this exploration. However, I decided to focus on AI-powered facial analysis technology applied to black women in the AIAIAW poem because companies performed worst on the faces of darker-skinned women in the prior research I conducted.

For my MIT Thesis — Gender Shades, I looked at how gender classification systems from leading tech companies performed across a range of skin types and genders. All systems performed better on male faces than female faces overall, and all systems performed better on lighter-skinned faces than darker-skinned faces overall. Error rates were as high as 35% for darker-skinned women, 12% for darker-skinned men, 7% for lighter-skinned women, and no more than 1% for lighter-skinned men.
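For readers who want to see what this kind of disaggregated evaluation looks like in practice, here is a minimal sketch in Python. It is not the Gender Shades code; the data, column names, and groupings are illustrative placeholders.

```python
# A minimal sketch of a disaggregated (intersectional) accuracy audit.
# Not the Gender Shades code; the rows and column names are hypothetical.
import pandas as pd

# Each row: one face image, the annotated gender, the gender returned by a
# commercial classifier, and the skin-type group of the person pictured.
results = pd.DataFrame({
    "true_gender":      ["female", "female", "male", "male", "female", "male"],
    "predicted_gender": ["male",   "female", "male", "male", "female", "male"],
    "skin_type":        ["darker", "darker", "darker", "lighter", "lighter", "lighter"],
})

# An overall error rate can hide subgroup disparities...
results["error"] = results["true_gender"] != results["predicted_gender"]
print(f"Overall error rate: {results['error'].mean():.1%}")

# ...so compute the error rate for each intersection of skin type and gender.
by_group = results.groupby(["skin_type", "true_gender"])["error"].mean()
print(by_group)
```

The point of grouping by both attributes at once is that a system can look acceptable on “women” overall and on “darker-skinned faces” overall while still failing badly on darker-skinned women specifically.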

Read New York Times Article on Gender Shades Research

In the research paper that resulted from my thesis work, I also detailed the limitations of this study — primarily that gender was reduced to a binary construct and that the dataset I created only included European and African parliamentarians, leaving out the vast majority of the world. Even with these limitations, the paper broke new ground by uncovering the largest accuracy disparities in commercial gender classification along phenotypic and demographic attributes.

WHO CARES IF SOME PEOPLE ARE MISGENDERED?

Considering that similar techniques used to teach machines to guess the gender of a face are also used to perform facial analysis tasks aimed at policing or informing hiring decisions, the consequences of mistakes can impact someone’s freedom or job prospects.

In the case of policing, the ACLU conducted an investigation showing Amazon is equipping police departments with facial recognition systems, namely Amazon Rekognition. Even if the Amazon Rekognition services that misgendered Oprah’s face, as seen in the AIAIAW video, were completely flawless, the potential for abuse on historically marginalized communities would not be reduced.

And as it stands, real-world deployments of facial analysis technology have alarming performance metrics. As I write in my New York Times op-ed: “In the case of South Wales, where Big Brother Watch reports that between May 2017 and March 2018 the faces of over 2,400 misidentified innocent people were stored by the police department without their consent, the department reported a false-positive facial identification rate of 91 percent.”

https://bigbrotherwatch.org.uk/wp-content/uploads/2018/05/Face-Off-final-digital-1.pdf
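To make that 91 percent figure concrete: a false-positive identification rate of this kind is typically computed as the number of false matches divided by the total number of matches the system flagged. The sketch below uses hypothetical counts, not the exact figures from the Big Brother Watch report.

```python
# Illustrative arithmetic only: these counts are hypothetical, not the exact
# figures in the Big Brother Watch report. A "false positive" here is an alert
# where the flagged person was not actually the person on the watch list.
total_alerts = 2640   # faces the system flagged as matches (hypothetical)
false_alerts = 2400   # flagged people who were misidentified (hypothetical)

false_positive_rate = false_alerts / total_alerts
print(f"False-positive rate: {false_positive_rate:.0%}")  # roughly 91%
```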

Critically, improved performance cannot address unfair use of even the most accurate facial analysis technology.

AREN’T COMPANIES MAKING IMPROVEMENTS?

Some companies like IBM and Microsoft are responding to criticisms about their facial analysis technology by working on technical fixes to address inaccuracies. While this is a starting point, is it really enough? Both flawed and somewhat improved facial analysis technology can be used to bolster a surveillance state and can even be applied to lethal autonomous weapons.

As technology evolves and companies announce updates, we always have to keep the potential for abuse in mind and remember that updates are not instantaneous or comprehensive. In the case of Microsoft, on June 26, 2018, the company announced improvements to its facial analysis technology in response to recent criticisms. Like a car recall, an announcement can show acknowledgement of a problem, but it does not mean the issue has been completely addressed. Just as flawed cars remain on the road after a recall is announced, systems using older versions of Microsoft services remain in the real world. And of course, the new model can have a new set of flaws.

In the screencasts below that I captured on the day Microsoft announced improvements, we see that issues persist in the demos advertising their AI capabilities.

While some clients may use Microsoft AI services on archival photos like the one of Sojourner Truth depicted in the screencast above, others will use the product on more contemporary color photos like the one below of Michelle Obama. In the Obama screencast, I look at Microsoft’s image captioning capabilities, which extend beyond facial analysis.

Even in the immediate wake of an announcement, we see some of the problems I feature in the “AI, Ain’t I A Woman?” video persist. Highlighting these flaws is one way to open a more public discussion about the increased use of facial analysis technology in our daily lives.

However, if we only focus on the technical capabilities of AI systems without addressing the ways in which the technology can be weaponized or compromised, we will not prevent abuse. Instead we will see a series of announcements pronouncing technical improvements without improving the ways in which the AI is used or governed. Subsequent steps need to focus not just on technical capabilities but on safeguards and mechanisms for transparency and accountability.

The tech industry needs leaders who are willing to take a principled stance by placing limitations on the use of the technology they create. Brian Brackeen, the CEO of Kairos, is one tech leader taking such a stance, having already refused requests from entities who wanted to apply his products to police body cameras. He penned an informative op-ed detailing why facial recognition technology is not ready for law enforcement.

“There is no place in America for facial recognition that supports false arrests and murder.

In a social climate wracked with protests and angst around disproportionate prison populations and police misconduct, engaging software that is clearly not ready for civil use in law enforcement activities does not serve citizens, and will only lead to further unrest.

Whether or not you believe government surveillance is okay, using commercial facial recognition in law enforcement is irresponsible and dangerous.” — Brian Brackeen — CEO of Kairos | Techcrunch Op-Ed

NOTFLAWLESS.AI

My aim with the “AI, Ain’t I A Woman?” video poem is to remind us that AI reflects the coded gaze — the priorities, preferences, and at times prejudices of those who shape technology. As technologists and designers, we have a responsibility to think about the social implications of our creations and work to minimize harms. One way of keeping in mind the limitations of our creations is by documenting failure cases and working with impacted communities to determine if, when, and how to advance technological innovations. The fail examples I show in the video poem are part of a practice I call FAILogging (Failed AI Logging), or F-logging for short. I’m developing a site to facilitate this practice, and I hope to see more creatives using storytelling to interrogate AI.
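To give a sense of what a single F-log entry might record, here is a minimal sketch. The structure and field names are my own illustration, not a published FAILogging schema.

```python
# A hypothetical structure for one F-log (Failed AI Logging) entry.
# Field names are illustrative only; there is no published FAILogging schema here.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class FailureLogEntry:
    service: str      # which commercial AI service was tested
    task: str         # e.g. gender classification, image captioning
    subject: str      # description of the image or person analyzed
    expected: str     # what a correct result would have been
    returned: str     # what the service actually returned
    observed_on: str  # date the failure was captured

entry = FailureLogEntry(
    service="Example facial analysis API",
    task="gender classification",
    subject="archival portrait of a dark-skinned woman",
    expected="female",
    returned="male",
    observed_on=str(date(2018, 6, 26)),
)

print(json.dumps(asdict(entry), indent=2))
```

Recording failures in a consistent, shareable form is what turns one person’s bad experience with an AI product into evidence that can be compared across services and over time.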

At www.notflawless.ai — a space for algorithmic confessions — you can share known issues to help us keep companies more accountable and sober the hype around AI. We also have to keep in mind that the responsibility of making better products and services should not fall on the shoulders of the marginalized. We need to significantly change how we approach the design, development, deployment, maintenance, and oversight of AI, and most urgently we need to have more POCs — poets of code, people of color, persons of conscience — positioned to shape the technology that is shaping society.

ADDITIONAL READING


Joy Buolamwini

Founder Algorithmic Justice League. www.ajl.org | www.poetofcode.com | Telling stories that make daughters of diasporas dream and sons of privilege pause