IBM Leads, More Should Follow: Racial Justice Requires Algorithmic Justice

Joy Buolamwini
Jun 9, 2020


The Algorithmic Justice League commends IBM’s decision to stop providing general purpose facial recognition technologies, and calls for next steps: systemic change requires resources. IBM should lead its industry peers, and each company should commit at least one million dollars to support racial justice in the tech sector.

Yesterday, IBM put action behind written principles when it announced a decision to stop providing general purpose facial recognition and analysis technology. In the announcement, IBM CEO Arvind Krishna stated:

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values.”

The Algorithmic Justice League commends this decision as a first step toward company-side responsibility to promote equitable and accountable AI. This is a welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other People of Color (BIPOC).

Learn more at www.gendershades.org

IBM’s decision builds on the work of AJL’s founder, Joy Buolamwini, who with Dr. Timnit Gebru published pathbreaking research in 2018 titled Gender Shades. This work demonstrated that commercial facial recognition technologies were biased on the basis of gender and skin type, which maps to racial bias. Follow-on work called Actionable Auditing, led by Inioluwa Deborah Raji and published with Buolamwini in 2019, also showed the importance of publicly naming accuracy disparities. In particular, the systems reviewed in these studies, including ones from IBM, Microsoft, and Amazon, were found to perform worse on darker faces than on lighter faces, worse on female faces than on male faces, and worst of all on darker female faces, highlighting the often unseen yet critical implications of intersectionality.

Unlike its industry peers, IBM responded within twenty-four hours of receiving the research results and issued a statement committing to address AI bias. With yesterday’s announcement, IBM has once again made a bold move in the right direction, and we encourage other tech companies to follow suit.

We also know that much more is needed.

To bolster public affirmations that Black Lives Matter, companies also need to commit resources to make that statement a reality. For organizations like ours to continue the hard work of both technical analysis and public communication around bias and harms in algorithmic systems, and to surface and advocate for the public interest, tech companies must help fund the capacity for external accountability.

To that end, we call on tech companies that substantially profit from AI — starting with IBM — to commit at least one million dollars each towards advancing racial justice in the tech industry. The money for this Tech Justice Fund should go directly as unrestricted gifts to support organizations that have been leading this work for years, such as Black in AI, Data for Black Lives, and the Algorithmic Justice League.

We also call on companies that continue to develop a range of facial recognition technologies to become signatories to the Safe Face Pledge, a mechanism we developed for organizations to make public commitments towards mitigating the abuse of facial recognition and analysis technology. The pledge prohibits lethal use of the technology and lawless police use, and it requires transparency in any government use.

Racial justice requires algorithmic justice. Take a stand backed with action.

Joy Buolamwini, Aaina Agarwal, and Sasha Costanza-Chock for the Algorithmic Justice League

The Algorithmic Justice League is an organization that combines art and research to illuminate the social implications and harms of artificial intelligence. Our mission is to raise public awareness about the impacts of AI, equip advocates with empirical research to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate AI bias and harms. More at https://ajlunited.org.
