Announcing the Sunset of the Safe Face Pledge

Joy Buolamwini
Feb 8, 2021 · 5 min read


[Image: Safe Face Pledge logo fades]

The Algorithmic Justice League (AJL) and the Center on Privacy & Technology at Georgetown Law are announcing the sunset of the Safe Face Pledge. The 2018 Safe Face Pledge was a historic initiative designed to prohibit lethal use and lawless police use of facial analysis technology, as well as to create transparency in government use. At the time, we defined facial analysis technology as any system that automatically analyzes human faces or heads. We now use the term facial recognition technologies (FRTs) to emphasize a plurality of uses. The Safe Face Pledge provided actionable, measurable steps organizations could take to put AI ethics principles into practice, and as we stated at launch, provided “an opportunity for organizations to make public commitments towards mitigating the abuse of facial analysis technology.”

The project was a strategic initiative meant to set explicit redlines for unacceptable use; raise awareness about the weaponization of FRTs; broaden the conversation about the harms of FRTs; and challenge companies to make actionable, measurable commitments beyond stating their AI ethics principles. We achieved success in all of these areas.

Over the course of two years, the pledge was supported by over 40 organizations, more than 100 individual champions, and three launch signatories: Robbie.ai, Yoti, and Simprints Technology. However, none of the most visible or prolific providers of FRTs signed on.

When we launched the project, we urged organizations including NEC, IBM, Microsoft, Google, Facebook, Amazon, Megvii, and Axon to sign on. In the announcement that we posted to Medium, AJL’s Founder Joy Buolamwini stated: “Without the kinds of commitments like those proposed in the Safe Face Pledge, I fear becoming complicit in the development of technology that ultimately harms people like me — the underserved majority who suffer most the adverse impacts of technology.” Buolamwini also made it clear that technical improvements to FRTs would not address the need to limit the potential harms of the technology:

“As a researcher who sits at the intersection of privilege and oppression, I cannot tackle sociotechnical issues by only focusing on the technical portion of problems that reflect systemic oppression. How AI is used will ultimately reflect which lives we choose to value and which voices we choose to hear.”

In particular, the Safe Face Pledge’s requirement that all lethal use be curtailed proved to be a stumbling block, given many firms’ desire to support law enforcement and military applications. Although at first glance it may seem that the Pledge failed to move the most powerful actors, the process in fact helped us achieve another strategic goal: we demonstrated that, even when provided with a clear path for industry leadership towards curtailing the abuse and lethal use of FRTs, powerful actors were not willing to take it.

Through their refusal to sign the Safe Face Pledge, industry leaders conclusively demonstrated that self-regulation is not enough to compel the comprehensive mitigation of abuses and lawless uses of FRTs. Instead, we continue to witness the weaponization of FRTs in life-and-death contexts, with companies like Clearview AI working hand in hand with law enforcement agencies, the immigration detention and deportation system, and military forces. This is why, in the wake of last summer’s historic mobilizations against police brutality, triggered by the murder of George Floyd, AJL’s leadership was moved to write that We Must Fight Face Surveillance to Defend Black Lives.

Even as some pay lip service to ‘ethical AI’ principles, powerful AI companies undermine, fire, gaslight, and seek to discredit their own ethical AI researchers for pointing out potential and actual harms from AI systems. When those researchers are Black women, too often companies follow the abuse and misogynoir playbook, as we saw most recently in #GebruGate (Dr. Timnit Gebru was a co-author of the Gender Shades research that galvanized the Safe Face Pledge). Transformative change will not come from inside these companies alone. Self-regulation is not enough. We will have to create much stronger mechanisms for public oversight and accountability.

To achieve that goal, we will need public education and mobilization. We believe a lot is going to happen on this front in 2021: the national release of the documentary film Coded Bias; a new Biden-Harris administration that has demonstrated a willingness to meaningfully engage the debate about FRTs specifically, and algorithmic bias and harms more broadly; and growing public awareness and action, bolstered by the tireless work of community organizers and advocates such as Data for Black Lives, the ACLU of Massachusetts, Fight for the Future, Color of Change, the Movement Alliance Project, Mijente, and many, many more.

Indeed, because of the rapidly growing movement for algorithmic justice, regulatory action is already underway. As we write these words, numerous municipal bans and moratoria on FRTs are in place. There are also model bills, such as the ACLU’s Community Control Over Police Surveillance Model Bill and its Community Control Over Police Surveillance and Militarization Model Bill, and the Electronic Frontier Foundation’s face surveillance ban model bill. The Facial Recognition and Biometric Technology Moratorium Act is making its way through the U.S. Congress, among many other initiatives in the USA and around the world.

We can say with confidence that the Safe Face Pledge has sunset, but a new day is about to dawn. As Amanda Gorman poetically reminded us on inauguration day, we can choose to be the light that awakens this dawn. Our movement as a whole is gaining strength. AJL as an organization has assembled an absolutely phenomenal Advisory Board. We are growing our team, and building partnerships, coalitions, and community. We are supporting frontline organizers, like the Brooklyn tenants who successfully defeated their landlord’s efforts to install FRT locks in their building. We are preparing for the national conversation about the Coded Bias film, which was recently nominated for an NAACP Image Award in the “Outstanding Documentary” category and shows the real-world impact of AI harms while chronicling the origins of AJL. We are developing a new project, Community Reporting of Algorithmic System Harms (CRASH), that will focus our energy on how to systematically expose, challenge, and redress the many forms of algorithmic harm that hit already marginalized communities hardest.

Finally, we are creating a community of AJL Agents, and if you have read this far, we invite you to join us and become an Agent of the Algorithmic Justice League! The future is in all of our hands. If you have a face, you have a place in this conversation. You have a voice and a choice, and we choose to march towards algorithmic justice.

— Authored by Joy Buolamwini, Founder, and Sasha Costanza-Chock, Senior Research Fellow, Algorithmic Justice League

