Racial Inequality and Inequity in Facial Recognition Technology
An imbalance that reflects underlying patterns in society
Most discussions of race in relation to artificial intelligence that I see centre on 'racial bias'. Some organisations have, of course, been called out for racism over actions taken on the basis of artificial intelligence algorithms. A typical example is in the legal sphere, where risk-assessment tools used by judges and prosecutors make decisions based on historical data that disadvantages people with a skin colour other than white. One would hope this is not the case in commercial facial recognition technology, yet it has been.
In Oslo, my hometown far from the United States, the message from abroad still resonates strongly: we are one human race, and no one should be killed, violated, or disadvantaged because of their skin colour.
Artificial intelligence is the racial bias of humans with decisions implemented at scale. I keep coming back to this chilling, yet important video by Joy Buolamwini: “Ain’t I a woman.” I truly recommend watching it, and thinking twice about decisions made in facial recognition technology.
Just a few years ago, when Joy began focusing on facial recognition technology, systems from most large technology companies were failing to identify women of colour in particular.
In the years since, Joy Buolamwini has worked consistently on a project called the Algorithmic Justice League (AJL).
The Algorithmic Justice League is an organization that combines art and research to illuminate the social implications and harms of artificial intelligence. Our mission is to raise public awareness about the impacts of A.I., equip advocates with empirical research to bolster campaigns, build the voice and choice of most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate A.I. bias and harms. More at https://ajlunited.org.
On the 3rd of June 2020 (three days prior to this article) she wrote a piece on Medium.
We Must Fight Face Surveillance to Protect Black Lives
An urgent letter from the Algorithmic Justice League
In this article she talks about the deployment of a wide range of surveillance technologies by several federal agencies in the United States.
She returns to her earlier point, which remains pervasive: many of these systems have demonstrated racial bias, performing worse on darker skin.
In the article, in addition to giving an overview of the issue, she also posted a series of resources, which I repost here:
- Algorithmic Justice League’s primer on facial recognition technologies
- ACLU’s Technology 101 on surveillance technology
- Data for Black Lives’ statement of solidarity with Black Minnesotans
- Electronic Frontier Foundation face surveillance short overview
- Questioning directed by Rep. Alexandria Ocasio-Cortez at the May 2019 Congressional hearing on facial recognition technology
- Community Control Over Police Surveillance Model Bill by the ACLU
- Community Control Over Police Surveillance and Militarization Model Bill by the ACLU
- Face surveillance ban model bill by the Electronic Frontier Foundation
Congressional hearings on facial recognition technology
- Part I: Its Impact on our Civil Rights and Liberties, May 2019
- Part II: Ensuring Transparency in Government Use, June 2019
- Part III: Ensuring Commercial Transparency and Accuracy, January 2020
- Press Pause on Face Surveillance campaign by the ACLU of Massachusetts
- Stop Facial Recognition on Campus campaign by Fight for the Future and Students for Sensible Drug Policies
- Electronic Frontier Foundation’s About Face Toolkit
- Students for Sensible Drug Policies Ban Facial Recognition on Campus Toolkit
Recent research shows that gender, racial, and skin colour biases can be propagated by commercial facial recognition technology.
One problem lies in how the categories themselves are defined.
Many terms such as gender, race, and ethnicity, are socially constructed categories, differ across societies, cultures, and over time, and have no universally accepted meaning.
Nevertheless, practitioners may attempt to categorise individuals into groups such as a binary 'male' and 'female' based on their own notions of these categories.
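The way such bias is typically surfaced is through a disaggregated audit: evaluating a system's accuracy separately for each intersectional subgroup, as the Gender Shades study did, rather than reporting a single overall number. Below is a minimal sketch of that idea. The subgroup labels and the pass/fail records are entirely illustrative, not real benchmark data, and a real audit would use far larger samples and carefully sourced ground truth.

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup label, correctly identified?).
# These numbers are invented purely to illustrate the audit mechanics.
records = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned female", True), ("lighter-skinned female", True),
    ("darker-skinned male", True), ("darker-skinned male", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
]

# Tally totals and correct identifications per subgroup.
totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in records:
    totals[group] += 1
    if ok:
        correct[group] += 1

# Per-subgroup accuracy: a single aggregate score would hide these differences.
rates = {g: correct[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: accuracy {rate:.0%}")

# The gap between the best- and worst-performing subgroup is the
# headline figure in audits of this kind.
gap = max(rates.values()) - min(rates.values())
print(f"largest accuracy gap: {gap:.0%}")
```

The point of the sketch is that the overall accuracy here is 75%, which sounds acceptable, while the disaggregated view shows darker-skinned subgroups faring markedly worse, which is exactly the pattern the research on commercial systems documented.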
There is racial inequity as well: "…as an objective reference to an imbalance that reflects underlying patterns in society that include, for instance: racial attitudes/bias and that act subtly to undermine and exclude; socio-economic systems that embed the legacy of slavery and legal discrimination."
“…inequity can only be overcome by working on changing these systems, and these are a public — governmental — responsibility.” (allsides.com)
As such, it is important to consider these issues from a systemic perspective, attentive to how the categories themselves are constructed.
Why do we construct racial categories when we are one human race?
What good does it do?
Why does the system struggle with identifying certain faces, to the point of some being invisible or disadvantaged?
All artificial intelligence companies working with facial recognition technology should consider these questions closely and ensure they act equitably, taking concerns about racism into account in the development of their technology.
I thought this was worth sharing, and I suggest you visit the Algorithmic Justice League.
Algorithmic Justice League - Unmasking AI harms and biases
Another recommendation is the spoken-word poem released by Joy Buolamwini.
Hope you enjoyed this article and that it gave you some helpful directions to explore the issue further.
We must ensure that systemic, pervasive inequalities and inequities are eliminated, and we must start today.
Black lives matter. Underlying patterns in society and structural inequalities are often replicated in facial recognition technology. We have to ensure this structural violence is not replicated and automated.
This is #500daysofAI and you are reading article 368. I am writing one new article about or related to artificial intelligence every day for 500 days. Towards day 400 I am writing about artificial intelligence and racial inequality.