Atlanta, GA, USA. Photo by @josephyates_

Racial Inequality and Inequity in Facial Recognition Technology

An imbalance that reflects underlying patterns in society

Alex Moltzau 莫战
Jun 6

Most often when I see discussions of race in relation to artificial intelligence, the framing is one of 'racial bias'. Some systems have, of course, been called out for racism, depending on the actions taken on the basis of their algorithms' outputs. A typical example is the legal sphere, where decisions have been made on the basis of historical data that disadvantages people with a skin colour other than white; such risk assessment tools have been used by judges and prosecutors. In commercial facial recognition technology one would hope this is not the case, yet it has been.

In Oslo, my hometown far from the United States, the message from abroad still resonated strongly: we are one human race, and no one should be killed, violated, or disadvantaged because of their skin colour.


Artificial intelligence is the racial bias of humans, with decisions implemented at scale. I keep coming back to this chilling yet important video by Joy Buolamwini: "AI, Ain't I a Woman?" I truly recommend watching it, and thinking twice about decisions made in facial recognition technology.

Just a few years ago, when Joy Buolamwini began focusing on facial recognition technology, the systems of most large technology companies were failing to identify women of colour in particular.

In the years that have followed, Joy Buolamwini has worked consistently on a project called the Algorithmic Justice League (AJL).

The Algorithmic Justice League describes itself as follows: "The Algorithmic Justice League is an organization that combines art and research to illuminate the social implications and harms of artificial intelligence. Our mission is to raise public awareness about the impacts of A.I., equip advocates with empirical research to bolster campaigns, build the voice and choice of most impacted communities, and galvanize researchers, policymakers, and industry practitioners to mitigate A.I. bias and harms." More at https://ajlunited.org.

On the 3rd of June 2020 (three days before this article) she wrote a piece on Medium.

In this article she talks about the deployment of a wide range of surveillance technologies by several federal agencies in the United States.

She returns to her earlier point, which is still pervasive: many of these systems have demonstrated racial bias, with lower performance on darker skin.

In that article, in addition to giving an overview of the issue, she also posted a series of resources, which I will repost here:

Educational materials

Model legislation

Congressional hearings on facial recognition technology

Ongoing campaigns

Organizing toolkits

Recent research shows that gender, racial, and skin colour biases can be propagated by commercial facial recognition technology.
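To make concrete how such bias is typically measured, here is a minimal sketch of a disaggregated evaluation in Python, in the spirit of Buolamwini and Gebru's Gender Shades audit: rather than reporting a single overall accuracy, performance is broken down by intersectional subgroup. The data, labels, and numbers below are hypothetical placeholders for illustration, not output from any real system.

```python
# Minimal sketch of a disaggregated audit: instead of reporting one
# overall accuracy, report accuracy separately per intersectional group.
# All records below are hypothetical placeholders.

from collections import defaultdict

# Each record: (true_gender, skin_type, predicted_gender).
# "lighter"/"darker" mirrors the Fitzpatrick-scale grouping
# (types I-III vs IV-VI) used in the Gender Shades study.
predictions = [
    ("female", "darker", "male"),    # misclassified
    ("female", "darker", "female"),
    ("female", "lighter", "female"),
    ("male", "darker", "male"),
    ("male", "lighter", "male"),
    ("female", "darker", "male"),    # misclassified
]

totals = defaultdict(int)
correct = defaultdict(int)

for true_gender, skin_type, predicted in predictions:
    group = (true_gender, skin_type)
    totals[group] += 1
    if predicted == true_gender:
        correct[group] += 1

# A single aggregate number hides the gap...
overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.0%}")

# ...while the per-group breakdown exposes it.
for group in sorted(totals):
    rate = correct[group] / totals[group]
    print(f"{group[0]:>6} / {group[1]:<7}: {rate:.0%} ({totals[group]} samples)")
```

In this toy example the overall accuracy looks reasonable, while the breakdown shows the lowest accuracy for darker-skinned women. That is exactly the pattern the Gender Shades audit found in commercial gender classifiers, a gap that a single aggregate number would have hidden.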

One problem lies in how the categories themselves are defined.

Terms such as gender, race, and ethnicity are socially constructed categories; they differ across societies, cultures, and time, and have no universally accepted meaning.

Nevertheless, practitioners may attempt to categorise individuals into groups such as a binary 'male' and 'female' based on their own notions of categories.

There is racial inequity as well: "…as an objective reference to an imbalance that reflects underlying patterns in society that include, for instance: racial attitudes/bias and that act subtly to undermine and exclude; socio-economic systems that embed the legacy of slavery and legal discrimination."

“…inequity can only be overcome by working on changing these systems, and these are a public — governmental — responsibility.” (allsides.com)

As such, it is important to consider these issues from a systemic perspective, attentive to how the categories themselves are constructed.

Why do we construct racial categories when we are one human race?

What good does it do?

Why does the system struggle with identifying certain faces, to the point of some being invisible or disadvantaged?

All artificial intelligence companies working with facial recognition technologies should ask themselves these questions closely and ensure that they act equitably, taking concerns about racism into account in the development of their technology.

I thought this was worth sharing, and I suggest you visit the Algorithmic Justice League.

Another recommendation is the poem released by Joy Buolamwini.

I hope you enjoyed this article and that it gave you some helpful directions for exploring the issue further.

We must ensure that systemic, pervasive inequalities and inequities go away, and we must start today.

Black lives matter. Underlying patterns in society and structural inequalities are often replicated in facial recognition technology. We have to ensure that this structural violence is not replicated and automated.

This is #500daysofAI and you are reading article 368. I am writing one new article about or related to artificial intelligence every day for 500 days. Towards day 400 I am writing about artificial intelligence and racial inequality.
