The Ethics of Facial Recognition Technology

Shubhi Upadhyay
Kigumi Group
Feb 24, 2023

Introduction

We are all familiar with facial recognition technology (FRT) in various ways: we may use it every day to unlock our phones, log in to social media, or play video games. FRT, at its simplest, is a type of artificial intelligence used to analyze and identify human faces.

But as it becomes more ubiquitous, it becomes increasingly important to ask ourselves: how do we ensure that our security, privacy, and civil liberties remain intact?

The Importance of FRT Regulation

One issue with FRT is its unregulated use and potential impact on data privacy. It can be used to track the movements, behaviors, and emotions of individuals, and that data can then be sold to third parties for advertising and other purposes without each individual’s knowledge or consent. A recent example is the controversy surrounding James L. Dolan, chief executive of MSG Entertainment, and the company’s alleged unethical use of FRT. According to a New York Times article published in December 2022, MSG Entertainment, the owner of Madison Square Garden (a popular sports arena and entertainment venue in New York City), has been using FRT to identify and turn away people deemed unwelcome by Dolan, comparing the faces of individuals entering the arena against photos of people previously banned from the venue. MSG Entertainment’s decision to use the technology against its critics raises a broader ethical question about influential individuals and organizations wielding FRT to settle personal grievances and commit similar abuses of power. If MSG Entertainment is using FRT to ban its critics, what could come next? Would more powerful individuals take advantage of the technology to target their own personal critics and rivals?

This question becomes even more serious in light of the fact that FRT is still largely unregulated, both domestically within the U.S. and globally. According to a CNN article published in August 2022, there are currently no federal laws in the U.S. governing the use of FRT, so states, cities, and counties are left to voluntarily enact their own regulations. Given this opt-in approach, FRT remains legal and largely unregulated in the majority of U.S. states, including New York, where Madison Square Garden is located.

Madison Square Garden in New York City (Photo by Pedro Bariak on Unsplash)

How FRT Currently Perpetuates Racial and Gender Bias

Moreover, the lack of regulation is especially concerning because these technologies are prone to racial and gender bias.

A study conducted by researchers at MIT and Stanford found that three commercial facial analysis programs misclassified the gender of light-skinned men at most 0.8% of the time, while for dark-skinned women the error rate climbed above a whopping 34%.

Such findings indicate that there are still significant gaps in how facial recognition models are developed and tested. To build models that are accurate for everyone, it is important to train them on demographically diverse datasets, including women and people of color, from the development stage onward. With a more representative dataset, an FRT model should, in theory, be better able to accurately recognize and classify people of all backgrounds and appearances. Additionally, the three models tested in the study were general-purpose algorithms, built to match faces in photos and assess physical characteristics. But what about the dangers of FRT being used in more serious, life-altering situations, such as law enforcement surveillance?
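The kind of disparity the study measured can be made concrete with a small sketch. The code below is illustrative only: the classifier results are synthetic numbers chosen to echo the reported gap, not data from the study. The point is that error rates must be computed separately for each demographic subgroup, because an aggregate accuracy figure can look excellent while one group fares far worse.

```python
# Sketch: disaggregated evaluation of a hypothetical gender classifier.
# Each record pairs a demographic subgroup with the true and predicted
# labels; error rates are computed per subgroup rather than averaged
# over the whole test set, where disparities can hide.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Synthetic results, chosen to echo the gap the study reported:
test_results = (
    [("lighter-skinned men", "M", "M")] * 992
    + [("lighter-skinned men", "M", "F")] * 8
    + [("darker-skinned women", "F", "F")] * 660
    + [("darker-skinned women", "F", "M")] * 340
)

for group, rate in error_rates_by_group(test_results).items():
    print(f"{group}: {rate:.1%} error rate")
```

Averaged together, these two subgroups would show a respectable overall error rate, which is exactly why the per-group breakdown matters.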

A report published by Georgetown Law in October 2019 found that African Americans are disproportionately likely to be identified by facial recognition systems deployed by police. This disparity arises because the databases these systems search are built largely from mugshots taken during past arrests, and because Black Americans are arrested at higher rates than their white counterparts, they are overrepresented in those databases. The more often a group appears in the gallery a system searches, the more often its members will be returned as matches, correct or not, which perpetuates racial bias. Therefore, the use of FRT by police can have drastic consequences, including disproportionately high numbers of false arrests of Black Americans.
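The overrepresentation dynamic described above can be sketched numerically. The figures below are hypothetical: the sketch assumes, for simplicity, that every gallery comparison carries the same small, independent false-match probability, so the only thing that differs between groups is how many of their members sit in the database being searched.

```python
# Sketch: why overrepresentation in a search database raises false-match
# exposure. Assumes (hypothetically) a uniform, independent false-match
# rate per gallery comparison.

def false_match_probability(entries_in_database, per_comparison_fmr=1e-6):
    """Chance a probe falsely matches at least one entry from a group."""
    return 1 - (1 - per_comparison_fmr) ** entries_in_database

# Hypothetical gallery skewed by arrest-record composition:
skewed_gallery = {"overrepresented group": 300_000, "other group": 100_000}

for group, n in skewed_gallery.items():
    print(f"{group}: {false_match_probability(n):.1%} chance of a false hit")
```

Even with identical per-comparison accuracy, the group with three times as many database entries faces a substantially higher chance that one of its members is falsely flagged, which is the mechanism behind the disparity the Georgetown report describes.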


Potential Benefits of FRT

Despite all of these negative effects, there are still benefits to using FRT. For example, airports use it to expedite security screening and check travelers in faster. It can also be used in missing persons cases to help reunite victims of crime and natural disasters with their loved ones.

It is important to remember that FRT itself isn’t inherently flawed; rather, the problems lie in how it is developed and implemented.

Fortunately, there are researchers and companies working to reduce its bias by exposing models to a broader range of data and increasing diversity in the teams that create such models. Additionally, local regulatory efforts are underway to make the use of FRT more ethical, such as San Francisco’s ban on police use of the technology.

Conclusion

To conclude, while FRT has many potential benefits, it also poses significant ethical concerns. Without sufficient regulation and ethical development practices, it could violate our privacy and civil liberties, perpetuate racial and gender biases, and be abused by powerful individuals or organizations. As we continue to develop and use this technology, it is important that we remain vigilant and take steps to ensure that it is used ethically and responsibly.
