Sexism in Facial Recognition Technology

Until AI has completely eliminated human biases, can its results be considered any more trustworthy than a human’s?

Lumen Database Team
Berkman Klein Center Collection
3 min read · May 5, 2021


Photo by Electronic_Frontier_Foundation (CC BY 2.0)

Facial recognition technology is becoming more powerful and more ubiquitous seemingly every day. In January 2021, a study found that a facial recognition algorithm could be better at deducing a person’s political orientation than either human judgment or a personality test. And earlier this week, the Dubai airport rolled out an iris scanner that verifies travelers’ identities, eliminating the need for any human interaction when entering or exiting the country.

The use of facial recognition by law enforcement agencies has become common practice, despite increasing reports of false arrests and wrongful jail time. Beyond the broader objections to the technology being used at all, including fears of mass surveillance and invasion of privacy, there are flaws within facial recognition systems themselves that lead to inaccurate results. One major challenge for this still-burgeoning technology is gender-based inaccuracy.

Research has indicated that women are 18% more likely than men to be misidentified by facial recognition (a relative difference, not a gap of 18 percentage points). One line of research found that Amazon’s Rekognition software recognized white women’s faces with 92.9% accuracy, but darker-skinned women’s faces with only 68.9% accuracy. Similarly, a study conducted at the University of Washington revealed that an image-recognition tool was 68% more likely to predict that a person pictured cooking in a kitchen was a woman. These patterns exhibit clear sexism in AI, and the use of such technologies for law enforcement is likely to disproportionately affect marginalized groups, including people of genders other than male and female.

It is true that once a leap in technology has been made, hoping to wipe it off the face of the earth is a little like trying to put a genie back in the bottle. A more realistic step in the right direction would be, first, to train AI on representative and better datasets, and second, to deploy the technology only under substantial democratic oversight.

Data scientists at the MIT Media Lab have noted that when they have trained AI on more diverse data, the results have been more accurate and less discriminatory. Training AI on diverse, representative datasets would therefore be a great start toward reversing the prevalent biases, as the sketch below illustrates. These technologies would also benefit if providers of facial recognition software were transparent about their underlying workings.
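One way to see why diverse data matters is a disaggregated evaluation: measuring accuracy separately for each demographic group rather than in aggregate, which is how the disparities cited above were uncovered. The sketch below is a minimal illustration in Python; the records, group labels, and layout are hypothetical, not drawn from any particular vendor’s system.

```python
from collections import defaultdict

# Hypothetical audit records: (group, true_label, predicted_label).
# In a real audit these would come from a labeled, demographically
# balanced benchmark, not from the model's own training data.
records = [
    ("lighter-skinned men",   "male",   "male"),
    ("lighter-skinned women", "female", "female"),
    ("darker-skinned women",  "female", "male"),  # one misclassification
    ("darker-skinned women",  "female", "female"),
]

def accuracy_by_group(records):
    """Compute classification accuracy separately for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

for group, accuracy in accuracy_by_group(records).items():
    print(f"{group}: {accuracy:.1%}")
```

An aggregate accuracy score over these records would hide exactly the gap that the per-group breakdown exposes.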

This transparency, if accompanied by democratic oversight of its application, could help strike a better balance as to when, where, how, and to what end facial recognition may be used. For example, such oversight could take the form of regulations that set an industry standard an AI must meet before commercial application; a hypothetical version of such a gate is sketched below. However, regardless of how many leaps facial recognition takes in the coming years, a serious, deliberate discussion is necessary to determine whether facial recognition should be used by law enforcement at all, because the consequences of error are so grave. The European Commission will release its first legislative proposal on AI later this year, and it will be interesting to see how the proposal attempts to regulate AI applications.
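To make the idea of a pre-deployment standard concrete, the check below gates release on two conditions: every demographic group’s accuracy must clear a minimum floor, and the gap between the best- and worst-served groups must stay within a bound. The function name, thresholds, and structure are our own invention for illustration, not any existing regulation or API.

```python
def meets_standard(group_accuracy: dict,
                   min_accuracy: float = 0.95,
                   max_disparity: float = 0.02) -> bool:
    """Hypothetical pre-deployment gate: every group must clear a
    minimum accuracy, and the best-to-worst gap must stay small."""
    worst = min(group_accuracy.values())
    best = max(group_accuracy.values())
    return worst >= min_accuracy and (best - worst) <= max_disparity

# With the Rekognition figures cited above, such a gate would fail:
print(meets_standard({"white women": 0.929,
                      "darker-skinned women": 0.689}))  # False
```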

Former UN Special Rapporteur David Kaye’s warning that AI will enforce bias is more pertinent now than ever before. However, until the day that AI has completely eliminated human biases, can its results be considered any more trustworthy than a human’s?

About the author: Shreya is an Employee Fellow at the Berkman Klein Center, where she works on the Lumen Project. She is a passionate digital rights activist and uses her research and writing to raise awareness about how digital rights are human rights.
