Biases in Facial Recognition

Theodoremui
6 min read · Jul 31, 2021


This blog explores the many biases in facial recognition technology that permeate our society.

What is Facial Recognition?

Facial recognition is a technology that identifies an individual based on their face. Facial recognition systems are typically used in ID-verification services, such as Face ID on Apple's iPhones and iPads, but they are also used in hidden cameras that match the faces of people walking past against faces on a watchlist. The scary part is that these watchlists can include anyone, including innocent citizens, and the images can come from anywhere.

How does Facial Recognition work?

The process is divided into four steps.

1. Detection

The system captures an image and detects any faces in it.
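To make this concrete, here is a minimal sketch of face detection using OpenCV's pre-trained Haar-cascade detector. The file name is a placeholder, and the library is only an illustration; commercial systems use their own (usually deep-learning-based) detectors.

```python
import cv2

# Load an image and convert it to grayscale for the detector
image = cv2.imread("group_photo.jpg")   # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# OpenCV ships a pre-trained frontal-face Haar cascade
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Returns one (x, y, width, height) bounding box per detected face
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```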

2. Analysis

The model analyzes the subject's face, reading its geometry to distinguish key features such as the eyes, cheekbones, and lips.
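As a rough illustration, the open-source face_recognition library (built on dlib) can extract these facial landmarks. The file name is a placeholder, and this library only stands in for the proprietary systems discussed below.

```python
import face_recognition

image = face_recognition.load_image_file("portrait.jpg")   # placeholder file name

# One dictionary of named landmark point lists per face found in the image
all_landmarks = face_recognition.face_landmarks(image)

for landmarks in all_landmarks:
    # Keys include 'left_eye', 'right_eye', 'nose_bridge', 'top_lip', 'bottom_lip', 'chin'
    print("left eye points:", landmarks["left_eye"])
```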

3. Conversion

The model converts the visual information into a numerical code based on your facial features. This code is called your faceprint and, like a fingerprint, it is unique to you.
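Here is a small sketch of that conversion step with the same open-source face_recognition library, whose "faceprint" is a 128-dimensional embedding; the file name is again a placeholder.

```python
import face_recognition

image = face_recognition.load_image_file("portrait.jpg")   # placeholder file name

# One 128-dimensional vector (the "faceprint") per face found in the image
encodings = face_recognition.face_encodings(image)

if encodings:
    faceprint = encodings[0]
    print(faceprint.shape)   # (128,); this vector is what gets stored and compared
```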

4. Matching

The faceprint is compared against a large database containing millions of other faces. The model matches your face against these entries, and if it finds a match, a determination is made. However, training sets are often limited and composed mostly of white male faces. Machine learning models learn from data collected from the real world, so a model can inherit, or even amplify, pre-existing biases in that data based on race, gender, religion, or other characteristics. This makes the model prone to biased errors against certain groups, such as women and African-Americans.
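Below is a minimal sketch of the matching step, assuming a tiny hypothetical watchlist and the same open-source library: each stored faceprint is compared to the probe faceprint by distance, and a threshold decides what counts as a match.

```python
import face_recognition

# Hypothetical watchlist: pre-computed faceprints of known individuals
watchlist_files = ["person_a.jpg", "person_b.jpg"]   # placeholder file names
watchlist = [
    face_recognition.face_encodings(face_recognition.load_image_file(f))[0]
    for f in watchlist_files
]

# Faceprint of the person we are trying to identify
probe = face_recognition.face_encodings(
    face_recognition.load_image_file("unknown.jpg"))[0]

# Smaller distance means more similar; the tolerance threshold decides match vs. no match
distances = face_recognition.face_distance(watchlist, probe)
matches = face_recognition.compare_faces(watchlist, probe, tolerance=0.6)
print(distances, matches)
```

In a real deployment the watchlist holds millions of faceprints, and the choice of threshold trades false matches against misses; when the underlying model encodes some groups less accurately, that is exactly where the biased error rates described above show up.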

Stories

Below is a series of stories that highlight biases in facial recognition systems.

Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots

https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28

The ACLU conducted a test of Amazon’s face recognition system, Amazon Rekognition. It incorrectly matched 28 members of Congress with mugshots of other people who had been arrested. Nearly 40 percent of the false matches were of people of color, including six members of the Congressional Black Caucus. Yet Amazon is aggressively marketing its face surveillance technology to the police. Amazon’s service can identify up to 100 faces in a single image, track people in real time through surveillance cameras, and scan footage from body cameras.

List of members of Congress falsely matched by Rekognition:

Senate
John Isakson (R-Georgia)
Edward Markey (D-Massachusetts)
Pat Roberts (R-Kansas)

House
Sanford Bishop (D-Georgia)
George Butterfield (D-North Carolina)
Lacy Clay (D-Missouri)
Mark DeSaulnier (D-California)
Adriano Espaillat (D-New York)
Ruben Gallego (D-Arizona)
Thomas Garrett (R-Virginia)
Greg Gianforte (R-Montana)
Jimmy Gomez (D-California)
Raúl Grijalva (D-Arizona)
Luis Gutiérrez (D-Illinois)
Steve Knight (R-California)
Leonard Lance (R-New Jersey)
John Lewis (D-Georgia)
Frank LoBiondo (R-New Jersey)
David Loebsack (D-Iowa)
David McKinley (R-West Virginia)
John Moolenaar (R-Michigan)
Tom Reed (R-New York)
Bobby Rush (D-Illinois)
Norma Torres (D-California)
Marc Veasey (D-Texas)
Brad Wenstrup (R-Ohio)
Steve Womack (R-Arkansas)
Lee Zeldin (R-New York)

Black People labeled ‘animals’ by Apple and Google facial recognition

Joy Buolamwini is a Ghanaian-American computer scientist and digital activist based at the MIT Media Lab. She founded the Algorithmic Justice League, an organization that works to challenge bias in decision-making software.

One day, Buolamwini picked up her phone and waited for it to scan her face, but it never unlocked. Confused, she did some research to find out what had happened. If the system didn’t recognize her as a human, what did it recognize her as? The answer: a gorilla. Yes. A gorilla.

Sadly, this is not uncommon. Image-recognition systems have repeatedly labeled darker-skinned people as animals.

“If we fail to make ethical and inclusive artificial intelligence we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.” — Joy Buolamwini

2.

https://algorithmwatch.org/en/apple-google-computer-vision-racist

Roy A., a Berlin-based lawyer who campaigns against discrimination, reported that some pictures depicting Black people appeared under the label “Animal” on his iPhone 8.

Apple added the automated labeling feature to iOS in 2016. According to the company, the process takes place entirely on the user’s device. Perhaps due to the phone’s limited resources, it labels images erratically. An AlgorithmWatch colleague, who is white, reported that her children had been labeled “Animal” in the past.

The behavior is not unique to Apple’s software. AlgorithmWatch processed two similar images, differing mainly in the skin tone of the people depicted, in Google Vision, an online image-labeling service. The image depicting characters with darker skin tones was labeled “Mammals”; the other was not. Other labels, including “Human Body”, were similar for both pictures.

Google’s image-labeling services have a history of producing discriminatory and racist outputs. In 2015, Google Photos labeled individuals with dark skin tones “gorillas”. The company apologized but did not solve the problem. Instead, it simply stopped returning the “gorilla” label, even for pictures of that specific mammal.

Labeling Black characters “Mammals” or “Animals” is not a bug that can be “fixed”. It is the continuation of over two centuries of institutional racism.

Facial recognition incorrectly labeled an African-American man as a criminal

https://www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police/

On January 9, 2020, Detroit police drove to the suburb of Farmington Hills and arrested Robert Williams in his own driveway. Williams, a Black man, was accused of stealing watches from Shinola, a luxury store. He was held overnight in jail.

During questioning, an officer showed Williams a picture of the suspect. He rejected the claim. “This is not me,” he told the officer. “I hope y’all don’t think all black people look alike.” He says the officer replied: “The computer says it’s you.”

Williams’s wrongful arrest, first reported by the New York Times in August 2020, was based on a bad match from the Detroit Police Department’s facial recognition system. Two more false arrests have since been made public; both men are also Black, and both have taken legal action.

Now Williams is following in their path and going further, not only suing the department over his wrongful arrest but also trying to get the technology banned.

Progress being made

https://www.forbes.com/sites/isabeltogoh/2020/06/09/ibm-will-no-longer-offer-or-develop-facial-recognition-software-in-pursuit-of-racial-justice-reform/?sh=5057a2f71192

IBM’s CEO told Congress on Monday that the tech firm will no longer offer, develop, or research facial recognition technology, as it strongly opposes the technology’s use for “mass surveillance, racial profiling, violations of basic human rights and freedom,” The Verge reports.

Arvind Krishna addressed the letter to Democratic members of Congress, including Representative Jerrold Nadler and Senator Kamala Harris. In the letter, Krishna said IBM is seeking to work with Congress on issues of police reform and on “holding police more accountable for misconduct.”

The technology is increasingly used by law enforcement. But studies in recent years, including work by researchers Joy Buolamwini and Timnit Gebru, have revealed the extent of bias in facial recognition technology and how it leads to disproportionate targeting of people on the basis of race or ethnicity, as well as violations of privacy. As a result, companies providing the technology have come under increasing legal pressure, including Clearview AI, while Facebook this year agreed to settle a lawsuit for $550 million, the largest payout related to a privacy case yet, after it was accused of storing users’ biometric data without consent.

2.

https://fortune.com/2021/06/21/ban-facial-recognition-in-all-publicly-accessible-spaces-europe-privacy-regulators-urge-edps-edpb-ai-regulation/

Europe’s privacy regulators have called for a full ban on facial recognition systems that monitor people in publicly accessible spaces. The European Commission had proposed a regulation that would place strict safeguards on the use of artificial intelligence, but the regulators argue that such A.I. threatens EU citizens’ fundamental rights and needs to be reined in more tightly and broadly. They also recommended banning A.I. systems that infer emotions or “categorize individuals into clusters based on ethnicity, gender, political or sexual orientation.” Privacy campaigners had previously criticized the proposed regulation as leaving the door open for discriminatory surveillance.

Conclusion

In this blog, we have covered the basics of facial recognition, explored real-life biases in these systems, and seen how the world is making progress toward restoring fairness in society.
