The Increasing Gap between AI Innovation and AI Ethics: Facial Recognition

Mapping out the facial recognition landscape

Tanishq Sandhu
Fair Bytes


We unlock iPhones by scanning our faces. Surveillance systems help avert threats to national security. Genetic diseases such as DiGeorge syndrome can be detected through facial analysis.

Have you ever stopped to notice how technology keeps accelerating at a remarkable pace?

Artificial intelligence (AI) and advanced computing capabilities are driving innovation and ingenuity in the modern era. One area of AI that has seen especially rapid advances is facial recognition technology and the computer vision models that support it.

Unfortunately, an industry-wide trend has taken hold in AI: technological innovation has far outpaced the adoption of AI ethics and regulation, and facial recognition has been no exception. The bias in these systems has resulted in real harm to society.

Luke Stark, Assistant Professor at the University of Western Ontario, went so far as to title his paper “Facial recognition is the plutonium of AI” (Stark 2019), comparing facial recognition to the toxicity of nuclear waste.

Facial recognition technologies manifest bias in 3 main ways:

  1. Age Bias
  2. Gender Bias
  3. Racial Bias

Age Bias

The first area in which facial recognition software falls ethically short is ageism, or discrimination based on an individual’s age.

A study by the National Institute of Standards and Technology (NIST) tested 189 algorithms from 99 companies, organizations, and developers. The empirical evidence from these tests led NIST to conclude that age negatively affects the accuracy of the computer vision algorithms behind these programs. More specifically, the study found that children and the elderly were more heavily biased against and experienced mismatches more frequently than the rest of the population.

Think about this: the children and elderly who experience this greater bias account for half of the world’s population (World Population Prospects — Population Division 2019).

In other words, facial recognition technology is more likely to mismatch one half of the population than the other on the basis of age. Unfortunately, ageism is not the sole area in which computer vision models display bias.

Gender Bias

Sexism is defined as discrimination based on one’s sex or gender.

Not only do facial recognition technology and its supporting computer vision models have trouble identifying the traditional, binary genders of male and female, but they also rarely (if ever) account for non-binary genders.

Two studies conducted at the Massachusetts Institute of Technology (MIT) explored these very technological prejudices.

The first study, by Timnit Gebru and Joy Buolamwini, analyzed facial-analysis programs from 3 major companies and found that while light-skinned men experienced errors less than 0.8% of the time, dark-skinned women experienced errors between 20% and 34% of the time. The study also found that the data used to train the neural-network models was more than 77% male.

A model is bound to discriminate against women if more than three-quarters of its training data consists of white men.
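To make the disparity concrete, here is a minimal sketch of how error rates can be disaggregated by demographic group, in the spirit of the audit described above. The records, group names, and resulting numbers are invented purely for illustration; a real audit would use a labeled benchmark dataset.

```python
# Minimal sketch: disaggregating classifier error rates by demographic group.
# The records below are made up for illustration only.
from collections import defaultdict

# Each record: (true_label, predicted_label, demographic_group)
records = [
    ("female", "female", "darker_female"),
    ("female", "male",   "darker_female"),
    ("male",   "male",   "lighter_male"),
    ("male",   "male",   "lighter_male"),
    # ... many more labeled examples in a real audit
]

totals = defaultdict(int)
errors = defaultdict(int)

for true_label, predicted_label, group in records:
    totals[group] += 1
    if predicted_label != true_label:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.1%} error rate over {totals[group]} samples")
```

Reporting only the overall error rate across all records would hide exactly the kind of per-group gap the study uncovered.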

In fact, these economically incentivized corporations cater their technologies to the upper-class white males who hold key positions, rather than serving the general public.

In another study from the MIT Media Lab, the same researcher, Joy Buolamwini, and co-author Deborah Raji found that Amazon’s Rekognition software “mistook women for men 19 percent of the time” (Vincent 2019). This is especially problematic considering that the same software is supplied to the federal government and law enforcement agencies.

To put that into perspective, software that may be used by organizations such as the Transportation Security Administration (TSA) is likely to misidentify roughly 1 out of every 5 women it scans at an airport.

This is the very same surveillance technology we rely on to keep our country and borders safe.

In one analysis, data scientist Rachel Meade shares a table that shows the error rates of facial recognition software by 3 major corporations: Microsoft, IBM, and Face++.

Error rates of facial recognition software by Microsoft, IBM, and Face++ (Source: here)

For Microsoft, dark-skinned women experienced a 20.8% error rate compared to a 0.00% error rate for light-skinned men. IBM and Face++ showed similar disparities, with dark-skinned women misidentified 34–35% of the time and light-skinned men less than 1% of the time.

In short, given that women make up roughly half of our population, facial recognition technologies are more likely to err on one half of the world’s population than the other, if we consider sex independently of any other factors.

Racial Bias

The inequities in facial recognition technologies also fuel racial discrimination.

As previously mentioned in the study by Timnit Gebru and Joy Buolamwini at MIT, the problem starts with the fact that the training data behind these machine learning models is already heavily skewed, with the data set being “more than 83 percent white” (Hardesty 2018).

As mentioned in my previous article (here), this incorrect representation of the world (or data) is referred to as human reporting bias.

This skew in turn privileges whiteness and lightness of skin, and dark-skinned people are more likely to be misclassified or flagged by facial recognition surveillance.

Unfortunately, for the sake of logistical ease, race is often used to categorize facial recognition images, which in turn reinforces negative stereotyping.

Think about this:

By separating images into racial categories, these systems imply that the people within each category share similar personal characteristics.

In a study by NIST, researchers found that “Asian and African American people were misidentified as much as 100 times more than white men” (Porter 2019) and that white men had the highest accuracy rates overall.

This is especially frightening given that minorities not only account for 40% of the population but are also expected to overtake the Caucasian population as the majority of the United States within the next 20 years.

The same technology is also being used by law enforcement through channels such as body cameras. A technology that can be critical to the lives of civilians and law enforcement officers alike is prone to misidentifying a subset of the population up to 100 times more often.

This also means that approximately 40% of the United States population is subject to being accidentally flagged without probable cause due to their racial profile.

Computers and programs that are supposed to be objective have instead absorbed the same discrimination and prejudices that humans have struggled with, which completely undermines a computer program’s usefulness.

In the words of Timnit Gebru, one of the pioneers and de facto leaders of AI ethics: “for many, facial recognition was way less accurate than humans” and therefore “should be banned at the moment” (Ovide 2020).

Current Efforts

Although the work is nowhere near enough and lags far behind the industry’s pace of AI innovation, it would be unfair to ignore the movements fighting the imperfections in facial recognition technologies.

In an article written by Brad Smith, the president of Microsoft, Smith discusses 3 main measures his company has advocated for:

The first has resulted in a law being passed in Washington State that requires providers of facial recognition software to create an application programming interface (API) to allow for “legitimate, independent and reasonable tests for accuracy and unfair performance differences across distinct subpopulations” (Smith 2020). One of the main problems with discriminatory technology is its lack of transparency and the vague, subjective nature of its criteria. An API requirement makes the program’s behavior visible and open to scrutiny, which pushes developers and software leads to place greater importance on creating a fair and dispassionate program. This type of regulation, coupled with harsh penalties, could single-handedly “break the wheel” of software catered to the typical upper-class white man.

The second push by Microsoft has resulted in a second law passed by the state of Washington, under which “a public authority may not use facial recognition to engage in ‘ongoing surveillance, conduct real-time or near real-time identification, or start persistent tracking’ of an individual except in three specific circumstances. These require either

(1) a warrant;

(2) a court order ‘for the sole purpose of locating or identifying a missing person or identifying a deceased person;’ or

(3) ‘exigent circumstances,’ a well-developed and high threshold under state law.” (Smith 2020) Essentially, this provision prevents public agencies from using real-time identification and tracking unless there is a well-established and specific legal basis for doing so.

In light of how biased and flawed facial recognition technology has been shown to be, this law is a huge step forward in limiting the technology’s negative impact. An even greater step would be to expand it into federal law as well.

The same law also requires that humans, rather than the machines themselves, make the final decisions. For example, if a high-threat individual is flagged, a human must review the case and confirm the finding rather than relying entirely on a computer program that is prone to errors and misclassifications.

More recently, the Black Lives Matter movement has also brought change across the facial recognition industry, following protests over how the technology has unnecessarily jeopardized many Black lives.

On June 9, 2020, IBM announced that it would exit the facial recognition business entirely, citing bias and inequality as its key concerns.

Amazon and Microsoft followed suit over the next 2 days, both announcing limitations on law enforcement agencies’ use of their facial recognition technology, given its high stakes and high error rates, especially for minorities.

What Now?

In light of the ethical dangers that facial recognition technologies pose to the general public, I would propose the following recommendations to ensure the technology prioritizes public safety above all else:

  1. Diversify data feeding the core ML/CV model

When training a model through supervised learning, it is essential to use a diverse dataset that represents the wide variety of data the algorithm will encounter in a production environment (a sketch of a simple balance check appears after this list).

  2. Mitigate risk via human-in-the-loop decision making

Technology should never have the right to make decisions at its own discretion; a human should always confirm a program’s recommendation (a sketch of this routing also appears after this list).

  3. Tighten government regulation

As shown in the efforts taken by Microsoft, our elected officials should increase regulation with the cooperation and input of big tech firms to protect users and their data.
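To illustrate the first recommendation, here is a minimal sketch of a pre-training check on demographic balance. The manifest format, group labels, and 50% threshold are assumptions made purely for illustration, not part of any particular pipeline.

```python
# Minimal sketch for recommendation 1: check the demographic balance of a
# training set before training. The manifest format and group labels are
# hypothetical; a real pipeline would read them from dataset metadata.
from collections import Counter

training_manifest = [
    {"path": "img_0001.jpg", "group": "darker_female"},
    {"path": "img_0002.jpg", "group": "lighter_male"},
    {"path": "img_0003.jpg", "group": "lighter_male"},
    # ... thousands more entries in practice
]

counts = Counter(example["group"] for example in training_manifest)
total = sum(counts.values())

for group, count in counts.most_common():
    print(f"{group}: {count / total:.1%} of training data")

# Flag the dataset if any single group dominates (threshold chosen arbitrarily).
if max(counts.values()) / total > 0.5:
    print("Warning: training data is heavily skewed toward one group; "
          "collect or re-weight underrepresented groups before training.")
```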
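For the second recommendation, here is a minimal sketch of human-in-the-loop routing: the system never acts on a match by itself; it only queues candidate matches for a human reviewer and discards weak ones. The threshold, class names, and scores are illustrative assumptions.

```python
# Minimal sketch for recommendation 2: never act on a face match automatically.
# Every candidate match is either discarded or queued for human review; the
# program's output is only ever a recommendation. Threshold and data shapes
# here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Match:
    subject_id: str
    confidence: float  # model similarity score in [0, 1]

REVIEW_THRESHOLD = 0.90  # below this, a match is not worth a reviewer's time

def route_match(match: Match) -> str:
    if match.confidence < REVIEW_THRESHOLD:
        return f"discarded: confidence {match.confidence:.2f} too low"
    # Even a high-confidence match only goes to a human, never straight to action.
    return f"queued for human review: subject {match.subject_id}"

print(route_match(Match("subject-42", confidence=0.97)))
print(route_match(Match("subject-77", confidence=0.55)))
```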

— —

Tanishq Sandhu is pursuing his Bachelor’s Degree in Computer Science at Georgia Tech and is passionate about ethical intelligence and full-stack development. To connect, make a suggestion, or learn more, visit Tanishq’s website at www.tanishqsandhu.com.
