Facial Recognition Study Renews Civil Rights Concerns

A National Institute of Standards and Technology study found that many of the world’s top facial recognition systems exhibit significant race and gender biases.

By Livia Luan

In December 2019, researchers at the National Institute of Standards and Technology (NIST) found that many of the world’s top facial recognition systems disproportionately misidentify people of color, women, elderly people, and children — some of the most vulnerable groups in society.

These troubling findings emerged from the researchers’ analysis of 189 algorithms submitted by 99 developers, which together represent the majority of the global facial recognition industry. The researchers tested “one-to-one” matching, which is used to confirm that a photo matches a person’s passport or ID card, and “one-to-many” matching, which is used to determine whether a person has a matching record in a larger database. In addition, the researchers measured rates of “false positives,” which occur when software wrongly judges photos of two different people to show the same person, and “false negatives,” which occur when software fails to match two photos that, in fact, show the same person. These distinctions are crucial: a false positive in a one-to-one search could allow someone to bypass the face unlock system of another person’s phone, whereas a false positive in a one-to-many search could increase an individual’s risk of being falsely accused of a crime.
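For readers who want to see these error types concretely, the short Python sketch below shows how false positive and false negative rates are tallied for one-to-one matching at a fixed decision threshold. The similarity scores, threshold, and names are invented for illustration; this is not NIST’s evaluation code.

```python
# Minimal sketch of one-to-one ("verification") error rates.
# All scores and the 0.7 threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Trial:
    similarity: float   # score returned by the face-matching algorithm
    same_person: bool   # ground truth: do the two photos show the same person?

def error_rates(trials: list[Trial], threshold: float) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate) at a given threshold.

    False positive: different people, but the score clears the threshold.
    False negative: same person, but the score falls below the threshold.
    """
    impostor = [t for t in trials if not t.same_person]
    genuine = [t for t in trials if t.same_person]
    fpr = sum(t.similarity >= threshold for t in impostor) / len(impostor)
    fnr = sum(t.similarity < threshold for t in genuine) / len(genuine)
    return fpr, fnr

# Hypothetical comparison trials.
trials = [
    Trial(0.92, True), Trial(0.55, True), Trial(0.81, True),
    Trial(0.40, False), Trial(0.73, False), Trial(0.20, False),
]
print(error_rates(trials, threshold=0.7))  # -> (0.333..., 0.333...)
```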

Overall, the algorithms exhibited significant race and gender biases. For one-to-one matching, the researchers observed higher rates of false positives for Asian and African American faces than for Caucasian faces, often by a factor of 10 to 100. Algorithms developed in the United States produced similarly high rates of false positives in one-to-one matching for Asians, African Americans, and indigenous groups (which include Native Americans, American Indians, Alaskan Indians, and Pacific Islanders). These results contrasted with those of algorithms developed in Asian countries, which did not show a dramatic difference in one-to-one false positives between Asian and Caucasian faces. In one-to-many matching, algorithms produced higher rates of false positives for African American women. Finally, the researchers concluded that not all algorithms produced high rates of false positives across demographics in one-to-many matching; in fact, the “most equitable” algorithms (which tend to be trained on more diverse data) are also among the most accurate.
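One way to picture what a differential “by a factor of 10 to 100” means is to compute a false positive rate per demographic group and compare each group against a reference group. The sketch below uses invented group labels, scores, and a hypothetical threshold purely for illustration; the numbers bear no relation to NIST’s measurements.

```python
# Illustrative sketch of summarizing demographic differentials in false positives.
# Data are made up; "group_a" serves as the reference group.

from collections import defaultdict

# Each record: (demographic_group, similarity_score, same_person)
records = [
    ("group_a", 0.75, False), ("group_a", 0.30, False), ("group_a", 0.20, False),
    ("group_b", 0.78, False), ("group_b", 0.72, False), ("group_b", 0.25, False),
]

THRESHOLD = 0.7

def fpr_by_group(records):
    impostor_total = defaultdict(int)
    false_positives = defaultdict(int)
    for group, score, same_person in records:
        if not same_person:                  # only impostor pairs can produce false positives
            impostor_total[group] += 1
            if score >= THRESHOLD:
                false_positives[group] += 1
    return {g: false_positives[g] / impostor_total[g] for g in impostor_total}

rates = fpr_by_group(records)
reference = rates["group_a"]
for group, rate in rates.items():
    print(f"{group}: FPR={rate:.2f}, ratio vs group_a={rate / reference:.1f}x")
```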

These findings are an essential resource for understanding the current state of facial recognition technology. Building on earlier studies that measured the prevalence of bias in facial recognition systems, the NIST study shows that people of color and women are at higher risk both of data security breaches where facial recognition controls access and of being falsely accused of a crime. This is extremely concerning in light of reports about the surveillance of immigrants, public housing residents, and individuals experiencing homelessness, groups in which people of color and women make up a significant share.

Despite the study’s disappointing results, there is still cause for hope. The researchers suggest that an algorithm’s performance is linked to the data on which it was trained, identifying “more diverse training data” as one of several factors that “may prove effective at mitigating demographic differentials with respect to false positives.” Moreover, the study represents a breakthrough in research about the accuracy of facial recognition systems on diverse communities. Prior to its publication, there was little to no useful information regarding the accuracy of these systems in identifying members of the Asian American and Pacific Islander (AAPI) community. As a result, NIST’s findings provide AAPI advocacy groups, along with civil society organizations at large, with crucial data for formulating policy positions and lobbying lawmakers at all levels of government.

Livia Luan is the programs associate and executive assistant at Asian Americans Advancing Justice | AAJC, where she supports the telecommunications, technology, and media program on rapidly evolving issues such as digital privacy, digital equity, and facial recognition technology. Read more about our telecommunications and technology program in our community resource hub.

Advancing Justice – AAJC

Fighting for civil rights for all and working to empower #AsianAmericans to participate in our democracy.