Facial Recognition Technology and Its Use by Law Enforcement: A Means to Further Justice or Injustice?

--

Facial Recognition Technologies (FRTs) are becoming more ubiquitous and more useful. We use them in trivial ways, like unlocking our smartphones and trying on virtual sunglasses. On the more meaningful end, FRTs are used for tasks like automated test proctoring (which hits home for students of the COVID era such as myself) and speeding travelers through TSA and customs lines.

Given these valuable uses, are we comfortable with and ready for FRT to be deployed by law enforcement? What are the implications for privacy and free association when FRT is unilaterally deployed without notice or consent?

Matt Cagle (@matt_cagle), a technology and civil liberties attorney with the ACLU of Northern California, and Colleen Chien, Professor of Law at Santa Clara University School of Law and co-curator of the Artificial Intelligence for Social Impact and Equity Series, discussed these critical questions at the Facial Recognition for Equity panel held on September 24, 2020.

My personal interest in this topic stems from a concern about how power can be concentrated within law enforcement departments with little or no involvement from the public. That interest deepened while I was a student in The Business, Law, Technology, and Policy of Artificial Intelligence class taught by Professor Chien.

Why Are We Talking About FRT Now?

FRT would be less of a contemporary issue if it weren't so accessible. With the proliferation of cloud computing and troves of available data, it is relatively easy and inexpensive to spin up an FRT system. Matt Cagle noted that he and his team at the ACLU created an FRT system using Amazon's Rekognition for about the cost of a large cheese pizza (~$17).

There are several types of FRT, but face recognition is the variety used by law enforcement agencies and the one that carries the greatest danger of misuse. The software is provided with a large set of pictures, aptly named a gallery, against which it compares a query photo in an attempt to determine the identity of the face.
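To make the gallery-versus-query-photo mechanics concrete, here is a minimal sketch of how such a system might be assembled with Amazon Rekognition, the service Matt's team used. The collection name, image files, identifiers, and similarity threshold below are illustrative assumptions, not details from the panel.

```python
# Minimal sketch: build a "gallery" and run a query photo against it with
# Amazon Rekognition. The collection name, file paths, identifiers, and the
# 90% threshold are illustrative assumptions only.
import boto3

rekognition = boto3.client("rekognition")
COLLECTION_ID = "demo-gallery"  # hypothetical gallery name

# Create the collection that will serve as the gallery.
rekognition.create_collection(CollectionId=COLLECTION_ID)

# Index known faces into the gallery, tagging each with an identifier.
for path, person_id in [("alice.jpg", "alice"), ("bob.jpg", "bob")]:
    with open(path, "rb") as f:
        rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            ExternalImageId=person_id,
        )

# Compare a query photo (e.g., a frame from a body-cam) against the gallery.
with open("query.jpg", "rb") as f:
    response = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=90,  # similarity cutoff for declaring a "match"
        MaxFaces=3,
    )

# Print any claimed matches and their similarity scores.
for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"{face['ExternalImageId']}: similarity {match['Similarity']:.1f}%")
```

Even in this toy setup, the choice of match threshold determines how readily the system declares someone a "match," which is precisely the kind of knob behind the accuracy concerns discussed below.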

The Issues.

A basic premise undergirding the use of FRT is that data obtained from a private party or a police officer's body-cam can be fed into the system, which would then identify a wanted person either alone or amongst a group. As Matt discussed, this premise doesn't align with reality for the following reasons, and even if it did, other concerns would remain.

Inaccuracy. Studies have demonstrated FRTs' poor performance at identifying people of color. There are several potential causes, ranging from the homogeneity of the underlying datasets, to the weights engineers assign to certain variables, to the way photography better captures the features of lighter faces. In 2018, Matt and his colleagues tested their Amazon Rekognition system on members of Congress. The system incorrectly matched members of Congress with mugshot photos of other people, and roughly 40% of those false matches were members of color.

Bias. Matt noted that the gallery an FRT matches against can be composed of pictures pulled from virtually any source, such as driver's license, passport, or mugshot databases, as well as public-facing social media websites.

Law enforcement agencies typically use a mugshot database, which presents two unique issues. First, mugshot databases disproportionately contain people of color, which reinforces racial bias. Second, a mugshot doesn't mean someone has been convicted of a crime, so using one as the comparative basis for identifying a potential criminal is inherently flawed.

Notice and Consent. Clearview AI is a company that sells its FRT to law enforcement agencies. It was discovered that Clearview had scraped approximately three billion pictures from public websites without providing notice to, or obtaining consent from, the websites or the people in the photos. The same lack of notice and consent arises when private parties like Google decide to share their data with law enforcement agencies.

The Potential Chill. As Matt mentioned, we can look to China to see what the effects of widespread governmental use of FRTs might be. The Chinese government installed FRT-equipped cameras outside a regularly frequented mosque, and once it became known that the cameras were in place, devotees stopped attending. It isn't difficult to see how some Americans would forgo participating in a rally or protest out of fear of being identified and subsequently tracked and profiled.

What About 100% Accuracy? Prior to the talk, I thought that once the accuracy problem was resolved, FRT could be used as a force for good within law enforcement (this may still be viable in other areas, like exonerations). However, a byproduct of perfect accuracy is perfect tracking, which only further erodes one's ability to express oneself freely without fear of government surveillance.

Countervailing Forces.

Despite the largely unchecked use of FRT by government agencies, there is growing resistance from state and local legislatures, as well as from technology companies.

The ACLU led the first-ever ban on the use of FRT by a government entity when San Francisco approved the Stop Secret Surveillance Ordinance. Since then, fourteen U.S. cities have enacted similar bans, and the California Legislature approved AB 1215, which places a three-year moratorium on the use of FRT in officer-worn body-cams. Portland has gone a step further by banning corporate use of FRT in public spaces.

Some technology companies have taken a step back from FRT. IBM no longer offers facial recognition products, while Microsoft has halted development and Amazon has placed a one-year moratorium on police use until a federal law is passed. However, Microsoft's decision to step back may have been nothing more than a means of preserving its public image: during the first half of 2020, the company was the only named supporter of a California bill that would have significantly eroded personal privacy protections through the use of FRTs. Fortunately, the bill was defeated by a coalition of civil liberties activists.

Matt made a very compelling point that biometric data is unique because, unlike a cellphone or Social Security number, it cannot be divorced from its owner and replaced. A person can suffer the effects of a biometric data breach for the rest of their life; a false arrest or conviction, for example, carries unknown costs and social stigma, and may prevent someone who is already up against the odds from improving their socioeconomic standing. The public should not have to bear these costs.

Want more? The High Tech Law Institute's ongoing series, Artificial Intelligence for Social Impact and Equity, is where you'll find it.

About the Author: TSCL is a second-year student at Santa Clara University School of Law.
