U.S. Lawmakers Must Continue to Scrutinize Facial Recognition Technology

By Livia Luan

Last year, the American public was exposed to the dangers of facial recognition technology, thanks to numerous reports illustrating the increasing surveillance of vulnerable communities by government agencies and private entities.

The Washington Post reported that Immigration and Customs Enforcement agents mine state Department of Motor Vehicles databases, including those of Utah, Vermont, and Washington, to find and deport undocumented immigrants. In addition, outlets such as CityLab highlighted conflicts between public housing residents and their landlords over the deployment of facial recognition technology outside residents’ homes. Furthermore, the New York Daily News reported that a Google contractor tried to improve the Google Pixel 4 smartphone’s face unlock system by sending teams to Atlanta specifically to scan the faces of “black people there, including homeless black people.”

In light of studies and tests revealing higher error rates for people of color and women, the expansion of facial recognition surveillance spurred lawmakers across the country to address the harms of differential surveillance practices. Cities in California (Berkeley, Oakland, and San Francisco) and Massachusetts (Brookline and Somerville) voted to ban the use of facial recognition technology by law enforcement. At the state level, California followed in the footsteps of Oregon and New Hampshire to prevent police from using the technology in body cameras.

In Congress, the push to regulate facial recognition technology began to garner strong bipartisan support. Lawmakers introduced legislation that would regulate the technology’s use by commercial companies and federally funded public housing agencies. Other bills sought to prohibit federal agencies from using the technology without a federal court order and to altogether block the use of federal funding for the purchase or use of the technology. At the tail end of the year, following the National Institute of Standards and Technology’s release of an expansive study exposing significant demographic differentials in facial recognition systems, Rep. Bennie G. Thompson, Chairman of the House Committee on Homeland Security, asked the Department of Homeland Security’s acting secretary to “conduct an immediate assessment of whether to halt [the department’s] current facial recognition operations and plans for future expansion until such disparities can be fully addressed.”

Published at the beginning of this year, The New York Times’ investigation into a “groundbreaking” app designed by a facial recognition start-up called Clearview AI has increased the urgency of enacting a federal legislative solution. Backed by a database of over three billion images that were allegedly scraped from Facebook, YouTube, Venmo, and millions of other websites, the app helps law enforcement agencies match photos of unknown people to their online images. According to the company, over 600 law enforcement agencies, including the Chicago Police Department, have started using it in the past year.

Clearview is deeply intrusive, potentially allowing users to gather the personal information of every person they see, including names, addresses, occupations, and relationships. Although co-founder and CEO Hoan Ton-That claims that his app is 99% accurate and does not produce higher error rates for people of color, it has not been vetted by independent experts. Moreover, it raises serious data privacy and security concerns: the company retains photos in its database even after users delete them from their social media accounts or make their accounts private, and the company’s ability to protect its data is untested.

In response to The Times’ report, a slew of companies — including Facebook, Twitter, Google, YouTube, Venmo, and LinkedIn — sent cease-and-desist letters to Clearview in an attempt to prevent the app from scraping pictures from their platforms. In addition, New Jersey’s attorney general placed a moratorium on the use of the app by state police officers, citing a need “to have a full understanding of what is happening here and ensure there are appropriate safeguards.”

One of these safeguards should protect against data security threats. On February 26, The Daily Beast reported that Clearview had “disclosed to its customers that an intruder ‘gained unauthorized access’ to its list of customers, to the number of user accounts those customers had set up, and to the number of searches its customers have conducted.” However, an attorney for the company downplayed the severity of the breach in a statement to the news outlet: “Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.”

To complicate matters further, the following day, a Buzzfeed News article revealed that the scale of Clearview’s business dealings is much larger than previously thought. After reviewing documents pertaining to Clearview, reporters discovered that the company has shared or sold its technology to thousands of organizations around the world. Domestically, it has signed paid contracts with U.S. Immigration and Customs Enforcement, the U.S. Attorney’s Office for the Southern District of New York, and Macy’s. Moreover, its technology has been used by multiple government organizations within the Department of Justice. In courting new clients, Clearview has provided access “not just to organizations, but to individuals within those organizations — sometimes with little or no oversight or awareness from their own management.”

While states such as New Jersey and Vermont are taking steps to protect their residents from Clearview’s invasive technology, these recent findings suggest that much more needs to be done at the federal level. Lawmakers should intensify their efforts to pass legislation that regulates the deployment of facial recognition technology in different settings and by different stakeholders; that requires companies like Clearview to implement and maintain rigorous data privacy and security measures; that compels companies to adopt self-regulatory best practices in order to tackle algorithmic bias; and that guarantees robust civil rights protections that safeguard people of color, women, and other vulnerable communities. Equipped with the authority of the National Institute of Standards and Technology’s research findings, our representatives and senators are morally responsible for preventing good and bad actors alike from freely deploying flawed forms of this technology.

Livia Luan is the programs associate and executive assistant at Asian Americans Advancing Justice | AAJC, where she supports the telecommunications, technology, and media program on rapidly evolving issues such as digital privacy, digital equity, and facial recognition technology. Read more about our telecommunications and technology program in our community resource hub.

Advancing Justice – AAJC

Fighting for civil rights for all and working to empower #AsianAmericans to participate in our democracy.