Testifying about the Dangers of Face Recognition Use in the Private Sector

Despite the well-documented harms associated with police use of facial recognition technology, businesses are becoming increasingly reliant on the invasive surveillance tool — and not just for security purposes.

Meg Foster testifying before the Committee on Consumer & Worker Protection

On February 24, 2023, Justice Fellow Meg Foster testified on behalf of the Center on Privacy & Technology at Georgetown Law in a hearing before the New York City Council Committee on Consumer & Worker Protection. The Center was one of many privacy and civil rights organizations — including Surveillance Technology Oversight Project (S.T.O.P.), the Electronic Privacy Information Center (EPIC), the New York Civil Liberties Union (NYCLU), Fight for the Future, Surveillance Resistance Lab, and Amnesty International USA — to testify in support of a prohibition on the use of facial recognition technology by both private and government entities in New York City.

Madison Square Garden’s recently discovered and heavily criticized policy of using face recognition to identify and ban from its venues all lawyers associated with firms engaged in active litigation with the company — while shocking and abhorrent — is just the latest incident in a long history of people being harmed by facial recognition technology. As recently as November 2022, a police officer relying on face recognition wrongly arrested and jailed a Louisiana man, making him the fifth Black man — that we know of — to experience the very grave harms of misidentification. While New York City adopted a local law in 2021 that requires commercial establishments using biometric technologies to disclose that use with conspicuous signage and prohibits the sale or exchange of biometric information by those establishments, the law does not adequately protect the privacy, civil rights, and civil liberties of consumers or workers.

Ahead of the hearing, the Committee filed a report acknowledging the numerous risks associated with biometric surveillance tools like facial recognition. We hope that the testimonies provided at the hearing persuade the Committee to take further action.

Read Meg’s written testimony below, or listen to her live oral testimony here.

• • •

Chairperson Velázquez and Members of the Committee,

I am submitting this written testimony on behalf of the Center on Privacy and Technology at Georgetown Law. We respectfully urge the Committee, and eventually the full NYC Council, to pass legislation to end the use of facial recognition technology in NYC’s public and private sectors.

The Center on Privacy & Technology at Georgetown Law is a law and research think tank that focuses on privacy rights and the surveillance of historically marginalized communities. Its track record includes rigorous, long-term research and groundbreaking legal and policy analysis and advocacy, resulting in state and federal legal reforms that protect vulnerable people’s civil rights and liberties from both government and corporate surveillance.

The Center has been studying face recognition since its founding in 2014. In 2016 we published The Perpetual Line-Up, the first comprehensive report on how law enforcement agencies across the country use face recognition technology. Since then, we have published four more major reports, testified before the United States Congress and numerous state legislative bodies, and worked alongside civil society and community organizations to expose and advocate against the harms of facial recognition technology, including its threats to civil rights and liberties.

While the use of facial recognition technology by private businesses has drawn “new criticism” amid Madison Square Garden Entertainment’s (“MSG Entertainment”) policy of banning lawyers employed by firms engaged in active litigation with the company (a policy made widely known by the ejection of a mother who was accompanying her daughter’s Girl Scout troop to a Rockettes show in December, and by owner James Dolan’s subsequent doubling down on this retaliatory or censorial use of the technology), the practice is not new. MSG Entertainment has been employing facial recognition at its venues since 2018, and it is one of over 200 private companies that had accounts with facial recognition software company Clearview AI as of 2020. Thus, it is well past time for oversight. In today’s testimony, I hope to make three points that should inform your investigation of, and response to, the use of facial recognition technology by business owners in New York City.

  • First, left unregulated, companies can and will use facial recognition technology to retaliate against people whose speech and advocacy they find displeasing. This not only harms expressive freedoms, but it also has the potential to undermine public health and safety and to impede meaningful competition.
  • Second, private actors can use facial recognition technology to discriminate, either directly by using the technology to identify and exclude members of protected groups or people who disproportionately belong to those groups, or indirectly by basing identification and exclusion policies on proxies that closely correlate with those groups.
  • Third, existing law is inadequate to mitigate the pervasiveness of facial recognition technology, the risk that users will abuse it, and the breadth of harm that flows from such abuse.
  1. Business Owners Can Use Facial Recognition Technology to Punish Adversaries.

As the incidents at MSG Entertainment suggest, private business owners can and do use facial recognition technology to engage in retaliation, and the potential chilling effect of this use is obvious. As New York State Attorney General Letitia James suggested in a recent letter to MSG Entertainment regarding its facial recognition policy, “forbidding entry to lawyers representing clients who have engaged in litigation against [MSG] may dissuade such lawyers from taking on legitimate cases, including sexual harassment or employment discrimination claims.” Moreover, the Supreme Court has long recognized that “litigation is not [merely] a means of resolving private differences; it is also a form of political expression.” NAACP v. Button, 371 U.S. 415, 429 (1963). The fear that private companies will retaliate against those who dare to represent opposing interests not only deters the vindication of substantive rights but also threatens a key tool for social and political advocacy.

But the danger to free speech and public discourse is not limited to the realm of lawyers and litigation. Facial recognition can be used to punish and silence all sorts of critics. Ongoing legal challenges by New York attorneys to MSG Entertainment’s abusive use of facial recognition technology rely on a law that was passed to protect theater critics, but by analogy, it is not difficult to imagine anyone who speaks up against corporate interests being targeted with facial recognition technology. In fact, there are reports that MSG Entertainment maintains a blacklist of celebrities who have criticized its owner, James Dolan.

The retaliatory use of facial recognition by private businesses to silence critics is antithetical to the values of free expression. But it also poses a danger to public health and safety and to fair competition. Imagine a large restaurant chain that uses facial recognition to ban Yelp and Google reviewers who have a history of commenting on health code violations, or former employees who report labor violations. Such practices would allow businesses to evade public accountability for unlawful and insidious conduct, make it harder for consumers to protect themselves, deprive the market of the power to punish poor business practices, and impede the government’s ability to identify businesses flouting industry regulations.

  2. Business Owners Can Use Facial Recognition Technology to Engage in Unlawful Discrimination.

Profession may not be a protected class, but MSG’s targeting of lawyers demonstrates that facial recognition technology can allow business owners to categorically exclude specific classes of people, including those protected by state and city anti-discrimination laws. And because of the breadth of sources from which a facial recognition database can pull photos, business owners may instead engage in proxy discrimination — for instance, scanning patrons’ faces and comparing them to publicly available mugshots in order to ban individuals with a criminal history. While this policy is facially neutral in the sense that anyone could have a criminal history, longstanding disparities in policing mean that it would disproportionately impact people of color.

Even where intent is absent, facial recognition can lead to discrimination. Numerous studies have revealed that face recognition software is plagued with bias — specifically, that most face recognition algorithms perform less accurately on images of people of color, women, children, and the elderly, with Black women subject to the highest rates of error. Though there have been several high-profile wrongful arrests of Black men, misidentification by facial recognition technology is not limited to the policing context: in 2021, a Black teenager was barred from a roller-skating rink after a facial recognition system incorrectly matched her face to that of a patron who had previously gotten into a fight at the rink and subsequently been banned.

Given such risks of both unreliability and racial bias, businesses should not be permitted to use facial recognition technology even for security purposes, as many business owners — including James Dolan — and policymakers have suggested is appropriate. Beyond failing to mitigate public safety threats, facial recognition systems that flag the wrong person risk triggering unnecessary police interactions that could lead not only to wrongful arrest but also to police violence — especially if the misidentified individual is a person of color or disabled.

  3. There are Insufficient Legal Safeguards to Expose, Prevent, and Redress the Harms Caused by Facial Recognition Technology.

The potential for New York City business owners to abuse facial recognition technology is especially concerning in light of the inapplicability of the few legal safeguards that do exist in the government context. First, the state and federal open record and freedom of information laws that have been crucial to uncovering the scope of harmful government surveillance in New York City — including the NYPD’s relationship with facial recognition company Clearview AI, and its secret fund for surveillance tools — can only reach private entities to the extent that they interact with public agencies and officials, and those interactions are documented in some form. So while New York City’s law requiring commercial establishments that collect biometric information from customers to post a notice of that practice near all entrances can shed some light on the singular question of which businesses may be utilizing facial recognition technology, the disclosure stops there. New Yorkers are left with no knowledge of what type of biometric information is being collected, from whom, for what purpose, or by what type of technology — and therefore with no opportunity to challenge the practice or seek redress for harm.

Second, federal and state constitutional rights to privacy, due process, and equal protection that might restrict certain government surveillance practices that constitute a search or seizure, that lack some form of legal oversight or formal procedure, or that disproportionately impact certain groups of people do not protect individuals harmed by such practices in the private sector. Of course, businesses are not entirely free from accountability: numerous local, state, and federal laws exist to prohibit discrimination and harassment and enforce health, safety, and fair business practice standards. But it is those very laws that are being undermined when companies like MSG Entertainment use surveillance tools to discourage legal representation and access to courts.

What the incident at Madison Square Garden ultimately reveals is that a patchwork of laws directly or indirectly addressing some aspect of facial recognition and its attendant harms is insufficient for tackling the entire scope of the problem and will only lead to a game of whac-a-mole, with the technology perpetually outpacing the law. While a law that prohibits the wrongful refusal of admission to and ejection of ticket-holders from “places of public entertainment and amusement” can protect criticism of or other adversarial action against those establishments, it cannot guarantee those same critics admission to other venues like sports arenas, let alone non-ticketed places like restaurants or retail stores. And while the privacy and civil rights of students are recognized by a moratorium on the use of facial recognition technology in New York schools, Amazon delivery drivers in New York who are forced to consent to facial recognition as a condition of employment remain unprotected, because no such moratorium has been passed in the employment context. Finally, legislative efforts that focus exclusively on government or police use of facial recognition risk not only neglecting the retaliatory or discriminatory uses by businesses outlined above, but also leaving open a backdoor for law enforcement to access face recognition data in partnership with private businesses.

As more and more entertainment companies, retail stores, school systems, employers, and government agencies adopt facial recognition technology — and do so without public input or any form of democratic process — it is worth asking whether piecemeal oversight or legislation can adequately guard against the numerous risks the technology poses to privacy, free speech and association, workers’ rights, and consumer protection.

We greatly appreciate the Committee’s attention to this critical issue, and thank you for the opportunity to submit this testimony.
