Statement in Support of Proposed Ordinance Banning Facial Recognition Technology in Boston

Christopher Bavitz
Berkman Klein Center Collection

June 9, 2020

Introduction

The undersigned write in support of the proposed Ordinance Banning Facial Recognition Technology in Boston, offered by Boston City Councilors Michelle Wu and Ricardo Arroyo. The proposed ordinance is now under consideration by the City Council’s Committee on Government Operations (the “Committee”).

We are researchers and scholars whose work addresses issues relating to the ethics and governance of artificial intelligence, machine learning, and related technologies. Those joining this written statement do so as individuals; titles and affiliations are for identification purposes only.

We submit this statement in an effort to do the following:

(a) highlight some concerns about use and deployment of artificial intelligence, algorithmic, and machine learning technologies, generally;

(b) note that those concerns are especially acute when such technologies are deployed by government actors;

(c) identify particular technical and related concerns with facial recognition;

(d) highlight the importance of this debate to our particular social, political, and public health moment; and

(e) explain why the proposed Ordinance adequately, appropriately, and narrowly addresses the concerns expressed herein.¹

(a) AI and Machine Learning Technologies, Generally

As a general matter, facial recognition systems raise many of the same concerns raised by other technological systems based on artificial intelligence and machine learning. Of note, researchers and journalists have shown that a broad range of harmful consequences can follow when the values embedded in a technological system are not aligned with the values of the society it serves. The use of algorithmic tools in credit scoring creates the potential for discrimination.² A system designed to filter resumes submitted by job applicants, trained on information about employees who have been successful in their work with the employer, may reinforce systemic inequalities in the current workforce.³ And any use of artificial intelligence tools raises the specter of counter-efforts via adversarial attacks; a growing body of security research around such attacks shows that malicious actors may subvert the ways AI and related systems observe patterns and apply rules, with the intention of causing harm.⁴
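
As a purely illustrative sketch (not drawn from any system cited in this statement, and using hypothetical feature names and synthetic numbers), the example below shows how a screening model trained on historically biased hiring decisions can reproduce that bias when evaluating new applicants whose qualifications are identical across groups.

```python
# Illustrative sketch only: synthetic data showing how a screening model
# trained on historically biased hiring decisions can reproduce that bias.
# All names and numbers here are hypothetical, not drawn from any cited system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical" applicants: a protected-group flag and a skill score.
group = rng.integers(0, 2, size=n)            # 0 = majority group, 1 = minority group
skill = rng.normal(0.0, 1.0, size=n)

# Past hiring decisions favored group 0 at equal skill levels.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0

# A model trained on those decisions learns the historical preference.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# New applicant pool in which both groups have identical skill distributions.
new_skill = rng.normal(0.0, 1.0, size=n)
for g in (0, 1):
    X_new = np.column_stack([np.full(n, g), new_skill])
    rate = model.predict(X_new).mean()
    print(f"group {g}: share recommended = {rate:.2f}")
# Output shows group 1 recommended far less often, despite identical skill.
```

The model is never instructed to prefer one group over another; it simply fits the historical decisions, which already encoded the disparity.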

Evidence of bias and other flaws in such systems abounds — from a system used to assess risk in the context of the criminal justice system,⁵ to a system used to determine eligibility for Medicaid and other government benefits,⁶ to a system used to identify instances of unemployment insurance fraud.⁷ Efforts to address the potential adverse impacts of these tools have taken a variety of forms — from proposals to ban technologies outright,⁸ to the promulgation of principles or standards to which technology developers should aspire,⁹ to civil litigation geared toward vindicating individuals’ rights.¹⁰

(b) Specific Concerns About Use of These Systems by Government Actors

These kinds of concerns are especially acute when AI, algorithms, and machine learning tools are used not by private actors but by government institutions. Citizens lack choices regarding their interactions with the divisions of government empowered to provide services, confer benefits, and enforce the law. The use by government actors of technical systems that perpetuate bias and discrimination, or that fail to offer adequate explanations, implicates constitutional and human rights. While citizens can invoke their rights to remain silent or to refuse a search, they cannot meaningfully choose not to participate when facial recognition systems are deployed by the government in public spaces.

(c) The Problem of Facial Recognition and Surveillance

Against this backdrop, government use of AI and machine learning systems specifically designed to identify and track faces raises particular concerns. These concerns start with how facial recognition systems are designed and trained. As the scholar Luke Stark has written, these systems “have insurmountable flaws connected to the way they schematize human faces. These flaws both create and reinforce discredited categorizations around gender and race.”¹¹

Beyond these faulty categorizations, many facial recognition systems are trained on datasets that do not represent the populations on which the systems would be used. A range of research, including by the National Institute of Standards and Technology (NIST) and by Joy Buolamwini and Timnit Gebru, has shown repeated gender and racial bias in the results of facial recognition systems. NIST found significant levels of false positives “highest in West and East African and East Asian people, and lowest in Eastern European individuals”;¹² Buolamwini audited large-scale commercial facial recognition systems and found that these systems “performed best for lighter individuals and males overall” and “worst for darker females.”¹³ While these public audits have led to overall improvements in accuracy, significant concerns about biased results remain.¹⁴
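
For illustration only, the sketch below shows how an audit of this kind can disaggregate a face matcher’s false match rate by demographic group; the group labels and counts are invented for the example and do not reproduce the NIST or Gender Shades results.

```python
# Illustrative sketch only: disaggregating a face matcher's error rates by group.
# The counts below are invented and do not reproduce NIST or Gender Shades data.
from dataclasses import dataclass

@dataclass
class AuditGroup:
    name: str
    impostor_comparisons: int   # comparisons between images of different people
    false_matches: int          # different people wrongly declared a match

groups = [
    AuditGroup("demographic group A", impostor_comparisons=100_000, false_matches=12),
    AuditGroup("demographic group B", impostor_comparisons=100_000, false_matches=310),
    AuditGroup("demographic group C", impostor_comparisons=100_000, false_matches=95),
]

# False match rate (FMR): the share of impostor comparisons wrongly accepted.
fmr = {g.name: g.false_matches / g.impostor_comparisons for g in groups}
best = min(fmr.values())

for name, rate in fmr.items():
    print(f"{name}: FMR = {rate:.5f} ({rate / best:.0f}x the best-performing group)")
```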

Finally, the risks to civil liberties from facial recognition systems outweigh their potential benefits. We worry that government use of facial recognition, particularly in combination with existing facial photo databases (such as databases of driver’s license photos, which match faces to names), widespread surveillance cameras, and photos from public and semi-public social media sites, significantly jeopardizes the freedom of people to exist outside constant surveillance.¹⁵ Facial recognition technology represents a sea change in the scale and ease with which information can be gathered on individuals engaged in public protest. We fear potential chilling effects from the use of this technology, particularly on the rights to assembly and free expression.¹⁶ Given that citizens cannot change their faces (or, outside of the context of a pandemic, easily hide them), the use of facial recognition systems by the government is contrary to the basic rights and freedoms at the core of a democracy.¹⁷
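
To make concrete why pairing a face matcher with a large photo database changes the scale of identification, the following hypothetical sketch shows one-to-many matching: a single probe image is compared against every enrolled embedding, so one camera frame can be searched against an entire identity database in a single step. The embeddings and database here are random stand-ins, not any system named in this statement.

```python
# Hypothetical sketch of one-to-many ("1:N") face identification.
# The embeddings are random stand-ins; a real system would derive them from
# face images with a trained model. No specific product or agency is implied.
import numpy as np

rng = np.random.default_rng(1)
gallery_size = 100_000       # e.g., a photo ID database linking faces to names
dim = 128                    # typical size of a face embedding vector

# Enrolled gallery: one unit-length embedding per identified person.
gallery = rng.normal(size=(gallery_size, dim))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def identify(probe, threshold=0.6):
    """Compare a single probe embedding against every enrolled identity."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe                  # cosine similarity against all N entries
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# One camera frame yields one probe; a single call searches the whole database.
probe = gallery[42] + rng.normal(scale=0.05, size=dim)   # noisy view of person #42
print(identify(probe))                                   # -> 42
```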

(d) Relevance of this Debate to our Particular Social, Political, and Public Health Moment

The proposed Ordinance expressly refers to COVID-19, and it is impossible to ignore the fact that the Committee is considering it against the backdrop of an unprecedented global pandemic. In the United States, that pandemic has had an outsized impact on communities of color.¹⁸ Deploying a technology with such a poor track record on racial bias, in the context of a pandemic whose harms have themselves fallen along racial lines, carries substantial risk.

Internationally and across the country, the pandemic has caused governments to take extraordinary steps to protect public health. But these steps must be legitimate, proportionate, and related to mitigating the spread of the virus. COVID-19 should not be used as a pretext for the unnecessary expansion of surveillance capabilities.¹⁹ Effective public health interventions, including contact tracing, require the public’s trust and participation. The use of technologies that are not fit for purpose, that have not been satisfactorily audited for the biases described above, and that erode (whether in perception or reality) the firewalls that must exist between public health surveillance and law enforcement is counterproductive. We are particularly concerned about the hasty adaptation of facial recognition tools originally designed for purposes other than contact tracing,²⁰ as implementations of these tools may outlast this crisis and become part of a broader bulk surveillance mechanism.

The Committee’s consideration of the proposed Ordinance also takes place as the nation expresses outrage and grief about the unjust treatment of Black people in the United States by law enforcement and the criminal justice system. Given the underlying, well-documented problem of over-policing of Black communities throughout the United States and in Boston,²¹ we are particularly concerned about the potential of facial recognition systems to exacerbate policing bias and the inequities that result. We are also concerned about the significant risk that facial recognition systems pose to civil liberties, particularly the freedoms of speech and assembly, in connection with the ongoing protest movements in support of racial justice and against unequal treatment of Black people by the police.²²

(e) The Proposed Ordinance

The proposed Ordinance is narrowly tailored to prohibit government agencies and officials in the City of Boston from relying on or using facial recognition technologies and information that is the product thereof. For all the reasons set forth herein, a ban of the sort described in the proposed Ordinance is an appropriate step for the City of Boston at this time.

For those concerned about an outright ban, however, we note that Massachusetts Senate Bill 1385 provides for a moratorium on the use of biometric surveillance that shall apply in all cases absent “express statutory authorization to the contrary.”²³ The bill goes on to define a set of strict technical and procedural requirements that any such statutory authorization must meet. We mention Senate Bill 1385 to underscore that the alternative to the proposed Ordinance is not inaction; either a ban (of the sort contemplated by the proposed Ordinance) or a moratorium (with narrowly defined limitations) is the correct course for the City of Boston at this time.

Conclusion

We applaud the Committee and the City Council for their consideration of this important proposed Ordinance banning facial recognition technology in Boston, and we support its passage.

The following join this statement in their individual capacities; titles and affiliations are for identification purposes only.

Kendra Albert
Lecturer on Law, Harvard Law School
Clinical Instructor, Cyberlaw Clinic, Berkman Klein Center for Internet & Society

Amar Ashar
Assistant Research Director, Berkman Klein Center for Internet & Society

Christopher T. Bavitz
WilmerHale Clinical Professor of Law, Harvard Law School
Faculty Co-Director, Berkman Klein Center for Internet & Society
Managing Director, Cyberlaw Clinic

Ryan Budish
Assistant Research Director, Berkman Klein Center for Internet & Society

Jessica Fjeld
Lecturer on Law, Harvard Law School
Assistant Director, Cyberlaw Clinic, Berkman Klein Center for Internet & Society

Urs Gasser
Professor of Practice, Harvard Law School
Executive Director, Berkman Klein Center for Internet & Society

Sybil Gelin
Project Coordinator, Berkman Klein Center for Internet & Society

Adam Holland
Project Manager, Berkman Klein Center for Internet & Society

Mason Kortz
Clinical Instructor, Cyberlaw Clinic, Berkman Klein Center for Internet & Society

Adam Nagy
Project Coordinator, Berkman Klein Center for Internet & Society

Sarah Newman
Senior Researcher, Berkman Klein Center for Internet & Society

David O’Brien
Senior Researcher and Assistant Research Director for Privacy and Security, Berkman Klein Center for Internet & Society

Hilary Ross
Senior Program Manager, Berkman Klein Center for Internet & Society

Carolyn Schmitt
Communications Associate, Berkman Klein Center for Internet & Society

Jenna Sherman
Senior Project Coordinator, Berkman Klein Center for Internet & Society

Rebecca Tabasky
Director of Community, Berkman Klein Center for Internet & Society

¹ Several of those joining this statement previously joined testimony to the Joint Committee on State Administration and Regulatory Oversight of the Massachusetts Legislature in support of H.2701 and S.1876, regarding establishment of a commission in Massachusetts concerning the use of artificial intelligence in state government. Portions of this statement draw from that prior testimony. See “Testimony in Support of H.2701 and S.1876, Establishing a Commission in MA re: Use of AI in State Government” (October 1, 2019), available at https://medium.com/berkman-klein-center/testimony-in-support-of-h-2701-71856b6b9e67.

² Kaveh Waddell, “How Algorithms Can Bring Down Minorities’ Credit Scores,” The Atlantic (December 2, 2016), available at https://www.theatlantic.com/technology/archive/2016/12/how-algorithms-can-bring-down-minorities-credit-scores/509333/.

³ Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters (October 9, 2018), available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

⁴ Samuel G. Finlayson, John D. Bowers, Joichi Ito, Jonathan L. Zittrain, Andrew L. Beam, and Isaac S. Kohane, “Adversarial attacks on medical machine learning,” Science Policy Forum (March 22, 2019), available at https://science.sciencemag.org/content/363/6433/1287. Adversarial attacks on AI systems can be very difficult to anticipate and use such fundamentally new and different techniques (compared with other forms of cyberattacks) that existing bodies of state and federal law may be ill-equipped to address them. See Ram Shankar Siva Kumar, David R. O’Brien, Kendra Albert, and Salome Viljoen, “Law and Adversarial Machine Learning,” arXiv (October 25, 2018), available at https://arxiv.org/abs/1810.10731.

⁵ Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, “Machine Bias,” ProPublica (May 23, 2016), available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

⁶ Jay Stanley, “Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case,” ACLU (June 2, 2017), available at https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case.

⁷ Emily Lawler, “Michigan Supreme Court ruling lets lawsuit against state in unemployment fraud scandal move forward,” MLive.com (April 5, 2019), available at https://www.mlive.com/news/2019/04/michigan-supreme-court-ruling-lets-lawsuit-against-state-in-unemployment-fraud-scandal-move-forward.html.

⁸ Kate Conger, Richard Fausset, and Serge F. Kovaleski, “San Francisco Bans Facial Recognition Technology,” The New York Times (May 14, 2019), available at https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html.

⁹ Jessica Fjeld, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar, “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI,” Berkman Klein Center for Internet & Society (Feb. 2020), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3518482.

¹⁰ See generally Rashida Richardson, Jason M. Schultz, and Vincent M. Southerland, “Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems,” AI Now (September 2019), available at https://ainowinstitute.org/litigatingalgorithms-2019-us.pdf.

¹¹ Luke Stark, “Facial recognition is the plutonium of AI,” Crossroads 25(3):50–55, DOI: 10.1145/3313129 (April 2019), available at https://dl.acm.org/doi/10.1145/3313129.

¹² Patrick Grother, Mei Ngan, and Kayee Hanaoka, “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (NISTIR 8280),” (December 2019), available at https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.

¹³ Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” (February 2018), available at http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

¹⁴ Inioluwa Raji and Joy Buolamwini, “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products,” (January 2019), available at https://dam-prod.media.mit.edu/x/2019/01/24/AIES-19_paper_223.pdf. See also Devin Coldewey, “IBM ends all facial recognition business as CEO calls out bias and inequality,” TechCrunch (June 8, 2020), available at https://techcrunch.com/2020/06/08/ibm-ends-all-facial-recognition-work-as-ceo-calls-out-bias-and-inequality/.

¹⁵ Woodrow Hartzog and Evan Selinger, “Facial Recognition is the Perfect Tool for Oppression,” (August 2, 2018), available at https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66. See also Kashmir Hill, “The Secretive Company That Might End Privacy as We Know It,” The New York Times (January 18, 2020), available at https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.

¹⁶ Pete Fussey and Daragh Murray, “Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology,” University of Essex, Human Rights Centre (July 2019) p. 36–39, available at http://repository.essex.ac.uk/24946/.

¹⁷ Jennifer Lynch, “Face Off: Law Enforcement Use of Face Recognition Technology,” (February 2018), available at https://www.eff.org/files/2018/02/15/face-off-report-1b.pdf.

¹⁸ See “COVID-19 in Racial and Ethnic Minority Groups,” Centers for Disease Control and Prevention, available at https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/racial-ethnic-minorities.html.

¹⁹ Internationally, the COVID-19 pandemic has been exploited by some governments to enact policies that restrict human rights. In response, a bipartisan coalition of lawmakers in the U.S. Senate and House has introduced the “Protecting Human Rights During Pandemic Act” (PHRDPA) (S. 3819 and H.R. 6986).

²⁰ See Sen. Edward Markey, “Senator Markey Presses Clearview AI on Facial Recognition Monitoring During Nationwide Protests,” (June 8, 2020), available at https://www.markey.senate.gov/news/press-releases/senator-markey-presses-clearview-ai-on-facial-recognition-monitoring-during-nationwide-protests.

²¹ Jeffrey Fagan, Anthony A. Braga, Rod K. Brunson, April Pattavina, “An Analysis of Race and Ethnicity Patterns in Boston Police Department Field Interrogation, Observation, Frisk, and/or Search Reports,” (June 2015), available at https://s3.amazonaws.com/s3.documentcloud.org/documents/2158964/full-boston-police-analysis-on-race-and-ethnicity.pdf.

²² Joy Buolamwini, “We Must Fight Face Surveillance to Protect Black Lives,” Medium One Zero (June 3, 2020), available at https://onezero.medium.com/we-must-fight-face-surveillance-to-protect-black-lives-5ffcd0b4c28a.

²³ Senate Bill 1385, “An Act establishing a moratorium on face recognition and other remote biometric surveillance systems,” Mass. Leg. 191st Gen. Court, available at https://malegislature.gov/Bills/191/SD671.
