Published in A New AI Lexicon
Illustration by Somnath Bhatt

A New AI Lexicon: Surveillance

The Ghosts of White Supremacy in AI Reform

A guest post by Stop LAPD Spying Coalition, a grassroots community organization working to abolish police surveillance in Los Angeles. Twitter: @stoplapdspying

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better-known and widely circulated ways of talking about AI.

Surveillance and data collection have long been advanced by colonizers getting together to decide how Black and brown people should be controlled. Today, policing is among the U.S.’s most violent methods of racial control, and efforts to improve, refine, and calibrate police surveillance through police reform continue the tradition of white supremacist experimentation. This essay explores how reformist concepts and language like “oversight,” “accountability,” and “transparency” have proliferated alongside harmful surveillance technologies, helping to normalize and expand them. These reform concepts are often promoted by nonprofits funded by the policing industry, and they are instituted through police oversight bodies, transparency requirements, regulatory criteria, and auditing processes. We name these reformist concepts, institutions, and processes surveillance bureaucracy.

Reformist institutions are where police and their mainstream critics seek common ground. Reformism is especially dangerous when applied to practices such as “data-driven” policing because it strengthens the state’s ability to claim objectivity in its violence. For a decade, the Stop LAPD Spying Coalition has worked to end data-driven policing in Los Angeles, especially the algorithmic systems that mine mass surveillance data to target people and places for increased policing. Our work against these programs has confronted not only the police agencies that wield deadly data systems but also the reformist institutions that police work with to sanitize their violence.

Today’s police technologies build on historical projects of European colonization and pacification, as well as earlier methods of maintaining the enslavement of Black and Indigenous people. In the U.S., the evolution of police surveillance can be traced back to lantern laws, skin branding, and runaway slave notices, as we know from Simone Browne’s crucial book Dark Matters. This is why the Stop LAPD Spying Coalition’s efforts to confront the harms threatened by AI are not focused on privacy and fairness — rights that many in our communities were never afforded to begin with — but are grounded in advancing anti-colonialism and Black liberation.

We see police agencies like the Los Angeles Police Department (“LAPD”) as the vanguard of white supremacy, spending their yearly billions to pioneer new ways of targeting, controlling, and dominating Black and brown communities. Surveillance bureaucracy and reform have been crucial to this process. These approaches suggest there is a correct way to do surveillance, so long as the harm is supervised and standardized. From this perspective, innovations like police algorithms are couched as always in-process, requiring further experimentation, refinement, and monitoring. For example, LAPD Chief Michel Moore has deflected criticism of data-driven policing programs by stating that they are part of an ongoing evolution of policing, and citing the need for the LAPD to “experiment” and “try emerging strategies.”[1]

Instead of empowering communities that are policed, surveillance bureaucracy promotes the “expertise” of technologists, lawyers, academics, and others willing to collaborate with police. These credentialed professionals produce the reforms that the state uses to transform itself and make surveillance technologies seem necessary and even progressive. Along the same lines, the state turns to technologists, researchers, data scientists, and computer programmers to provide a scientific veneer to longstanding practices of racial profiling and racist patrolling. We have even observed researchers and scientists working to calibrate the precise toll of police technologies, as though there can ever be an acceptable level of racial harm from policing.

Below we highlight examples of academic and private sector collaboration with police to question why there is so little discussion in the critical AI space about the obvious ethical problem of companies that profit off policing technologies also funding academic critiques of these same technologies. The problem begins with AI’s political economy, as well as the ethics and ideologies driving reform. As J. Khadijah Abdurahman has urged, we must confront “the Fairness, Accountability, Transparency and broader ethics frameworks that have allowed opportunists to build their brand, taking up the space required to address the core issues of Big Tech’s hegemonic violence.”

This process of facilitating surveillance experimentation through bureaucratic reform has been unfolding for years with algorithmic policing in Los Angeles. Last year, a community organizing campaign forced LAPD to end its use of “predictive policing” software sold by PredPol, a for-profit business founded by UCLA professors. Soon after, a group of over 1,400 academic mathematicians signed a letter that criticized PredPol’s “racist consequences.” The same month LAPD stopped using PredPol, it announced a surveillance framework entitled “Data-Informed Community-Focused Policing” that claims police will now use mass data to “measure results, improve efficiency, and provide overall accountability.” Soon after, PredPol reincorporated as Geolitica and undertook a nearly identical rebranding, changing its slogan from “The Predictive Policing Company™” to “Data-Driven Community Policing.” They now say their software helps “public safety teams to be more transparent, accountable, and effective.”

PredPol’s rebranding, before and after. Sources: https://geolitica.com and https://www.predpol.com

We see these rebranding maneuvers — turning “predictive policing” into “using data to hold police accountable” — as reformist strategies to make policing stronger and more durable. Herein lies the danger with surveillance bureaucracy: Whether via data-driven policing’s earlier pretense of “predicting” crime or the reformed framing, LAPD will continue to collect data to control and harm our people. The surveillance inputs, data processing systems, and policing outputs remain the same. In fact, replacing the discredited project of “predictive policing” with the reformist footings of “accountability” and “community” makes these systems more difficult to dismantle.

These notions of transparency, accountability, and community policing are not what the grassroots organizers who worked to dismantle predictive policing fought for. Along with the community groups and individuals who propelled our organizing, we demanded abolition of these technologies. But policing exists within an ecology where institutions committed to reform through experimentation and refinement can suppress community voices. Reformism is an industry unto itself, composed of for-profit surveillance vendors like Microsoft — whose products are integral to LAPD’s architecture of surveillance and racial profiling — but also academic institutions, researchers, and nonprofits. Many of those institutions are directly funded by the industry interests that use police reform to sell their products.

For example, the NYU nonprofit Policing Project says on their website that they work to “strengthen policing,” and that “Microsoft provides support for the Policing Project’s work on ethical regulation of policing technology.” The nonprofit was founded by two law professors who for years used legal scholarship to promote the idea of applying administrative rulemaking to police technology. With that intellectual scaffolding in place, these lawyers now use funding from companies that sell police technology to help police write administrative rules for this same technology. The Policing Project receives funding from Microsoft, Amazon, ShotSpotter, Mark43, and Axon, all of whom sell AI software to police. In 2017, LAPD hired the Policing Project to help write its body camera policies. Once those policies were implemented, LAPD paid Axon over $36 million for body cameras as well as storage and analysis services for the footage. Axon then used these profits to fund the Policing Project.

Another one of the Policing Project’s funders, ShotSpotter, faced widespread community outrage this year when its technology was linked to the police murder of 13-year-old Adam Toledo. Not only does ShotSpotter fund Policing Project, its CEO also sits on the nonprofit’s advisory board. A news article describing Toledo’s murder quoted ShotSpotter’s vice president referencing a Policing Project study that touted ShotSpotter’s benefits. Those claims were later contradicted by a peer-reviewed 7-year study of 68 cities published in the Journal of Urban Health. Unlike Policing Project’s work, this study was not sponsored by ShotSpotter.

We have also seen Policing Project work to undermine community opposition to the business of its funders. For example, the group’s founder Barry Friedman met with LAPD last year to pitch an effort to help “law enforcement to overcome the negative perceptions of tech in the delivery of police services.” In other words, they met to directly undermine the work of community groups like ours, which is work they knew well from their last foray into LA, when we mobilized community opposition to a project LAPD had hired them for. Emails we obtained through public records released by LAPD reveal that, after L. Song Richardson, the dean of U.C. Irvine School of Law, pulled out of supporting Policing Project’s work, Friedman wrote to LAPD about how “disappointing” Richardson and her colleagues were. He also told the LAPD officials: “This has been my lesson in LA politics, and it makes me feel for your position all the more.” We doubt this is the first time police reformers and police officials have found affinity in the trouble they face from Black and brown critics of policing.

This essay shows how data-driven technologies help obfuscate the institutions that decide who is policed. Police practices are never determined by the State alone but have instead been shaped by a range of actors, including nonprofit consultants, private industry, real estate developers, and the propertied class, as well as by academics who lay the intellectual framework for racist violence like “Broken Windows” policing. Today, policing decisions continue to be made as much by police as by those sponsoring the “neutrality” of nonprofits that write regulations for police technology, by those who sit on bodies like Axon’s “AI and Policing Technology Ethics Board,” and by those whose research reduces the trauma of racist policing to a cost-benefit ratio. If we want to end police violence, we must expose all of these workings.

The Skid Row neighborhood where the Stop LAPD Spying Coalition is based has been recognized as perhaps “the most heavily policed area in the world.” Skid Row is where police technologies are tested before they are deployed more broadly. We argue that bureaucratic reforms continue the legacy of colonial experimentation in service of white supremacist political ends — much in the same way colonial administrators used legal bureaucracy, surveillance technology, and even scientific study to dominate people they considered seditious or politically threatening. Such governance can never be made ethical or accountable. Until we dismantle policing, we expect that the ghosts of white supremacy will live on in colonization’s data, its ethics, and its ideologies.

References:

[1] Los Angeles Police Commission Regular Meeting, Tuesday, October 15, 2019, 9:30 a.m.; Police Administration Building, Police Commission Board Room, 100 West 1st Street, Los Angeles, CA 90012. See: https://lacityview.org/programs/la-police-department-commission-meeting
