What do we want criminal justice technology to be?

Georgetown’s “The Perpetual Line-Up” report forces the question

R. Joshua Scannell
Data & Society: Points
5 min read · Oct 25, 2016


On October 18, the Georgetown Center on Privacy & Technology released a report on the current state of law enforcement use of facial recognition technologies. The research center’s findings, titled “The Perpetual Line-Up,” are at once deeply shocking and unsurprising. The report is authoritative and worth reading in full, but there are three major takeaways:

First, nearly half of American adults are enrolled in some sort of facial recognition database that is searchable by local, state, and/or federal law enforcement agencies.

Second, the systems that these agencies use are demonstrably racially biased.

And, third, they are developed and implemented largely in secret and in the absence of any effective constitutional or regulatory guidelines. As Georgetown’s title suggests, the widespread adoption of these systems has created an effective “perpetual line-up” of 117 million Americans.

Facial recognition straddles the affective line between traditional forensics and future-shocked science fiction.

On the one hand, facial recognition is a type of biometric technology, like fingerprinting and DNA analysis. This situates it within a traditional, and largely constitutionally protected, toolkit of forensic techniques. On the other hand, the capacities to 1) algorithmically model facial characteristics, 2) enroll photographic evidence in databases at scale, and 3) automatically run remote checks of partial photographic evidence against these databases in something close to real time are new.
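
To make that third capacity concrete, here is a minimal sketch of how a search against an enrolled face database typically works: a model reduces each photo to a fixed-length embedding vector, and a probe image is scored against every enrolled vector to return the closest candidates. The embedding size, the 0.6 similarity threshold, and names like `FaceGallery` and `enroll` are illustrative assumptions, not details drawn from the report or from any specific vendor’s system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class FaceGallery:
    """Toy 'enrolled database' of face embeddings keyed by record ID."""
    def __init__(self):
        self.records: dict[str, np.ndarray] = {}

    def enroll(self, record_id: str, embedding: np.ndarray) -> None:
        # Enrollment: store a fixed-length vector derived from a photo
        # (e.g., a driver's license image) under an identity record.
        self.records[record_id] = embedding

    def search(self, probe: np.ndarray, threshold: float = 0.6, top_k: int = 5):
        # Search: score the probe (e.g., a frame of surveillance footage)
        # against every enrolled vector and return the best candidates.
        scores = [(rid, cosine_similarity(probe, emb))
                  for rid, emb in self.records.items()]
        scores.sort(key=lambda pair: pair[1], reverse=True)
        return [(rid, s) for rid, s in scores[:top_k] if s >= threshold]

# Usage, with random vectors standing in for a real embedding model's output.
rng = np.random.default_rng(0)
gallery = FaceGallery()
for i in range(1000):
    gallery.enroll(f"license-{i:04d}", rng.normal(size=128))
probe = rng.normal(size=128)
print(gallery.search(probe))
```

Scaled to tens of millions of enrolled photos, the same brute-force comparison becomes a nearest-neighbor search problem, which is one reason processing power and storage remain the practical constraints described below.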

Most of the systems that law enforcement agencies are purchasing have been brought to market in the last decade. Some techniques, like the ability to trawl ambient crowd surveillance footage to deliver facial matches in real time (currently used in at least Chicago and LA), are still effectively limited by processing power and storage capacity. However, as the Georgetown report makes clear, these constraints are diminishing.

CC BY-NC-ND 2.0-licensed photo by Craig Sefton.

“The Perpetual Line-Up” is the most recent in a wave of concerned reports on latest-generation law enforcement technologies, such as predictive policing and predictive recidivism tools. The anxieties raised seem to cluster around three basic issues:

First, a cultural aversion to generalized surveillance that is usually articulated in science-fictional simile. Writers are quick to invoke specters of Orwell and Dick to make a point about the creeping power of the police state.

Second, a civil libertarian concern with the constitutionality of data-driven policing technologies. Digital surveillance tools are being rolled out in a legal abyss. None of the rulings that might provide a constitutional framework, like Katz (1967), were written with “big data” in mind, and no one knows what a reasonable expectation of metadata privacy might be. Litigation is a long and slow process, and Moore’s law doesn’t respect circuit courts’ timeframes.

Third, a civil rights concern that is most often articulated as “bias.” Facial recognition, for instance, is 5–10% less accurate at measuring black faces than white ones. This is usually thought to be the consequence of some combination of bad training data and bad hiring practices in tech. A “fair” or “unbiased” algorithm would have similar levels of inaccuracy across all racial groups.
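
That “similar levels of inaccuracy across all racial groups” standard can be stated very concretely: compute the error rate separately for each demographic group and compare. The sketch below does exactly that with made-up match decisions; the data, group labels, and the `error_rate_by_group` helper are hypothetical and only illustrate the metric, not any real system’s performance.

```python
from collections import defaultdict

# Hypothetical match decisions: (group, predicted_match, true_match).
# These values are invented purely to illustrate the metric.
decisions = [
    ("white", True, True), ("white", False, False), ("white", True, True),
    ("white", False, False), ("black", True, False), ("black", True, True),
    ("black", False, True), ("black", False, False),
]

def error_rate_by_group(decisions):
    """Fraction of incorrect match/non-match calls for each group."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in decisions:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

print(error_rate_by_group(decisions))
# In this toy data the error rates differ sharply across groups; the narrow
# "unbiased" standard described above would require them to be roughly equal.
```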

But the critiques around “fairness” and “bias,” which “The Perpetual Line-Up” echoes, have a fundamental problem. The American criminal justice system is not ahistorically “biased”: it is racist and sexist, classist and queerphobic. In legalistic terms, that means that it already, and systemically, fails to operate based on individual suspicion and guilt. In engineering terms, it means that, if the idea is to create a more “fair” criminal justice system through machine learning, then we have the unfortunate problem that all existing training data is garbage.

To be a person of color, to be queer, or to be poor in America is already to be biometrically criminalized by such techniques as the statistical conflation of melanin and criminality — or, in the parlance, “risk.”

As Sorelle Friedler and her team have recently noted (in different terms), a perfectly “fair” and “unbiased” digital system that takes actually existing social relations as its blueprint will achieve little more than transubstantiating an organizing social logic, racism (to say nothing of classism or queerphobia), into an engineering concern.

The Georgetown report shows that facial recognition technology presents a unique challenge to basic assumptions about the American social contract. We, they argue, don’t want to live in a society that criminalizes a person’s body rather than a person’s actions. The perpetual line-up does just that by surreptitiously enrolling those who have not already been formally ensnared in the criminal justice system into that system’s databanks. That may indeed feel “creepy” to those of us for whom such carceral attention is novel. But, for many, it’s just life.

Biometrics — which is to say the technical process of measuring, distinguishing, taxonomizing, and thereby assigning social value to different bodies — have long been crucial to the American criminal justice system’s operative logics of criminality. When, for instance, black people are arrested at five times the rate of the rest of the population (as the report notes is the case in Minnesota), that is evidence of the constitutive link between biometrics — in this case, skin color — and generalized suspicion. For tens of millions of Americans, this creeping surveillant power of the security state existed long before the Department of Homeland Security began to pay for facial recognition tech.

Reports like “The Perpetual Line-Up” force a fundamental question: What do we want technologies like facial recognition to do? Do we want them to automate narrowly “unbiased” facets of the criminal justice system? Or do we want to end the criminal justice system’s historical role as an engine of social injustice? We can’t have both.

Points: In this helpfully link-filled Points original, R. Joshua Scannell introduces us to the Georgetown Center on Privacy & Technology’s massive, urgent report on police use of facial recognition tech — as a way into a larger question: What do we want from criminal justice technology? — Ed.

R. Joshua Scannell is a researcher at Data & Society and a PhD candidate in sociology at the CUNY Graduate School and University Center. His work examines the relationship between contemporary digital technology and logics of policing.
