Pulling Back the Curtain on the Technologies Automating Inequities in the Criminal Legal System

In February 2019, Nijeer Parks was arrested for a crime he didn’t commit. He was then held in jail for ten days, considered too “dangerous” to be released.

In Mr. Parks’ case, police in New Jersey and New York were seeking a man who allegedly stole candy from a gift shop and nearly ran his car into a police vehicle while fleeing. The suspect had left behind a fraudulent ID, which police ran through a face recognition system. It returned a match for Mr. Parks. Despite being nearly an hour away when the incident occurred — an alibi later verified by Western Union — Mr. Parks was arrested. Police had done no additional investigatory work, arresting Mr. Parks based solely on a bad face recognition match.

New Jersey has largely eliminated cash bail. Instead, once Mr. Parks was in jail, a risk assessment algorithm calculated how “dangerous” he would be to the public if released before trial. Because he had two non-violent drug-related offenses from roughly ten years earlier, the algorithm deemed Mr. Parks “dangerous,” and he was denied release.

Why did police think Mr. Parks was the suspect they were looking for? And why did they consider him dangerous? Both of these decisions, critical to Mr. Parks’ liberty, safety, and wellbeing, were made on the basis of algorithmic recommendations. They illustrate a major shift in the criminal legal system: police, judges, prosecutors, and other legal authorities are increasingly using algorithmic technologies when making critical decisions about policing and punishment.

Mr. Parks’ story highlights a number of issues with algorithms in the criminal legal system. They make mistakes that have serious impacts on people’s lives. They build upon one another to create a web of opaque technologies that is hard to understand or challenge. And they stymie attempts to reconsider how the criminal legal system operates by reinforcing existing approaches to contested questions about criminal justice: What should the role of police be? What makes a person “dangerous”? How should society treat a “dangerous” person? In doing so, algorithms perpetuate historical and contemporary inequities.

Not only are these technologies built on data that reflects the biases and inequities of the real world, but they are also implemented in ways that produce inequitable outcomes. Many commercially available face recognition algorithms still suffer from racial and gender bias, performing worse on anyone who is not an older, lighter-skinned man. Predictive policing algorithms are built with police data, which reflects the fact that low-income, Black, and other historically marginalized communities are more heavily policed. The result is algorithms that continue to recommend that police surveil the same people and neighborhoods over and over again.

Nearly every critical decision made throughout the criminal legal system can be informed by algorithms. Police decide which neighborhoods and individuals to patrol using crime forecasting techniques like predictive policing. Law enforcement officers identify (or misidentify) suspects in criminal investigations with face recognition. Judges, parole boards, and officials in the courts and prison system consult risk assessments when setting bail, sentencing, deciding supervision levels during incarceration, and making other decisions related to a person’s liberty. And these are just the most common algorithms; the true scope is even greater.

To make matters worse, most of these algorithms are developed by unaccountable private actors, and adopted by government agencies through opaque procurement processes. For a variety of reasons, it can be difficult if not impossible for the public — or even the people operating the technologies — to understand how these algorithms work. Private companies cite trade secret and other intellectual property protections to shield their algorithms from outside inspection, preventing critical examination of how they’re built or function. In some cases, as with “black box” algorithms, not even the developers completely understand how they work.

The quiet automation of policing and punishment is harmful not just for those caught up in the legal system like Mr. Parks — it also makes it more difficult for advocates, officials, and the general public to question how critical decisions get made. Cop Out: Automation in the Criminal Legal System is an attempt to pull back the curtain on the system of technologies that is metastasizing — and invisibilizing — a system of policing that is already rife with abuse. Understanding these technologies, and the role they play in the criminal legal system, is crucial not only to disrupting technological encroachment, but also to challenging the very systems of policing and punishment they prop up.

--

Jameson Spivack
Associate, Center on Privacy & Technology at Georgetown Law, focusing on the policy and ethics of AI and emerging technologies.