PART 1: Predictive Policing, and Algorithmic Transparency as Anti-Discrimination

Overview

David Chang
Aug 5, 2017

“The policy position that is taken [in the criminal justice system] is that it’s much more dangerous to release Darth Vader than it is to incarcerate Luke Skywalker.” — Richard Berk, Professor, Dept. of Criminology, University of Pennsylvania[1]

American citizens have increasingly demanded accountability from their police forces and their justice system. They see the problems as structural, symptoms of larger questions about systemic bias and discrimination, a position dissonant with the one many police officers hold. Officers feel they are treated unfairly in media depictions[2]. Their anxiety about their physical safety, the lack of fulfillment they draw from the job, and the pressure to perform under growing media scrutiny all call for our empathy and understanding.

Between acknowledging those pressures and confronting the often ugly bias and discrimination that can become endemic in police departments through long-standing fears and stereotypes, there is some surprising common ground. Officers’ desire for physical safety and the public’s desire for greater officer accountability converge on an unmet need to make the process “better”. For both parties, this means making more effective arrests and keeping crime down more efficiently; only the methodology differs.

Enter predictive policing: a system of law enforcement that ostensibly enhances efficiency and accuracy by transforming historical data into forecasts. Some assert that the benefits are proof enough for its widespread implementation. One such program, which used “hot spots” (areas algorithmically predicted to see high rates of crime), was credited with a reduction of four crimes a week in several areas of Los Angeles that it monitored[3]. Another has demonstrated the ability to curb break-ins by constructing likely housebreak patterns from historical Cambridge Police Department data[4].
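
To make the “hot spot” idea concrete, here is a minimal sketch, assuming nothing more than geocoded historical incident reports. The coordinates, grid size, and scoring rule are all invented for illustration; the programs cited above rely on far more sophisticated models, and this is only a naive count-based baseline.

```python
import numpy as np

# Toy "hot spot" baseline (not the actual model behind the programs cited
# above): bin historical incident coordinates into a grid and flag the
# cells with the most past activity. Coordinates are synthetic stand-ins
# for geocoded incident reports.
rng = np.random.default_rng(1)
incidents = rng.normal(loc=(2.0, 6.0), scale=1.5, size=(500, 2))

GRID = 10  # 10 x 10 cells covering a 10 x 10 patch of the city
counts, _, _ = np.histogram2d(
    incidents[:, 0], incidents[:, 1],
    bins=GRID, range=[[0, 10], [0, 10]],
)

# Rank cells by historical count and pick the top 3 as tomorrow's "hot spots".
flat = counts.ravel()
top_cells = np.argsort(flat)[::-1][:3]
for cell in top_cells:
    row, col = divmod(cell, GRID)
    print(f"patrol cell ({row}, {col}): {int(flat[cell])} past incidents")
```
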

We have all heard of Minority Report; we use it as a warning sign (as I am sure Spielberg intended), a line in the sand that cannot be crossed. But we draw so readily upon it when we discuss predictive policing because of the powerful emotions it evokes in us regarding fairness. When Anderton, the protagonist, is framed for his crime, we protest because of the authority that the predictive policing system of the “pre-cogs” carries: its word taken as truth, and thus as indictment. Justice prevailing, the ability to resist a predetermined fate: these resonate powerfully with us because we desire to live in a society that broadly treats us fairly.

The current design choices in predictive policing systems do present some unsettling questions, echoing long-running conversations about justice and fairness in policing and sentencing. Civil liberties organizations voice concerns about due process violations for the individuals implicated[5], with some pointing to historically problematic (perhaps even biased) policing in regions with high minority populations[6]. And there is ongoing debate as to whether these technologies are “fundamentally discriminatory”[7], or instead reveal a startling objectivity through the data they transform.

Proponents will contend that worsening police-community conflicts are an unfortunate but necessary trade-off for improving the overall quality of life. After all, they argue, aren’t these concerns just issues with the current state of the system? As such technology becomes more widespread, would it not become more accurate and reduce such biases?

To answer this question, I will structure my argument throughout this series as the following set of points:

  1. Predictive policing algorithms are often flawed because they rely on variables that traditionally serve as proxies for race (a fact that should by itself prohibit their use) to make risk assessments, primarily regarding the recidivism of potential offenders; see the sketch after this list.
  2. By surveying the different kinds of predictive policing algorithms in use (with case studies), as well as the methodology they rely on, I will argue that bias is inadvertently introduced into these algorithms through the proxy variables of point 1, and explore how it penetrates the policing algorithms in question.
  3. I will detail how the ongoing debate over predictive policing technology concerns the balance between “efficiency” and “equality”, and how the push for fairness is motivated primarily by a desire for algorithmic transparency and a better understanding of the models that make these risk assessments. I will argue for fairness and equality, and thus transparency, over efficiency, and argue that understanding a system does not guarantee that it can be manipulated. To do this, I will draw upon the conversation between Rawls and Nozick and refer back to my examples.
  4. I will explain some potential changes to increase fairness in these contexts.

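As a companion to point 1, here is a minimal sketch of the proxy problem, built entirely on synthetic data. The feature names, group sizes, and rates are invented for illustration and describe no deployed system: the true offending rate is identical across groups, but a skewed arrest record, combined with a neighborhood feature that correlates with group membership, lets a “race-blind” risk score reproduce the disparity anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic, illustrative data: a protected attribute and a neighborhood
# feature that correlates with it (e.g. through residential segregation).
group = rng.integers(0, 2, n)
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)

# The true underlying offending rate is identical for both groups...
offended = rng.random(n) < 0.10

# ...but the historical record is skewed: heavier patrolling of
# neighborhood 1 means offences there are recorded far more often.
detection_rate = np.where(neighborhood == 1, 0.60, 0.20)
arrested = offended & (rng.random(n) < detection_rate)

# A naive risk score that never sees race: each neighborhood's
# historical arrest rate, assigned to everyone living there.
risk_by_neighborhood = np.array(
    [arrested[neighborhood == k].mean() for k in (0, 1)]
)
risk_score = risk_by_neighborhood[neighborhood]

# The score nonetheless reproduces the disparity along group lines.
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk_score[group == g].mean():.3f}")
```
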
Link to Part 2

References

[1] Brustein, Joshua. “This Guy Trains Computers to Find Future Criminals.” Bloomberg Technology. Bloomberg L.P., 18 July 2016. <https://www.bloomberg.com/features/2016-richard-berk-future-crime/>.

[2] Pew Research Center, Jan. 2017, “Behind the Badge: Amid protests and calls for reform, how police view their jobs, key issues and recent fatal encounters between blacks and police.” <http://assets.pewresearch.org/wp-content/uploads/sites/3/2017/01/06171402/Police-Report_FINAL_web.pdf>.

[3] Chammah, Maurice. “Policing the Future.” The Marshall Project. The Marshall Project, 3 Feb. 2016. <https://www.themarshallproject.org/2016/02/03/policing-the-future#.ejqhOmft7>.

[4] Rudin, Cynthia. “Predictive Policing: Using Machine Learning to Detect Patterns of Crime.” WIRED. Conde Nast, n.d. <https://www.wired.com/insights/2013/08/predictive-policing-using-machine-learning-to-detect-patterns-of-crime/>.

[5] Patel, Faiza. “Be Cautious About Data-Driven Policing.” The New York Times. The New York Times Company, 3 Dec. 2015. <http://www.nytimes.com/roomfordebate/2015/11/18/can-predictive-policing-be-ethical-and-effective/be-cautious-about-data-driven-policing>.

[6] Gangadharan, Seeta Pena. “Predictive Algorithms Are Not Inherently Unbiased.” The New York Times. The New York Times Company, 19 Nov. 2015. <http://www.nytimes.com/roomfordebate/2015/11/18/can-predictive-policing-be-ethical-and-effective/predictive-algorithms-are-not-inherently-unbiased>.

[7] Ibid.

