PART 2: Predictive Policing, and Algorithmic Transparency as Anti-Discrimination

Automated risk assessment: understanding the methodology, with a brief summary of case studies

David Chang
5 min read · Aug 5, 2017

To address these different perspectives, it will be helpful to unpack what “predictive policing” AI systems entail and how they are designed.

The term “predictive policing” was coined around 2008 by Police Chief William Bratton and the Los Angeles Police Department, who began public outreach about the benefits of data transformation and predictive analytics for police departments[8]. A series of symposiums held over the following years, coinciding with magnified media coverage of the topic, helped clarify both the term’s definition and the question of its future usage and implementation by law enforcement.

Why do its proponents argue that predictive policing works? The previously cited RAND report asserts that predictive policing functions on the (well-tested) assumption that crime is predictable: criminals feel comfortable operating in familiarity[9], committing the same crimes, around the same times and locations, with sufficient frequency. This matches prominent research on criminal rational-choice theory, and is what makes predictive modeling possible.

The report additionally offers a useful framework for categorizing the general functions such systems perform[10]:

  1. Methods for predicting crimes.
  2. Methods for predicting offenders.
  3. Methods for predicting perpetrators’ identities.
  4. Methods for predicting victims of crimes.

Historical biases around race and gender, our main concern and the reason predictive policing requires greater transparency, can come into play for any of these general functions. Still, we should be most concerned with the first three, as they have the highest potential for adverse impact on individuals’ lives[11]. Beyond this categorization, we must also consider the two broad levels at which predictive policing algorithms generally operate: the group level and the individual level. Both matter as they pertain to the equality-efficiency equilibrium, and I will explore each in greater detail as follows:

Group-level programs attempt to broadly characterize criminal activity through data collection and transformation, then leverage that data to generate predictions about criminal activity and assess risk. “Hot-spot” technology is a group-level program; one example is PredPol, an algorithm that I will detail below.

PredPol is a crime prevention program born out of collaborative research between the LAPD and UCLA. It professes to rely upon only three inputs (crime type, location, and date / time), and asserts that by doing so it has eliminated “any personal liberties and privacy concerns”[12]. PredPol processes historical data and calculates crime probabilities on an hourly basis, using a methodology analogous to predicting earthquake aftershocks. By aggregating the history of crime in an area and visualizing it as patterns, PredPol predicts where crime will happen next[13]. This is known as a “near-repeat” model, which operates on the assumption that areas recently experiencing higher levels of crime will sustain higher nearby crime in the near future, at an “aftershock rate”. The technique works best with burglaries, and less so with other crimes (RAND 42–44). A PredPol installation offers two categories of primary focus for the algorithm: Part 1 covers violent crimes (e.g. homicide, arson, assault), while Part 2 expands this scope to include “nuisance” crimes like vagrancy and drug consumption / transactions (O’Neil 86).
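
To make the “near-repeat” idea concrete, below is a minimal sketch in Python of how such a model can work. It is not PredPol’s proprietary implementation; the grid size, decay constants, neighborhood radius, and event data are all illustrative assumptions. Each past crime contributes risk to nearby grid cells, and that contribution decays with time and distance, mimicking an aftershock.

```python
import math
from collections import defaultdict

# Illustrative near-repeat scoring: each past crime raises the predicted
# risk of nearby grid cells, with influence decaying in time and space.
# All parameters are invented assumptions, not PredPol's actual values.
TIME_DECAY = 0.1    # per day: how quickly an "aftershock" fades
SPACE_DECAY = 0.5   # per cell of distance
CELL_SIZE = 150.0   # meters per cell (PredPol reportedly uses ~500 ft boxes)

def to_cell(x_m, y_m):
    """Map a coordinate in meters to a discrete grid cell."""
    return (int(x_m // CELL_SIZE), int(y_m // CELL_SIZE))

def risk_surface(events, now_day):
    """events: list of (x_m, y_m, day) tuples for past crimes of one type.
    Returns a dict mapping grid cell -> accumulated risk score."""
    risk = defaultdict(float)
    for x, y, day in events:
        cx, cy = to_cell(x, y)
        age = now_day - day
        if age < 0:
            continue  # ignore events "from the future"
        # Spread each event's influence over a small neighborhood of cells.
        for dx in range(-2, 3):
            for dy in range(-2, 3):
                dist = math.hypot(dx, dy)
                risk[(cx + dx, cy + dy)] += (
                    math.exp(-TIME_DECAY * age) * math.exp(-SPACE_DECAY * dist)
                )
    return risk

# Toy data: two burglaries close together yesterday, one far away a month ago.
events = [(100.0, 200.0, 29.0), (180.0, 260.0, 29.0), (5000.0, 5000.0, 0.0)]
surface = risk_surface(events, now_day=30.0)
hot_spots = sorted(surface.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(hot_spots)  # recent, clustered events dominate the top-ranked cells
```

Ranking cells by accumulated risk yields the “hot spots” a department would be directed to patrol; note that the model sees only reported crime, so any bias in where crime is recorded flows straight into the predictions.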

Individual-level programs build upon these broad characterizations, combining them with a more granular dataset about the person in question to create a personalized risk-assessment profile; they make classifications about individuals. This category includes identification software and questionnaire-based risk-assessment software, and I will briefly detail some notable examples below.

Northpointe is a private company that partners with local and state governments to improve correctional decision-making at the individual and policy levels[14]. Its flagship product is the “COMPAS” algorithm, which assesses recidivism risk (the likelihood of another arrest) for recent offenders preparing to transition back to community life[15]. To calculate this score, Northpointe relies upon a questionnaire that asks the offender about gang activity, residential stability, and criminal thinking, among other queries[16].
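
Northpointe does not disclose COMPAS’s scoring formula, so the sketch below is purely hypothetical: a logistic model over numerically coded questionnaire answers, mapped onto the 1–10 decile scale on which COMPAS scores are reported. Every question, coding, and weight here is invented for illustration.

```python
import math

# Hypothetical questionnaire-based risk score in the style of tools like
# COMPAS. Questions, answer codings, and weights are all invented; the
# real instrument's formula is proprietary.
LIKERT = {"strongly disagree": 0, "disagree": 1, "agree": 2, "strongly agree": 3}

# Assumed weights for a logistic model over coded answers.
WEIGHTS = {
    "gang_contact": 0.9,          # any contact with gang activity (0/1)
    "residential_moves": 0.3,     # moves in the last 12 months (count)
    "insult_means_trouble": 0.5,  # Likert-coded agreement with the sample
                                  # question quoted in note [16]
}
BIAS = -2.5  # intercept chosen arbitrarily for the illustration

def risk_probability(answers):
    """answers: dict of question id -> numerically coded answer."""
    z = BIAS + sum(WEIGHTS[q] * answers[q] for q in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link -> probability

def decile_score(p):
    """Map a probability onto a 1-10 scale, as COMPAS scores are reported."""
    return min(10, max(1, int(p * 10) + 1))

answers = {
    "gang_contact": 0,
    "residential_moves": 2,
    "insult_means_trouble": LIKERT["agree"],
}
p = risk_probability(answers)
print(f"estimated probability of rearrest: {p:.2f} -> decile {decile_score(p)}")
```

The design choice to watch is the weights: in a proprietary system they are invisible to the offender being scored, which is precisely the transparency gap this series examines.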

A team of researchers at Shanghai Jiao Tong University has developed an individual-level algorithm that attributes criminality based on facial features, a traditionally forbidden / impermissible basis for classification[17]. In a nutshell, the researchers argue that four standard classifiers, trained on face images controlled for race, gender, age, and facial expression, achieve consistent success in determining criminality, and they posit that “normal” people have more similar faces. They generated their results by applying computer vision and pattern recognition to a database of roughly two thousand face photos. Computer vision, in this context, should be understood as the ability of a program to “see”: to recognize certain objects in its environment and produce outputs from them, with the goal of performing efficiently and robustly across different objects / environments[18].
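
For readers unfamiliar with how such a classifier is built, the PyTorch sketch below shows the general shape of a binary image classifier. It is not the researchers’ model; the architecture, input size, and single training step are generic assumptions.

```python
import torch
import torch.nn as nn

# Generic binary image classifier of the kind used in face-based studies.
# Layer sizes and input dimensions are illustrative, not the paper's model.
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two output classes

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))

model = TinyCNN()
images = torch.randn(8, 1, 64, 64)  # stand-in for a batch of face photos
labels = torch.randint(0, 2, (8,))  # stand-in binary labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()  # backpropagation for one illustrative training step
print(f"batch loss: {loss.item():.3f}")
```

Nothing in this pipeline inspects what the network actually learns: it will latch onto any correlate present in the labeled data, which is exactly why face-based criminality classifiers raise the discrimination concerns at the heart of this series.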

The aforementioned Richard Berk has also emerged as a prominent figure in the developing field of predictive policing, creating both group- and individual-level algorithms. He has conducted work with the Minneapolis Police Department on reducing recidivism among domestic-violence offenders[19], as well as with the Pennsylvania Board of Probation and Parole to assess risk for inmates. For Philadelphia’s probation and parole department, Berk developed an algorithm that drew upon a historical dataset of a hundred thousand cases dating from the 1960s to forecast recidivism rates; he trained the program to focus on predictors like age, gender, and prior neighborhood, among other factors[20].
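
Berk’s published forecasting work leans on random forests, so a minimal sketch of that general approach seems fair here, though the features and data below are synthetic stand-ins rather than anything drawn from his actual datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sketch of a random-forest recidivism forecast in the spirit of Berk's
# work. All features and labels are synthetic; real models are trained
# on historical case records with far more predictors.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(18, 70, n),  # age at release
    rng.integers(0, 2, n),    # gender, coded 0/1
    rng.integers(0, 15, n),   # number of prior arrests
    rng.integers(0, 50, n),   # prior neighborhood, coded as an id
])
# Synthetic label loosely tied to priors and age, plus noise.
y = ((X[:, 2] - 0.1 * X[:, 0] + rng.normal(0, 2, n)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
print(f"held-out accuracy: {forest.score(X_test, y_test):.2f}")
# Any bias already encoded in the historical labels is reproduced by the
# forest, which is the core concern motivating calls for transparency.
```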

Link to Part 1
Link to Part 3

References

[8] Perry, Walter, et al. Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation, 2013. pp. 4–5.

[9] Ibid., pp. 2–3.

[10] Ibid., pp. 8–9.

[11] Most people would agree that the last, while it has the potential to discriminate, has a largely beneficial effect, as it caters to victims rather than perpetrators. A false positive in victim prediction also carries far less potential to harm the individual involved than a false positive in the other three categories.

[12] “About PredPol.” PredPol. PredPol, n.d. <http://www.predpol.com/about/>.

[13] O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing Group (NY), 2016. p. 86.

[14] “About Us.” Northpointe. Northpointe Inc., n.d. <http://www.northpointeinc.com/about-us>.

[15] “COMPAS — The most scientifically advanced risk and needs assessments.” Northpointe. Northpointe Inc., n.d. <http://www.northpointeinc.com/risk-needs-assessment>.

[16] For example, questions phrased as “If someone insults my friends, family, or group, they’re looking for trouble” with possible answers from STRONGLY AGREE to STRONGLY DISAGREE.

“Sample COMPAS Risk Assessment: COMPAS ‘CORE’.” ProPublica. ProPublica Inc., n.d. <https://www.propublica.org/documents/item/2702103-Sample-Risk-Assessment-COMPAS-CORE>.

[17] Wu, Xiaolin, and Xi Zhang. “Automated Inference on Criminality using Face Images.” arXiv preprint arXiv:1611.04135 (2016). <https://arxiv.org/pdf/1611.04135.pdf>.

[18] Learned-Miller, Erik G. Introduction to Computer Vision. Amherst, Massachusetts: University of Massachusetts, Amherst, 19 Jan. 2011. <https://people.cs.umass.edu/~elm/Teaching/Docs/IntroCV_1_19_11.pdf>.

[19] “Individuals — Sherman & Berk (1984).” Center for Evidence-Based Crime Policy. Center for Evidence-Based Crime Policy, n.d. <http://cebcp.org/evidence-based-policing/the-matrix/individuals/individuals-sherman-and-berk-1984/>.

[20] Labi, Nadya. “Misfortune Teller.” The Atlantic. The Atlantic Monthly Group, Jan./Feb. 2012. <https://www.theatlantic.com/magazine/archive/2012/01/misfortune-teller/308846/>.
