Future Crime and the Politics of Predictive Policing

Egwuchukwu Ani
Surveillance and Society
Dec 2, 2021
[Image: CrimeRadar crime forecast map]

In the post below, Daniel Edler Duarte reflects on his article, “The Making of Crime Predictions: Sociotechnical Assemblages and the Controversies of Governing Future Crime,” which appeared in a recent issue of Surveillance & Society.

///

Crime forecasting systems are often explained with clichéd references to “The Minority Report.” But far from sci-fi imagination, the police have been interpreting statistics, mapping crimes, and attempting to stay one step ahead of suspects for almost two centuries. Until a few years ago, crime analysts could only describe the distribution of incidents and identify clusters. But recent developments in environmental criminology, computer science, and econometric modelling have arguably moved us closer to the “holy grail of policing.” In virtually every major city, police officers increasingly consult digital maps that draw predictive knowledge from data on past crime, telling them where to go and what kind of crime dynamics to expect.

Yet evidence of the impact of crime forecasting remains scarce. As much as enthusiasts applaud it as a “brilliantly smart idea,” the lack of in-depth studies of specific systems casts a shadow over predictions. Because of the challenges of doing research with the police and the private tech sector, we still have little information on crime dynamics, can barely evaluate results, and cannot tell how inputs are transformed into outputs. Automated decisions with deep influence on society are thus made by algorithms hidden in black boxes. To make matters worse, critics have frequently raised concerns about these systems’ potential to discriminate against poor, black, and migrant communities.
My research aims to fill this gap by telling the story of the making of CrimeRadar, an app developed by a Rio de Janeiro-based think tank in partnership with local police authorities. Based on 16 months of fieldwork, the article documents the many political choices, criminological assumptions, and inherent uncertainties involved in the algorithmic analysis of crime patterns in Rio, presenting evidence against the supposed impartiality and color-blindness of predictive policing solutions.

Although the paper demonstrates that biased policing strategies should not be “tech-washed” as objective analysis, I also argue that critics should not presume in advance what the implications of new security technologies will be. My point is not that claims of stigmatization and racism are unfounded, but that detailed examinations of technologies-in-the-making may uncover different discriminatory practices and effects. In the case of CrimeRadar, instead of contributing to the over-policing of black, poor communities, the system further invisibilized and marginalized them.
Finally, the paper directs our attention to how crime and violence are made (un)knowable and how controversies and contestations are rendered (in)visible in the making of digital policing devices. Once we grasp the ordinary practices of “othering” ingrained in crime forecasts, we may better understand how such tools induce security anxieties and how they may authorize practices of control against specific groups. Perhaps we may even produce insights into how to build more effective spaces of contestation.