In the post below, Aaron Shapiro reflects on his article, “Predictive Policing for Reform? Indeterminacy and Intervention in Big Data Policing,” which appeared in a recent issue of Surveillance & Society.
In December 2015, I participated in a ride-along with a St. Louis County Police officer in the highly segregated western suburbs of St. Louis, where Ferguson is located. Although it was a Ferguson police officer who shot and killed Michael Brown the previous August, it was St. Louis County cops who policed the ensuing protests — St. Louis County cops who showed up in riot gear to patrol peaceful vigils, who called in SWAT teams, who used tear gas and fired bean bag rounds at protesters, striking television news crews and a state senator in the process. In short, St. Louis County had an image problem.
And so it was that I was on that ride-along. I traveled to St. Louis County for my research with the product team at HunchLab, a predictive policing platform that St. Louis County’s police department was about to start using. It was my first time sitting in the front of a police car (I had some dumb run-ins as a teenager), and it was the longest conversation I’ve ever had with a cop — the ride-along was three hours.
I had just begun researching predictive policing, and a lot of coverage of the technology was cynical — and for good reason. While law enforcement leaders were raving about algorithms making policing better, fairer, and less discriminatory, stories of police officers killing unarmed African Americans seemed to be circulating on the news and social media at an unprecedented rate. Critics lambasted predictive policing as just another tool in a long string of police tech for repressing poor people of color.
The two narratives didn’t square: how could predictive policing be the solution to the cops’ image problem while also making things worse? Indeed, how and why would anyone possibly think that crime-prediction algorithms could help fix policing? This is the question that I explore in my contribution to Surveillance & Society.
What struck me most about the ride-along, then, was how unremarkable predictive policing is in practice. When the cop “used” HunchLab, he drove to areas that corresponded with grid cells on the software, scoped the scene, and then left the area — which is pretty much what he did without the algorithm. The most “action” occurred as we were traveling between grid cells. The officer spotted a rental car and tailed it for a few blocks — rentals, he explained, can be a sign of trouble. When the driver turned without using his signal, we pulled him over, using the traffic violation as a pretext. The officer walked over and took the driver’s license and registration. When he returned, he explained that the driver smelled like marijuana, so he was calling in assistance. The police searched the man’s car and found nothing (the cop guessed that the driver may have dispensed with his weed by smoking it). And a few minutes later, they let the driver go — probably for my sake.
When I think back on that ride-along, I can’t help but notice that predictive policing wasn’t responsible for any of that interaction — just regular, old policing. And the “war on drugs.” And a criminal justice system that uses criminalization and incarceration as a way to deal with people who can’t get a job because of previous run-ins with the law. We need tools to dismantle these repressive systems as soon as possible. The algorithm ain’t that, but that’s not the algorithm’s fault either.