The history of predictive policing in the United States

VRKRBR
May 29, 2022


“Police officers […] are getting ahead of the bad guys by figuring out where crimes will be committed before they take place.”

That is how TIME Magazine described predictive policing in 2011 on its “The 50 Best Inventions” list. At the time, several police departments in the US had started to run trials with predictive software. The technology was widely lauded as the future of law enforcement: an opportunity for police officers to predict the future and stop crimes before they occurred. Numerous media outlets credulously repeated the software developers’ claims that predictive policing would eliminate human bias, make policing more targeted, and save police resources.

Predictive policing first became attractive to police after the 2008 recession, when departments across the US faced budget cuts. Law enforcement agencies wanted to cut costs by using software to target their operations more precisely, and large federal grants were awarded to develop smart policing solutions. In 2009, $3 million went to the LAPD for one of the first trials of location-based predictive policing. The goal was to predict areas where crime would occur and to proactively deploy officers as a deterrent. The effort was led by Police Chief William Bratton, who had spearheaded data-driven policing in New York City in the 1990s and brought his acumen to the LAPD in the early 2000s. The involvement of Bratton, who was highly esteemed in law enforcement circles, lent credibility to the technology and contributed to its adoption by other departments across the country. In 2014, a survey of 200 departments found that 38% were using predictive policing, with 70% stating that they planned to implement the technology in the next two to five years. The LAPD’s trial was carried out in cooperation with UCLA professors Jeff Brantingham and George Mohler. In 2012, the two went on to create PredPol (now Geolitica), the world’s most widely used predictive policing software. According to the company’s website, its product “is currently being used to help protect one out of every 30 people in the United States”.

While the market is now firmly in the hands of private businesses, the fundamental ideas of predictive policing can be traced back to methods developed by the departments themselves. Bratton was NYPD Police Commissioner in 1994 when his Deputy Jack Maple developed Compstat (COMPuter/COMParative STATistics). The system tracked crimes across the city by compiling information on the time, place, and victims of incidents. For the first time, officers were able to access crime statistics for continuous analysis. Nowadays Compstat is used by most major police departments. Making its own foray into more advanced digital systems, the NYPD began cooperating with Microsoft in 2008. With the tech giant’s help, the NYPD built its Domain Awareness System (DAS). According to its developers, DAS is “a network of sensors, databases, devices, software, and infrastructure that delivers tailored information and analytics to mobile devices and precinct desktops”. The programme is one of the largest surveillance networks in the world.

At the outset, DAS served as the NYPD Counterterrorism Bureau’s central repository for sensor data from security cameras, license plate readers, and radiation sensors. In 2010, geocoded NYPD records, such as complaints, arrests, emergency calls, and warrants, were added, together with alerting and pattern-recognition analytics. The NYPD rolled out DAS to its entire force in 2013. In addition to Compstat 2.0, predictive algorithms were incorporated into the system to support resource allocation. By April 2016, all of the NYPD’s roughly 36,000 officers were able to access DAS on their phones or tablets. New features continue to be added to DAS, such as the machine learning model Patternizr in December 2016. This recommendation algorithm correlates occurrences across crimes and offenders to establish crime patterns. DAS illustrates a key feature of predictive policing: it is embedded in a wider surveillance architecture. The growing variety and volume of data begets new forms of analysis that help (or enable) officers to process vast quantities of information and guide decision-making.

Like the first trial carried out in Los Angeles, early iterations of predictive policing were location-based and fundamentally computerised versions of traditional hotspot policing. The underlying theory is that future crime is more likely to occur in the vicinity of past crime (what analysts call a near-repeat effect). Based on past crime data and other factors, the software makes spatio-temporal predictions about which areas police should patrol to prevent the next burglary or car theft. Initial trials seemed to validate claims about the technology’s effectiveness. In 2012, having used predictive methods for six months, the LAPD reported a 25% drop in burglaries compared to the previous year. Similar results were found in other areas of California and outside the state, in cities such as Seattle and Atlanta. However, information on the impact of predictive methods was released either by the police departments themselves or by the software developers they partnered with. Independent researchers came to less conclusive findings. A 2014 report on a predictive policing experiment in Louisiana found no significant difference between the treatment districts that used predictive policing and the control districts that did not. To this day, the only randomised controlled trial stating that predictive policing led to a reduction in crime was co-authored by the founders of PredPol.
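To make the near-repeat idea concrete, here is a deliberately simplified sketch, loosely inspired by (but not equivalent to) the self-exciting point-process models published by PredPol’s founders: each past incident raises the risk score of its grid cell by an amount that decays exponentially with age, and patrols are sent to the highest-scoring cells. All function names, cells, and parameters below are invented for illustration.

```python
import math

def near_repeat_scores(incidents, now, decay_days=7.0):
    """Toy near-repeat scoring: each past incident in a grid cell adds a
    contribution that decays exponentially with its age in days.
    `incidents` is a list of (cell, day) tuples; returns {cell: score}."""
    scores = {}
    for cell, day in incidents:
        age = now - day
        if age < 0:
            continue  # ignore incidents "from the future"
        scores[cell] = scores.get(cell, 0.0) + math.exp(-age / decay_days)
    return scores

def patrol_targets(incidents, now, k=2):
    """Return the k highest-scoring cells, i.e. the boxes to patrol next."""
    scores = near_repeat_scores(incidents, now)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Recent burglaries cluster in cell (3, 4); one stale incident sits in (0, 0).
history = [((3, 4), 9), ((3, 4), 8), ((1, 1), 5), ((0, 0), 0)]
print(patrol_targets(history, now=10, k=2))  # the recent cluster ranks first
```

Two recent incidents in the same cell outweigh a single older one, which is the whole point of the near-repeat effect: yesterday’s burglary is treated as a stronger signal than last month’s.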

With the introduction of increasingly person-based prediction models, public concern about privacy and the reinforcement of biases grew. These programmes process diverse data in large quantities (e.g. social media, vehicle registries, or sensor data) to track and profile potential offenders or identify victims of future crime. One case that garnered attention was New Orleans’s secret six-year cooperation with Palantir. In 2018, it became public that the city’s police force had been using the company’s network-analysis software to compile lists of people likely to be involved in shootings, either as victims or perpetrators. Police used the rankings to target at-risk individuals for intervention, and many were subsequently indicted. According to reporting by The Verge, key city council members were not aware of the police department’s partnership with the software company. The Mayor’s office terminated the cooperation one month after it became public.

Initially, politicians and police alike saw predictive policing as a chance to legitimise police work and reduce discriminatory practices. They welcomed the tool and proclaimed a new, unbiased era of policing. St. Louis, for example, introduced the technology in 2014. After the police killing of unarmed Black teenager Michael Brown in nearby Ferguson, the department’s leadership expressed the hope that the software would lead to more objective policing. For this purpose, Philadelphia start-up Azavea developed HunchLab, which was later acquired by ShotSpotter, the company known for its controversial gunshot detection technology. HunchLab predicts the two areas with the highest risk of serious crime within an eight-hour time span. The software mainly uses crime statistics, to which census and weather data, as well as information on population density, businesses, schools, transportation hubs, and social events, can be added. Using HunchLab, police are supposed to discover correlations, for example that crimes are less likely on cold days or that car thefts occur more often close to schools.

The scandal in New Orleans and similar reports from other cities contributed to enhanced scrutiny of predictive policing. Critics claim that algorithms reinforce and legitimise discriminatory practices instead of removing human bias from policing. In their 2016 study, researchers Kristian Lum and William Isaac demonstrated that supplying PredPol’s algorithm with historic data on drug crimes in Oakland resulted in further patrols of already heavily policed areas. As a result, Black people were roughly twice as likely to be targeted as White people, despite similar rates of drug use. According to Lum and Isaac: “It [predictive policing] is predicting future policing, not future crime”. They warned that using PredPol could lead to a spiral of over-policing: police interventions that are based on algorithmic recommendations become part of police data, which in turn inform subsequent predictions. If the software indeed results in heavier policing of non-White neighbourhoods, then the database becomes increasingly skewed the more prediction-based interventions enter the system. A 2021 leak of 5.9 million crime predictions by PredPol seemed to validate the company’s critics. In their analysis, Gizmodo and The Markup found that the Whiter and richer a neighbourhood, the less likely the software was to predict a crime there. This pattern emerged even in predominantly White areas, where blocks with a high percentage of Latino residents were singled out for police patrols.
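The over-policing spiral Lum and Isaac warn about can be illustrated with a toy simulation, assuming two districts with identical true crime rates, where incidents are only recorded where officers patrol and patrols follow the recorded history. Every name and number below is invented for illustration, not drawn from any real system or from the study itself.

```python
import random

def simulate_feedback(steps=50, true_rate=0.5, discovery=0.9, seed=1):
    """Toy runaway-feedback simulation: two districts, A and B, with the
    SAME true incident rate. Police only record incidents where they patrol,
    and patrol presence is allocated in proportion to each district's
    recorded history, so a single early observation skews everything after."""
    random.seed(seed)
    recorded = {"A": 1, "B": 0}  # one chance observation in A to start
    for _ in range(steps):
        total = recorded["A"] + recorded["B"]
        for district in ("A", "B"):
            patrol_share = recorded[district] / total
            # incidents occur at the same rate everywhere...
            incident = random.random() < true_rate
            # ...but are only recorded in proportion to patrol presence
            if incident and random.random() < discovery * patrol_share:
                recorded[district] += 1
    return recorded

print(simulate_feedback())
```

Because district B starts with no recorded incidents, it receives no patrol share and never enters the database at all, while district A's record keeps growing: the data end up reflecting where police looked, not where crime happened.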

Pushes for legislation emerged, bolstered by the Black Lives Matter movement and a country-wide reckoning with violence and racism in policing. In 2016, the American Civil Liberties Union (ACLU) launched its Community Control Over Police Surveillance (CCOPS) campaign. The ACLU’s model bill gives local officials control over the types of surveillance technology used by police and sets transparency standards. To date, 22 jurisdictions have passed CCOPS laws. In early 2020, the LAPD was one of several police forces that stopped using PredPol: under activist pressure, to cut costs, and because the technology had failed to lead to a reduction in crime. In June 2020, New York City, home to the country’s largest police force, adopted a CCOPS law. The same month, the Santa Cruz city council banned predictive policing, a first in the US. In response, PredPol CEO Brian MacDonald dismissed the backlash against his company’s product: according to MacDonald, human error, not technology, is the reason racial biases enter policing. Nonetheless, across the pond, the European Parliament is taking more radical steps and pushing for a complete ban on predictive policing as part of the EU’s AI Act.

Further reading

Brayne, S. (2021). Predict and surveil: Data, discretion, and the future of policing. Oxford University Press. https://doi.org/10.1093/oso/9780190684099.001.0001

Ferguson, A. G. (2017). Policing Predictive Policing. Washington University Law Review, 94(5), 1109–1189. https://openscholarship.wustl.edu/lawlawreview/vol94/iss5/5

Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19. https://doi.org/10.1111/j.1740-9713.2016.00960.x

Perry, W. L., McInnis, B., Price, C. C., Smith, S., & Hollywood, J. S. (2013). Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation. https://www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf
