6 Reasons Why You Should Care About the Future of Policing

Henrik Chulu
15 min read · Mar 6, 2018


This long listicle was created in preparation for a session at the Internet Freedom Festival 2018.

New technologies crystallize preexisting social relations. This is not a natural law as much as a general observation, and one with obvious counterpoints. But as a heuristic for science and technology studies it offers ways of dealing with the conundrum formulated by Melvin Kranzberg as his ‘first law of technology’:

“Technology is neither good nor bad; nor is it neutral.”

This is why policy and activism are always urgent in response to new technological developments. Because left to their own devices, engineers will more often than not engineer systems that extrude the social dynamics of power, capital, culture and discourse into physical and digital reality.

This is currently the case in the world of policing, where big data and machine learning techniques are being put to the service of attempting to predict future crime. But as Ingrid Burrington writes, “The future of policing, it seems, will look a lot like the present of policing, just faster and with more math. Instead of using inherent bias and simplistic statistics to racially profile individuals on a street, cops of the future will be able to use complicated statistics to racially profile people in their homes.”

One feature of predictive policing that Burrington points to is whose problems it solves. While there is little evidence that it reduces overall crime rates, the main selling point appealing to police departments is that it helps them allocate scarce resources.

Creative Commons BY-SA Wacko Photographer

It is not so much a new way of doing street level police work, but a new way of making decisions about when and where to do it and who to do it to. “As it’s currently implemented, predictive policing is more a management strategy than a crime-fighting tactic,” writes Burrington.

The following are the most salient issues that I have noticed since I began tracking the field of policing technology while assisting the curators of The Glass Room, by Tactical Tech and Mozilla in New York City in December 2016. I have previously processed some of these ideas (as well as the 2016 US elections and EU refugee crisis) in fictional form in the science fiction short story Welcome to Their City.

1. Predictive Algorithms Turn Policing Even More Racist

A machine learning algorithm uses statistical techniques to recognize patterns in large amounts of data. But in order to learn which patterns to look for, the algorithm needs to be trained. The data used to train the algorithm is called training data, and the result of feeding the algorithm training data is its model of the world. Through this model it can recognize similar patterns in new data sets.
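
To make the moving parts concrete, here is a minimal sketch of that train-then-predict cycle using scikit-learn. It is my own illustration rather than any vendor's code, and the feature columns are hypothetical.

```python
# A minimal sketch of the train-then-predict cycle described above, using
# scikit-learn. Not any vendor's actual code; the three feature columns
# (past reports, hour of day, population density) are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Historical records become the training data: each row describes a place and
# time, and each label says whether a crime was recorded there.
X_train = [
    [12, 23, 0.8],   # [past_reports, hour_of_day, population_density]
    [1, 10, 0.2],
    [9, 22, 0.7],
    [0, 14, 0.1],
]
y_train = [1, 0, 1, 0]  # 1 = crime recorded, 0 = no crime recorded

# Fitting produces the algorithm's "model of the world".
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The model is then used to flag similar patterns in new data.
print(model.predict([[7, 23, 0.6]]))  # e.g. [1]: flagged as a likely hotspot
```

Whatever patterns the training data contains, including the patterns of who got policed and recorded in the first place, are what the model learns to reproduce.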

Crime analysts have applied statistics, mapping, and profiling to law enforcement at least since the 1960s, a practice that evolved into ‘hotspot policing’ in the 1990s. This refers to an increased focus on preventing crime in concentrated geographic areas rather than traditional policing focused on people.

Predictive policing software brings machine learning techniques to traditional crime analysis in an attempt to better predict where future crime will happen and/or who will commit it. The algorithms driving most predictive policing software are trained on historical crime statistics from police departments. As a consequence, this approach runs the risk of reinforcing whatever biases are already built into current policing practices.

Alexis Madrigal did a segment for Real Future that caught a real-life example of the way a slightly racist predictive model can turn already racially biased police officers even more biased (fast forward to about the five-minute-fifty-second mark for the racism part).

This could have been an isolated case chosen to fit a narrative, but in a study of the geographic crime prediction tool PredPol, used in the Santa Cruz example featured in Real Future, researchers from the Human Rights Data Analysis Group (HRDAG) showed that, when the algorithm was applied to crime data from Oakland, California, the software generally recommended sending police patrols to predominantly black neighborhoods. HRDAG is a non-profit research group that analyzes violations of human rights, such as war crimes and crimes against humanity, around the world.

The problem is that these neighborhoods are already disproportionately policed in comparison to predominantly white neighborhoods with similar crime rates. After a few iterations of processing this data with the PredPol software, a feedback loop emerges, amplifying the racial bias already inherent in where officers go to make arrests. This imposes a real-life discriminatory cost on the inhabitants:

“Using PredPol in Oakland, black people would be targeted by predictive policing at roughly twice the rate of whites. Individuals classified as a race other than white or black would receive targeted policing at a rate 1.5 times that of whites,” they write.
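
The feedback loop itself is easy to see in a toy simulation. The sketch below is my own illustration, not HRDAG's analysis or PredPol's algorithm: two neighborhoods have identical true crime, but patrols follow the records, and only patrolled crime gets recorded, so the initial skew keeps growing.

```python
# A toy simulation of the feedback loop described above (my own illustration,
# not HRDAG's code or PredPol's algorithm).
recorded = {"A": 6, "B": 4}   # historical records already skewed toward A
TRUE_CRIMES_PER_DAY = 5       # identical underlying crime in both neighborhoods

for day in range(1, 8):
    target = max(recorded, key=recorded.get)   # send the patrol where the data points
    recorded[target] += TRUE_CRIMES_PER_DAY    # only crime seen by a patrol is recorded
    print(day, recorded)

# After a week, neighborhood A looks far "hotter" than B in the records,
# even though the underlying crime rate never differed.
```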

Algorithmic risk assessment is also being used to guide courts in the sentencing of convicted criminals, where ProPublica has shown that it likewise produces racial disparities in its outcomes.

ProPublica’s investigation shows that for black defendants, the algorithm produces false positives at a higher rate, while for white defendants, false negatives are more frequent. This means that black defendants were twice as likely to be falsely predicted to re-offend as white defendants, who were more often falsely classified as lower risk, not based on evidence in the data but purely by machine bias. And even besides the racial disparities, the effectiveness of the sentencing guidance algorithm is questionable:

“Only 20 percent of the people predicted to commit violent crimes actually went on to do so. When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip,” write ProPublica’s investigators. This overall inadequacy of the algorithm’s predictive power, coupled with the racial disparities of its outcomes, is an example of preexisting social injustices being baked into new technology.
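
The two error rates at issue are simple to compute once a confusion matrix per group is available. The numbers below are made up for the example; they are not ProPublica's actual counts, but a disparity of the kind they reported shows up as a markedly higher false positive rate for one group and a higher false negative rate for the other.

```python
# Illustrative calculation of per-group error rates (made-up counts, not
# ProPublica's actual figures).
def error_rates(tp, fp, tn, fn):
    return {
        "false_positive_rate": fp / (fp + tn),  # labelled high risk, did not re-offend
        "false_negative_rate": fn / (fn + tp),  # labelled low risk, did re-offend
    }

print("group 1:", error_rates(tp=300, fp=200, tn=250, fn=120))
print("group 2:", error_rates(tp=180, fp=90, tn=320, fn=170))
```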

As predictive policing software is rolled out in police departments, the historical biases inherent in the training data get recycled and masked by the use of a seemingly neutral algorithm. This legitimizes past practices while at the same time reinforcing them, thus furthering disproportionate policing and its associated costs to these communities, amounting to a discriminatory policy.

2. Predictive Policing Targets the Poor

Another problematic issue with predictive policing is how it reproduces class-based inequities. Besides the racial biases reproduced from the police crime statistics, the Human Rights Data Analysis Group researchers also showed how applying the PredPol algorithms to Oakland would unfairly target impoverished communities.

“We find similar results when analysing [sic] the rate of targeted policing by income group, with low-income households experiencing targeted policing at disproportionately high rates,” they write.

In general, predictive policing systems target street crime, which is predominantly perpetrated by poor people. This is a case of political choice masquerading as technological inevitability. Starkly illustrating this is White Collar Crime Risk Zones, built by Brian Clifton, Sam Lavigne and Francis Tseng: a full-fledged predictive policing system that models financial crime instead of street crime.

The creators use a definition of white collar crime from the American criminologist Edwin Sutherland: “a crime committed by a person of respectability and high social status in the course of his occupation.” In their white paper, they present their tool as an augmentation to the current policing regime, arguing that it is necessary given the historically low clearance rates of financial crime.

Instead of training the predictive algorithm on street crime data produced by a police department, White Collar Crime Risk Zones uses a model based on data going back to 1964 from the Financial Industry Regulatory Authority (FINRA), a private corporation in charge of self-regulation for the US financial sector.

According to its creators, the model is “able to achieve 90.12% predictive accuracy. We are confident that our model matches or exceeds industry standards for predictive policing tools.”

Their methodology resembles state-of-the-art predictive policing, but instead of predicting street crime, it predicts financial crime with the granularity of city blocks, including which types of criminality are most likely. Furthermore, the tool provides the user with a facial approximation of the likely suspect, based on a composite image of corporate executives from the financial companies in a given risk zone.

So far no police department is using White Collar Crime Risk Zones to manage patrols or enforcement practices. And the question remains whether increased policing of impoverished neighborhoods is the right answer to predictions of higher crime rates, or whether dedicating more resources to schools and social programs in those neighborhoods would be a more effective way to lower the risks.

3. Predictive Policing Predicts Policing, Not Crime

Machine learning algorithms recognize patterns in the data that is fed to them. The output of an algorithm reflects its input. So when predictive policing algorithms are fed crime data from police records, the patterns they discover are patterns in how police create and store records about crime, with the biases and incompleteness this entails, rather than patterns of the world as it exists outside of police records.

As the researchers from the Human Rights Data Analysis Group write, predictive policing is aptly named, because “it is predicting future policing, not future crime.” Their simulated example using data from Oakland demonstrates this, and a real-life example makes the same case.

The Chicago Police Department’s Strategic Subjects List (SSL) is a predictive analytics tool that enumerates individuals deemed to be at high risk of becoming involved in gun violence. However, a scientific analysis by researchers from the RAND Corporation shows no significant drop in crime rates or victimization resulting from the Chicago Police Department implementing the system.

“Individuals on the SSL are not more or less likely to become a victim of a homicide or shooting than the comparison group, and this is further supported by city-level analysis. The treated group is more likely to be arrested for a shooting,” write the RAND researchers.

While the list does not in fact help reduce crime rates or reduce the risk that the people on it become victims, it does increase their likelihood of coming into contact with police officers, resulting in an increased risk of being surveilled or arrested by them.

In short, the system does not predict who will commit crime but who the police will interact with in the future, leading mathematician Cathy O’Neil to write that, “From now on, I’ll refer to Chicago’s “Heat List” as a way for the police to predict their own future harassment and arrest practices.”

Whether predictive policing will ever be able to reduce crime rates is an open question, according to the researchers. From their review of the academic literature, there is no hard evidence that it does currently.

They write that, “there is little experimental evidence from the field demonstrating whether implementing an advanced analytics predictive model, along with a prevention strategy — “predictive policing” — works to reduce crime, particularly compared to other policing practices in the field.”

4. Predictive Policing is Implemented Without Public Scrutiny

Across police precincts in the United States, predictive policing systems are being implemented with little to no public oversight, community engagement or impact measurement.

In a recent investigative piece for The Verge, Ali Winston reports that the incredibly secretive defense and intelligence contractor Palantir used New Orleans to test its predictive policing system, without the knowledge of the city’s council members. This system is similar to the SSL or so-called “heat list” used in Chicago, and Palantir now uses the deployment in New Orleans to market its system to other cities.

Palantir could supply the New Orleans Police Department with predictive analytics without oversight because it constructed a philanthropic relationship with the city. Combined with the so-called “strong mayor” governance model of New Orleans, this allowed the system to completely evade oversight until it was outed by Winston’s investigation.

“If Palantir’s partnership with New Orleans had been public, the issues of legality, transparency, and propriety could have been hashed out in a public forum during an informed discussion with legislators, law enforcement, the company, and the public. For six years, that never happened,” he writes.

Through public records requests, an investigation into the business operations of the company shows a high degree of law enforcement customer lock-in. “Palantir’s customers must rely on software that only the company itself can secure, upgrade, and maintain,” writes Mark Harris for Wired, as the New York Police Department learned when it decided to leave its partnership with the company. Palantir would not hand back the NYPD’s analytics data in a standardized format that could be used in the system the department had built itself.

Even as the company’s business practices are laid bare, the inner workings of the software from Palantir and other vendors are not open to public scrutiny through public records requests, since they were created by for-profit companies that guard their algorithms, training data, and predictive models as precious trade secrets.

The independent academic inquiry into the PredPol algorithm done by the Human Rights Data Analysis Group is exceptional in that it was done at all, since vendors of predictive policing solutions keep their algorithms as heavily guarded black boxes.

In the end, the Oakland Police Department decided against using PredPol to inform its law enforcement practices, in part because of the system’s tendency towards racial bias and in part because of its lack of effectiveness in preventing crimes even as it made it possible to predict them.

Adding insult to the injuries of racial and socioeconomic disparity that result from implementing predictive policing systems, there are few options for policed communities to scrutinize, let alone question or object to, the algorithmic decision-making they are on the often lethal end of.

Even for proponents of predictive policing, the lack of transparency and accountability should be a critical issue since it also impedes the possibility of effectively measuring their impact. With no external access, there is no way to independently verify the validity of the claims made by vendors of the software.

In their review of the social justice implications of these technologies, the non-profit policy analysis group Upturn writes, “we found little evidence to suggest that today’s systems live up to their billing, and significant reason to fear that these systems, as currently designed and implemented, may actually reinforce disproportionate and discriminatory policing practices.”

5. In the Smart City, Police Get Data from Everywhere

In a demonstration video of the Hitachi Visualization Suite, it is possible to glimpse the variety of data types that are directly available to police dispatchers through the user-friendly interface of a cloud-based web app.

The client displays objects like building schematics, vehicles, and surveillance cameras, as well as events like social media posts with specific keywords, emergency calls, gunshots registered by ShotSpotter microphones installed throughout the city, or hits from automatic license plate readers registering passing cars. This is all overlaid on a map, in this case of Washington D.C., including optional live weather and traffic data.

A dispatcher using this system has access to all of this data, neatly visualized both in real time and retroactively for reconstructing the events that led up to a crime or police confrontation. As the demonstration drills down into a specific camera, it shows not only its live feed but also how it can be rewound to what the camera was recording at the time of specific events.

The types of objects and events that are available in the app are customizable and completely up to what a given police department is interested in accessing.
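
One way to picture how such heterogeneous feeds end up in a single dashboard is as a normalized, geotagged event record. The sketch below is my own illustration; Hitachi's actual schema is not public, and every field name here is an assumption.

```python
# A hypothetical sketch of a normalized city-event record (my own illustration;
# Hitachi's actual schema is not public, and all field names are assumptions).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CityEvent:
    source: str          # e.g. "shotspotter", "911_call", "alpr", "social_media"
    latitude: float
    longitude: float
    timestamp: datetime
    details: dict = field(default_factory=dict)  # source-specific payload

events = [
    CityEvent("shotspotter", 38.9072, -77.0369,
              datetime(2018, 3, 6, 22, 15), {"confidence": 0.93}),
    CityEvent("alpr", 38.8951, -77.0364,
              datetime(2018, 3, 6, 22, 17), {"plate": "ABC1234"}),
]
```

Once everything shares one schema, any feed can be overlaid on the same map, replayed after an incident, or handed to a predictive model.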

While the Hitachi system is not in itself a predictive policing technology, it clearly shows the multiplicity of data a future system can ingest in order to make crime predictions. The possibilities are endless.

Creative Commons BY-SA Wacko Photographer

If there is a sensor in a public space producing data, police will be able to access it, and as a result, sensor data from smart cities will most likely be part and parcel of the predictive policing software of the future, merging advanced statistical models with street level surveillance.

In the case of the Palantir system deployed in New Orleans, which drew on people’s social network connections, criminal records and social media history in order to predict future criminals, the New Orleans Police Department gave Palantir wide access to internal data.

“In January 2013, New Orleans would also allow Palantir to use its law enforcement account for LexisNexis’ Accurint product, which is comprised of millions of searchable public records, court filings, licenses, addresses, phone numbers, and social media data. The firm also got free access to city criminal and non-criminal data in order to train its software for crime forecasting,” Ali Winston writes for The Verge.

In the future, as predictive policing software is fed from a larger and larger variety of data sources, it is likely that the impact of the biases and disparities already present in the current systems will be felt more widely.

This is the emergence into civil life of what philosopher Manuel DeLanda calls panspectric surveillance. In his groundbreaking book War in the Age of Intelligent Machines, he argues that this is a change in kind rather than in degree from the concept of panoptic surveillance explored by Michel Foucault:

“Instead of positioning some human bodies around a central sensor, a multiplicity of sensors is deployed around all bodies: its antenna farms, spy satellites and cable-traffic intercepts feed into its computers all the information that can be gathered,” writes DeLanda.

And as new technologies emerge, it becomes possible to feed more and more diverse sources of data into these systems. An obvious contender for addition to the policing stack of the future is Persistent Surveillance Systems, a relatively low-tech product with a literally city-wide scope.

The system consists of an array of high definition digital cameras mounted on an airplane that circles continually at high altitude, taking a snapshot of a whole city every second. In the resulting time-lapse, it is possible to go back in time and meticulously track the movements and whereabouts of vehicles and people.

It was originally developed for the US military to counter improvised explosive devices in Fallujah during the occupation of Iraq, but the inventor has since been looking for civilian customers.

Persistent Surveillance Systems has thus flown its wide-area surveillance plane over Philadelphia, Baltimore, Indianapolis, Compton and Charlotte in the United States, and Nogales and Torreon in Mexico. As with the predictive analytics systems across the US, these flights took place with no public oversight.

6. Predictive Models Can Be Turned Back on Policing Itself

One could ask: if police departments believe predictive policing software is reliable at forecasting crime, why don’t they use the technology to root out bad cops? What if algorithms could be used to determine which cops are likely to abuse their power?

This is an area where academic research combined with practice in the field shows predictive analytics to be quite effective, and where the impact of faulty predictions is much less severe than when a person on the street is falsely suspected of committing a crime.

“Whereas the output of a predictive policing system (accurate or otherwise) may result in someone’s stop, search, or arrest, signals produced by predictive early intervention tools would more likely lead to increased counselling, training, and mentorship for officers,” writes Upturn in their report Stuck in a Pattern.

False positives in this context have very little impact on the lives of the officers in question, compared to a member of the public falsely classified as a criminal.

Creative Commons BY-SA Wacko Photographer

In a study with the Charlotte-Mecklenburg Police Department, researchers from the University of Chicago developed a predictive model that outperformed the existing system put in place to intervene when officers show signs of going out of bounds.

“Multiple studies by the Department of Justice have highlighted the value of Early Intervention Systems in advancing community-oriented policing goals and in helping reduce citizen complaints against officers,” writes Upturn.

However, police departments are reluctant to implement these self-monitoring systems despite their proven high-benefit, low-risk track record. The main driver of this reluctance, according to Upturn, is pressure from police unions. And even departments that do attempt to implement such systems are not always successful, Oakland being an example of such a failed attempt.

In the case of police monitoring technologies, a slew of new types of data is also available, with the potential to increase the transparency and accountability of police behavior and to guide a movement towards more community-centered law enforcement.

One example is the Henry A. Wallace Police Crime Database, which documents over 9,000 criminal cases against state police officers in the United States in a searchable format. According to Vice Magazine, it tracks criminal charges and not convictions, which incidentally is the same way police departments generate crime statistics in general.

Another example is police body-worn cameras, which, while by no means unproblematic, do have the potential to help root out crooked cops.

If you enjoyed reading this, please support my work on Patreon. Thanks!
