Predictive Policing, and Why it Represents the Worst of our Law Enforcement

The YX Foundation
The YX Foundation Journal
Oct 10, 2020

by Julius Ewungkem

Mentor: Dr. Roslyn Satchel; Student Editor: Nadine Bahour

[Graphic: a police officer using technology to surveil a city.]

Imagine if a teacher could predict which child in their class was most likely to cheat, and rather than use that information to make sure that student understands the material, they set up cameras and put the student’s desk next to theirs just to ensure they catch them making a mistake. Sounds backwards, doesn’t it? Yet this is the logic that dominates our law enforcement and has led to the destruction and exploitation of many communities. If you were to ask random people on the street, “what is the purpose of the police?”, you would hear a variety of answers, ranging from “to protect citizens” to “to stop crime.” The police are supposed to be the people we feel confident calling when there is an issue, and we should always feel that they have our best interests in mind. However, our law enforcement has consistently failed to meet these expectations, and the relationship between police and American citizens, especially within the African-American community, has always been fractured. From the beating of Rodney King, to the extreme criminalization of drugs, to the behemoth of mass incarceration, the criminal justice system has become the enemy of the people it ought to serve. Most recently, the murders of George Floyd and Breonna Taylor at the hands of police opened the eyes of Americans to the daily injustices faced in this country. There have been calls for the defunding, reformation, and even complete abolition of the police because of the many issues in how they operate. One topic related to police operations that has been the cause of much discussion is predictive policing: the use of algorithms and AI to predict where crime will happen, when it will happen, and who will commit it. While this all sounds appealing, predictive policing is not only harmful to disadvantaged groups; the reasoning behind its implementation is inherently flawed, because the society we live in is still discriminatory.

The use of predictive policing tools is a relatively new development that quickly became popular throughout law enforcement. These tools can do a wide range of things, and how they are used depends on the specific algorithm: they can track communities to find which areas have the highest crime rates, predict who, based on their past criminal record, is most likely to commit a crime, and estimate at what times crimes are most likely to be committed. This seems like a step in the right direction; the idea of using technology to stop crime before it even happens is flashy and futuristic. However, their implementation has been riddled with issues that have only worsened existing racial disparities. For example, say we have two neighborhoods: one rich and one poor. A predictive policing algorithm is told to analyze the two areas’ historical rates of crime, and it finds the rate to be higher in the poorer area. Based on this, more police are assigned to the poorer neighborhood. Consequently, more crime is documented there, that data is fed back into the algorithm, even more police are sent to the poorer neighborhood, and the cycle continues. The underlying rates of crime in the poor and rich neighborhoods might not be very different, but many more people are caught and penalized in the poorer one. This is the issue with predictive policing tools: they operate in a self-reinforcing loop, and the more data is gathered, the more heavily skewed it becomes. Communities that are already struggling are penalized more heavily, only widening the already-present socioeconomic divide.
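To make that feedback loop concrete, here is a minimal sketch in Python. The neighborhoods, crime rates, and patrol counts are invented for illustration; this is not any real department’s model, only a simplified version of the dynamic described above.

```python
import random

# Hypothetical underlying crime rates: nearly identical in both neighborhoods.
TRUE_RATE = {"rich": 0.10, "poor": 0.12}
TOTAL_PATROLS = 100

# Start with an even split of patrols.
patrols = {"rich": 50, "poor": 50}
documented = {"rich": 0, "poor": 0}

random.seed(0)

for year in range(10):
    # Crime is only *documented* where officers are present to observe it,
    # so documented counts scale with patrol presence, not with true rates.
    for hood in patrols:
        observed = sum(
            random.random() < TRUE_RATE[hood] for _ in range(patrols[hood] * 10)
        )
        documented[hood] += observed

    # The "predictive" step: reallocate patrols in proportion to documented crime.
    total = documented["rich"] + documented["poor"]
    patrols["poor"] = round(TOTAL_PATROLS * documented["poor"] / total)
    patrols["rich"] = TOTAL_PATROLS - patrols["poor"]

    print(f"Year {year + 1}: patrols={patrols}, documented={documented}")

# A small initial gap in documented crime compounds into a growing patrol
# imbalance, even though the underlying rates (0.10 vs 0.12) barely differ.
```

Running this sketch, the poorer neighborhood steadily absorbs more and more of the patrols, which in turn inflates its documented crime: the skew comes from where the data is collected, not from how much crime actually occurs.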

A good example of this is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk-assessment algorithm that has been used to predict the likelihood that a criminal defendant will re-offend after being released. An analysis of data from Broward County, FL, an area where COMPAS is widely used, showed extreme racial bias. Black defendants were often predicted to be at a higher risk of recidivism than they actually were: black defendants who did not recidivate over a two-year period were nearly twice as likely as their white counterparts to be misclassified as higher risk, while white defendants were often predicted to be less risky than they were. Additionally, even when controlling for other factors, black defendants were still 45 percent more likely than white defendants to be assigned higher risk scores. With all this evidence, it becomes extremely clear that these algorithms are heavily shaped by the biases already present in the data, and what is even worse is that they aren’t able to self-reflect or question the information they are given. These algorithms simply are not reliable enough to be trusted with something as important as law enforcement.
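The disparity described above is, at its core, a gap in false positive rates between groups: how often people who did not re-offend were still labeled high risk. A minimal sketch of that calculation is below; the records are made up for illustration and are not the Broward County data.

```python
# Hypothetical records: (group, predicted_high_risk, actually_reoffended_within_2_years).
# These rows are illustrative only; they are not the real COMPAS dataset.
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("black", False, True), ("black", True, False),
    ("white", False, False), ("white", True, True), ("white", False, False),
    ("white", False, True), ("white", True, False), ("white", False, False),
]

def false_positive_rate(group):
    """Share of defendants in `group` who did NOT re-offend
    but were still labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    misclassified = [r for r in non_reoffenders if r[1]]
    return len(misclassified) / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")

# A large gap here means one group's non-reoffenders are far more likely
# to be wrongly labeled high risk -- the pattern the COMPAS analysis found.
```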

Going even further, what is the point of investing money in predictive policing tools at all? Julia Dressel, a student at Dartmouth College at the time, and Hany Farid, a professor of computer science at Dartmouth, performed a study to investigate whether algorithms truly predict crime better than humans. They recruited 400 volunteers through a crowdsourcing site and gave them each a test: each person was shown short descriptions of defendants and asked to predict the likelihood that each defendant would re-offend within two years of release. These results were compared with those of the COMPAS tool, which made its own predictions from the same pieces of information. As individuals, the volunteers were right 63% of the time, and when their answers were pooled as a group, the accuracy rose to 67%. COMPAS, by contrast, had an accuracy of 65 percent, lower than a group of untrained volunteers. This study calls the value of these tools into question, and the money funding this technology could instead go toward helping the very people it penalizes. It could even go toward predicting which officers are likely to break protocol: police misconduct has been shown to follow consistent patterns, and using that insight to identify officers prone to misconduct and give them further training would only improve law enforcement. The way we tackle criminal justice has to change completely, and our society will keep pushing down marginalized groups until that switch happens.
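For readers wondering how a group can be more accurate than its individual members, the sketch below shows the basic majority-vote pooling idea. The numbers are invented and the voters are treated as independent, which is a simplifying assumption; in the actual study, people’s errors overlap, so the real pooled gain was more modest (67% versus 63%).

```python
import random

random.seed(1)

N_DEFENDANTS = 1000
N_VOLUNTEERS = 20
INDIVIDUAL_ACCURACY = 0.63  # roughly the individual accuracy reported in the study

# Hypothetical ground truth: did each defendant actually re-offend?
truth = [random.random() < 0.5 for _ in range(N_DEFENDANTS)]

def guess(actual):
    # Each volunteer independently guesses correctly ~63% of the time.
    return actual if random.random() < INDIVIDUAL_ACCURACY else not actual

correct_pooled = 0
for actual in truth:
    votes = [guess(actual) for _ in range(N_VOLUNTEERS)]
    majority = sum(votes) > N_VOLUNTEERS / 2   # the group's majority-vote answer
    correct_pooled += (majority == actual)

print(f"Pooled (majority-vote) accuracy: {correct_pooled / N_DEFENDANTS:.0%}")

# With fully independent voters the pooled accuracy climbs far above 63%;
# real volunteers make correlated mistakes, so the study saw a smaller bump.
```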

Moving forward and looking beyond predictive policing, if we truly want to improve our criminal justice system, we have to ask ourselves what goal we are trying to achieve through law enforcement. Many people will look at all the evidence of predictive policing’s problems and still support it because it still puts “criminals” in jail. However, this way of thinking is our biggest problem. When deciding how to use these tools, why is our first instinct to use the predictions to place more people in jail rather than to put more resources into an area so that more people can succeed? Rather than using the data to find which neighborhoods need rehabilitation centers, education reform, and reinvestment, we use it to find potential drug addicts and put them in jail, where they won’t get the help they need. There is nothing wrong with trying to predict where crime is most likely to occur, but rather than penalizing those areas, we should be putting more resources into those communities to build them up. Even if we were somehow able to make a perfect AI, free of bias and completely “fair,” the society we would be implementing it in isn’t fair at all. Just because something is prohibited under the Constitution doesn’t mean it won’t happen: the legacy of slavery continued after the 13th Amendment through sharecropping, and redlining persisted after the Civil Rights Act of 1964. This discrimination still exists; it has just become less overt. We are trying to be fair within a system that has not been, and continues not to be, fair to certain minority groups. This isn’t to say that people who commit crimes should not be punished or that they don’t have control over their actions; but why are we so eager to find them and throw them in jail before they have even set foot outside? It doesn’t make sense, and this flawed reasoning is why our law enforcement system is in need of complete reform.

It is clear that in the current state of our country, predictive policing algorithms are extremely flawed, and their fair implementation is impossible because the data they analyze is riddled with bias. Predictive policing could perhaps be useful in some form in the future, but not until the way these tools are used is changed. The use of technology in this way represents the core issues in how we enforce the laws that govern our society, and if we want to improve our law enforcement as a whole, we need to change the way we think about policing and society in general. No longer can we be reactionary, trying to fix our communities after the system has decimated them. Rather, we have to be proactive, finding issues before they become problems and getting communities the resources they need. Until then, predictive policing tools and algorithms have no place in our society.


The YX Foundation is a coalition dedicated to community engagement at the intersection of deep technology and critical race theory.