Arrested by Biased Data

Predictive policing is perpetuating systemic racism. Here’s why this is happening and what we can do about it.

Jiin Kim
9 min read · Sep 15, 2021

Imagine a grid representing a police department’s jurisdiction, with each block representing a specific area. Since there are not enough officers to patrol the entire jurisdiction at the same time, the police department must be strategic in their placement of officers to ensure that they catch the most crime possible.

To make the best use of its limited resources, the department must place its officers in the areas with the most criminal activity.

Now imagine that there is a “magic box” that will predict the likelihood of crime occurring in a specific area: say, a 70% chance of crime in block A4. With that information, the department will reallocate its officers toward that block.
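To make this concrete, here is a minimal sketch in Python of how a department might turn per-block crime probabilities into patrol assignments. The grid, the probabilities, and the number of officers are all made up for illustration.

```python
# Hypothetical example: assign a limited number of patrols to the grid
# blocks with the highest predicted probability of crime.

# Made-up "magic box" output: block label -> predicted chance of crime
predicted_crime_prob = {
    "A1": 0.10, "A2": 0.25, "A3": 0.05, "A4": 0.70,
    "B1": 0.15, "B2": 0.60, "B3": 0.30, "B4": 0.20,
}

NUM_OFFICERS = 3  # not enough officers to cover every block

# Rank blocks by predicted probability and patrol only the top few
patrolled_blocks = sorted(
    predicted_crime_prob, key=predicted_crime_prob.get, reverse=True
)[:NUM_OFFICERS]

print(patrolled_blocks)  # ['A4', 'B2', 'B3']
```

Every block outside the top three receives no patrol at all, a detail that will matter later.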

This is essentially what predictive policing is, with the “magic box” being an actual tool (with no magic involved) that outputs crime area predictions. However, there are layers of ethical concerns that arise from this “magic box”. Before we jump in, let’s first flesh out…

What is predictive policing?

What does it predict?

Policing, duh. But wait, there’s more:

As demonstrated above, predictive policing creates location-based predictions. It identifies “hot spots” where crime is most likely to occur, relative to the entire jurisdiction.

What is less obvious is that predictive policing also produces risk assessments. These include recidivism risk, or the likelihood that an individual will re-offend, as well as the likelihood that an individual will fail to appear in court. In other words, these tools predict not only where crime is likely to occur, but also how likely specific individuals are to commit a crime.

How do they make these predictions?

Predictive policing tools learn primarily from historical data. These include:

  • Date and time of past crimes
  • Crime type
  • Employment status
  • Education level
  • Age
  • Arrest history
  • Community ties
  • Substance abuse

…and more.
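To make these inputs concrete, here is a hypothetical sketch of what a single record in such a historical dataset might look like. The field names and values are purely illustrative and not taken from any real tool.

```python
# Hypothetical training record for a predictive policing / risk tool.
# Field names and values are illustrative only.
example_record = {
    "date_time": "2019-06-14T23:40:00",  # date and time of a past crime
    "crime_type": "burglary",
    "employment_status": "unemployed",
    "education_level": "high_school",
    "age": 24,
    "prior_arrests": 3,                  # arrest history
    "community_ties": "weak",
    "substance_abuse": True,
}

# A model trained on thousands of such records learns which combinations
# of these fields were historically followed by an arrest.
```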

How are the predictions used?

The introduction from earlier presents one use case of these predictions: optimizing patrol efficiency. A lack of police resources leads to uneven patrol coverage. Since there aren’t enough officers for the entire jurisdiction, police departments dispatch officers only to areas where crime is likely to happen. Ideally, patrol presence deters crime, but it also means that if a crime does occur in those areas, the response is quicker. Patrol efficiency also extends to individuals: officers can keep a close watch on people flagged as likely chronic offenders.

The predictions are also used by the court system in the form of risk assessment scores, which inform decisions such as bail, sentencing, and parole for defendants and incarcerated individuals.

What can go wrong with predictive policing?

Predictive policing is often implemented with machine learning, a subset of artificial intelligence in which models are tuned to make predictions from trends found in a specific dataset. While machine learning models can be trained to reach high levels of accuracy, underlying issues in how they are built and trained lead to harmful impacts.

Problems with algorithms

1. Algorithms may learn to discriminate based on race.

An obvious fix would be to exclude race as an input. However, simply removing race does not stop other inputs that are highly correlated with it, such as name and zip code, from influencing the model’s decisions.
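A tiny synthetic demonstration of this proxy effect, with entirely fabricated data: even after the race column is dropped, a model trained on zip code alone still assigns very different risk scores to the two groups, because zip code is correlated with race in the training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Fabricated data: race is correlated with zip code, and the biased
# arrest labels depend on race (mimicking biased historical records).
race = rng.integers(0, 2, n)                               # 0 or 1
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)   # 90% aligned with race
arrested = (rng.random(n) < np.where(race == 1, 0.5, 0.2)).astype(int)

# Train WITHOUT the race column -- only the zip code proxy is used.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), arrested)
scores = model.predict_proba(zip_code.reshape(-1, 1))[:, 1]

print("mean predicted risk, group 1:", scores[race == 1].mean())
print("mean predicted risk, group 0:", scores[race == 0].mean())
# The two groups still receive very different risk scores,
# even though race was never given to the model.
```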

2. Algorithms do not consider all relevant information.

Landline calls allow the police to easily pinpoint a caller’s location, but with widespread smartphone use, the majority of households no longer have a landline. This makes it harder to determine the exact location of a reported crime and can affect the accuracy of the collected data. Similarly, someone might know that a person committed a crime, but that knowledge alone is inadmissible in court and therefore never enters the crime datasets. Relevant information like this, and much more, never makes it into the algorithm’s inputs.

Bad training data

All machine learning models require training data to learn from in order to produce accurate decisions. However, the data used for predictive policing is very biased.

First, arrest records do not accurately represent real crime.

Recall that police departments allocate officers to areas known to have high crime. Over time, the police end up patrolling only certain areas while neglecting crime occurring elsewhere. Meanwhile, arrest records appear to confirm that crime concentrates in the patrolled areas, simply because the police were more likely to catch criminal activity where they were present.

In addition, some arrests occur due to planted false evidence. Others may occur because some criminals were released after arrest and then went on to commit more crimes.

It is also important to recognize that for historical reasons, people of color are overrepresented in arrest records. According to the NAACP,

  • A black person is 5 times more likely to be stopped without just cause than a white person.
  • African Americans and Hispanics make up approximately 32% of the U.S. population. However, African Americans and Hispanics make up approximately 56% of U.S. prisoners (this overrepresentation is largely explained by racism stemming from slavery and prejudice towards immigrants).

Because of this, predictive policing directs more police patrols to areas with majority-POC populations. With more police in those areas, and with POC more likely to be stopped by police, POC arrest records grow even further.

Lastly, arrest data is skewed as most reported crimes are nuisance crimes, such as jaywalking or public drunkenness. In comparison, high-profile crimes like murder and kidnapping, or white-collar crimes like tax evasion and bribery, are rarely reported.

For these various reasons, arrest data used to train predictive policing models are biased representations of real crime.

Feedback loops

Just now, we showed that nuisance crimes and racially biased arrest records make up the training data for predictive policing models. With that in mind, we can visualize the predictive policing process as a cycle: biased arrest data trains the model, the model’s predictions direct patrols, and those patrols generate the next round of arrest data.

Once the model outputs (biased) predictions, officers will go patrol the predicted area expecting to make an arrest. From here, there are two scenarios that could occur:

Scenario 1: an arrest is made

Any arrest made after following the prediction serves as positive reinforcement for the predictive policing model. In other words, it tells the model that it made a “correct” prediction. This feedback nudges the model to keep predicting similar areas, creating a positive feedback loop. Because the model now favors the same areas, and police keep patrolling them, the chance of receiving positive feedback from anywhere else shrinks dramatically.

Scenario 2: no arrest is made

When the officer does not make an arrest in the predicted area, the model receives no constructive feedback that would push it to change its results. Since there is no evidence that other areas have crime, the model will not learn to predict those areas. It simply continues to produce decisions based on the preexisting training dataset, which is biased.
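A rough simulation of this loop, again with fabricated numbers, shows how patrol allocations can lock onto whichever areas happened to produce the first arrests, even when the true crime rate is identical everywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

true_crime_rate = np.array([0.30, 0.30, 0.30, 0.30])  # identical in every area
observed_arrests = np.array([3.0, 1.0, 1.0, 1.0])     # area 0 got a head start
NUM_PATROLS = 2

for day in range(200):
    # Model step: predict "hot spots" from the arrest counts seen so far
    hot_spots = np.argsort(observed_arrests)[-NUM_PATROLS:]
    # Patrol step: arrests can only be observed where officers are sent
    for area in hot_spots:
        if rng.random() < true_crime_rate[area]:
            observed_arrests[area] += 1

print(observed_arrests)
# The areas patrolled early keep accumulating arrests; the others stay
# frozen near their initial counts even though crime there is just as common.
```

The model never “sees” crime in the unpatrolled areas, so it never has a reason to send officers there.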

Viewpoints contributing to the problem

Fairness vs Efficiency

The premise of predictive policing is that it helps police departments allocate officers more efficiently and waste fewer resources. However, efficient actions are not necessarily accurate actions. The reality is that such “efficient” patrolling makes mistakes: innocent people in patrolled areas become more likely to be arrested than guilty people in non-patrolled areas.

On the dilemma of fairness vs efficiency, William Blackstone states,

“It is better that ten guilty persons escape than that one innocent suffer.”

This quote serves as a foundation for Western justice systems, but unfortunately, predictive policing produces results that at times conflict with this quote.

Human vs Machine

A common trend in our modern society is tech-washing: we tend to not question algorithms because they are perceived as “impartial” or “based on science”.

However, the truth is that algorithms are built for efficiency, not for truth. Because biased humans write the algorithms and supply their data, the algorithms are not always fair. In fact, they can perpetuate unfairness: computers let us carry out discrimination at a much larger scale.

A matter of perspective

Clearly, predicting that crime might happen does not guarantee that it will happen. However, many police treat a prediction as if the crime were certain to occur.

Consider this from the Pasco County Sheriff’s Office: one former deputy described the office’s directive as,

“Make their [civilians’] lives miserable until they move or sue.”

This quote strays far from the police’s role of stopping crimes before they happen. Instead, it reveals that, to put it bluntly, some police intend to harass the people they patrol regardless of whether those people are guilty or innocent.

Before we move on…

Here are some key issues we have identified so far:

  • Removing race as input does not prevent it from impacting predictions through other, highly correlated inputs
  • Algorithms fail to consider all relevant information
  • Arrest records do not accurately represent crime data as they are racially biased and mainly consist of nuisance crimes
  • Positive feedback loops cause predictive policing models to keep predicting the same areas they predicted previously

Here are some bad assumptions that have been made about where and how to apply predictive policing:

  • Predictions are used to optimize patrol efficiency, but efficiency does not mean accuracy
  • Predictive policing seems impartial because it is based on science and computer algorithms, but algorithms can be biased because humans write them
  • Police officers are supposed to deter and catch real crime, but some are intent on punishing anyone they interact with

Potential Solutions

Algorithmic affirmative action?

The goal of “algorithmic affirmative action” would be to counterbalance bias in the data by using different risk thresholds. For example, we could adjust predictive policing models to make predictions such that for every 3 arrests of a black person, there are 2 arrests for a white person.
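As a purely illustrative sketch (not a recommendation, and not how any deployed system works), group-specific risk thresholds could be chosen so that each group is flagged at a target rate:

```python
import numpy as np

def group_thresholds(scores, groups, target_flag_rate):
    """Pick a per-group cutoff so each group is flagged at its target rate.

    scores: risk scores in [0, 1]; groups: group label per person;
    target_flag_rate: dict mapping group label -> desired fraction flagged.
    Entirely hypothetical -- the point is only to show the mechanics.
    """
    cutoffs = {}
    for g, rate in target_flag_rate.items():
        g_scores = scores[groups == g]
        # Flag the top `rate` fraction of this group's scores
        cutoffs[g] = np.quantile(g_scores, 1.0 - rate)
    return cutoffs

# Fabricated risk scores for two generic groups
rng = np.random.default_rng(2)
scores = rng.random(1000)
groups = rng.choice(np.array(["A", "B"]), size=1000)

# e.g. enforce a 3:2 ratio of flag rates between the two groups
cutoffs = group_thresholds(scores, groups, {"A": 0.30, "B": 0.20})
print(cutoffs)
```

Note that this explicitly uses the group label, which is exactly the controversial step discussed next.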

However, is explicitly taking race into account a good idea? Regardless of the answer to this question, as with any type of affirmative action, such a practice would be extremely controversial.

Algorithmic transparency

As machine learning has become more widely used, there have been growing calls for the public’s right to know why an algorithm predicts what it does.

Unfortunately, machine learning models are known to be “black boxes”: it is very hard to know their exact reasoning for producing a specific output. More recently, however, it has become common practice to publish model cards that document a machine learning tool’s performance across relevant cultural and demographic groups. While model cards do not explain how a model reaches a conclusion, they do provide insight into its varying levels of performance across demographics. Such information is useful for judging whether a prediction could be biased.
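A minimal sketch of the kind of per-group reporting a model card might contain; the group names, labels, and predictions below are all fabricated.

```python
import numpy as np

def per_group_report(y_true, y_pred, groups):
    """Report accuracy and false-positive rate for each demographic group.

    All inputs are fabricated for illustration; a real model card would
    also document intended use, training data, and known limitations.
    """
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        accuracy = (t == p).mean()
        false_positive_rate = ((p == 1) & (t == 0)).sum() / max((t == 0).sum(), 1)
        report[g] = {"accuracy": accuracy, "fpr": false_positive_rate}
    return report

# Fabricated labels and predictions
rng = np.random.default_rng(3)
groups = rng.choice(np.array(["group_1", "group_2"]), size=500)
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)

print(per_group_report(y_true, y_pred, groups))
```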

Transparency with the public?

The Public Oversight of Surveillance Technology (POST) Act requires the NYPD to disclose all surveillance technologies that they use. However, does the public have a right to know if law enforcement is using predictive policing? Is revealing surveillance information a concern for public safety? These questions must be answered before the predictive policing process is revealed to everyone.

Public Policy + Legislation

Currently, using a risk assessment tool is subject to about the same level of regulation as buying a snowplow. In other words, there is not much regulation at all! There is a definite need for more regulation of predictive policing tools, whether through a government regulatory body like the FDA or through peer review systems.

More thoughtful police funding

Some police departments had to turn to predictive policing because of…

  • Budget cuts
  • Perceived need for more efficiency
  • Calls for greater fairness (ironic, right? We can look towards tech-washing as to why this is a reason)

In order to resolve these issues, we must figure out how to use funds to protect communities without overpolicing them.

Conclusion — Is predictive policing good or bad?

The answer is, it depends.

From our discussion of predictive policing, we learned that while machine learning models can recognize patterns in data, they cannot guarantee fairness.

We must work towards increasing awareness of what machine learning algorithms can and cannot do. That way, we can be more mindful and selective of how and where we apply them.

This article was adapted from ACM’s AI Ethics Series: “Predictive Policing” by Jason Jewik and Sumedha Kanthamneni.

For more resources on machine learning and tech ethics, you can read our other AI ethics presentations and UCLA ACM AI’s blog!
