Photo by IBMAnalytics

Artificial Intelligence Predictive Policing: Efficient, or Unfair?

Akshara K
Jul 29, 2020 · 4 min read

In 2014, 18-year-old Brisha Borden and a friend made an impulse decision to take an unattended scooter and bike, which they immediately returned when the owner of the items showed up. Nevertheless, the girls were charged with burglary and petty theft for the items, worth a total of $80. The previous year, 41-year-old Vernon Prater, who had prior armed robbery charges and had served a five-year prison sentence, was arrested for shoplifting $86.35 worth of goods from a Home Depot, a crime similar in scale to Borden’s.

However, while in jail, Borden, who is black, was assigned the label “high risk” for future offenses, and Prater, who is white, was assigned “low risk”, despite their criminal histories indicating the opposite. So why were police departments making inaccurate, biased predictions? Were officers making these calls?

Not quite.

Borden and Prater’s risk assessments were done by an Artificial Intelligence algorithm, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which is meant to predict future crime by convicted individuals; its results are given to judges to aid them in deciding sentences. Risk assessments are part of a broader category of AI technology used in law enforcement, commonly known as predictive policing.
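COMPAS itself is proprietary, so its actual model is not public. Purely as a hypothetical sketch, though, a score-based risk assessment can be as simple as weighting a few inputs and thresholding the result; every feature, weight, and threshold below is invented for illustration and does not come from COMPAS.

```python
# Purely hypothetical sketch of a score-based risk assessment.
# COMPAS is proprietary; these features, weights, and the threshold
# are invented for illustration only.
def risk_label(prior_convictions: int, prior_arrests: int, age: int) -> str:
    score = 2.0 * prior_convictions + 1.0 * prior_arrests - 0.1 * age
    return "high risk" if score > 2.0 else "low risk"

# A transparent rule like this would rate the repeat offender as the
# higher risk; the point of the Borden and Prater story is that the
# real system did the opposite.
print(risk_label(prior_convictions=0, prior_arrests=1, age=18))  # low risk
print(risk_label(prior_convictions=2, prior_arrests=3, age=41))  # high risk
```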

Predictive policing has been discussed for decades, but it has only been widely implemented in law enforcement relatively recently. The process involves Artificial Intelligence algorithms analyzing large data sets of criminal activity, including arrest and conviction rates as well as the demographics of given areas, in order to estimate individuals’ risk of future conviction and to determine how heavily to police each area based on its crime rate.

In theory, predictive policing seems like a win-win: it avoids the bias of human error while promoting efficiency within police departments. But the truth is far from this idealistic take on a technology with the potential to have an enormous impact on lives and communities across the US.

Because the AI algorithms base their predictions on historical crime data sets, including data from periods when police departments engaged in unlawful, racially and socioeconomically biased practices, areas that previously had high rates of crime are automatically labeled “high risk neighborhoods”. This leaves such areas unable to reform their image: the “crime-ridden” label causes an area to be overpoliced, which increases its rates of arrest and conviction and ultimately perpetuates systemic bias through an unending cycle of policing, arrest, and risk. Furthermore, individuals with little significant criminal history, like Borden, are branded “high risk of recidivism” solely because of their racial and socioeconomic background.
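This feedback loop can be demonstrated with a small simulation. In the hedged sketch below, two neighborhoods have the same true crime rate, but one starts with an inflated historical arrest count; because patrols follow predicted risk and recorded arrests follow patrols, the gap never closes. All names and numbers are invented for illustration.

```python
import random

random.seed(0)

# Toy simulation of the policing-arrest-risk feedback loop. Both
# neighborhoods have the SAME true crime rate, but A starts with an
# inflated arrest record (e.g., from past over-policing).
TRUE_CRIME_RATE = 0.10           # identical in both neighborhoods
arrests = {"A": 100, "B": 50}    # biased historical record
PATROLS_PER_DAY = 20

for day in range(365):
    total = arrests["A"] + arrests["B"]
    for hood in ("A", "B"):
        # Patrols follow predicted "risk": each neighborhood's share
        # of past recorded arrests.
        patrols = round(PATROLS_PER_DAY * arrests[hood] / total)
        # Crime is only recorded where officers are looking, so more
        # patrols mean more arrests even at equal true crime rates.
        arrests[hood] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(patrols)
        )

share_a = arrests["A"] / (arrests["A"] + arrests["B"])
print(f"Neighborhood A's share of recorded arrests after a year: {share_a:.0%}")
# The initial bias is continually re-confirmed by the data it creates,
# so A never sheds its "high risk" label despite equal true crime.
```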

Take PredPol, for example. Based in Santa Cruz, PredPol uses predictive analytics to help law enforcement anticipate future criminal activity, essentially dividing cities into sections and assigning each a sort of “crime forecast” that determines the concentration of police force in that area. According to US Department of Justice figures, black people are five times as likely to be arrested as white people. This means that the data the PredPol algorithms draw upon is biased regardless of the software’s supposed impartiality, defeating a major purpose of using such technology: avoiding human partiality.

The problems with predictive policing are not limited to systemic bias. If an area sees an unusually large amount of crime on a single day, such as a mass murder, it will be labeled “high risk”, causing law enforcement to increase the police presence there. With more officers monitoring citizens, more people are bound to be arrested, cementing the area’s “high risk” image. A single incident can thus have a permanent impact on a location if predictive policing continues to be heavily utilized by law enforcement.
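The same toy accounting shows why a single anomalous day can stick: if “risk” is computed as a share of cumulative counts (a simplifying assumption made here for illustration), one spike permanently reshuffles the ranking.

```python
# Hypothetical: a single anomalous day in an otherwise quiet area.
# If "risk" is a share of cumulative counts, the spike never decays.
counts = {"quiet_area": 10, "busy_area": 40}
counts["quiet_area"] += 30   # one exceptional day of recorded crime

total = sum(counts.values())
risk = {area: count / total for area, count in counts.items()}
print(risk)  # {'quiet_area': 0.5, 'busy_area': 0.5}
# quiet_area now ranks alongside busy_area indefinitely, and the extra
# patrols it attracts will generate arrests that lock the label in.
```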

The AI cannot determine on its own what is biased or inaccurate; it can only interpret and spit back the data it is fed. Without a historically unbiased data set to draw on, an unbiased predictive policing scheme seems unlikely in the foreseeable future.

So is the technology salvageable? That is debatable. Deciding the futures of people’s lives and of communities as a whole based on the judgment of biased algorithms does not seem fair, yet many police departments still use the software. Perhaps in the coming decades the data these systems rely on will be less skewed than it is today, but expecting the technology to account for all of society’s nuances is not feasible. Only time will tell.

Sources

“Predictive policing algorithms are racist. They need to be dismantled.”, MIT Technology Review, Will Douglas Heaven, 17 Jul. 2020

“Why Hundreds of Mathematicians are Boycotting Predictive Policing”, Popular Mechanics, Courtney Linder, 20 Jul. 2020

“Machine Bias”, ProPublica, Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, 23 May 2016

“Predictive policing is a scam that perpetuates systemic bias”, The Next Web, Tristan Greene, 21 Feb. 2019
