Panelists tackle algorithmic fairness in criminal justice at Mind to Mind conference

By Paula McCloud

Algorithms and data have become so ubiquitous in our society that they have been adopted by the criminal justice system. But these algorithms don’t just improve efficiency. They also create and perpetuate cycles of discrimination, said researchers and data journalists.

It is this dark side of algorithms that guided the panel discussion on algorithmic discrimination at the Mind to Mind Symposium on Friday at Stanford University.

The symposium was a joint effort by The Center for Investigative Reporting and the journalism program at Stanford to encourage collaboration between academics and journalists.

Panelists voiced their concerns about the disparate impact of data and algorithms in the criminal justice system. A key example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) risk assessment tool, which is used to estimate an offender’s risk of recidivism (reoffending), said Jeff Larson, a data journalist at ProPublica.

“The foundation to our criminal justice system is you’re innocent until proven guilty, this algorithm says you’re guilty until proven innocent,” Larson said.

Larson and colleagues at ProPublica investigated the use of the algorithm as part of their Machine Bias project.

They found that African-American offenders who don’t reoffend are more likely than white offenders who don’t reoffend to be labeled higher risk by the algorithm.

Chart: Risk scores are higher for African Americans who don’t reoffend. (ProPublica/Machine Bias)

ProPublica also found that white offenders who ultimately do reoffend were twice as likely as their black counterparts to be mislabeled low risk.

Although this appears to be a problem in need of correction, Sharad Goel, an assistant professor in the Department of Management Science and Engineering at Stanford University, and his colleagues argued in a Washington Post article that Northpointe, the developer of the algorithm, and ProPublica each applied a different criterion of fairness when analyzing it, and that the two criteria cannot both be satisfied at the same time.

Northpointe said that its algorithm is fair because a given risk score means the same thing for white offenders as it does for black offenders.

ProPublica said the algorithm is unfair because it is twice as likely to mislabel black offenders who don’t reoffend as high risk as it is to mislabel white offenders who don’t reoffend.

The disparity in the algorithm arises because African Americans have a higher likelihood of reoffending, Larson said.
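To see why the two definitions collide, consider a simple sketch in Python. The numbers below are hypothetical, not ProPublica’s or Northpointe’s data, and equal precision across groups is used only as a rough stand-in for Northpointe’s calibration claim. The point is that if a score is equally reliable for two groups but one group reoffends at a higher rate, a larger share of that group’s non-reoffenders will end up flagged as high risk.

```python
# Illustrative sketch only: hypothetical numbers, not ProPublica's or Northpointe's data.
# A score can be equally reliable for two groups (roughly Northpointe's notion of
# fairness) while still producing a higher false positive rate for the group with
# the higher base rate of reoffending (ProPublica's notion of unfairness).

def confusion_rates(n, base_rate, high_risk_share, precision):
    """Return (false_positive_rate, false_negative_rate) for a hypothetical group.

    n               -- number of offenders in the group
    base_rate       -- fraction of the group that actually reoffends
    high_risk_share -- fraction of the group the score labels "high risk"
    precision       -- fraction of "high risk" labels that turn out to be correct
    """
    reoffenders = n * base_rate
    high_risk = n * high_risk_share
    true_pos = high_risk * precision       # labeled high risk and did reoffend
    false_pos = high_risk - true_pos       # labeled high risk but did not reoffend
    false_neg = reoffenders - true_pos     # labeled low risk but did reoffend
    fpr = false_pos / (n - reoffenders)    # among those who did NOT reoffend
    fnr = false_neg / reoffenders          # among those who DID reoffend
    return fpr, fnr

# Two hypothetical groups with identical precision (0.6) but different base rates.
for name, base_rate, high_risk_share in [("group A", 0.5, 0.6), ("group B", 0.3, 0.3)]:
    fpr, fnr = confusion_rates(1000, base_rate, high_risk_share, precision=0.6)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

With these invented numbers, six in ten “high risk” labels are correct in both groups, yet the group with the higher base rate ends up with a false positive rate nearly three times higher, which is the shape of the tension Goel and his colleagues described.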

But it’s more complex than that, said Ifeoma Ajunwa, a professor at the School of Industrial and Labor Relations at Cornell University.

“Nobody knows who commits more crime, nobody knows,” Ajunwa said. “We only know who’s arrested more and who’s convicted more.”

But why are black people more likely to be arrested?

Some researchers say police just pay more attention to black neighborhoods and individuals.

For example, the Human Rights Data Analysis Group found that PredPol, a predictive policing algorithm, “targeted black neighborhoods at roughly twice the rate of white neighborhoods when trained with historical drug crime data in Oakland, California” despite a roughly equal distribution of drug use across racial groups.

Also, the Stanford Open Policing Project, which has collected data on millions of traffic stops by law enforcement, found that black drivers are more likely to be searched, ticketed and arrested by police than white drivers.

Really, Larson said, if judges are going to use the algorithm, they just need to be aware of the false positives and the fallibility of the tool.

“These things should come with warning labels,” he said.

Paula McCloud is a junior at Stanford University double majoring in Communication and African and African American Studies. She is taking a journalism class at Stanford that is covering the Mind to Mind conference.
