The New Racial Judge and Jury: Computer-Generated Risk Assessments

Risk assessments have been around for a century. During the past two decades they have become a prominent part of the American judicial system thanks to advances in social science.

Modern-day risk assessments are algorithms: software that attempts to predict an individual’s likelihood of criminal recidivism using statistical probabilities based on factors such as age, employment history, and prior criminal record.
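To picture what such a tool does under the hood, here is a minimal sketch of a points-based score in Python. Every factor, weight, and threshold below is a hypothetical illustration; it is not the formula of COMPAS or any other real assessment product.

```python
# A minimal sketch of a points-based recidivism risk score.
# All factors and weights are hypothetical illustrations,
# not the actual formula of COMPAS or any real tool.

def recidivism_risk_score(age, prior_arrests, employed):
    """Return a hypothetical risk score from 1 (low) to 10 (high)."""
    points = 0
    if age < 25:
        points += 3                      # younger defendants score higher
    elif age < 35:
        points += 1
    points += min(prior_arrests, 5)      # prior record, capped at 5 points
    if not employed:
        points += 2                      # unstable employment adds points
    return max(1, min(10, points))

print(recidivism_risk_score(age=22, prior_arrests=2, employed=False))  # -> 7
```

Notice that every input is a static fact about a person’s past or circumstances; nothing in a formula like this can observe whether someone has changed.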

Risk assessments are used at some stage of the criminal justice process in nearly every state. Some judicial systems use the tools to guide decisions about which prisoners to grant parole. Risk assessments are becoming increasingly popular as a way to help set bail for inmates awaiting trial. The state of Pennsylvania has moved to use them to inform sentencing lengths for convicted people.

The core issue with risk assessments is that they return probabilities, not certainties.

I’ve studied the fascinating report (and white paper) released by ProPublica detailing how risk assessment algorithms in the criminal justice system label different races differently.

ProPublica’s report analyzed the risk scores assigned to 7,000 cases in Broward County, Fla., over a two-year period using one of the most popular tools. The results were telling.

Only one in five of the people the tests predicted would commit violent crimes actually went on to do so, and racial bias was far from eliminated. The formula proved particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants.
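To see how a disparity like that is measured, here is a toy sketch comparing false positive rates across groups. The eight records are invented for illustration and are not ProPublica’s data; a “false positive” here is a person flagged high risk who did not go on to re-offend.

```python
# Toy illustration (invented data, not ProPublica's dataset) of how
# false positive rates are compared across racial groups.
# A "false positive" is someone flagged high risk who did not re-offend.

records = [
    # (race, flagged_high_risk, re_offended)
    ("black", True,  False),
    ("black", True,  True),
    ("black", True,  False),
    ("black", False, False),
    ("white", True,  False),
    ("white", False, False),
    ("white", False, True),
    ("white", False, False),
]

def false_positive_rate(group):
    """Share of non-re-offenders in `group` who were flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# black 0.67   (2 of 3 non-re-offenders flagged high risk)
# white 0.33   (1 of 3 non-re-offenders flagged high risk)
```

A tool can be “equally accurate” overall and still get things wrong in very different ways for different groups, which is exactly the pattern ProPublica reported.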

Yes, even software can be racial. Or at the least, software can be misguided.

Critics of tech-driven assessments argue that the software unfairly judges people on crimes they haven’t committed by drawing on a formula built from crimes they may (or may not) have committed. In layman’s terms, the software paints a portrait of one’s future using some paint from one’s past.

Additional arguments raise valid concerns about deciding a person’s fate based on social and economic status, education, and the habits of their parents or siblings.

The very make-up of some risk assessment software brings into question the effectiveness of the judicial systems that embrace it. Scoring a person’s recidivism risk by the number of times they have been incarcerated is itself a comment on how effective the system is at reforming individuals. One has to wonder whether that system’s reform success and failure ratios are written into the algorithms.

Former Attorney General Eric Holder had this to say about risk assessments:

“By basing sentencing decisions on static factors and immutable characteristics — like the defendant’s education level, socioeconomic background, or neighborhood — they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”

Proponents of tech-driven risk assessment tools push back by noting that judges, parole boards, and other decision-makers already make their own risk assessments. That’s true; however, the question is who’s better at it: humans or technology?

Well-designed risk assessment tools “work,” in that they predict behavior. It’s how they predict behavior that comes into question. If the algorithm leans on stereotypical factors in its assessments, then the results are bound to be biased.

Take, for instance, the fact that Blacks are more likely than whites to be arrested multiple times for possession of small amounts of marijuana. Over the last six years, the DOJ and some state judicial systems have moved to decriminalize possession of small amounts of marijuana. If risk assessment algorithms are updated to take into account the relaxed legal statutes and social attitudes toward small-scale marijuana possession, will those updates “grandfather in” relaxed assessments for those previously convicted of possessing small amounts of marijuana?
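As a thought experiment, here is a minimal sketch of what honoring a relaxed statute might look like inside a scoring formula. The offense codes and point values are invented; no real assessment tool is represented.

```python
# Hypothetical sketch: re-scoring a prior record after an offense is
# decriminalized. Offense codes and point values are invented for
# illustration; no real assessment tool is represented here.

DECRIMINALIZED = {"marijuana_possession_small"}

def prior_record_points(priors, honor_decriminalization=True):
    """Sum points for prior offenses, optionally skipping offenses
    that are no longer crimes under current statutes."""
    points = 0
    for offense, weight in priors:
        if honor_decriminalization and offense in DECRIMINALIZED:
            continue  # the "grandfathered in" relaxed assessment
        points += weight
    return points

priors = [("marijuana_possession_small", 2),
          ("marijuana_possession_small", 2),
          ("petty_theft", 1)]

print(prior_record_points(priors, honor_decriminalization=False))  # -> 5
print(prior_record_points(priors, honor_decriminalization=True))   # -> 1
```

Whether any deployed system actually goes back and recomputes old records this way is the open question.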

Another pressing question regarding judgmental software: do the algorithms account for the racial profiling of Blacks that has been shown to be prevalent among law enforcement all across the country?

I think what has to happen here is that an independent group of Silicon Valley technologists, well versed in law and vetted as unbiased toward any person, cause, or color, needs to review this assessment software’s algorithms. The code has to be evaluated line by line and number by number.

Ever since Black folks have been on American soil, there has been some tool, some law, some device to shackle and exile Blacks to some irrelevant position. A transparent poison. Now we have technology doing the shackling.

Food for thought: the largest group in America that hates Blacks has been conspicuously quiet for the last decade. They haven’t bombed any churches, hanged any Blacks, or really made a huge presence anywhere. Or have they?

Who’s writing the programs that are determining our freedom?

Do you know? I don’t. And I’m sure there are some programmers who are experts at sailing through vetting.

Am I suggesting some sort of conspiracy theory is at play in the code that drives assessment software? I’ll answer that question with a question.

Hasn’t there always been a conspiracy theory at play in this country in regards to equal anything for Blacks?

If so, then there is no dismissing ProPublica’s factual findings:

“We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.”

“The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”

Same old song. Just another day in paradise with no palm trees.

“White defendants were mislabeled as low risk more often than black defendants.”

People change. You know that and I know that. Computer code that has not been amended to account for changes in an individual’s ways and circumstances does not know that.

Assessment software has a place in suggesting that some individuals should receive more intense scrutiny than others, but it’s a dangerous gamble to accept it as final judge and jury.

Comments?