IMAGE: Max Pixel — CC0

Justice and algorithms: California and bail

Enrique Dans
Sep 11, 2018


The governor of the state of California, Jerry Brown, has signed a law that, as of October 2019, will replace cash bail with an algorithmic system that estimates a defendant’s flight risk or likelihood of committing a crime while awaiting trial. The algorithm, which counties will be obliged to use, either through a provider or by developing it themselves, will rate risk as low, medium or high and, depending on the result, a judge will decide whether to release alleged offenders or keep them in custody.
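The law does not specify how such a score is computed; as a minimal sketch, a pretrial tool of this kind typically maps a handful of case features to a numeric score and then buckets it into tiers. All feature names, weights and thresholds below are hypothetical illustrations, not the actual California system:

```python
# Minimal sketch of a pretrial risk-tier assessment.
# All features, weights, and thresholds are hypothetical illustrations;
# the actual tools mandated by the California law are not specified here.

def risk_score(prior_convictions: int, failed_appearances: int, age: int) -> float:
    """Combine case features into a single numeric risk score."""
    score = 2.0 * failed_appearances + 1.5 * prior_convictions
    if age < 25:  # hypothetical: weight youth as a risk factor
        score += 1.0
    return score

def risk_tier(score: float) -> str:
    """Bucket the numeric score into the low/medium/high tiers
    a judge would see under a system like California's."""
    if score < 3.0:
        return "low"
    if score < 6.0:
        return "medium"
    return "high"

if __name__ == "__main__":
    defendant = {"prior_convictions": 1, "failed_appearances": 0, "age": 32}
    score = risk_score(**defendant)
    print(f"score={score:.1f}, tier={risk_tier(score)}")  # score=1.5, tier=low
```

Note that in a design like this the judge sees only the tier, not the underlying features or weights, which is where the questions of transparency discussed below arise.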

Civil rights organizations, which argue the practice is discriminatory and creates a two-tier justice system for the rich and the poor, have hailed the elimination of cash bail as a success: those lacking economic resources spend time in prison while awaiting trial, which in many cases becomes part of the problem. In Brown’s words:

“Today, California reforms its bail system so that rich and poor alike are treated fairly.”

However, algorithms fed with data obtained from the justice system itself have in some cases shown racial or socioeconomic bias. Algorithms are not inherently neutral: they reach their conclusions from the data they are fed, which risks perpetuating biases that in many cases already existed.
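To see how this happens, consider a toy example: if historical policing concentrated arrests in one neighborhood, a model trained naively on those records will score residents of that neighborhood as higher risk regardless of actual behavior. A minimal sketch, with entirely invented data:

```python
# Toy illustration of how historical bias in training data is reproduced.
# The dataset is invented: arrests were recorded far more often in
# neighborhood "A" than in "B" for identical underlying behavior.

from collections import defaultdict

# (neighborhood, rearrested_while_awaiting_trial) pairs from a biased record:
# heavier policing in "A" means more recorded rearrests there.
history = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

# "Training" here is just computing the recorded rearrest rate per group,
# which is what a naive model's risk score would converge to.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [rearrests, total]
for hood, rearrested in history:
    counts[hood][0] += rearrested
    counts[hood][1] += 1

for hood, (rearrests, total) in sorted(counts.items()):
    print(f"neighborhood {hood}: predicted risk {rearrests / total:.0%}")
# neighborhood A: predicted risk 75%
# neighborhood B: predicted risk 25%
# The model "learns" that A is riskier, but it has really learned
# where arrests were recorded, not who actually reoffended.
```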

On the other hand, many algorithms are proprietary, protecting the intellectual property of the companies that develop them: the so-called black boxes against which there is virtually no appeal. This issue was highlighted by the use of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm developed by Equivant to assess the risk of prisoners reoffending and to help judges make sentencing decisions. A decision made by that algorithm was the subject of an appeal in Loomis v. Wisconsin, in which the defendant alleged that the use of the algorithm in the calculation of his six-year sentence violated his rights: because his defense team did not know how the algorithm worked, he was prevented from contesting the scientific validity and accuracy of the evidence against him, and the algorithm also took into account variables such as the gender and race of the accused.

The legislation approved by the State of California establishes a review of the algorithmic system in 2023, after four years of operation and data generation, at which time the decisions it has taken will be examined. Algorithms could certainly help reduce courts’ workloads, particularly in the early stages of routine cases, albeit with appropriate oversight. However, this would require mechanisms allowing all parties involved to know how the algorithms work and the variables used to determine their results, as well as ensuring that the data used to train these algorithms do not contain factors that create bias. No easy task, but one whose results could help prevent the collapse of the justice system in many countries, a collapse that has prompted campaigns along the lines of “justice delayed is justice denied.”

How will the system approved by the state of California evolve? Will it provide better justice and equality than the previous system? And what discussions will we need to have as more and more algorithms are used by our judicial systems?

(In Spanish, here)


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)