Recidivism Risk and the Impacts of Algorithmic Prediction on Judges’ Discretion
1. Risk assessment tools, most notably COMPAS in the USA, predict an offender’s recidivism with machine-learning algorithms that analyze enormous amounts of data. These statistical solutions are adopted to inform judges of an offender’s likelihood of recidivism: they calculate an overall score that classifies the offender as low, medium, or high risk, which judges consult when making parole, probation, bail, and sentencing decisions. This shift in criminal justice settings is called predictive justice, and such predictive algorithms are known as ‘risk assessment tools’ or ‘evidence-based methods’.
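As a purely illustrative sketch of the bucketing step described above: the actual COMPAS model is proprietary, so the decile scale and the cut-off points below are assumptions for illustration, not the tool’s real internals.

```python
# Hypothetical illustration of how a numeric recidivism risk score
# might be mapped to the low/medium/high categories presented to judges.
# The 1-10 decile scale and the cut-offs are assumed for illustration only.

def classify_risk(score: int) -> str:
    """Map an assumed decile risk score (1-10) to a risk category."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score <= 4:
        return "low"
    elif score <= 7:
        return "medium"
    return "high"

print(classify_risk(3))  # low
print(classify_risk(9))  # high
```

The point of the sketch is that the judge sees only the coarse category, while the weighting of inputs that produced the underlying score remains opaque.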
2. Beyond the USA, one Asian country, Malaysia, has adopted a predictive sentencing system to determine the fates of accused drug users and rapists; it passed sentences with the system for the first time on 19 February 2020 in a magistrate court. However, although the recidivism risk scale produces powerful outcomes, long-simmering debates surround its adverse impacts: biased data inputs (ProPublica, 2016), black-box effects (Blacklaws, 2018), lack of oversight (Dressel & Farid, 2018), questions of legitimacy and inaccuracy (though a recent study tells a different story), and threats to judicial integrity as well as judges’ discretion.
3. Such ‘datafication’ in the criminal courts is evidently impactful, as it has the force to oust or limit the scope for exercising judges’ discretion in the decision-making process. Judicial discretion is vitally important because it best promotes individualized bail and sentencing practices. Judges are committed, responsible professionals who are institutionally trained and empirically experienced. The data explosion in the courtroom therefore marks a major shift away from the individualized decision-making process.
4. On the other hand, it is now commonly argued that algorithms work with biased data and controversial variables, and that statistical algorithms are only about as accurate as laypeople at predicting whether a defendant will re-offend at some point in the future. Likewise, it is aptly argued that “a new type of law came into being, analogous to the laws of nature, but pertaining to people” (Lynch, 2019). However, recent work has also shown that “algorithms can outperform human predictions of recidivism in ecologically valid settings”. The debate therefore remains an ongoing subject that attracts study after study to date.
5. In our context, it is pertinent to point out that although the Loomis court did not base the sentence solely on the COMPAS risk score (Schimel & Tseytlin, 2016), Justice Bradley nevertheless added that ‘judges must proceed with caution’ when using such risk assessments. Interestingly, the Loomis court itself also admonished that the predictive risk score may disproportionately classify minority offenders as having a higher risk of recidivism.
6. But this warning seems likely to be ‘ineffectual’ in changing the way judges think about risk predictions. In other words, a high-tech tool’s recommendations prove ‘burdensome’ and ‘compelling’ for human judges to ignore, even when they are aware of algorithmic bias, inaccuracy, or incomprehensibility, or when they learn of wrong or unfair outcomes. In practice, it is genuinely challenging and unusual for a judge to defy algorithmic recommendations. Quantification in the court thus has transformative effects that “eclipse individualized, morally infused modes of judgment and intervention in the criminal justice field” (Lynch, 2019).
7. The reasons are not hard to comprehend. Firstly, a ‘high estimate of risk’ is likely to play an ‘anchoring role’ in judges’ decision-making, potentially leading them to change their sentencing practices to match the algorithms’ predictions. Secondly, a quantitative risk assessment by computer software, by and large, seems more reliable, scientific, and legitimate than other sources of information, including judges’ sense of individualization and intuition about a defendant.
8. This is the case not only for judges but also for technically trained professionals. Moreover, beyond this external pressure, psychological biases also encourage the use of algorithmic tools. As noted above, a risk-score suggestion may influence a judicial mind with powerful AI-driven outputs, even though those outputs are products of pre-designed tools developed by non-judicial bodies that have little, if any, ability to explain their decision-making process.
9. Indeed, many states across the US are seriously considering the use of predictive algorithms in sentencing, using criminal history as a predictor of future risk, which would constrain individualized discretion; some already require such tools to be used in sentencing proceedings. In effect, this widespread endorsement of sentencing tools communicates to judges that the tools are efficient and reliable. Ultimately, it becomes easier for judges to place such tools at a focal point, in effect anchoring the final determination, since the score is always the starting point in sentencing (Bennett, 2014). Should we let that be the case?
10. That is why, although such ‘warnings may alert judges to the shortcomings of these tools, the advisement may still fail to negate the considerable external and internal pressures of a system urging the use of quantitative assessments’. In such conditions, it is not unusual that the pressure within the judicial system to use these assessments, together with the cognitive biases supporting data reliance, may make judges misstep or ignore the notions of fairness and individualized adjudication. Hence, the big question arises again: is this not still an intricate problem?