IMAGE: Kittipong Jirasukhanont — 123RF

Robotic justice

Enrique Dans


Over the past few weeks I’ve been using a Wired article entitled “Courts are using AI to sentence criminals. That must stop now” to generate discussion in some of my classes and conferences about the use of algorithms in judicial processes in the United States: to estimate, for example, the probability that a defendant will reoffend, and even to hand down a sentence.

Yesterday, the New York Times commented on the same case described in Wired, that of Eric Loomis, who was sent to prison last March by a judge who, at the request of the prosecutor, used the results of COMPAS, an algorithm created by Equivant, and stated at sentencing:

“You are identified, through the COMPAS assessment, as an individual who is a high risk to the community.”

Loomis’ defense has argued that the use of the report generated by the algorithm violates the defendant’s rights, because the algorithm is secret and cannot be inspected or studied, which prevents it from being effectively challenged or questioned.

The problem is not only that the algorithm is kept secret in the interest of its creator, but also, as I have said before, that machine learning processes generate black boxes that human intelligence cannot inspect, that can only be evaluated by the quality of their output, and that therefore raise questions of transparency.

The process is clear enough: a bank, for example, could feed a machine learning algorithm with all its loan and mortgage files, along with their outcomes in terms of repayment or default, and would quite possibly find that, over time, the algorithm’s results improve on the decisions of a risk committee made up of people with extensive banking experience. If so, the bank could choose to get rid of its risk committee and replace it with the automated process, not only to reduce costs but also to improve efficiency. However, deciding whether or not to grant a mortgage on the basis of an algorithm raises two problems:

  • Lack of transparency: fine, you’ve turned me down for a mortgage… but what should I do if I still want one? Which variables made you turn me down, and what should I work on to improve my eligibility next time? How does the bank explain a decision that it does not itself understand, knowing only that it maximizes its chances of getting the loan repaid?
  • Biases: contrary to what some people might think, using an algorithm does not guarantee greater objectivity; it simply reflects whatever biases exist in the original data. If the data the bank fed its algorithm reflected a historical tendency to reject applicants from a certain group on criteria such as sex, ethnicity or religion, those biases could be consolidated in the resulting algorithm, yet be difficult for a human to identify. A minimal sketch of such a pipeline, and of how one might probe it for both problems, follows below.
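To make the two points above more concrete, here is a minimal sketch, assuming scikit-learn and pandas, of what such a pipeline might look like. The file name, the column names and the sensitive “group” attribute are hypothetical, not a description of any real bank’s data; the point is how little the two probes at the end actually reveal.

```python
# Minimal, hypothetical sketch of the bank example above (scikit-learn + pandas).
# File name, feature names and the "group" column are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical loan files: applicant features plus repayment outcome.
data = pd.read_csv("loan_history.csv")                 # assumed file
features = ["income", "debt_ratio", "age", "years_at_job"]
X, y = data[features], data["repaid"]                  # 1 = repaid, 0 = defaulted

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Problem 1 (transparency): global feature importances are only a coarse hint
# of what drives the model; they do not explain any individual rejection.
print(dict(zip(features, model.feature_importances_.round(3))))

# Problem 2 (bias): compare approval rates across a sensitive attribute that was
# never used as a feature but may correlate with ones that were.
holdout = data.loc[X_test.index].copy()
holdout["approved"] = model.predict(X_test)
print(holdout.groupby("group")["approved"].mean())     # hypothetical "group" column
```

Even these two probes are weak answers: a global importance ranking does not tell a rejected applicant what to change, and a gap in approval rates does not by itself reveal which correlated features produced it.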

In American justice, processes are increasingly being colonized by algorithms: many lawyers are beginning to use tools like Ross, based on IBM’s Watson, not only to determine which case law is most relevant to the case they are preparing, but also to examine the possible biases of the judges assigned to their cases, based on all of those judges’ previous decisions in similar cases, with everything that entails for optimizing the arguments they use. As these working methods become established, the machine learning processes that support them become black boxes of ever greater complexity, with processes that a human is increasingly unable to replicate or understand.

The robotization, or algorithmization, of justice could, on the one hand, become a way to deal with saturated courts: any sufficiently clear, simple or obvious case would no longer take up the valuable time of a judge or jury, and would instead be decided by an algorithm, as already happens with many traffic violations, where the algorithm is, in fact, not particularly intelligent: it is enough for a certain condition to be met for a result to be generated. As the complexity of a case increases, we find cases like the one described above, which can lead to defenselessness: how can I defend myself against a verdict based on a black box that takes in a series of variables and churns out a result? Should we require that the factors used to reach an algorithm-based verdict be explained in a way humans can understand? The contrast is sketched below.
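As a toy illustration of that contrast, here is a sketch of the two kinds of automation the paragraph above describes: a transparent, rule-based decision of the traffic-camera kind, next to a stand-in for an opaque risk score. The speed limit, the function names and the placeholder “model” are all assumptions; the only point is that the first decision carries its own explanation and the second does not.

```python
import random

SPEED_LIMIT_KMH = 120   # assumed threshold, purely illustrative

def traffic_fine(measured_speed_kmh: float) -> bool:
    """Rule-based decision: the condition itself is the explanation,
    and the driver can check it directly against the radar reading."""
    return measured_speed_kmh > SPEED_LIMIT_KMH

def risk_score(defendant_record: dict) -> float:
    """Stand-in for a proprietary risk model: from the outside, all that is
    visible is a number, with no account of how the inputs produced it.
    (Here it is only a placeholder returning an arbitrary value.)"""
    random.seed(str(sorted(defendant_record.items())))
    return random.random()

print(traffic_fine(134))                     # True, and you know exactly why
print(risk_score({"age": 34, "priors": 2}))  # a number in [0, 1], with no "why"
```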

A few days ago, at Netexplo, I had the opportunity to interview on stage the founder of a medical diagnostic imaging company, Qure.ai. The conversation raised some very interesting questions: over time, once doctors adopt this kind of technology to determine whether an X-ray, a scan or a CT image shows a tumor, a physician’s ability to examine an image manually could be lost through lack of practice, and although the machine, as its algorithm improves, will be able to offer much better results than human doctors, we would then see humans lose skills they had previously developed. Nicholas Carr makes a similar argument in “The Shallows”: before mobile phones we could remember far more phone numbers; the internet is making us stupid because it replaces skills we used to have, making them unnecessary. It could also be argued that no one today would be able to carve text into stone, because new technologies and media have made it unnecessary… something we simply accept. In this case, artificial intelligence is redefining human intelligence. But what happens when that loss of human ability is subject to decisions by a black box we cannot interpret?

(In Spanish, here)

Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)