Tell me what to do — but don’t tell me what to do
On human decision making in machine learning processes
One of the most important conversations in the field of machine learning is the debate surrounding the use of predictive methods to influence or inform human decisions. Broadly speaking, machine learning is the practice of programming computers to be more self-sufficient: to create systems that can operate on their own, with minimal human guidance. This can extend to anything from data collection, to data analytics, to building decision trees. Machine learning processes and predictive methods can, hypothetically, make decisions for humans, but should they? And if we allow machine learning techniques to begin informing our decisions, or making decisions for us, where should we draw the line? In a recent lecture co-hosted by the Center for Data Science — part of the ongoing NYC Data Science Seminar Series (DS3) — Jon Kleinberg, a Computer Science Professor at Cornell University, gave a talk titled “Human Decisions and Machine Predictions.” He spoke about the possibilities and limits of allowing machines to facilitate decision making in a number of areas — from neural networks, to chess, to criminal justice.
Machines making decisions for humans can sound like the plot of a science fiction film, and Kleinberg opened his talk by dispelling the frequently cited idea that “the machines are taking over.” Like many machine learning practitioners, Kleinberg stressed that machine learning is most effective when combined with human intelligence. Intelligence is not a single variable, and one of the foundational points for machine learning is the idea that computers and humans have differing strengths across the wide field of intelligence: computers are much more adept at arithmetic and counting, while humans are remarkably good at logic and reasoning. Kleinberg believes that these differing forms of intelligence are compatible, not diametric opposites.
“Algorithms are a lens into human decision making,” Kleinberg said. With its enormous space of possible moves and pieces, the game of chess has historically been a benchmark for using data analytics to track human decisions. Because every position branches into many legal moves, algorithms can build decision trees, which allow a human — or a computer program, like IBM’s Deep Blue — to trace all of the consequences that follow from a single move. Kleinberg then asked the question, “How can machines and algorithms help us catch bad decisions?”
How can machines and algorithms help us catch bad decisions?
The talk then moved to a field with far higher stakes than the game of chess: criminal justice. Kleinberg has chosen to research how machine learning techniques might apply to decisions made in a courthouse. A quote from the former Attorney General, Eric Holder, was displayed on the screen. Holder once said that, within the context of the criminal justice system, the superficial application of algorithms has the potential to “exacerbate unwarranted and unjust disparities that are already far too common in our society.” And there is certainly truth to this. Machine learning systems only learn from readily available data, and so if there is already a disparity in the criminal justice system — along lines of race, class, or gender — a program calibrated against our already established norms might only exacerbate it.
But Kleinberg doesn’t believe that machine learning should replace a judge or jury; rather, he believes that machine learning techniques can be used to catch biases in judicial situations. Kleinberg used the example of a defendant seeking bail while awaiting trial. When determining whether a defendant should be granted bail, a judge is supposed to weigh a myriad of factors: prior arrests, flight risk, and any previous convictions, among other variables. The judge is not supposed to take into account extraneous factors — race, how the defendant is dressed, or how the defendant acts in court — but the judge may do so anyway. Kleinberg believes that, instead of having machines make decisions for judges, machine learning systems can examine whether a judge is taking extraneous factors into account when making judicial decisions.
A superficial application of algorithms could “exacerbate unwarranted and unjust disparities that are already far too common in our society”
Machine learning systems can easily comb through huge amounts of data, and Kleinberg raised the possibility of a machine learning system going through a judge’s entire judicial history. Such a system can analyze all the cases in which, say, a defendant has one prior arrest and no prior convictions, and compare the instances when a given judge does and does not grant bail. A defendant’s record will not reflect how the defendant acted in court, but it will reflect the defendant’s ethnicity and place of residence. If a judge is using extraneous factors to inform their decisions in otherwise similar scenarios, a machine learning system can detect it.
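The audit described above can be sketched in a few lines. This is a deliberately simplified illustration of the general approach, not Kleinberg’s actual method: group past cases by the legitimate factors a judge may consider, then measure how much bail-grant rates differ across an extraneous factor within each group. All field names and case records below are invented:

```python
from collections import defaultdict

def bail_rate_gap(cases, legitimate_keys, extraneous_key):
    """For each group of cases identical on the legitimate factors,
    return the spread in bail-grant rates across the extraneous factor.
    A large gap means otherwise-similar defendants were treated
    differently along a factor the judge should not consider."""
    groups = defaultdict(lambda: defaultdict(list))
    for case in cases:
        profile = tuple(case[k] for k in legitimate_keys)
        groups[profile][case[extraneous_key]].append(case["bail_granted"])
    gaps = {}
    for profile, by_factor in groups.items():
        rates = [sum(v) / len(v) for v in by_factor.values()]
        if len(rates) > 1:                     # need at least two groups
            gaps[profile] = max(rates) - min(rates)
    return gaps

# Invented records: identical legitimate factors, differing outcomes.
cases = [
    {"prior_arrests": 1, "prior_convictions": 0, "ethnicity": "A", "bail_granted": 1},
    {"prior_arrests": 1, "prior_convictions": 0, "ethnicity": "A", "bail_granted": 1},
    {"prior_arrests": 1, "prior_convictions": 0, "ethnicity": "B", "bail_granted": 0},
    {"prior_arrests": 1, "prior_convictions": 0, "ethnicity": "B", "bail_granted": 1},
]

gaps = bail_rate_gap(cases, ("prior_arrests", "prior_convictions"), "ethnicity")
print(gaps)   # a nonzero gap flags a pattern worth human review
```

A real analysis would need far more data and careful statistics to rule out confounders; the point of the sketch is only that comparing like-for-like cases makes an extraneous influence visible.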
The idea of machine learning keeping the criminal justice system in check, as opposed to replacing it, circled back to one of Kleinberg’s first points: machine learning processes should never replace human decision making, but they can help humans catch their own mistakes.
Originally published at cds.nyu.edu on June 5, 2016.