New Book Examines How Humans Judge Machines

Can this relationship be saved? Professor César Hidalgo tackles the complex issues of human/machine ethics and trust in the world of AI

MIT Initiative on the Digital Economy
5 min read · Sep 29, 2020

By Paula Klein

Do we trust humans more than we trust machines? If so, why, and who is responsible for artificial intelligence (AI) outcomes? Professor César Hidalgo is asking some of the most provocative ethical and philosophical questions of the digital age.

As intelligent machines permeate our decision making and everyday tasks, and as we become more dependent on the accuracy of their results, we must also consider how we judge the consequences of human versus machine actions, according to Hidalgo, who holds a chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI) at the University of Toulouse. He is also an honorary professor at the University of Manchester and a visiting professor at Harvard’s School of Engineering and Applied Sciences. Hidalgo previously led MIT’s Collective Learning Group, where he examined human/machine bias.

His research focuses on “collective learning,” the learning that takes place in teams, organizations, and economies, and on how to improve it. Hidalgo dives deep into these topics in a new book launching today, September 29: How Humans Judge Machines (MIT Press), co-written with Diana Orghian, Jordi Albo-Canals, Filipa de Almeida, and Natalia Martin.

The book raises challenging questions about the interaction between humans and machines and exposes human biases and preconceptions.

“Machines are here,” Hidalgo said at a recent seminar at the MIT IDE. “They also make mistakes and get things wrong. How do we feel about that?”

In recent years, advances in big data and algorithms have created a world where it is not only possible but common for algorithms to make business decisions, from HR hiring to marketing to financial loan approvals. We increasingly look to smart machines for more accurate answers, suggestions, and predictions. But how will humans react to the greater inclusion of AI in organizations, and how will they accept or judge the decisions of these new co-workers?

At the seminar, Hidalgo discussed dozens of experiments he has conducted documenting the different ways that people evaluate human and AI actions — some humorous, some extremely serious.

In particular, “our data shows that people do not judge humans and machines equally, and that these differences can be explained as the result of two principles,” according to the authors. “First, people judge humans by their intentions, but judge machines by their outcomes.” The second result is that “people assign extreme intentions to humans and narrow intentions to machines.”

In other words, people are willing to excuse humans more than machines in accidental scenarios, but they excuse machines more when actions are perceived as intentional.

Assessing Human/Machine Bias

To test these ideas, the research team created hypothetical scenarios where people saw humans or machines taking the same action or making the same decision.

The researchers collected data from 80 experiments with 200 people, then analyzed the participants’ reactions along five moral dimensions: the level of harm the action caused, fairness, loyalty, authority, and purity in each situation.
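To make the design concrete, here is a minimal sketch, in Python, of how judgments from such a between-subjects comparison might be tallied. All of the ratings, the 1–7 wrongness scale, and the condition labels below are invented for illustration; they are not the book’s data or analysis code.

```python
# Hypothetical sketch: comparing moral judgments of the same action
# attributed to a human vs. a machine (between-subjects design).
# Ratings and labels are invented for illustration.
from statistics import mean, stdev

# Each record: (condition, wrongness rating on a 1-7 scale)
responses = [
    ("human", 3), ("human", 4), ("human", 2), ("human", 5), ("human", 3),
    ("machine", 5), ("machine", 6), ("machine", 4), ("machine", 6), ("machine", 5),
]

def summarize(condition):
    """Return mean, standard deviation, and count for one condition."""
    ratings = [r for c, r in responses if c == condition]
    return mean(ratings), stdev(ratings), len(ratings)

for condition in ("human", "machine"):
    m, s, n = summarize(condition)
    print(f"{condition:>7}: mean wrongness = {m:.2f} (sd = {s:.2f}, n = {n})")

# A gap between the two means (here, the machine rated more harshly for
# the same failure) is the kind of counterfactual contrast these
# experiments are designed to surface.
```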

In one experiment, a devastating tsunami was approaching a town and politicians had to decide whether to evacuate all or some of the residents based on estimated survival rates. The same scenario was then simulated with machine algorithms making the decision instead of humans. (Figure 1)

The data analysis showed that machines were judged more harshly than humans when the algorithm failed to save lives. One explanation, Hidalgo said, is the idea of “moral agency”: people are held responsible for their actions, but they are also expected to make mistakes.

By contrast, machines take no moral responsibility for their actions, and are therefore judged more harshly when they fail. Humans were also credited and rewarded more often for their successes.

Bad Business?

In another case, embarrassing marketing mistakes were made because the machine creating an ad didn’t understand the nuances of its instructions. The scenario raised ethical questions: How much harm did the ad cause, and who should make corrections? Who had agency? Did it matter whether the error was made by a human? These are the kinds of dilemmas that businesses and citizens will have to consider as AI comes of age, Hidalgo said. Significantly, the answers to these questions may affect how quickly robots displace human labor, and how widely driverless cars are adopted.

By comparing “people’s reactions to a scenario played out by a machine or a human, we create counterfactuals that can help us understand when we are biased for or against machines,” Hidalgo writes. “As machines become more humanlike, it becomes increasingly important for us to understand how our interactions with them shape both machine and human behavior. Are we doomed to treat technology like Dr. Frankenstein’s creation, or can we learn to be better parents than [Frankenstein]?”

These considerations will shape technology implementations in the future. “In a world with rampant algorithmic aversion, we risk rejecting technology that could improve social welfare,” according to the book. “For instance, a medical diagnosis tool that is not perfectly accurate, but is more accurate than human doctors, may be rejected if machine failures are judged or publicized with a strong negative bias. On the contrary, in a world where we are positively biased in favor of machines, we may adopt technology that has negative social consequences.”
