How Humans Judge Machines

danielequercia

Cesar Hidalgo delivered an interesting keynote at the International Conference on Computational Social Science. It was about his team's work on how people morally perceive machines. The work will be published as an MIT Press book titled "How Humans Judge Machines".

In this work, they used more than 80 scenarios to test whether people perceive the same action (typically a morally charged one) differently depending on who performed it: a human or a machine.

An example: the subway scenario, in which the very same action is taken either by a subway officer or by a computer vision system.

They administered the scenarios to Mechanical Turk workers in the US. For each scenario, they asked three questions (a small analysis sketch follows the list):

  • Was the action harmful?
  • Was the action intentional?
  • How morally wrong was the action?
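To make the design concrete, here is a minimal sketch in Python of how such ratings could be aggregated, assuming (hypothetically) a flat table with one row per respondent and scenario; the file name and column names are illustrative, not from the study:

```python
import pandas as pd

# Hypothetical layout: one row per (respondent, scenario), a 'condition' column
# ("human" or "machine"), and numeric ratings for each of the three questions.
ratings = pd.read_csv("ratings.csv")

# Average each judgment separately for the human and machine versions
# of every scenario, then place the two conditions side by side.
summary = (
    ratings
    .groupby(["scenario", "condition"])[["harm", "intention", "wrongness"]]
    .mean()
    .unstack("condition")
)
print(summary.head())
```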

For the subway scenario above, here are the results (human in blue, machine in red):

From the plot on the right, you can see that people judged the computer vision system more harshly than the subway officer: for the very same action, the first row suggests that people perceived the computer vision system as causing more harm than the officer did.

But let’s say that we respect officers, and that sense of respect could explain the perception differences. Let’s then pick a class of people we tend to like less. Lawyers? No, politicians! Consider this scenario:

The full list of questions is found below, along with the results:

Next scenario: do we feel differently depending on whether one kills a dog or a pedestrian?

Here are the differences in effect sizes (driver is in blue, driverless car is in red).

Take the 2nd row now. It corresponds to the question "Would you consider hiring the driver [driverless car]?". It turns out that people forgive the driver far more than the car for the very same action (they would even consider hiring the careless driver). That might be because they ascribe more intention to the car (3rd row), yet less moral agency (5th row), and more responsibility (9th row). Also, as one would expect, they empathise more with the driver than with the car (10th row).
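As an aside, here is a hedged sketch of one standard way to compute such human-vs-machine effect sizes (Cohen's d, the standardized mean difference); the book may well use a different estimator, and the data layout is the same hypothetical one as above:

```python
import numpy as np
import pandas as pd

def cohens_d(human: pd.Series, machine: pd.Series) -> float:
    """Standardized mean difference with a pooled standard deviation."""
    n1, n2 = len(human), len(machine)
    pooled_var = ((n1 - 1) * human.var(ddof=1) + (n2 - 1) * machine.var(ddof=1)) / (n1 + n2 - 2)
    return (human.mean() - machine.mean()) / np.sqrt(pooled_var)

ratings = pd.read_csv("ratings.csv")
subset = ratings[ratings.scenario == "driver_vs_driverless"]  # hypothetical label
for question in ["harm", "intention", "wrongness"]:
    d = cohens_d(
        subset.loc[subset.condition == "human", question],
        subset.loc[subset.condition == "machine", question],
    )
    print(f"{question}: d = {d:.2f}")
```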

Overall, summarizing the results across the 80+ scenarios, Cesar's takeaways were:

Interestingly, the results were explained with Moral Foundations Theory and its five foundations: Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation.


Then, by looking at all the results together, you can map the moral space!
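One way to read "mapping the moral space" computationally: treat each scenario-condition pair as a point whose coordinates are its mean ratings, and project those points into two dimensions. A minimal sketch, assuming the same hypothetical table as above (the book's own mapping may use different dimensions and methods):

```python
import pandas as pd
from sklearn.decomposition import PCA

ratings = pd.read_csv("ratings.csv")

# One point per (scenario, condition): its mean rating on each question.
points = ratings.groupby(["scenario", "condition"])[["harm", "intention", "wrongness"]].mean()

# Project the points onto two principal components to get a 2D "moral map".
coords = PCA(n_components=2).fit_transform(points)
moral_map = pd.DataFrame(coords, columns=["dim1", "dim2"], index=points.index)
print(moral_map)
```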

p.s. a couple of other scenarios for you:

Written by danielequercia, social media researcher @ bell labs
