Should a Machine Make Moral Judgements?

Peter van der Putten
4 min read · Nov 20, 2016

Machines seem to be developing all kinds of human qualities and abilities. Some argue that robots have already ‘cracked’ human intelligence, learning, game playing and creativity, so perhaps it is now time to move into another area that is considered quintessentially human: ethics and moral judgement.

This can be seen as a scary development, but if a superintelligence ever does arise, machines had better have an understanding of human values and norms. Robots are essentially optimization machines, and if you don’t teach them ‘what is good’ this can have some very nasty side effects. So, relatively recently, computer science researchers have started to work on a range of topics related to ‘Robot Ethics’.

The issue is that you cannot foresee rules for every situation in advance, so there needs to be some element of machine learning. For instance, Mark Riedl and his team at the Georgia Institute of Technology use stories to teach machines how to act in mundane, everyday situations, such as picking up medication at a pharmacy: get the pills for your master as quickly as possible, but not at the expense of killing the humans queuing in front of you. As another example, MIT researchers are crowdsourcing solutions to moral dilemmas for self-driving cars, both to obtain training material and to raise awareness. If the brakes stop working, should the car swerve left and kill granny, or crash at full speed into the vehicle in front of it, possibly doing more damage?

However, within ethics and social science research itself, such intelligent empirical approaches are still scarce. In a recent paper presented at the Intelligent Data Analysis conference, we introduced the Morality Machine, a system that has learned to understand and track ethical sentiment in Twitter discourse [Teernstra et al, 2016]. The system is based on a theoretical framework from ethics, Moral Foundations Theory, which stipulates that statements can be categorized into six major foundations built around concepts such as care, fairness, loyalty and authority. We then labelled 2,000 tweets about the Grexit with the most appropriate moral foundation, and used this data to teach an algorithm to classify new tweets in real time. And yes, we could have used the Brexit or the US elections, but then the discussion would have been about the topic rather than the research.
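To make this a bit more concrete, here is a minimal sketch of the kind of pipeline involved, written with pandas and scikit-learn. The file name, column names and the choice of TF-IDF features with logistic regression are illustrative assumptions, not the exact setup from the paper.

```python
# Sketch: train a multi-class classifier that maps tweets to moral foundations.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: one tweet per row, plus one moral foundation
# label per row (e.g. 'care', 'fairness', 'loyalty', 'authority', ...).
tweets = pd.read_csv("grexit_tweets_labelled.csv")  # columns: text, foundation

# Learn keyword features directly from the tweets and fit a simple classifier.
model = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets["text"], tweets["foundation"])

# Classify a new tweet 'in real time'.
print(model.predict(["Pensioners queue at closed banks while Brussels debates"]))
```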

Some of the technical details of the results can be found in the image below, but to translate them into plain English: the machine placed tweets in the ‘correct’ category in 64.7% of cases, which is a good result given the ambiguity of tweets like the examples above, and, although not evaluated in depth, some of the results suggest this is close to human performance (volunteers welcome). The amount of labelled tweets also appeared to be sufficient for the algorithms used.
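If you want to sanity-check numbers like these on your own labelled tweets, one rough way is cross-validation, plus a learning curve to see whether a few thousand labels is enough. This continues the hypothetical pipeline sketched above and is not the evaluation protocol from the paper.

```python
# Sketch: estimate accuracy and check whether more labels would still help.
import numpy as np
from sklearn.model_selection import cross_val_score, learning_curve

# Cross-validated accuracy of the pipeline defined above.
scores = cross_val_score(model, tweets["text"], tweets["foundation"], cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Does accuracy still improve with more labelled tweets, or has it levelled off?
sizes, _, test_scores = learning_curve(
    model, tweets["text"], tweets["foundation"],
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)
for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:5d} labelled tweets -> accuracy {score:.3f}")
```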

Until now, the alternative approach was to use keyword vocabularies painstakingly handcrafted by social scientists. So we also compared algorithms that built their keyword vocabularies fully automatically with models that could additionally leverage the handcrafted ones, and the performance was the same. In other words, letting the system learn the best keywords from scratch was at least as good. We could then use the algorithm to see how moral sentiment changes over time in the Twitter discourse on the Grexit. See below for more details on the experiments and outcomes.
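A rough way to run such a comparison on data like this is to restrict the vectorizer to a fixed keyword list and score it the same way, and then reuse the learned model to track foundations over time. The tiny word list and the ‘timestamp’ column below are purely illustrative stand-ins, not the dictionaries or data from the paper.

```python
# Sketch: handcrafted keyword vocabulary vs. vocabulary learned from the data,
# plus tracking predicted moral foundations per day.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Tiny illustrative stand-in for a handcrafted moral keyword list; the real
# vocabularies built by social scientists are far larger.
handcrafted_vocabulary = ["harm", "care", "fair", "cheat", "betray", "loyal",
                          "obey", "authority", "pure", "degrade"]

dictionary_model = make_pipeline(
    TfidfVectorizer(vocabulary=handcrafted_vocabulary),
    LogisticRegression(max_iter=1000),
)
dict_scores = cross_val_score(dictionary_model, tweets["text"],
                              tweets["foundation"], cv=5)
print(f"handcrafted-vocabulary accuracy: {dict_scores.mean():.3f}")

# Tracking moral sentiment over time: label every tweet with the learned model
# and count foundations per day (assumes a 'timestamp' column in the data).
tweets["timestamp"] = pd.to_datetime(tweets["timestamp"])
tweets["predicted"] = model.predict(tweets["text"])
per_day = (tweets.groupby([pd.Grouper(key="timestamp", freq="D"), "predicted"])
                 .size().unstack(fill_value=0))
print(per_day.tail())
```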

So what do you think? Could Moral Machines assist humans, or at the very least philosophers, economists and social scientists? Or to put it in a moral fashion: should they?

Note: This work is based on Media Technology thesis research by Livia Teernstra, who should be considered the principal researcher.

[Teernstra et al, 2016] Livia Teernstra, Peter van der Putten, Liesbeth Noordegraaf-Eelens and Fons Verbeek. The Morality Machine: Tracking Moral Values in Tweets. In: Fifteenth International Symposium on Intelligent Data Analysis (IDA), 2016.

Note: an earlier version of this story was published as a LinkedIn article
