How the Moral Machine Test is becoming a terrifying reality.

Aleksandra Przegalinska · Published in The Startup · Mar 23, 2020


Remember the Moral Machine Test (http://moralmachine.mit.edu)? It’s a platform designed by the Scalable Cooperation group at the MIT Media Lab to gather a human perspective on moral decisions made by self-driving cars. The group generated moral dilemmas revolving around the well-known trolley problem, in which a driverless car has to choose the lesser of two evils: for instance, killing two young people or killing a pregnant woman and a senior. More than 2 million people took the test, indicating which outcome they found more acceptable, and could then see how their responses compared with those of other participants. At the end of the test, each person received a summary of their own biases. For many, the Moral Machine was the first time those internal biases became painfully explicit, verbalised and tangible.

I noticed this with my students of Artificial Intelligence and Management at Kozminski University, for whom it was a shocking experience: trained in building machine-learning models, they had rarely felt the moral gravity of those models in the way the Moral Machine Test exposed it.

Many critics of the Moral Machine approach, lawyers among them, rightly pointed out its lack of compliance with human rights, which are “inherent in all human beings, regardless of their age, ethnic origin, location…


Aleksandra Przegalinska: Artificial Intelligence fan and researcher. Associate Professor at Kozminski University. Former post-doc at MIT and current Senior Research Associate at Harvard LWP.