The Moral Machine

Karl Fezer
Published in The Codex
Oct 6, 2016
On the Phrenology scale, I’m somewhere between a dog and that guy who would bowl in your lane when you weren’t looking.

I’m going to start this post with a disclaimer: I’m bad at ethics. I’m not saying I’m a bad person (I totally could be), but in terms of computers and ethics, let’s just say I literally and figuratively chose not to attend that class. That said, why not push the boundaries of what we can do? (See the Manhattan Project and the Redstone rockets.)

You may or may not have heard of the Moral Machine, a study conducted by MIT. It’s formatted as a binary survey: in the scenario of a runaway car at a crosswalk, who would you kill?

Step One for better AI Drivers: Start ticketing jay-walking dogs.

You also get to see your results at the end compared to averages from everyone who has taken it thus far.

I’m curious if a newer version covers Trump v. Hillary supporters.

While it’s set up as a learning tool for AI-driven vehicles, it is more of a study of humans and our own morality. There are two main reasons I say this:

  • At this point, humans teaching machines how to treat other humans has a notoriously bad track record (see Microsoft’s racist chatbot, Tay).
  • By the time autonomous cars are widespread, these situations should be avoidable with the proper application of sensors. However, at some point, worst-case scenarios like these will have to be thought out in the programming, even if the rule is simply to save the greatest number of people in any given scenario (a minimal sketch of such a rule follows this list).
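
To make that concrete, here is a minimal sketch of what a “save the greatest number” rule could look like, assuming a hypothetical planner that has already estimated expected casualties for each possible maneuver. The names here (`Outcome`, `choose_maneuver`, the maneuver labels) are illustrative, not from any real autonomous-driving stack.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One candidate maneuver and its estimated human cost.
    (Hypothetical structure; real systems would be far richer.)"""
    maneuver: str     # e.g. "brake_straight", "swerve_left"
    casualties: int   # expected number of people harmed

def choose_maneuver(outcomes: list[Outcome]) -> Outcome:
    # Pure utilitarian rule: pick whichever option is expected
    # to harm the fewest people, and consider nothing else.
    return min(outcomes, key=lambda o: o.casualties)

options = [
    Outcome("brake_straight", casualties=2),
    Outcome("swerve_left", casualties=1),
]
print(choose_maneuver(options).maneuver)  # -> "swerve_left"
```

Even this toy version makes the design problem visible: the moment you pick the key you minimize over, you have encoded a moral stance (here, that all lives count equally and nothing else matters).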

This experiment is really an ice-breaker to pave the way for machines to make decisions concerning human life. It may seem far-fetched, but as our AI moves beyond the Internet and factory floors and out into the wild, controlling a few thousand pounds of metal, this type of thinking is inevitable.

However, we have already shown lenience toward machines in the case of the Tesla Autopilot fatality. Teslas weren’t recalled, people are still using the Autopilot feature, and there wasn’t a massive public outcry. Justifiably, the driver was held accountable for his own death, as the Autopilot feature was still in beta testing. To that end, I don’t see the future of judging AI being any different; it’s better than the alternative: us.

One thing the MIT Moral Machine study shows is that AI is going to be held to a higher standard than humans. We have already seen that it is better than human drivers at avoiding accidents. We would never blame a human in any of the situations outlined in the Moral Machine study; we empathize with our own shortcomings in rapid decision making. Since machines aren’t limited by reaction time, we can and will hold them to a higher standard. I’m not saying there is going to be blame when a machine kills a car’s driver instead of a dog in the road, but definitely expect a software update when incidents occur.

How we deal with machines making decisions in our world is going to be a learning experience for everyone involved, both human and machine. When you know a car is autonomous and has the ability to stop in time, will you be more tempted to jaywalk?

Going forward, I think we also need to consider how we treat machines. How courteous do we need to be? I’ve often noticed myself yelling at my Alexa when she doesn’t understand me or is improperly triggered. Just because a machine doesn’t have feelings (yet) doesn’t necessarily mean you should treat it that way. The Golden Rule should still apply, both to remind us of our own humanity and, more importantly, so that machines learn from our behavior. While they aren’t at a child’s or even a dog’s level of development yet, they will be soon, and we need to be ready to treat them that way. Just like children and dogs, machines are going to learn how to interact with us based on our responses. We need to be able to teach machines to be better people than we are.

