Can you teach a machine to be moral?

Maxwell Anderson
THE WEEKEND READER
Nov 21, 2015 · 6 min read

This post is part of a broader series on Machines and Morality. Below are some of my thoughts on the matter. I’ll lead with a few predictions.

Predictions for 2035 (20 years from now):

  • The field of “machine ethics,” which isn’t really a field yet, will become a major area of philosophical inquiry
  • Every major tech company will have at least one “machine ethicist” on staff. The larger companies will have whole departments.
  • Machine ethics will be a campaign issue in the 2036 presidential election, and there will be bills in Congress concerning the regulation of artificial intelligence programming.

You heard it here first.

AI and the Coming of Autonomous Machines

This month Google dropped a bomb on the tech world by announcing it will make TensorFlow, its platform for artificial intelligence, open-source and free to the world. As Cade Metz points out, this is not an unheard-of strategy for Google. It launched Android as a free smartphone platform in late 2008, and seven years later Android is the dominant platform for mobile phones, with 85% of the market. Maybe Google thinks the next market it needs to dominate is AI.
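If you’re curious what the newly open-sourced platform actually looks like, here is a minimal sketch, assuming the graph-based Python API TensorFlow shipped with in late 2015: you describe a graph of computations first, then execute it in a session.

```python
# A toy taste of TensorFlow circa its 2015 open-source release:
# describe a graph of operations, then run it inside a Session.
import tensorflow as tf

# Define the graph; no arithmetic actually happens at this point.
a = tf.constant(3.0)
b = tf.constant(4.0)
total = a * b + a

# A Session executes the graph, whether on a laptop CPU or many GPUs.
with tf.Session() as sess:
    print(sess.run(total))  # prints 15.0
```

The same few lines run unchanged whether the graph executes on one machine or a data center’s worth, which is part of what makes giving the platform away a plausible land grab.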

As big as Google is, this announcement is just the latest news in a larger developing trend. AI may be the next big thing in world history. Autonomous intelligent machines and robots would be the next big thing after that. Which means the next big question for society ought to be: if machines will be autonomous, will they also be moral? What will be the ethics of the technology we create?

What does ethical technology mean?

When I hear the word “ethics” my first thought is “right and wrong.” When I hear the word “technology,” my first association is “computers and digital tools.” But what if we thought of ethics simply as the study and practice of “those things which cause people and societies to flourish”? And what if we conceived of technology just as “tools and techniques for doing things?” With these definitions, it would be clearer that every invention is a technology and every invention inherently has an ethical question embedded in it: will this thing help people to flourish? Here are three examples.

  • Exhibit a): Plastic water bottles. They are a new technique for transporting water and conveniently staying hydrated, but all that plastic has a cost for the environment, and the environment is essential for human flourishing. Is the cost worth the convenience?
  • Exhibit b): Atomic weapons. In the mid-20th century, we created atomic bombs that could be used as a new tool for ending wars decisively and for disincentivizing future wars. But the technology is so horrifically devastating that some question whether their use, or their very existence, is a violation of basic human ethics.
  • Exhibit c): Corn. Experts say we will need to double world food production by 2050 if we are going to feed everyone and account for population growth. By genetically modifying corn, scientists can make each acre of farmland substantially more productive and efficient for feeding people. But many people, especially in Europe, worry that GMOs are worse for our bodies and our environment than most people understand.

These are just three of a multitude of technological ethics questions that already exist and are massively controversial. But I believe they are relatively simple compared to the moral dilemmas we will face with advanced intelligent machines. The difference is that while each of these ethical questions is hotly contested, at the end of the day it is people who are doing the contesting. With autonomous robots and machines, humans are no longer in the driver’s seat of making ethical decisions. And that makes us uncomfortable.

It’s a question of trust

If I were to sketch out how much I trust different things, it would look like this:

I haven’t met the theoretical person on the left, but I’m pretty sure I’d trust him. Otherwise, the people I trust most are those I know and have had good experiences with. Then I trust people whom others recommend highly, especially if I trust the recommender.

People I know and don’t trust are far below that. In fact, I’d put most dogs ahead of them — with a dog, you basically know what you are going to get and there’s a lot of loyalty in man’s best friend. But they are still below all humans whom I and others find trustworthy because sometimes dogs pee on the floor and tear up your furniture.

For me, cats are far less trustworthy than dogs. You never know what they are thinking. They are hot and cold. You’re always worried you’re going to offend the cat and it won’t let you pet it for a week. But still, I have more trust in cats (and all mammals) than I do in birds or lizards. My level of trust in birds is really low. The phrase “bird brain” exists for a reason. I’m not convinced they think about anything for longer than five seconds at a time.

The lowest of the low on the trust scale is a spider or any kind of insect. I’m afraid they’ll bite me. Female spiders regularly kill their mates. Spiders are icons of Halloween for a reason: they are super creepy! As Tim Urban says, part of what makes them creepy, besides their ability to bite, is that they are so foreign. You can imagine the mind of a dog, a cat, and a lot of other animals. Instinctually, you know that birds and lizards think differently (e.g., the “lizard brain”). But don’t you just have the feeling that insects and spiders are distinctly “other”? It doesn’t mean that they are evil, but they are so distant from humans that they are very difficult to empathize with.

The question with machine intelligence is whether a superintelligent computer is more like the benevolent genius or more like an unfathomably smart spider.

What drives technological optimists to think fondly about the future, I believe, is the conviction that autonomous robots and super AIs will look like the fellow on the left. Those who are more wary of the future worry that whatever we create won’t actually be like “us” and will be as foreign and potentially menacing as the spider in the basement. From what I’ve read, whether an “expert” expects one outcome or the other seems to be more a matter of the researcher’s temperament and beliefs than of established fact.

If we do indeed develop superintelligent machines, then making those machines more like the left-hand side of the page and less like the right is of paramount importance.

The advancement of artificial intelligence, and the ethical questions and opportunities it will bring, is one of the main reasons I’m working on a project to bring together technologists, entrepreneurs, pastors, and theologians to talk about philosophy, theology, and morality in the machine age. If you are interested in what we’re doing, have someone you think I should read or meet, or want to get involved in a future symposium, drop me a line.

Subscribe to The Weekend Reader, my weekly review of culture, ideas and technology, here.
