Machine Ethics Part 1: Introduction to Reasonable Machines and Machine Ethics

Franziska Lippoldt
4 min read · Dec 5, 2017

--

What MIT's Moral Machine experiment has shown is that there is a huge need for ethical reliability in technology development. From fridges that integrate into our daily life to autonomous cars: the more we integrate technology into our daily life as an emotional support or social device, the more we have to care about rules and standards for how machines interact.

The required ethics is complex and demands responsible action. Specifically, the type of ethics needed for machines varies. But the more deeply we integrate machines into society, the more complex they get. Asimov's laws are just the beginning of a thought on machine ethics. Machine ethics needs a basis of safety, trust and knowledge.

Overview of the topic of machine ethics and its main points of interest

What follows are common misconceptions about technology development, and the reasons why we cannot separate machines from ethics.

We expect too much, too fast

An average human spends their childhood with their parents, in the best case 18 years, before they become a responsible individual in society. Yet with all of the new machines and gadgets coming to the market to improve our social life, we expect them to behave properly already at the first stage of product design. That is to say, we expect machines to adapt to human laws within one or two years, something humans take 18 years to learn.

We need to prove reliability before technology is adopted by a broad audience

Let’s start with one of the basic machines we have today: a coffee machine. We can (if desired) come up with a new model, a new type of beans or powder and a completely new taste. In the worst-case scenario, some batteries are defective. Customers will require us to replace them, and we might have to pay a large fine.

Yet current technology start-ups are not interested in producing common coffee machines. Technology inventions nowadays do far more than just that: they help people connect to each other, they adapt to your personal style, they coordinate your daily life... in other words, they make you smart...

Think about it. You are placing your life in the hands of technology that has been developed for a few years at best, while you have been on earth trying to find a perfect life for 20 years or more.

What is happening right now, given that technology develops exponentially, is that the devices being developed are far more futuristic. Autonomous cars, for example, are one major field of interest: cars that act responsibly on the street, decrease the number of annual accidents, and at the same time integrate into daily life. Yes, we expect them to be safe. And yes, we expect them to help us and be a great asset in daily life.

But how do you actually prove that? Can we just release autonomous cars into normal traffic and hope for the best, because we had a team of great software engineers who never make mistakes?

Now imagine our future product design is a coffee shop inside an autonomous car, i.e. an autonomous coffee shop. We could target the right customers at the right time by calculating optimal routes for the car, passing by universities, office complexes and train stations.
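To make this concrete, here is a minimal sketch of such route selection. Everything in it is a made-up illustration: the stops, the demand numbers and the peak hours are assumptions, not real data, and a real system would use an actual demand model and road network.

```python
# Hypothetical sketch: pick stops for the autonomous coffee shop
# by scoring candidate locations by expected customers at a given hour.
# All names and numbers are illustrative assumptions.

def expected_customers(stop, hour):
    """Toy demand model: a stop sells more during its peak hours."""
    return stop["base_demand"] * (3 if hour in stop["peak_hours"] else 1)

def best_route(stops, hour, max_stops=2):
    """Return the names of the stops with the highest expected demand."""
    ranked = sorted(stops, key=lambda s: expected_customers(s, hour), reverse=True)
    return [s["name"] for s in ranked[:max_stops]]

stops = [
    {"name": "university",     "base_demand": 20, "peak_hours": {9, 10}},
    {"name": "office complex", "base_demand": 30, "peak_hours": {8, 13}},
    {"name": "train station",  "base_demand": 25, "peak_hours": {7, 8, 17}},
]

# Morning rush: the office complex and the train station win.
print(best_route(stops, hour=8))
```

The point of the sketch is only that "targeting the right customers at the right time" is an optimization problem, and its objective function is exactly where the ethical questions discussed below enter.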

That product is a great idea: we no longer need any employees, and the car can sell coffee at prime time. What a convenience. But the list of disadvantages grows along with the list of advantages: from traffic accidents, to spilling coffee over customers, to not selling any coffee, to an exploding car (in the worst case in the middle of a busy road).

Advantages and disadvantages come as a pair.

The more we expect technology to be disruptive, the more we need to cover its possible negative sides. Future technology will be positive, negative or somewhere in between, depending on how we decide to constrain it, how we shape the future and how we deal with mistakes.

What I ask for is … ethical debugging for “social machines”

Prove that your product is doing the right thing before bringing it to market. Simulate your product in different environments. But this issue should not be dealt with at the company or start-up level alone; it should be supervised by society itself. Especially large companies, which have the means to fund deep research, are responsible not only for creating new technology but also for proving that this technology is well-behaved in various situations in society.
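One way to read "ethical debugging" is as a test suite run over simulated environments: before release, the product's decision logic is exercised against scenarios and checked against explicit safety rules. The sketch below is a hypothetical illustration using the autonomous coffee shop; the controller, the scenarios and the rules are all assumptions, not a real product's logic.

```python
# Minimal sketch of "ethical debugging" as scenario-based testing.
# Controller, scenarios and safety rules are hypothetical illustrations.

def controller(scenario):
    """Toy decision logic for the autonomous coffee shop."""
    if scenario["pedestrian_ahead"]:
        return "stop"
    if scenario["battery_level"] < 0.2:
        return "return_to_depot"
    return "drive"

SCENARIOS = [
    {"name": "busy crossing", "pedestrian_ahead": True,  "battery_level": 0.9},
    {"name": "low battery",   "pedestrian_ahead": False, "battery_level": 0.1},
    {"name": "open road",     "pedestrian_ahead": False, "battery_level": 0.8},
]

def ethical_debug(controller, scenarios):
    """Return the names of scenarios where the controller breaks a safety rule."""
    failures = []
    for s in scenarios:
        action = controller(s)
        # Rule 1: never keep driving toward a pedestrian.
        if s["pedestrian_ahead"] and action != "stop":
            failures.append(s["name"])
        # Rule 2: never keep driving on a near-empty battery.
        if s["battery_level"] < 0.2 and action == "drive":
            failures.append(s["name"])
    return failures

print(ethical_debug(controller, SCENARIOS))  # an empty list means all checks passed
```

The hard part, of course, is not the harness but agreeing on the rules and on which scenarios count, and that is exactly why the article argues this should be supervised by society rather than left to each company.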

We need to prove that machines integrated into social life are trustworthy, and that those machines conform to ethical standards.

Humans are considered trustworthy once they have passed their childhood and become adults. We need to specify what an “adult machine” is, i.e. a trustworthy machine. The following articles in this series will discuss trust in future machines and in the developers behind them.
