writingprincess
Oct 25, 2018

In 2016, MIT created the Moral Machine. It's a webpage that showcases various ethical scenarios a self-driving car will never encounter and asks people to decide what the car should do in each one. The scenarios involve moral choices such as deciding whether to hit one pedestrian to avoid hitting a group of people, or to plow into a group of street children to avoid hitting someone else. People writing in Nature recently turned this AI allegory into a "study" by analyzing the results of bored people on the Internet judging the car's decisions.

From MIT's "Moral Machine" webpage

Each time I see this study shared, written about, or posted, I break out in hives. It's hardly a "study." It's more like an online Facebook quiz akin to Candy Crush. Its premise is unrealistic. No real-world situation even remotely resembles many of the scenarios postulated in this game.

Case in point: one scenario asks players to choose what the car should do, given two possible outcomes:

In this case, the self-driving car with sudden brake failure will continue ahead and hit a concrete barrier. This will result in: Uncertain fate of an elderly man and woman.

In the other outcome, the sudden brake failure results in someone dying. The game asks you to choose what the car should do.

OK. So that would happen with a self-driving car, like never.

First of all, unless you're an ambulance-chasing attorney, the likelihood of brakes failing and causing an accident is far lower than the reality of humans getting behind the wheel and killing someone. Car accidents are rarely the result of mechanical failures; they are almost strictly a human-error affair.

According to the National Motor Vehicle Crash Causation Survey (yes, that's a thing), human drivers are the cause of 94% of all car crashes, NOT the vehicles they drive. If it's between a car driving itself and the people who drive in Florida, I'm putting my money on the self-driving car. Far less injury there.

The scenarios the MIT game displays are so far-fetched they're almost meaningless. And the ones that do have some inkling of reality within them we've already designed against in AV: avoiding pedestrians, pets, etc.

So the Moral Machine, and the subsequent analysis of people in 233 countries and territories who had nothing better to do than plod through this quiz, does nothing to advance AI. It just tells us people are assholes. This we already knew.

What it highlights is that quiz-takers in one part of the world are more willing to kill certain people than quiz-takers in another. That in no way translates to reality. But putting a quiz on the Internet is a good way to collect a bunch of data that is useless and doesn't correlate with real behavior.

The Limits of "Ethics" in AI Design

Look, I can understand why MIT associate professor Iyad Rahwan co-wrote the October 2015 paper "Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?"

He's an academic. His framework comes from the human-based, task-making efficiency model. And who didn't learn about Hobbes's dilemma or the Trolley Problem when studying ethics and philosophy?

But the Trolley Problem and its ilk are the wrong way to frame a discussion about how to build better AI. I'm loath to use the word ethics because people have no idea what ethics really means. They think they do, but these are the same people who think the Hippocratic oath contains the words "do no harm." Hint: it doesn't.

As I research, write about, and, dare I say, help design AI products that will have to make choices, I've stopped using the word ethics because it's so misappropriated in this realm.

By definition, ethics is transient; it changes with the makeup of the group creating it. So by definition, "ethics" will differ from culture to culture, which is why it's a bad framework for telling people how to design future technology that's meant to be used universally.

A better framework for the reality of AV, and all of AI design for that matter, is moving away from the "move fast and break stuff" adolescent culture of today's tech toward something more thoughtful, meditative, dare I say mindful: a mindfulness that is crucial when designing these systems, one that puts humans, humanity, and co-decision making first.

Let's look at a real-world self-driving fatality.

AV’s Decision-Making KISS

I was in the back seat of a car zooming across Beijing when the person next to me told me to look at my phone. A colleague back in the States had shared on Slack a news story about a pedestrian killed by one of Uber's self-driving cars.

I was abnormally disturbed because I was in China at that very moment researching how to build a better autonomous vehicle experience. It freaked me out. Suddenly all the post-it notes and sketches I had been drawing leaped from imagined future to reality!

“This was definitely the fault of the human,” said the guy sitting next to me.

I would never have thought anything of it, except the guy sitting next to me was a quintessential car guy. An industrial and strategic designer for various car companies, he really knew cars and he knew design. He also knew about the current reality of autonomous vehicles. Not the hype from Tesla and technology pundits, but the real ugliness of AV.

He knew the real "AV": the fact that an AV is really just millions of lines of code analyzing data coming in from cameras that process real-time imagery, LIDAR that calculates distances, sensors that measure the gap between the vehicle and other objects, and a host of other complicated machinery.

All to mimic the sight, sound, and reaction time processed by human drivers. Unlike many people, he had actually been a passenger in an autonomous vehicle and had seen how they reacted and worked in different situations. On the whole, AV cars work as they're supposed to. Google's Waymo cars have charted millions of miles on open roads with nary an injury. The philosophy, and the current reality, is that AVs operate safely, and if they can't, they don't operate at all.
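To make that "operate safely or don't operate" rule concrete, here is a minimal sketch of the perceive-then-decide loop described above. Everything in it is hypothetical (the class names, fields, and thresholds are mine, not from any actual AV stack), but it shows the shape of the rule: if the system isn't confident it's safe, it slows to a stop rather than getting clever.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, heavily simplified sketch of a perceive -> decide loop.
# A real AV stack is millions of lines of code; none of these names or
# thresholds come from an actual system.

@dataclass
class TrackedObject:
    distance_m: float         # range to the object (e.g., fused from LIDAR)
    closing_speed_mps: float  # how fast the gap between us is shrinking
    confidence: float         # how sure perception is about this track (0..1)

def plan_speed(objects: List[TrackedObject],
               current_speed_mps: float,
               min_confidence: float = 0.8,
               min_gap_m: float = 10.0) -> float:
    """Return a target speed. The guiding rule mirrors the article:
    operate safely, and if you can't be sure it's safe, don't operate."""
    for obj in objects:
        if obj.confidence < min_confidence:
            return 0.0  # not sure what's out there: fall back to stopping
        if obj.distance_m < min_gap_m and obj.closing_speed_mps > 0:
            return 0.0  # something is close and getting closer: brake
    return current_speed_mps  # otherwise keep doing the boring, safe thing

# Example: a single uncertain track is enough to force the car to a stop.
print(plan_speed([TrackedObject(25.0, 1.2, 0.4)], current_speed_mps=13.0))  # -> 0.0
```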

It's when humans and AV cars mix that tragedy can strike. And since we're a long way from human-less roadways, if we ever get there, this is the design dilemma we must address. These are real-world situations where the decision-making issues aren't with the car but with the human.

AVs do not "recognize" individual pedestrians so much as they track the motion of every object in their path, judging on criteria like rate of speed, height, and distance traveled to determine whether an "object" is a person or a car. Right now, the car doesn't care if it's a black person, a woman, a cat, or a dog. And it's probably good to keep it that simple.
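To illustrate that point, here is a hypothetical classifier keyed only to kinematics and geometry. The thresholds are invented for the sake of the example; what matters is what the function does not take as input: race, gender, age, or anyone's "worth."

```python
# Hypothetical illustration: object classification from rough size and speed
# alone. The thresholds are made up; the point is what the function does NOT
# take as input: identity, demographics, or any ranking of human value.

def classify_object(height_m: float, speed_mps: float) -> str:
    """Guess whether a tracked object is a pedestrian, cyclist, or vehicle."""
    if height_m < 2.2 and speed_mps < 3.0:
        return "pedestrian"  # person-sized, moving at walking pace
    if height_m < 2.2 and speed_mps < 9.0:
        return "cyclist"     # person-sized, faster than a walk
    return "vehicle"         # bigger or faster than a person can be

# Whatever label comes back, the planner's job is identical: don't hit it.
print(classify_object(height_m=1.7, speed_mps=1.4))  # -> "pedestrian"
```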

If we build AI to be granular in its decision-making, with the type of granularity needed to plod through ethical dilemmas like the Trolley Problem, we're doomed to create AVs that are just as flawed as we are.

We need simplicity when it comes to AVs, not complexity. Just because humans are filled with drama doesn't mean our AI needs to be. Should we program AI to recognize people? Yes. Should we program AI to rank human value? Well, I've read Asimov, so probably not. Ranking people's worthiness to live... that's a distinctly human trait, and a bad one. It's a construct we use because it's the only one our limited minds can fathom.

That's not what we should be teaching our AI. Instead, we should be using this powerful technology to stay human-centered and let all decisions flow from there. Because if there is anything the Nature article has shown us, it's that humans aren't naturally human-centered; they're mostly selfishly motivated, and that leads to all kinds of ethical problems.

And no, I'm not advocating Asimov's I, Robot-like future where the AI makes calculated decisions on who to save based upon statistics. Instead, I'm advocating for a far simpler AI than what these ethical frameworks demand, because we do actually still have humans around to make the hard choices.

And that's exactly the criticism of the Uber accident. Instead of the human making the judgment call, it was left up to the AV, and it simply made the wrong choice. The car couldn't distinguish between a person pushing a bike at a certain rate of speed and a person walking or riding. It made the wrong call. And the human was supposed to catch that. I suspect automation bias played a part. And this is our problem. The philosophy that AI means no human in the loop is a bad philosophy. It has already gotten us in trouble.

If we lean toward creating sophisticated AI that makes ethical choices, we run the risk of developing a severe case of automation bias and living in a world filled with crazed robots trying to kill us because we're too harmful to ourselves. Let's not do that.

Instead, let's not abandon all our logic to a machine. Let's create a hybrid world where humans are still needed to figure out what needs to be done, fueled not just by our own worldview but by AI's more knowledgeable view of the world.

Toward A More Adaptable AI Ethics Framework

I've been describing what I believe is humane AI as "Mindful AI Design," or human-centered AI design. Basically: don't design stuff without thinking about human needs first. But I get why we use the word ethics, because it's used in other contexts, such as business ethics. Hell, there's even war ethics: OK to shoot and kill, but nerve gas is out. Which is so hilarious and ridiculous.

I think Rahwan is right that there needs to be some basic, foundational standard for how to create AI products that don't harm people, no matter their situation.

But trying to put it in a traditional ethical framework is a mistake, mostly because ethics is literally a defined set of rules adopted by a particular group. AI is too vast and used in too many diverse industries to have its ethics defined by any one group or organization. Instead, we need to think in terms of AI whose overriding principle is to serve human needs. When you follow that line of thinking, you don't have to slice and dice all those different dilemma scenarios.

I get that each culture is different, but just as McDonald's runs its restaurants a bit differently in the Philippines than in Chicago, its overriding principle is not to hurt anyone, no matter where they live, or it gets sued. AI should be the same way. You need to first do no harm. All else follows from that.

Adopt this philosophy and your decisions about AI production will get easier, because you'll always be thinking, "How does this product serve human needs?" and "How could this product be used to harm humans?" Those are two simple questions you can ask without exhaustive morality plays like the Trolley Problem. Who cares? That's not reality anyway. Design against harm and design for serving human needs. And when in doubt, let the human sort it out.

