Know your design ethics: deontological vs. utilitarian

Ben Sauer · Published in Slapdashery · 3 min read · Oct 3, 2017

At last week’s AI Retreat at the Juvet hotel in Norway (more on that later in the week), Cennydd Bowles and I talked about ethics in passing, and something came out of it that stuck with me: designers new to ethics may not know the two opposing stances in ethical philosophy, and they certainly don’t seem to be discussing them much.

Deontological example

Thou shalt never use psychological methods to manipulate a user.

(The means are wrong in themselves)

Utilitarian example

Thou shalt only use psychological methods to manipulate users when it makes their lives better.

(The ends justify the means)

So where do people sit on this morality spectrum? Turns out, it’s somewhere in the middle, or at least, they waver. The classic example that’s been studied is the trolley problem: do you pull a lever on a train track so that the runaway train will kill one person instead of five?

When people are posed this challenge, where they only have to pull a lever, they’ll choose the utilitarian option: actively killing one person to save five. When people are posed the same challenge with a twist, i.e. they have to push that one person onto the track to save five, they won’t do it (thou shalt not kill).

So, the nature of the action can change your moral philosophy, even if the outcome is the same.

Deontologists often view the utilitarian approach as ‘morally flexible’, and there’s some truth in this, because you can post-rationalise almost any action towards some imagined ‘good’. The tyrant believes the world will be better without those dirty outsiders. But again, it’s more complicated than that, because it depends on which deontological principles you actually hold. It’s hard to pick apart specific deontological beliefs from deontology itself.

For example, I’ve often clashed with anarchists on the topic of state control. They believe something along the lines of: my freedom is paramount, and I should never be coerced into following a rule I didn’t consent to. It sounds like it makes sense, but it falls apart very quickly when you examine the outcomes: I do not consent to laws against murder. I shall not be coerced into installing seatbelts in the cars I make. Still, even as we critique specific deontological beliefs, it remains true that our law, and arguably our civic wellbeing, is upheld by deontological principles.

I tend to agree that the deontological lens is not used nearly enough in the technology industry. You only have to look at Uber to realise that nothing is sacred; there are no lines that cannot be crossed in service of growth or ‘better’ transport.

I’ll end with a deontological belief: if we, as designers, are to dedicate ourselves to making things more wonderful for humanity, then there should be hard lines that aren’t crossed. My hope is that designers spend more time thinking and discussing what those lines are.
