What AI can learn from UX Design’s mistake

Clo S
3 min read · Nov 1, 2017


I read a lot.

Sometimes I read things that I can't easily shrug off. I feel concerned because I can relate, because I experience these things as well, and possibly also because I've always had a soft spot for dystopian narratives.

I'm always slightly taken aback by people who don't grasp the threats that come with technological progress. That's partly because people outside the tech environment, and even more so those from older generations, rarely get to hear about or recognise those threats. Mainstream media barely acknowledge them. These are matters I often discuss with friends and family, as I feel an urge to raise awareness about such topics: mass surveillance, uncensored hate speech, attention theft, and mass manipulation, to name a few.


There are evidently plenty of articles, videos, and talks demonstrating the risks and wrongdoings of the digital systems we spend most of our days in. However, I'd like to come back to this article by Ian Leslie, published a year ago in The Economist's 1843 magazine.

In the late 1990s, the work of scientist B.J. Fogg gave rise to a new field of study, and with it a new tool: Behaviour Design.

Behaviour Design is 'persuasive technology' (I'm quoting the title of Fogg's book here); it sits at the crossroads of psychology and technology. Basically, it means shaping products in ways that influence user behaviour.

“The emails that induce you to buy right away, the apps and games that rivet your attention, the online forms that nudge you towards one decision over another: all are designed to hack the human brain and capitalise on its instincts, quirks and flaws.” — Ian Leslie

The article tells us that Fogg now worries about the practical applications of his research. Yup.


What happened is that we let people use these tools without making sure they were put to good, useful applications rather than to creating, say, social media addiction. Even more important than using them for good: people should simply know when they are being used, be able to recognise them, and be aware that some mechanism is trying to influence (dare I say manipulate) their behaviour, whether for good or bad.

Now, I know this isn't really doable. Once research is out there, one does not simply channel how its outcomes will be used. However, I firmly believe that we should work towards a way to regulate this, especially for research with such global implications.


And this is what we're going through with Machine Learning. Just as with Behaviour Design, there are tons of very cool, helpful, useful things we can do with it (feel free to check out this super insightful article about 🎧 Spotify's 'Discover Weekly' playlist), but we have to watch how and for what purpose it is used. DeepMind's principles are a nice start on this topic.

After raising awareness, I guess the next step for me will be to actually do something about it. Several companies and organisations fight for privacy, ethical design, and so on. Those are the places where I want to work.

Now let's get to work (photo from Death to Stock)


Clo S

Founder, This Too Shall Grow • Consultant & Coach in Mindful UX & Digital Wellness