The Juvet A.I. Retreat

Ben Sauer
Published in Slapdashery
Oct 6, 2017

I must have lucked out somewhere along the line; last week’s trip to the Juvet Landscape Hotel was an extraordinary moment in my life. A setting so perfect it seems vaguely unreal (but perhaps that’s just the movie location talking!). Challenging conversation, delicious food, tranquility: my colleague Andy Budd pulled off something very special (so much so that I think we’re going to do it again). I also met some remarkable people: thanks to all of them for making the leap to join us. I’ll leave the activities and outputs of the retreat to others (keep an eye on Josh Clark for more); here are some of my musings.

“All problems are people problems.”

There’s currently an enormous gap between the reality of contemporary AI (better known as deep learning), which I’ll call A, and the popular conception of what’s coming (embodied AI, robots, consciousness, replicants), which I’ll call B.

A is already having an effect on society. Example: we can talk to machines now (in a limited way), after thousands of years of dreaming of it. We know A is going to continue to challenge us with weird and wonderful effects, but I suspect we won’t deal with them well unless we get past people thinking that B is happening.

And that brings me to perhaps my principal Juvet musing. We think the problems that AI poses might be new, but my hypothesis is that the root problems are not really new at all: a novel piece of technology obscures their unoriginal nature, further confused by the collective dreams we’ve been having about it for decades.

Example: Bill Thompson and I spent some time at Juvet discussing why the public’s perception of AI is so poor. TLDR: journalists have no incentive to get it right. If more people click on a pic of a sexy robot, then so be it. Happy editor, more ads, and repeat.

Or, the labour problem. Let’s say self-driving trucks leave huge swathes of truck drivers unemployed. Who foots the collective, societal bill for that? Tech companies love to externalise a problem like that. Nothing new for corporations.

Is a media industry that enables mass deception new in any way? Have there been changes in technology that have rendered human endeavour useless in the past? Continually.

Or, let’s take the bias problem. We create racist algorithms because the data we put into them is based on us, i.e. a bunch of racists.
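
To make “bias in, bias out” concrete, here’s a minimal sketch (all groups, records, and labels are hypothetical) of how even a trivially simple model, trained on records that encode past human bias, learns that bias back verbatim:

```python
from collections import Counter

# Hypothetical historical "stop-and-search" records: (group, was_flagged).
# The labels reflect past human judgement, not ground truth.
history = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
        + [("group_b", True)] * 20 + [("group_b", False)] * 80

# The simplest possible "model": predict the majority label per group.
counts = {}
for group, flagged in history:
    counts.setdefault(group, Counter())[flagged] += 1

model = {group: c.most_common(1)[0][0] for group, c in counts.items()}
print(model)  # {'group_a': True, 'group_b': False}: the bias, learned verbatim
```

The model isn’t malicious; it just faithfully reproduces whoever labelled the data.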

Let’s say the algorithms start making more decisions behind closed doors: is it the first time we’ve been worried about the obfuscated nature of bureaucratic decision-making that oppresses individuals? Kafka wrote volumes on the subject.

A real-life example: I recently consulted for a startup with a deep learning algorithm that can identify shoplifting before it happens (based on fairly consistent body language). Much as this seems like the dawn of a new Minority Report era, I’m not 100% sure it is. I’ve been eyed up by a suspicious security guard enough times to know that we’ll inevitably replicate problematic decision-making that already exists today.

AI presents old problems disguised as new ones, so let’s keep looking for the root issues. You don’t ask a plumber to simply cap a leaky pipe when the pressure’s too high; you turn down the pressure first. One problem leads to another.

The hand-wringing about embodied AI reminds me a little of the anxiety about cloning. What will it be like when we clone humans? There are already plenty of clones roaming around unnoticed; we commonly call them ‘twins’.

Excuse my anthropomorphism for a moment: perhaps creating AI is like parenting. What values do you choose to imbue it with? Which of your own are you unaware of? Your kids might do new things, but they’re borne back into the past, beating against the current of what we teach them. Which leads me to wonder if…

AI = new behaviours, old problems.

Old problems. We can’t challenge them unless we break through our own conceptions of the world. Bring on the weirdness: the freaky ideas, the unsettling ones, the ones that decouple us from our conception of how the world should be. Explore the liminal spaces, the taboo, the sacred; cross the boundaries of normality in order to redraw them. For if we do not carve out how we want that future to change us (and it will), we won’t have a say in it. If we can’t think beyond our existing metaphors, we won’t smell that future coming. I’m lucky to have been with people who have good tools for doing that.

Epilogue: with thanks to Chris Noessel for his excellent Future Wheel workshop, I’ll be using his method to make some wacky predictions on Slapdashery every Friday from now on.

