‘Unboxing’ at Behavior Design Amsterdam #16

Kars Alfrink
7 min read · Sep 19, 2018


Below is a write-up of the talk I gave at the Behavior Design Amsterdam #16 meetup on Thursday, February 15, 2018.

‘Pandora’ by John William Waterhouse (1896)

I’d like to talk about the future of our design practice and what I think we should focus our attention on. It is all related to this idea of complexity and opening up black boxes. We’re going to take the scenic route, though. So bear with me.

Software Design

Two years ago I spent about half a year in Singapore.

While there I worked as product strategist and designer at a startup called ARTO, an art recommendation service. It shows you a random sample of artworks, you tell it which ones you like, and it then starts recommending pieces it thinks you’ll like. In case you were wondering: yes, swiping left and right was involved.

We had this interesting problem of ingesting art from many different sources (mostly online galleries) with metadata of wildly varying levels of quality. So, using metadata to figure out which art to show was a bit of a non-starter. It should come as no surprise then, that we started looking into machine learning — image processing in particular.

And so I found myself working with my engineering colleagues on an art recommendation stream which was driven at least in part by machine learning. And I quickly realised we had a problem. In terms of how we worked together on this part of the product, it felt like we had taken a bunch of steps back in time. Back to a way of collaborating that was less integrated and less responsive.

That’s because we have all these nice tools and techniques for designing traditional software products. But traditional software is deterministic. Machine learning is fundamentally different in nature: it is probabilistic.

It was hard for me to take the lead in the design of this part of the product for two reasons. First of all, it was challenging to get a first-hand feel for the machine learning feature before it was implemented.

And second of all, it was hard for me to communicate or visualise the intended behaviour of the machine learning feature to the rest of the team.

So when I came back to the Netherlands I decided to dig into this problem of design for machine learning. Turns out I opened up quite the can of worms for myself. But that’s okay.

There are two reasons I care about this:

The first is that I think we need more design-led innovation in the machine learning space. At the moment it is engineering-dominated, which doesn’t necessarily lead to useful outcomes. But if you want to take the lead in the design of machine learning applications, you need a firm handle on the nature of the technology.

The second reason why I think we need to educate ourselves as designers on the nature of machine learning is that we need to take responsibility for the impact the technology has on the lives of people. There is a lot of talk about ethics in the design industry at the moment. Which I consider a positive sign. But I also see a reluctance to really grapple with what ethics is and what the relationship between technology and society is. We seem to want easy answers, which is understandable because we are all very busy people. But having spent some time digging into this stuff myself I am here to tell you: There are no easy answers. That isn’t a bug, it’s a feature. And we should embrace it.

Machine Learning

At the end of 2016 I attended ThingsCon here in Amsterdam, where Ianus Keller introduced me to TU Delft PhD researcher Péter Kun. It turned out we were both interested in machine learning. So with encouragement from Ianus we decided to put together a workshop that would enable industrial design master students to tangle with it in a hands-on manner.

About a year later, this has grown into a thing we call Prototyping the Useless Butler. During the workshop, you use machine learning algorithms to train a model that takes inputs from a network-connected Arduino’s sensors and drives that same Arduino’s actuators. In effect, you can create interactive behaviour without writing a single line of code. And you get a first-hand feel for how common applications of machine learning work: things like regression, classification and dynamic time warping.

The thing that makes this workshop tick is an open source software application called Wekinator, created by Rebecca Fiebrink. It was originally aimed at performing artists, so that they could build interactive instruments without writing code. But it takes inputs from anything and sends outputs to anything, so we appropriated it towards our own ends.

You can find everything related to Useless Butler on this GitHub repo.
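To give a feel for the plumbing: Wekinator speaks plain OSC, so anything that can send and receive OSC messages can stand in for the Arduino. Below is a minimal sketch in Python, assuming the python-osc package and Wekinator’s default ports and message addresses; it illustrates the input/output loop, it is not the actual workshop code from the repo.

```python
# A toy stand-in for the Arduino: streams fake sensor readings to Wekinator
# and prints whatever the trained model sends back.
# Assumptions: python-osc is installed (pip install python-osc) and Wekinator
# runs with its defaults, listening for inputs on /wek/inputs (port 6448)
# and sending outputs on /wek/outputs (port 12000).
import random
import threading
import time

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient


def on_outputs(address, *values):
    # Wekinator sends one float per configured output; here we just print
    # them, where the Arduino would use them to drive actuators.
    print(f"{address}: {values}")


# Listen for Wekinator's model outputs in a background thread.
dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send two fake 'sensor' values to Wekinator ten times a second.
client = SimpleUDPClient("127.0.0.1", 6448)
while True:
    client.send_message("/wek/inputs", [random.random(), random.random()])
    time.sleep(0.1)
```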

The thinking behind this workshop is that for us designers to be able to think creatively about applications of machine learning, we need a granular understanding of the nature of the technology. The thing with designers is, we can’t really learn about such things from books. A lot of design knowledge is tacit; it emerges from our physical engagement with the world. This is why things like sketching and prototyping are such essential parts of our way of working. And so with Useless Butler we aim to create an environment in which you as a designer can gain tacit knowledge about the workings of machine learning.

Simply put, for a lot of us, machine learning is a black box. With Useless Butler, we open the black box a bit and let you peer inside. This should improve the odds of design-led innovation happening in the machine learning space. And it should also help with ethics. But it’s definitely not enough. Knowledge about the technology isn’t the only issue here. There are more black boxes to open.

Values

Which brings me back to that other black box: ethics. Like I already mentioned, there is a lot of talk in the tech industry about how we should “be more ethical”. But things are often reduced to the notion that designers should do no harm. As if ethics is a problem to be fixed instead of a thing to be practiced.

So I started to talk about this with people I know in academia, and more than once this thing called Value Sensitive Design was mentioned. It should be no surprise to anyone that scholars have been chewing on this stuff for quite a while. One of the earliest references I came across, an essay by Batya Friedman in Interactions, is from 1996! This is a lesson to all of us, I think: pay more attention to what the academics are talking about.

So, at the end of last year I dove into this topic. Our host Iskander Smit, Rob Maijers and I coordinate a grassroots community for tech workers called Tech Solidarity NL. We want to build technology that serves the needs of the many, not the few. Value Sensitive Design seemed like a good thing to dig into, and so we did.

I’m not going to dive into the details here. There’s a report on the Tech Solidarity NL website if you’re interested. But I will highlight a few things that value sensitive design asks us to consider that I think help us unpack what it means to practice ethical design.

First of all, values. Here’s how the term is commonly defined in the literature:

“A value refers to what a person or group of people consider important in life.”

I like it because it’s common sense, right? But it also makes clear that there can never be one monolithic definition of what ‘good’ is in all cases. As we designers like to say: “it depends.” When it comes to values, things are no different.

“Person or group” implies there can be various stakeholders. Value sensitive design distinguishes between direct and indirect stakeholders. The former have direct contact with the technology, the latter don’t but are affected by it nonetheless. Value sensitive design means taking both into account. So this blows up the conventional notion of a single user to design for.

Various stakeholder groups can have competing values and so to design for them means to arrive at some sort of trade-off between values. This is a crucial point. There is no such thing as a perfect or objectively best solution to ethical conundrums. Not in the design of technology and not anywhere else.

Value sensitive design encourages you to map stakeholders and their values. These will be different for every design project. Another approach is to use lists like the one pictured here as an analytical tool to think about how a design impacts various values.
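To make the mapping idea a bit more tangible, here is a toy sketch of what a stakeholder/value map for something like ARTO’s recommendation feature could look like. The groups, the direct/indirect split and the values are invented for illustration; they are not drawn from the value sensitive design literature or from an actual project.

```python
from collections import Counter

# Toy illustration only: a made-up stakeholder/value map for an art
# recommendation service. Every group and value below is hypothetical.
stakeholders = {
    "art buyers":       {"kind": "direct",   "values": {"autonomy", "privacy", "trust"}},
    "galleries":        {"kind": "direct",   "values": {"exposure", "fair competition"}},
    "emerging artists": {"kind": "indirect", "values": {"exposure", "fair competition", "attribution"}},
}

# Values held by more than one group are the obvious candidates for the
# explicit trade-offs the design process has to make.
counts = Counter(v for group in stakeholders.values() for v in group["values"])
shared = sorted(value for value, n in counts.items() if n > 1)
print("values shared across stakeholder groups:", shared)
```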

Furthermore, during your design process you might not only think about the short-term impact of a technology, but also think about how it will affect things in the long run.

And similarly, you might think about the effects of a technology not only when a few people are using it, but also when it becomes wildly successful and everybody uses it.

There are tools out there that can help you think through these things. But so far much of the work in this area is happening on the academic side. I think there is an opportunity for us to create tools and case studies that will help us educate ourselves on this stuff.

There’s a lot more to say on this but I’m going to stop here. The point is, as with the nature of the technologies we work with, it helps to dig deeper into the nature of the relationship between technology and society. Yes, it complicates things. But that is exactly the point.

Privileging simple and scalable solutions over those adapted to local needs is socially, economically and ecologically unsustainable. So I hope you will join me in embracing complexity.

Originally published at Leapfroglog.


Kars Alfrink

Designer, researcher and educator focused on emerging technologies, social progress and the built environment.