Your design will change my life. Or will it?
Everybody who has ever tried to change an undesired behaviour knows how hard this can be. And when you realise changing your own behaviour is hard, it will not come as a surprise that changing other people’s behaviour can be even harder. At least when you are attempting to change yourself, you have some form of acceptance of the necessity of the change, and some initial motivation (even if that motivation may quickly dwindle when the change process gets frustrating). When you are designing for other people’s problems, your target group often lacks awareness of what the outside world considers undesirable, and your audience more often than not lacks the motivation, capability, and opportunity for the actual behaviour change. In this text, I will address some of the issues that make it so difficult to design something that people actually want to use to solve their problems and, where possible, I’ll suggest some viable strategies to deal with the most pressing challenges.
Many designs for behaviour change are not aimed at an actual problem
Finding a Real Problem to design for can’t be that hard, right? There are so many issues out there: obesity, safety, sustainability, you name it. Then how come there are so many designs for behavioural change aimed at what are obviously Not Problems?
Example: this traffic nudge in my hometown of Nijmegen. According to the nudgers, behaviour change agency SHIFT, the sheer number of cyclists at the roundabout is a constant cause of DANGER, CHAOS and ANNOYANCE. Duct tape to the rescue! This nudge received a fair bit of media attention, and the city gave it a more permanent finish by replacing the duct tape with official street paint.
But, to be honest, the situation at the roundabout was by no means dangerous for cyclists. Alright, there was a bit of chaos and anarchy, but when has that ever hurt anyone? Quite nearby, meanwhile, the car lanes of the Keizer Karelplein were declared the most dangerous stretch of tarmac in the Netherlands (link in Dutch). It turned out this conclusion was based on questionable research, and this bit of fake news had to be retracted, but still. At the roundabout, lots of accidents happen because of an unclear right-of-way situation, and because of cars running red lights. Hardly any of those accidents and incidents involved cyclists, though.
This is all very entertaining, but there is also a metaphor hidden in the whole situation. Design for behaviour change is often not aimed at an actual problem that the target group feels they have, but at solving a problem of the sender. In this case, the problem solved was marketing a brand. Because of the research master’s in behavioural change at the local university, you can hardly throw a brick in this town without hitting a newly fledged agency for behavioural change. And they all need media attention to attract clients, so there is quite a bit of gratuitous nudging going on. Similar things often happen in corporate culture, where employees are incentivised to go along with organisational changes that do not benefit them. I think the lesson here should be that we must aim to design in such a way that senders’ and receivers’ interests align. Is that the case here? Hardly.
Too many solutions are an implicit game of Chindōgu
Finding a Real Problem may not seem hard; offering a Real Solution is. This is why so many designs for behaviour change feel like Chindōgu, the noble Japanese art of solving a problem by replacing it with a bigger problem.
I love Chindōgu as a design exercise (see some great collections here), but many designs for behaviour change feel as if they are actually Chindōgu in disguise. Look at that marvellous street sign below, warning us to make room for cyclists. Or that brilliant attempt at alienation also known as the recycling bin…
Many digital technologies that claim to help you achieve your goals only succeed in making your life more difficult. Lately, I have been trying out an app called LifeSum, following up on glowing reviews from a colleague. I was wondering what the relation was between the calories I ingest and the calories I burn through physical activity. Measuring physical activity is fairly easy: wear an activity tracker. The calorie count is harder. Here, LifeSum claims to be able to help me out. All I need to do is scan each and every ingredient of each and every meal, snack, and drink.
Take, for instance, today’s breakfast. The two slices of toast with, well, not butter, but soy margarine, and marmalade seem straightforward enough and take me ‘only’ three minutes to find in the app. Small problem: the app lets me choose the pre-set 100g of margarine, which I think is a bit rich for two slices of toast, or lets me set the weight of the portion myself. Okay, off to the scales it is, to determine the weight of a butterknife full of margarine, less the knife. I also ate the portion of oatmeal porridge that the kids did not manage; no problem there, the app has an entry for oatmeal porridge and I can add it within another mere minute. Next up: it’s winter and I made fruit smoothies for the whole family. No better way to start the day than by having your daily recommended dose of vitamins already inside you. I used two oranges, two apples, two pears, two bananas, and water, for four people. So now I’m tasked with adding half an orange, half a banana, half an apple, and half a pear to the breakfast menu. The insane amount of effort it costs me simply to log my breakfast makes me suspect that LifeSum is in fact a clever bit of Chindōgu that made it to the app store. And that’s only breakfast. Lunch and dinner are a lot harder to log if you tend to stick to fresh, unprocessed ingredients.
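The smoothie arithmetic is trivial for a computer and tedious for a human, which is exactly the point: this is bookkeeping an app could do for me, yet I end up doing it by hand. As a toy sketch (the fruit counts and servings are just the ones from my breakfast, not anything LifeSum actually computes):

```python
# Split each whole fruit in the family smoothie across the servings,
# to get the fraction of each fruit that ends up in one glass.
FRUIT_COUNTS = {"orange": 2, "apple": 2, "pear": 2, "banana": 2}
SERVINGS = 4

def per_serving_portions(counts, servings):
    """Fraction of each fruit per serving."""
    return {fruit: n / servings for fruit, n in counts.items()}

portions = per_serving_portions(FRUIT_COUNTS, SERVINGS)
print(portions)  # half of each fruit per glass
```

Half a line of arithmetic per ingredient, and yet the app makes me enter each fraction manually.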
Once again, this is all very droll, but there is an important lesson to be learnt here. Our designs should aim to make life easier for the people who use them, not harder. Often, this is impossible to achieve with the current state of technology. There are no reliable food scanners available to automatically analyse the caloric value of our nutrition, no matter how much backing your product gets on Indiegogo. This task requires manual input, and we should not be surprised by high churn and attrition rates, and a serious lack of engagement with our solutions on the part of the intended ‘users’.
We don’t test our designs rigorously enough
Is it too much to expect that a design for behaviour change is rigorously evaluated before it enters the market? Or that when a product is available for purchase, its efficacy has been proven? Actually, it is. One of the darker secrets of the design-for-behaviour-change scene is that we hardly ever test our designs thoroughly enough to know whether they have lasting effect on behaviour.
Two reasons: time and money. Proper evaluation costs more time and more money than designers and researchers are willing, or able, to spend. A good example of how time-consuming and expensive a serious evaluation is comes from my own work. In the past few years, we worked on a research programme on the efficacy of feedback by digital technology in changing deeply ingrained behaviours. One of the projects involved was an evaluation of the 10sFork by SlowControl, a fork that sends out gentle vibrations when its user eats too fast.
At first glance, this product may come across as a solution looking for a non-existent problem, or as another elaborate piece of Chindōgu. But it turns out that a high eating rate is a known cause of obesity and is associated with a range of other threats to our health, such as type II diabetes, and metabolic and gastric complaints. And eating rate, as a deeply ingrained habit, is well-nigh impossible to slow down by willpower alone.
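The feedback principle behind such a fork can be sketched in a few lines: detect bites, measure the interval between successive bites, and vibrate when a bite follows the previous one too quickly. To be clear, the threshold value and the bite-detection logic below are my own simplified assumptions for illustration, not SlowControl’s actual algorithm:

```python
# Illustrative sketch of interval-based eating-rate feedback.
# MIN_INTERVAL_S is an assumed threshold, not the product's real setting.
MIN_INTERVAL_S = 10.0

def feedback_events(bite_times, min_interval=MIN_INTERVAL_S):
    """Return the bite timestamps (in seconds) that should trigger a
    vibration: bites taken too soon after the previous bite."""
    events = []
    for prev, cur in zip(bite_times, bite_times[1:]):
        if cur - prev < min_interval:
            events.append(cur)
    return events

# Bites at t = 0, 4, 15, 18 s: the bites at 4 s and 18 s come too soon.
print(feedback_events([0, 4, 15, 18]))  # [4, 18]
```

The appeal of this kind of intervention is that the feedback arrives at the exact moment the behaviour occurs, which is precisely what willpower alone cannot provide.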
So, a piece of smart technology that unobtrusively measures eating rate, and then provides feedback while we eat, may very well be a viable way to help people slow down their eating rate; and perhaps this will even help them lose weight. But how can we as consumers know whether this product does what it says on the tin? Will it keep its manufacturer’s promises? To find out, we set up an elaborate research project. Firstly, we performed a usability evaluation of the fork (peer-reviewed paper here). After that, we tested the efficacy of the fork in a lab study, in which 114 participants ate a single meal with the fork. Half of them received vibrotactile feedback on their eating rate; the other half did not. And indeed, people managed to eat more slowly with the feedback (peer-reviewed paper here). And finally, because a lab test often says little about the real world, we recruited 163 participants with overweight and other eating rate-related complaints, all clients of dieticians, to use the fork in a one-month training at home. We tested their eating rate before and directly after the training, and again after a two-month washout period. The results of this field trial are currently under review at a peer-reviewed scientific journal, and pending publication, I won’t disclose them yet. Stay tuned for those results (slight foreshadowing: they may pleasantly surprise you), or follow my twitter feed for updates…
This three-part test will provide us with fairly conclusive evidence about the efficacy of the fork. But the disadvantages of our approach are obvious. Firstly, it took us two and a half years to obtain funding and to perform the research, and we still haven’t published all of its results. This, unfortunately, is the glacial pace at which science proceeds. A good trial in any shape or form takes time, especially if you want to test long-term effects. This is problematic, not only because few manufacturers have the time to shelve a product for a couple of years while they await testing results. A bigger problem lies in the disconnect between the time it takes to evaluate a design and the speed at which technological developments progress, which science simply cannot keep up with. By the time we are done testing a product, its technology is already outdated.
Time is not the only problem here. Money is another. We could do this research because it answered fundamental research questions: can vibrotactile feedback from digital technology support us in changing behaviours that were previously impossible to change lastingly? This project was awarded a €100,000 grant from the Netherlands Organisation for Scientific Research (NWO). We cannot realistically expect the public sector to shell out €100k every time we want to test a new product, so normally, this money has to come from other sources. Most designers do not have access to this kind of budget for evaluation, and even if they did, they would still have to find a solution for the time scale of the research.
The mismatch between the speed of technological development and the speed of scientific research is a challenge that we will need to solve in the coming years. This will be hard, but exciting work, of which we are already seeing the first results.
(While we are talking about real effects, we ought to at least mention the use of theory to inform designs for behaviour change; but since I have already devoted an entire Medium blog to this subject and will be working on more projects in that area, I will skip this theme here and jump straight to another one of my pet peeves:)
Our designs are either boring or frustrating
Designs for behaviour change have a second Dark Secret. In order to have any effect, people have to actively engage with the proposed intervention, and with the behaviour change that the design wants them to achieve. But designing for engagement remains immensely challenging. Most people, in their heart of hearts, do not want to change their behaviour. They want the consequences of their behaviour to disappear, right enough, but the actual change is not something they enjoy. To paraphrase Schopenhauer, all of behaviour change is either boredom or suffering.
This is also why it is so hard to convince people they have a problem that needs changing in the first place, something that came up at every stage of our smart fork research. In the usability evaluation, participants thought the fork was fine: comfortable and easy to use. They also had no objections to using it in social settings. But not a single one of them thought the fork was for them: they could all think of a friend, relative, or significant other who definitely needed this product. But themselves? No, thank you. A similar pattern arose in the lab study. In the field trial, participants were motivated to use the product, but found it very hard to adhere to the trial.
Engagement is not something we can conjure up with a cheap trick. In very limited circumstances, making an activity ‘fun’ can make people use your design, such as in these three designs for reducing litter.
These designs succeed in making actions fun because they are very simple, clear interventions aimed at changing a single behaviour: putting your waste in the bin at the location of the intervention. Most of our behaviours are a lot less simple. They are more generalised in time and location, more abstract, require more effort, and are, because of all that, more difficult to make fun. Our attempts at doing so usually backfire: we give people points and badges, and all they show is reduced intrinsic motivation to perform the behaviour. We construct a leaderboard, only to find that everybody apart from the top three players quits out of frustration because they can’t win. We try to make intrinsically meaningless physical activities meaningful by introducing an added narrative, only to find people annoyed by the artificial connections we made. It’s all chocolate on broccoli, as the famous Anonymous Game Designer called it, and it won’t help us engage our ‘users’.
Too often, our designs are aimed at ‘users’, with an overly simplistic view of the conflicting motivations, needs, and urges that drive them, or of the context those ‘users’ use our designs in. Too often, we develop ‘persuasive technology’, based upon the idea that the technology itself is capable of driving behavioural change. Instead, we should aim at ‘lived informatics’, based upon the idea that people will actively select those resources that best support the behavioural change they seek.
Becoming better at designing for engagement with our interventions and the behaviour change we want to achieve, is another grand challenge for the coming years. A greater focus on the context in which our designs must do their work, and the actual problems that people wish to solve with our designs, should be our starting point in solving this challenge.
Offering real solutions with proven effects, and creating designs that make it easy for people to engage with the behaviour change, are challenges that are a lot easier to point out than to solve. It is one thing to criticise the way LifeSum needs my input to calculate my caloric intake, but quite another thing to think of a better way to do this — which I can’t, to be honest.
However, designing for behaviour change is an emergent field, with many interesting problems to overcome. In the coming years, we will definitely see progress in how we evaluate our designs, and in the possibilities of automatically tracking behaviours that as yet require manual input (which brings up a whole new issue about data ownership and privacy, but I’ll leave that for other authors and other texts). I am sure that in the coming years, we will also see a lot of quick fixes that are actually hidden Chindōgu, and designs that fail to engage. But there will also be designs that work for you, and actually change your life. Let’s keep an eye open for those successes, and find out what makes them work!
I presented a previous version of this text as a keynote lecture at the Designing for Behaviour Change symposium, @DesignLabTwente, on 29 January 2018. In the coming weeks, I will add further material and make changes, so if you have comments, questions, better examples, or grammatical corrections, please leave a note.