Unintended Consequences

In the 17th century, houses in Amsterdam were taxed according to the width of their frontage — the trading area they presented to the canals. As a result, newer houses were built to be as narrow as possible. Apparently, there are still houses in the city where furniture can only be moved around via the windows.

Photo credit: The Cyclists’ Touring Club

Closer to the present day, in the latter half of the 19th century, long-distance roads in the UK and USA were in a pretty poor state. The success of the railways had killed off the coaching trade, and the roads began to fall into neglect. Groups like the Cyclists’ Touring Club successfully lobbied for better roads. This made life much easier for the motorists emerging in the late 19th and early 20th centuries. So much easier, in fact, that the cyclists were eventually driven off the roads they’d helped to create. They couldn’t compete with the automobile.

In 2011, protesters occupied Tahrir Square in Cairo, a focal point of that year’s revolution in Egypt. By many accounts, social media, especially Facebook, were crucial tools for organising this political action. Not bad for a site which began life as an unofficial Harvard student directory.

The so-called principle of unintended consequences, where changes in a system have wild, nonlinear and totally unpredictable side-effects, is a fun idea to explore; it’s a big, chewy concept, with links to ideas about chaos, complexity and the dynamics of social change.

It’s also a popular subject among the designers and builders of software systems — I’m a technologist by trade — and we frequently see these kinds of complex, emergent interactions arise between the web of systems we create and the people who interact with them.

It’s generally spoken about in macro terms, though: unpredictable chain reactions at the level of societies, populations, organisations. What about the unintended consequences of networked software and its design at the micro level: the level of individual people?


Let’s start with the humble signup form.

This is one from a fairly unremarkable shopping app; it doesn’t matter which one. It’s a common enough pattern. Like many free apps, its signup was… thorough. This is to be expected. I was, after all, paying for its free-ness with my valuable demographic data.

Looks pretty straightforward, right?

Let’s consider it from the point of view of someone for whom gender isn’t as straightforward as a two-state radio button. How would that person feel about being faced with this choice?

You might be thinking that this isn’t such a big deal — it is, after all, just a box on a signup form. But we don’t encounter these things in isolation. Far from it. Think about every time you’ve signed up for something. You’ve likely had to fill in a bunch of information about yourself, checking the boxes which most closely describe an aspect of you. Reducing yourself down to a bunch of data points.

It’s not unreasonable to think that these repeated tiny choices, made over and over again, deciding which box you fit into, have, at some level, shaped how you see yourself fitting into the world.

So let’s think about that gender example. Think about having to make this choice over and over again, every time you’ve encountered a badly-designed signup form. What’s that going to do to your conception of where you fit into the world?
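
If a product genuinely needs to ask at all, one alternative to a two-state radio button is to let people answer in their own words, or decline to answer, and to model the data accordingly. Here’s a minimal, hypothetical sketch in TypeScript; the type and field names are mine, invented for illustration, not taken from any real signup flow.

```typescript
// Hypothetical sketch only: none of these type or field names come from the
// app discussed above; they are illustrative.

type GenderResponse =
  | { kind: "self-described"; description: string } // the person's own words
  | { kind: "unspecified" };                        // "prefer not to say", as a first-class answer

interface SignupData {
  email: string;
  displayName: string;
  gender?: GenderResponse; // optional: the form still works without an answer at all
}

// Someone who doesn't fit (or doesn't want to fit) a predefined box can still sign up:
const example: SignupData = {
  email: "someone@example.com",
  displayName: "Sam",
  gender: { kind: "self-described", description: "non-binary" },
};
```

The point isn’t this particular shape of data; it’s that the model stops insisting that every person resolve to one of two values.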


The power of repetition isn’t unknown in product design. It’s a growing field of interest to designers, and it’s even got a name: habit design.

This is an advert for Nir Eyal’s book Hooked. In it, he says:

The truly great consumer technology companies of the past 25 years have all had one thing in common: they created habits. […]
Apple, Facebook, Amazon, Google, Microsoft, and Twitter are used daily by a high proportion of their users and their products are so compelling that many of us struggle to imagine life before they existed. […]
[Habit creation] can be used by consumer web companies to build products that users not only love, but are hooked to.

If you’ve ever read about so-called social media addiction, you’ll have seen references to BF Skinner and his infamous boxes, rats pulling levers for rewards, superstitious pigeons. Dopamine and reinforcement schedules.

I’m not going to go too far into that here; I’m sure it’s familiar territory to a lot of people by now. For now, let’s assume that a product’s interactions, rewards and feedback loops can be designed in a way that promotes habit-forming. Users become habitual users.

I think there might be a problem here, and that problem relates to personas, those weird aggregate personalities we plot from reductive data points. Personas aren’t real people, and aren’t intended to be. They’re abstracts, averages that help us design for particular audience segments.

And that’s all fine and useful when we’re designing flows that won’t be used too often by any one person. They help us make things more accessible to all. However, we’ve seen how powerful repetition can be in moulding a user’s worldview. If we’re talking about a flow that a user goes through daily, several times a day, or more, a flow we’ve specifically designed to be magnetic, alluring, habit-forming, how can we be sure that the assumptions we make about our users (their lifestyles, interests, desires, ideologies) aren’t being impressed back onto them through force of habit? What about the deeper social assumptions we’ve encoded into the product without even realising it, products of our own unconscious biases? What about the behaviours we’re killing off by not designing for them?

Are we building Skinner boxes by compelling our users to enact our assumptions about them over and over again?

Pushing our users through mazes built out of our assumptions and the things we want them to do, combined with repetition and habituation, is a powerful thing whose ramifications we don’t yet fully understand. We need to be sure we know what we’re doing with this stuff.

Do we?

Because there’s another industry whose designers talk a lot about reward schedules, maximising dwell time and designing for habit: the gambling industry. Natasha Dow Schüll has studied machine gambling in Las Vegas and how those machines are designed to exploit very specific quirks of human psychology and neurochemistry to keep punters in the seats, gambling. In her book Addiction by Design, she quotes one of these designers:

Once you’ve hooked ’em in, you want to keep pulling money out of them until you have it all; the barb is in and you’re yanking the hook.

Are we happy sharing a design space with these people?


It’s worth asking these questions, and keeping them in mind when we design things, because there’s a curious effect that starts to kick in when creating and engineering at scale.

We all want our products to do well, to draw in and maintain a large audience, but there’s a distancing effect that emerges when we start to think about users as a mass, rather than remembering that there are actual people in there.

There’s no starker example of this than an experiment Facebook conducted in January 2012.

In this experiment, around 700,000 users had their news feeds manipulated, unbeknownst to them. Some users were fed more negative stories than normal. Some, more positive. Facebook’s Data Science team then analysed the activity of the manipulated users to see if they were posting happier or sadder content. Perhaps unsurprisingly, they were. Those who had seen more negative words posted more negatively. Those who had seen more positive content, in turn, used more positive words.

There was an avalanche of bad publicity when news of this study began to circulate in July last year. The underlying sentiment asked: how could Facebook’s data scientists have been so arrogant as to put users in a petri dish like this?

Have a look at it the other way round, and you can understand how this could happen. Imagine you’re a data scientist working for Facebook, which had nearly 1.4 billion users at the end of last year. You’re routinely running analysis on millions, billions of datapoints. Likes, posts, time spent, photos uploaded, ad interactions, page interactions and so on and so on. Endless amounts of data.

It’s easy to see how you could lose sight of the fact that there are individual people, fellow humans, generating that data.

From Facebook’s point of view, an individual person is small enough to be insignificant, relevant only in aggregate, as a source of data. Researchers have noted the dehumanizing effect of thinking about other people in this way, a notion which appears to be supported in this post by a former Facebook data scientist:

It truly is easy to get desensitized to the fact that those are nearly 1M real people interacting with the site.

However, the reason that so many people reacted so badly to the news of this study is that from their point of view, they’re a lot more than an irrelevant datapoint. To feel like you’ve been treated that way is belittling. It robs you of your agency, makes you feel like a rat in a cage. No-one likes being made to feel powerless and manipulated.

Facebook’s business is built on personal data. That’s its value to its users, all 1.4 billion-odd of them. And the thing about aggregating all this personal data, giving people a view onto those they care about, is that it’s, well, personal. Things affect you more deeply in this kind of space than in the wider web. That’s why this experiment felt like such an intrusion — it’s as if someone broke into your home while you were out and replaced everything on your walls with versions in slightly darker shades, drew sad faces on your family photos. Just to see how you’d react.

We need to remember that people are more than just packets of valuable quanta, grist to the mill of Big Data.


In early 2014, Erika Sorensen, a software developer, updated the version of Android on her phone.

At the time, Google were pushing Plus really hard. They were also very keen on unified Google accounts, attempting to migrate everyone to one account across all of their services. I remember being quite affronted that I’d ended up with a presence on a social network that I’d never asked for, merely by having a Gmail account.

As a result of these pushes, and the deeper integration with Google services that was a feature of this particular release of Android, Erika found that the separate identities she’d been maintaining online had been collapsed down into one unified identity, and that identity had been made public.

At the time, Erika was still identifying as male at work.

Here’s the thing: Erika never asked for this deep integration of Google services. She never asked for a unified Google account. All of these choices were made for her by various product teams inside Google, all acting in their own interests, according to their own assumptions: the social network needed users. Single identities are easier to mine and track and target. And, to be fair, easier to engineer for and provide services for. Who wouldn’t want the convenience of a unified identity?

As a nonlinear, unpredictable result of these decisions meshing together, decisions in which she had no part, Erika’s life was rewritten for her.

Move Fast and Break Things by Rob Belmont

In software engineering, we’re used to the idea of failure. We move fast, we break things, we have the concepts of fault tolerance and redundancy. That’s fine, when the things that are failing are database calls, or HTTP requests, or bits of UI.

When people are involved — people who never asked for this stuff but were given it anyway, people who had these problems created for them and then had to sort them out for themselves — a fail condition that can affect the lives of those people is unacceptable.

When we’re designing systems that incorporate people, that are designed to draw those people in, that rely on their activity to even work, then we have a responsibility to those people. A duty of care.

People are more than just nodes in a social graph, to be co-opted and herded around at will for our own business ends.


There are, however, inherent pressures in our industry to do just that.

As designers and engineers, we like to talk about the attention economy, usually as a way to explain that our users have a finite amount of attention to dedicate to our products.

We are encouraged to think of our audience as fickle, easily distracted, with the attention span of a puppy, so we should make our flows and interactions as friction-free as possible. “Don’t make me think.”

We’re also acutely aware that attention is the fuel that drives our products, especially if we’re letting people look at them for free.

Once we have our users and their attention, whatever tactics we’ve used to sucker them in, we can show them advertising, or gather data from their actions, or sell them things.

Thanks, @willsh!

So we develop strategies to make our products more attractive than those of our competitors, stickier, more useful, with increased dwell times.

Co-opting users with sneaky signup tactics and engineering products to be habit-forming are the examples I’ve discussed here, but our industry is full of clever people. I’m sure we can think of more.

All this to get our hands on a slice of that precious attention, and keep our hands on it once we’ve got it.

Which would be fine, if there were only a few of these attention-fuelled products in existence. But there aren’t. Now we’ve realised that attention is the One True Currency, everyone’s engineering for it.

Flip this around and look at it from the point of view of a person with their own finite supply of attention. They might interact with dozens of these products in a day, all with their own strategies to tap and hold those precious reserves of attention.

Portlandia S01E01

Ever more intrusive notifications and interactions make their demands; the user gets tugged this way and that, doling out the rations. A loop of inbox to status to alert to like to level up to message to picture, and so on and so on and so on.

That’s before we even consider that this hypothetical person might want to do things other than interact with software products.

One person only has so much attention to give, for everything in their life. Not just software. Everything. Maybe we should be mindful of that.

We need to be responsible with the demands we make on our users’ time and acknowledge that our product may not be the most important thing in their life. It’s almost certainly not the only piece of software they’ll interact with that day.

We sit in an attention ecosystem, and we should be responsible actors in that ecosystem. Maybe we should consider the ethics and sustainability of the attention economy.

Maybe these considerations could help us to make better products.


So what can we do about these unintended consequences? How can we make products which are more in tune with the people caught up in them? How do we fulfil that duty of care?

Ultimately, it comes down to empathy. Dan Hon has identified what he calls an empathy gap in much of modern service design, which he describes as:

that distance between an organisation and its audience such that at worst, it’s clear that the organisation is wilfully ignoring how its audience might feel.

The examples I’ve shown caused harm in the world and created bad products because the people involved along the chain of making them failed to ask themselves two basic questions when considering new features:

  • how does it make me feel?
  • how could it make a person outside my experience feel?

Looks simple, right? Two little questions which could have saved a lot of trouble, and which could help us to build better products.

It’s a matter of doing a bit of habit engineering on ourselves. If we, as the designers and builders of complex social systems, can get into the habit of asking ourselves these questions as we go along, applying this filter to new features and changes to the system, then we can start to build things better suited to a world full of complex, unpredictable interactions between people and the things we make.

It doesn’t even require large, institutional change. It doesn’t require sweeping, disruptive changes to our working practices. It’s just an extra little step to drop in every so often. Even the second question here isn’t so much about trying to imagine the whole of human experience as it is about pausing now and again to think about things from a different perspective.

But if everyone along the decision-making chain gets into the habit of asking these questions, and asking them honestly and frequently, then the combined, cumulative effect could be powerful. It couldn’t hurt for that chain to represent voices from a wide range of life experiences, either. If we’re building software for everyone, shouldn’t everyone be building it, too?

If we can start encoding these little bits of empathy into our products, then we can build things which are not only less likely to cause harm, but which feel more welcoming, genuinely, to more people. Products which feel like they were made just for them. Products which make a human connection.

In his 1991 essay The Berlin Key, Bruno Latour said:

things do not exist without being full of people, and the more modern and complicated they are, the more people swarm through them.

As we build things that are more modern and complicated, and as more people swarm through them, both as creators and users, the opportunity to get our fingerprints on them and to create moments of human-to-human connection in the design of the things themselves increases.

If we can get into habits which allow everyone involved in the creation of a product to layer in these empathetic touches, then perhaps we can build better, more nuanced products that fit more comfortably, more sustainably and more respectfully into people’s lives.

That’s got to be worth a go.


This post was originally written as a talk for a conference that, eventually, failed to happen. Thanks for their help in research, writing, and generally working it all out to Louisa Heinrich, Tom Armitage, Justin Pickard, Ben Bashford, Stephan Hügel, John V Willshire, Andrew Sempere, Kate Towsey, Alyson Fielding, Paul Rissen, Kath Nightingale, Kerri Smith and Helen Morant. All mistakes are my own.
