Product Design & The Asshole Contingency

Why not all users are your friends, and why you need to deal with it.

Designers tend to pull out the ole rose-tinted glasses when they speak about the user.

The user, we proclaim, “is the key…to everything”.

And we, of course, are the valiant knights defending the honour of these innocent gazelles of the digital landscape. When the beloved user is questioned or sidelined, we spring to our feet like proverbial Don Quixotes waving rusty sharpies. This is usually not a bad thing.

To defend the user, to merge their best interests with our client’s business models in one continuous loop of mutual benefit — that is our job, and we do it proudly. Without that integrity, we’d probably all be drowning in interfaces covered in adverts and built in whatever way the backend developer thought was easiest (I’m joking, backend developers, we love you too). We spend a lot of time considering our users, taking in their feedback and experiences and translating them into improvements that benefit everyone involved. This process helps us make sure we are solving real problems experienced in the real world.

Being on the end user’s side is a key part of being a good designer. However, it also makes it very difficult (and controversial) to admit that the user isn’t always defendable. This attitude is crystallised in a common saying:

If your user is using your product wrong, it is a design flaw.

Which is generally true. When a user can’t use your product, or isn’t using it the way you intended, it’s usually your fault. Anyone who’s ever had an intense tug of war with a Norman door can testify to this: the user is not stupid — your design is. But here I propose a second, completing statement:

If your user is using your product wrong, it is a design flaw — and they might be an asshole.

This is still your fault. It does not mean that a user who doesn’t understand your product is an asshole. This is about the fact that some users will use what you design to do something you never intended — in ways that range from mildly annoying to downright harmful; to other users, to you, to your client, or to society in general. The user is not wrong — they’re an asshole, and your design might have helped them be an even bigger one. This is something we don’t like to think about. To paraphrase Eddie Izzard:

Guns don’t kill people, people do…
But the guns bloody help!

So how do we make sure we aren’t merrily sketching away at what will become the digital equivalent of a nuclear armament? We can never be absolutely certain. For example, I doubt that the designers at Facebook meant for closed groups to be used for an online society encouraging members to secretly photograph women eating on the tube. It is impossible to foresee and counteract every misuse of a design — and that level of restriction would be crippling to the creative process. But we must start to accept the fact that not all of our users will use our products in the positive way we envisioned — a step I like to call the asshole contingency.

And what is the asshole contingency? It’s quite simple. All you need to do is ask yourself two questions throughout your design process.

  1. Is there anything in this service that could cause a user to accidentally or otherwise ruin the experience for everyone else?
  2. If a real asshole used this service, what’s the worst that they could do?

Is there anything in this service that could cause a user to accidentally or otherwise ruin the experience for everyone else?

Everyone carries within them the tiny seed of an accidental asshole. Especially in products with a lot of social interaction or collaboration, there are near-infinite opportunities for users to annoy each other. If you’ve ever had anyone accidentally overwrite your Dropbox files, tag you in a really incriminating photo on Facebook, or send you indecipherable email threads in the middle of the night, you know what I’m talking about. And don’t play the martyr; I am sure you are just as guilty as the rest of us.

This kind of accidental annoyance isn’t really malicious — it’s usually just caused by confusion, miscommunication and differences in personal preference.

Good ways of preventing accidental annoyances: make it possible for me to shut out the annoyance, make sure it’s not permanent, and warn me when I’m about to be the asshole.
Slack’s @channel warning message reminds the user that not all their colleagues are in the same time zone.

If you don’t want someone to mess up your files, you should be able to use settings to prevent them from making unapproved changes. If you don’t want to get a buzz every time John discovers a new totally cool gif, you should be able to choose not to get one. If someone spring-cleaned Dropbox and deleted the presentation that you’re giving tomorrow, there should be a way of recovering it. If you’re about to blow up Jenny’s mailbox at 3am, you should be told to shut up and respect the time zones. Think how many mute settings you use in your daily life — how would using those services feel without them?
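For the technically inclined, here is a rough sketch of what a warning in the spirit of Slack’s @channel prompt could look like under the hood. Everything in it is hypothetical (the Recipient shape, the quiet-hour fields, the wording of the message); it illustrates the pattern, not how Slack actually implements it.

```typescript
// Hypothetical sketch: warn the sender before a broadcast lands in a
// recipient's quiet hours, in the spirit of Slack's @channel warning.

interface Recipient {
  name: string;
  timeZone: string;        // IANA time zone, e.g. "Europe/Stockholm"
  quietStartHour: number;  // local hour when quiet time begins, e.g. 22
  quietEndHour: number;    // local hour when quiet time ends, e.g. 7
}

// Returns the recipient's current local hour (0-23).
function localHour(recipient: Recipient, now: Date = new Date()): number {
  return Number(
    new Intl.DateTimeFormat("en-US", {
      hour: "numeric",
      hourCycle: "h23",
      timeZone: recipient.timeZone,
    }).format(now)
  );
}

// True if the recipient is inside their quiet hours right now.
function isQuietTime(recipient: Recipient, now: Date = new Date()): boolean {
  const hour = localHour(recipient, now);
  const { quietStartHour, quietEndHour } = recipient;
  // The quiet window may wrap past midnight (e.g. 22:00 to 07:00).
  return quietStartHour <= quietEndHour
    ? hour >= quietStartHour && hour < quietEndHour
    : hour >= quietStartHour || hour < quietEndHour;
}

// Before broadcasting, surface a warning listing who would be woken up.
function quietTimeWarning(recipients: Recipient[]): string | null {
  const sleeping = recipients.filter((r) => isQuietTime(r));
  if (sleeping.length === 0) return null;
  const names = sleeping.map((r) => r.name).join(", ");
  return `Heads up: it is outside working hours for ${names}. Send anyway?`;
}
```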

Instagram took an even more radical approach than muting. How did they curb promotional spam? By making links in captions and comments unclickable. And while it seems extreme, the end result is a much less annoying comment section — with the unexpected (but ingenious) side effect of users updating the link in their profile whenever they want to promote something specific. Perhaps unintentionally, Instagram has elevated the link from spam to a curated experience.
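To make that concrete, a comment renderer that follows Instagram’s lead could simply never wrap URLs in anchor tags. The sketch below is purely illustrative; the function names and markup are mine, not Instagram’s.

```typescript
// Hypothetical sketch: render a comment with any URLs left as plain text
// rather than clickable anchors, similar in spirit to Instagram's approach.

// Escape characters that would otherwise be interpreted as HTML.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// The whole comment is escaped and nothing is wrapped in <a> tags:
// a pasted link stays visible but inert, so spammers get no clickable payoff.
function renderComment(comment: string): string {
  return `<p class="comment">${escapeHtml(comment)}</p>`;
}

// Example: the URL survives as text, but it is not a link.
console.log(renderComment("Buy followers now!! https://example.com/spam"));
// -> <p class="comment">Buy followers now!! https://example.com/spam</p>
```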


If a real asshole used this service, what’s the worst that they could do?

Sometimes, making sure that users don’t annoy each other isn’t enough. Sometimes, there is a very real risk that your product can be used to do significant financial, physical or mental harm to someone. These consequences can be unintentional, but more often they are caused intentionally. Sometimes the user gains something from the harm they cause (your identity, say, or your bank and credit card details), and sometimes they gain nothing at all.

When the product crosses over into enabling physical interaction between users, the stakes are suddenly very, very high. Chris Sacca famously passed on Airbnb because he was convinced it would turn into a candy shop for murderers — and in reality, his reaction was completely rational. Airbnb had to face a slew of logistical challenges (the subject of a recent TED talk) in order to counteract this. Uber has had its fair share of incidents despite tightening up security. Services that allow users to knowingly or accidentally share their location can be exploited by stalkers for nefarious purposes. The list of (awful) possibilities seems endless.

Letting users block and report each other, preventing other users from accessing personal information, and requiring all users to identify themselves in a way that would make them culpable in a court of law are precautions that are often necessary in these kinds of services.
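In practice, those precautions tend to boil down to checks that run before anything is shown or delivered to another user. The sketch below is only an illustration of that idea; the User shape, the block list and the location rounding are assumptions, not any particular platform’s implementation.

```typescript
// Hypothetical sketch: gate what one user can see of another behind
// block and privacy checks before anything is rendered or delivered.

interface User {
  id: string;
  blockedUserIds: Set<string>;
  shareApproximateLocationOnly: boolean;
}

// A viewer should never reach content from someone they have blocked,
// and vice versa.
function canInteract(viewer: User, target: User): boolean {
  return !viewer.blockedUserIds.has(target.id)
      && !target.blockedUserIds.has(viewer.id);
}

interface Location { lat: number; lng: number; }

// Expose at most a coarse location (rounded to roughly a kilometre)
// unless the target has explicitly opted into sharing something precise.
function visibleLocation(viewer: User, target: User, exact: Location): Location | null {
  if (!canInteract(viewer, target)) return null;
  if (target.shareApproximateLocationOnly) {
    return {
      lat: Math.round(exact.lat * 100) / 100,
      lng: Math.round(exact.lng * 100) / 100,
    };
  }
  return exact;
}

// Reports are stored for human review rather than acted on automatically.
interface Report {
  reporterId: string;
  reportedId: string;
  reason: string;
  createdAt: Date;
}
```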

However, the possibility of enabling physical harm is not the only threat in designing a digital service — enabling psychological harm can be just as dangerous. Anonymity in particular has become a big question for the online community. Twitter has been in the limelight as the platform of choice for mass-scale online hate à la #gamergate, as well as tragic moments like Zelda Williams closing her account due to harassment in the wake of her father’s death. (Not to mention that ISIS has been known to use the platform as a recruitment tool.) YouTube has notorious issues with its comment sections, leading some of its most high-profile creators to turn off commenting completely.

Instagram became the venue of choice for people mocking a 16-year-old rape victim by posting pictures of themselves posed as her unconscious body. The trend appears to be that social platforms that allow anonymity suffer the consequences. The ability to create multiple anonymous profiles means there are no consequences for intolerable behaviour, creating a free zone for the worst in people to run amok. At the same time, anonymity is sometimes necessary to protect users from each other.

So what is a poor designer to do?
Well, we can start by not being naive.

Our user stories are usually about users with a problem they want to fix — why not mix it up a bit once in a while? Consider your scenarios with imperfect users. Consider stories about users who are clumsy or insensitive to others, about users who want revenge, who are stalkers, who are angry or who just want to mess things up for the fun of it. What can they do with the features you are designing? Is there anything you can do to prevent negative consequences? Will your users need the ability to undo, to block, to mute, to restrict access, to conceal personal information? Should you allow users to sign up anonymously? Do you actually need to share a user’s location with other users? If users can post content, do you have processes in place to monitor and remove offensive or harmful material? Do the positive consequences outweigh the negative possibilities?

We expect developers to consider potential security exploits in their creations — and we should expect the same of ourselves.

Unfortunately, there is no light at the end of the tunnel where asshole-ness becomes extinct. Some users will continue being assholes, and some services will continue enabling them. What you can do is take off those rose-tinted glasses once in a while, embrace reality, and design accordingly.


I work as a product designer at apegroup, where I help companies create beautiful digital experiences. We are a digital agency in Hornstull, Sweden. Together we push boundaries through the design of digital experiences, improving life for people. Want to know more about how we work with product design? Read more at our website.
