Our Online Behavior is a Design Problem

The internet has gotten meaner and less trustworthy, but there is hope.

Carl Alviani
Protagonist Studio
5 min read · Jun 13, 2018


Image: Reuters/Brendan McDermid

Whether we’re talking about trolls, serial harassers, racists, or purveyors of propaganda, it seems like the online behavior of the worst of us has drowned out the civil discourse of the best of us. The vitriol that accompanied the 2016 US election in particular has raised painful questions. Does Facebook’s fake news problem mean we don’t care about truth anymore? Does the current vogue for digital dogpiling mean we’ve forgotten how to disagree without being disagreeable? Perhaps we’re all awful people deep down, and our tweets, posts, and comments are simply a clearer window into our dark souls.

Or perhaps putting an anonymous bullhorn into everyone’s hands and wrapping us in custom-built echo chambers makes us more likely to be terrible. Besides the genuine neo-Nazis and misogynists exercising the right to amplify their voices, many others are simply drunk on the power to provoke.


The response from digital media platforms, exemplified by Mark Zuckerberg’s post-election hand-washing, has long been: “Don’t blame us, we’re just the medium.” But that excuse is wearing pretty thin. No platform is truly agnostic: They all have their preferences, permissions, and prohibitions, every aspect of which is designed by people.

The current debate over social media’s impact on the recent election is just the tip of the iceberg. As our lives move further online, the way we design digital interactions is going to have even greater social consequences. It’s not hyperbolic to say that thoughtfully designed user interfaces can make us a freer, more humane, and more just society, just as poorly designed ones seem to have made many of us less compassionate, less informed, and more antagonistic.


Twitter didn’t become a cesspool of trolling and harassment because it was forcibly occupied by a team of hatemongers — it turned out that way because of a unique combination of reach, anonymity, and lack of consequences. I love Twitter, but I’ve also witnessed firsthand how it can turn angry, disgruntled individuals into supervillains, able to intimidate and disrupt lives around the world with a few keystrokes.

Facebook, likewise, didn’t get hijacked by fake news purveyors so much as it created a system that made them inevitable. Clicks equal advertising revenue, people share what reinforces their beliefs, and the next suggested news item is what’s popular or trending, not what’s verified or newsworthy. It was only a matter of time before fiction-minded folks figured this out, and so far, there’s little to convince them to stop.
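To make that feedback loop concrete, consider a toy sketch of an engagement-only feed ranker. This is a hypothetical illustration, not Facebook’s actual algorithm, and the fields and weights are invented; the point is structural: nothing in the score rewards being true.

```typescript
// Toy sketch of an engagement-only ranker (hypothetical; invented fields
// and weights, not Facebook's actual algorithm).
interface NewsItem {
  headline: string;
  clicks: number;
  shares: number;
  verified: boolean; // known to the system, but unused in the score below
}

// Rank purely by engagement: clicks plus double-weighted shares.
// A fabricated story that draws clicks outranks a verified one by design.
function rankFeed(items: NewsItem[]): NewsItem[] {
  return [...items].sort(
    (a, b) => b.clicks + 2 * b.shares - (a.clicks + 2 * a.shares)
  );
}
```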

But people made these systems, and people can correct them. The designers who sweat out the details of how you interact with apps and websites play as big a role in determining how we communicate as our own choices do. Interaction designers are, by and large, smart and empathetic people who’ve made the technological world a friendlier, more useful place by paying close attention to how people use things, then going through endless rounds of trial and error until those things work better.


In recent years, a product or service that “works better” has meant one that’s easy to use, widely adopted, and profitable. These are all admirable goals, and they’re a big part of why we’re currently able to communicate, learn, shop, work, and play in endless ways at global scale. Crafting our apps, websites, and devices to make this possible was a massive effort carried out by thousands of skilled people — and it wasn’t easy.

Bringing civility and credibility back to online discussion should be just as possible — but also just as difficult.

Make the wrong thing harder to do.

The online forum Nextdoor.com was originally envisioned by founder and CEO Nirav Tolia as a way of fostering community through neighborhood-specific discussion boards. To his dismay, it also became an unintended haven for racial profiling, with concerned residents posting about suspicious activity by strangers whom they identified only by ethnicity. After an appeal by the Oakland-based group Neighbors for Racial Justice, Tolia took the unprecedented step of having the interface redesigned to try to reduce profiling.

It worked incredibly well. Early analysis of Nextdoor’s pilot redesign suggests it reduced racist postings by 75% almost immediately upon implementation. The mechanism is astoundingly simple: If you specify a suspect’s race when reporting a crime or suspicious activity, Nextdoor now asks for two other identifying characteristics, like height or an article of clothing; otherwise, the post won’t publish. It turns out that making it just a tiny bit more difficult to post racist comments can dramatically reduce their prevalence.
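As a rough illustration of how ordinary this kind of fix is in engineering terms, here’s a minimal sketch of the rule as described. The types and field names are invented for illustration; this is not Nextdoor’s actual code.

```typescript
// Hypothetical sketch of the publish rule described above.
// Field names are invented; this is not Nextdoor's real implementation.
interface SuspiciousActivityReport {
  description: string;
  race?: string; // optional: the suspect's race, if the poster specifies it
  otherIdentifiers: string[]; // e.g. ["about six feet tall", "red jacket"]
}

// If race is specified, require at least two additional identifying
// characteristics before the post is allowed to publish.
function canPublish(report: SuspiciousActivityReport): boolean {
  if (report.race !== undefined && report.otherIdentifiers.length < 2) {
    return false;
  }
  return true;
}
```

Technically, a validation rule like this is no different from requiring a password to contain a digit; it’s the intent behind it that was new.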

Arriving at this solution probably took considerable effort, but getting this sort of thing right — as well as deciding how many steps it takes to log in, or what happens when you swipe down on your smartphone screen — is what interaction designers do all day. So while Nextdoor’s interface tweak was unusual in terms of intent, it was run-of-the-mill in execution. That is what makes it repeatable.

If Facebook wants to battle fake news, the fix might be as simple as letting readers flag an item as false or automatically linking it to a related Snopes posting. Twitter’s troll problem is even more straightforward: Harassment victims already have some very specific requests (shared blocklists, autoblocking of new accounts, etc.), and some users have even prototyped potential solutions, such as the series of anti-racist Twitter bots recently developed by an NYU student as part of a study in political behavior. The problem isn’t one of mechanism, necessarily, but of will.
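One of those requests, autoblocking of new accounts, is simple enough to sketch. The seven-day threshold and the types below are invented for illustration; this isn’t Twitter’s API or any shipped feature.

```typescript
// Hypothetical sketch of an "autoblock new accounts" rule.
// The threshold is arbitrary, chosen only for illustration.
interface Account {
  handle: string;
  createdAt: Date;
}

const MIN_ACCOUNT_AGE_DAYS = 7;

// Mute mentions from accounts younger than the threshold, since
// throwaway harassment accounts are cheap to create and discard.
function shouldAutoblock(sender: Account, now: Date = new Date()): boolean {
  const msPerDay = 1000 * 60 * 60 * 24;
  const ageDays = (now.getTime() - sender.createdAt.getTime()) / msPerDay;
  return ageDays < MIN_ACCOUNT_AGE_DAYS;
}
```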

There’s no such thing as an unbiased platform.

The primary criticism of this kind of “activist” interface design is that it constitutes social engineering and hinders free speech. But remember: There’s no such thing as an unbiased platform. Every digital interaction encourages certain behaviors, and every media channel has limitations. Actively shaping those parameters to encourage civil, factual discussion isn’t only justifiable — it’s an ethical necessity.

Fortunately, all of our recent soul-searching seems to be moving us in this direction. Both Facebook and Twitter have gone on record in the past month vowing to take these problems more seriously. Whether they, and other internet-based companies, ultimately do enough will depend on steady pressure from concerned users. And it will almost certainly be the subject of legislation at some point.

Interaction design has been a powerful and largely unacknowledged force in shaping our digital world. It has made the homes of strangers as accessible as hotel rooms, lets you buy practically anything on earth while standing in line for coffee, and has given rural African farmers better banking access than our parents had. Given this power, and given the urgency of the problem, we’re out of excuses. It’s time to redesign the internet to keep it safe for civilization — and it’s a task we’re ready to face both as users and designers.

An earlier version of this article appeared on Quartz in December of 2016, under the title “Design is the best weapon we have in the fight against fake news.”


Writer and UX strategist. Founder of Protagonist Studio. Obsessed with design’s hidden consequences. Living in Glasgow, with my heart in the PacNW.