Designing Against Abuse in the Age of Twitter

Arunabh Satpathy
Apr 6, 2017 · 4 min read

Illustration by Amelia Shull

[NOTE: This piece was first published as part of a column called “Design Eats the World” in The Daily of the University of Washington HERE.]

Twitter has been through a lot, as have its users. From its roots as a revolutionary social media platform to a legacy product used by both the President and your aunt, it has become the archetype of rapid-fire online democracy. As the platform has grown in influence, reaching close to 320 million active users in 2016, several massive incidents of abuse have dented its image. While Twitter's design as a platform for free speech is intact, its design against abuse and harassment has historically been poor. I suspect this is because the company has consistently designed reactively rather than proactively.

Harassment on social media isn't new or unique. From the cyberbullying of children on Facebook and Instagram to large-scale harassment campaigns against people like Leslie Jones, abuse and harassment have grown with these companies. Both the frequency and the scale of abuse on these platforms have increasingly taken center stage in discussions about social media.

Twitter in particular has become a touchstone for online trolls, partly because of the platform's open design and partly because of its utter inability to think ahead about balancing the free speech it encourages against the abuse that speech can carry.

Its problems aren't new. As early as 2008, blogger Ariel Waldman received threatening tweets that doxxed her and called her names. One of Twitter's co-founders apologized for the incident, but he also laid down the company's essentially hands-off policy: "Twitter is a communication utility, not a mediator of content."

That attitude may have been swept under the rug in 2008 when the problem was still relatively small, but it has metastasized with the growth of the platform. In 2014, a Pew study found that 40 percent of online users had been harassed in some form, and 66 percent of those said their most recent incident of harassment occurred on social media. Young adults, and young women in particular, were especially at risk. Finally, the 2016 election cycle effectively weaponized online harassment for political ends.

This has led to real business consequences for Twitter. For instance, it was reported that Disney may have decided not to pursue an acquisition of Twitter because of its harassment problem and the image it created.

Public Domain sketch by ijmaki

Why has Twitter been unable to design good solutions? First, it's extremely difficult: allegations of abuse can be impossible to disentangle, and reporting systems are themselves vulnerable to misuse. Baseline definitions of harassment, and agreement over them, also differ from platform to platform. Moderation is an inherently messy process that must nonetheless work efficiently at scale. But I suspect the biggest reason is that designing against abuse is not just a reactive act; it's an imaginative one.

Some design theorists have posited that any product designed outward from a person's actual experience relies on engagement and respect, rather than efficiency, as its north star. Twitter has clearly not been doing that: many of its user-protection features were introduced after the fact.

For instance, as late as 2013, the ability to report individual tweets existed only on iPhone, even as an activist in the UK received a flood of rape threats for campaigning to put Jane Austen on UK banknotes. In response to the story, a Twitter spokesman told The Age, "…we plan to bring this functionality to other platforms, including Android and the web."

Around the time of Gamergate, a coordinated campaign of harassment against female game designers among others, Twitter rolled out "more streamlined forms for reporting abuse, dispensing with its cumbersome nine-part questionnaire and adding back-end flagging tools."

To be fair, Twitter's tone has changed recently. In early 2016, it created a "Trust and Safety Council" to give "input on our safety products, policies, and programs" and to create a safer Twitter. A council is better positioned to look forward and anticipate trends, but there is no guarantee it will do so in the long run. Its effectiveness remains to be seen, especially after trolls overran the internet during the 2016 election.

A couple of weeks ago, a new default profile picture replaced the iconic "egg," a response to the default avatar's association with accounts that harass people en masse. On its design blog, the Twitter design team wrote, "We've noticed patterns of behavior with accounts that are created only to harass others — often they don't take the time to personalize their accounts."

Hopefully, the troubles of Twitter and its users will set the template for ensuring that our technology never again becomes a threat to the very people it serves.

Reach columnist Arunabh Satpathy at opinion@dailyuw.com. Twitter: @sarunabh

