What World of Warcraft can teach us about Twitter harassment

Rosie Pringle
Jan 13, 2016
Troll image credit Blizzard (I think). Raptors are cool.

There’s been a long running debate about anonymous/semi-anonymous networks versus ones which require real name use. I’m not going to go there. The pros and cons of each have been well-documented, and you open up various cans of worms by requiring everyone to use their real names online.

Twitter is probably the most prominent (optionally) anonymous platform. Many have expressed the opinion that it’s impossible to police because of the nature of anonymity.

One common critique of moderation is the need to maintain neutrality (and by extension, free speech). Never mind that the First Amendment constrains the American government, not a private company, and is largely irrelevant on an anonymous internet platform anyway.

I completely understand the impulse toward “neutrality,” but any network that answers to advertisers and shareholders is inevitably going to lose it. You’ve already given up any semblance of neutrality when you create a curated service such as Moments to push specific content forward.

The mission of Twitter is stated thus:

To give everyone the power to create and share ideas and information instantly, without barriers.

This makes sense, as Twitter is trying its damnedest to walk the tightrope between turning into an Orwellian censorship monster and answering to its shareholders.

However, there are already barriers built into the system. You can’t impersonate other people. You can’t post another user’s personal information. You can’t set up an account whose primary purpose is abusing others (very dicey language here). Twitter has even implemented ghost banning for spammers.

Twitter has taken a stance on these items by writing them down in their TOS. Otherwise, they would end up looking much more like 4chan.


I’d like to explore some design and engineering alternatives that attempt to stem harassment and preserve neutrality without asking a network to police the speech of millions (an impractical, near-impossible task with horrific psychological implications for the people doing the policing). Instead, I’d like to approach it from the stance of encouraging intelligent discourse and discouraging harassment. By designing a system, you are inherently taking a stance. You can choose what to encourage or discourage through your design patterns.

As a designer, I try to keep in mind ways that people use or abuse the things I create. Otherwise I end up looking like the NRA — just making and distributing the guns and ammo, and abdicating responsibility for how people use them.

A great place we can look to for guidance is, ironically, the video game industry. As a former MMO junkie and current design researcher, let me offer some perspective (I’d love it if some people from inside the gaming industry chimed in!).

MMOs in particular have a slightly different mission than most other social networks. They explicitly want to create fun spaces and communities so that they can keep your subscription (you could call it a form of safe space). If people feel threatened or harassed, they will cancel and go do something else with their time.

Granted, there is still much work to be done, but insights can be gleaned from examining what these systems have done to combat harassment, especially since online gaming is driven primarily by anonymity and the creation of alternate aliases and personalities.

The profanity filter

Wait, hear me out. I’m not advocating for a profanity filter that is turned on for everyone, with #&*$&*s abounding instead of fucks and shits.

One of the most graceful implementations of a profanity filter I’ve seen is in a very old MMO called Ultima Online. You can add whatever words you want to your very own personal profanity filter that only you can see.

UO understood that it would be impossible for them to keep up with whatever was considered offensive slang, and that different kinds of people find different things offensive. To maintain their neutrality, they put engineering effort into creating tools for the people.

How it works

You have a self-selected profanity filter that you can turn on or off, and you fill it in with your offensive words of choice. For example, if I am triggered by the words “haggis-eater” or “froggie” because of my Scottish and French heritage, I can add those and any variations I can come up with.

The system then filters the word “froggie” whenever someone says it around me, changing it to a comical #%$@^% for my eyes only. The benefit is that it gives users agency over what they see, especially in an environment that can turn nasty as quickly as online gaming can.

The link to edit the profanity list. (UI note: aren’t old game interfaces charming? Look at those gem buttons. So shiny.)
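To make the mechanics concrete, here is a minimal sketch of how a personal, client-side filter in the spirit of UO’s might work. The word list, mask string, and function names are all illustrative, not anything UO or Twitter actually ships.

```python
import re

# A personal, client-side word filter: each user keeps their own list, and
# matching words are masked before the message is rendered for that user only.
MASK = "#%$@^%"

def build_filter(words):
    """Compile the user's personal word list into one case-insensitive pattern."""
    escaped = (re.escape(w) for w in words)
    return re.compile(r"\b(" + "|".join(escaped) + r")\b", re.IGNORECASE)

def apply_filter(pattern, message):
    """Replace any filtered word with a comical mask, for this user's eyes only."""
    return pattern.sub(MASK, message)

my_filter = build_filter(["froggie", "haggis-eater"])
print(apply_filter(my_filter, "Go home, froggie!"))  # -> "Go home, #%$@^%!"
```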

Limitations

Obviously, this isn’t a perfect solution. Someone can say “froggy” instead. Or “frøggi3.” But it’s a nice first step. Speaking from personal experience, it takes some of the bite out when people have to work a little harder to harass others.
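One common mitigation (my suggestion, not something UO did as far as I know) is to normalize look-alike characters before matching, so simple substitutions collapse back to the listed word. A tiny sketch, with a deliberately incomplete substitution table:

```python
import unicodedata

# Map a few common letter substitutions; a real table would be much larger.
LEET = str.maketrans({"3": "e", "1": "i", "0": "o", "$": "s", "@": "a"})

def normalize(text):
    """Fold case, strip accents, and undo simple character substitutions."""
    # "ø" has no Unicode decomposition, so handle it explicitly.
    text = text.lower().replace("ø", "o")
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return text.translate(LEET)

print(normalize("FRØGGI3"))  # -> "froggie", which the personal filter now catches
```

Variant spellings like “froggy” would still slip through without fuzzy matching, which is why I’d call this a mitigation rather than a fix.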

Some of the most controversial game mechanic discussions stem from poorly designed ban mechanics. I experienced this firsthand when a friend in World of Warcraft, slightly inebriated, accidentally substituted a k for an l while trying to say he liked pet turtles. The automatic profanity system detected the typo and banned him for 12 hours. I petitioned a Game Master to review it; they told me they saw the mistake, but that there was nothing they could do. This kind of sloppy engineering is exactly what we’re trying to avoid here.

The image filter

An interesting case study is when harassers flooded Gawker’s Kinja comment network with violent GIFs. Gawker responded by adding a “click to view” image option that warns of offensive content.

Twitter media already has similar functionality. I took it for a little test drive, but it didn’t work very well. I have a hunch it’s meant more for graphic material relating to gory news stories.

Limitations

This method puts the onus on the reader to chance it and click to view the image, or not. Hopefully the community can tweet warnings to other people when an image turns out to be offensive, but that’s a lot to expect or ask for.

A potential harasser could circumvent a self-moderated profanity filter by using images with text in them, for example making some glitter text with the word “froggie” in it. Existing, reasonably robust character recognition technology could address this.
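As a rough illustration of what that could look like, here is a sketch using the open-source Tesseract engine via the pytesseract wrapper. This is my choice of tool, not anything Twitter uses as far as I know, and my_filter and the file name refer back to the earlier profanity-filter sketch.

```python
from PIL import Image
import pytesseract  # wrapper around the open-source Tesseract OCR engine

def image_trips_filter(pattern, image_path):
    """Return True if text baked into the image matches the user's personal filter."""
    extracted = pytesseract.image_to_string(Image.open(image_path))
    return bool(pattern.search(extracted))

# "my_filter" is the compiled pattern from the earlier sketch; the path is made up.
if image_trips_filter(my_filter, "glitter_froggie.png"):
    print("Hide this image behind a click-to-view warning.")
```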

Shared block lists / multiple-user-managed lists

Some users have already put together scripts that automatically block people based on words or mentions. If you allowed users to build lists together, such as me and my other friends of French descent maintaining a list of anti-froggies, you could let users mute or block entire lists. It could look something like this.
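On the data side, the client could simply take the union of every list a user subscribes to. The list names, handles, and structure below are invented for illustration.

```python
# Friend-curated lists the user has subscribed to; names and handles are made up.
shared_lists = {
    "anti-froggies": {"troll_account_1", "troll_account_2"},
    "spam-bots": {"bot_9000", "troll_account_2"},
}
my_subscriptions = ["anti-froggies", "spam-bots"]
my_exceptions = {"troll_account_2"}  # a friend whose bad jokes I tolerate

# Union of every subscribed list, minus personal exceptions.
muted = set().union(*(shared_lists[name] for name in my_subscriptions)) - my_exceptions

def should_hide(author):
    """Client-side check applied to each incoming tweet."""
    return author in muted

print(should_hide("troll_account_1"))  # -> True
print(should_hide("troll_account_2"))  # -> False (personal exception)
```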

Limitations

Obviously, this could go a long way toward making Twitter a more polarized network. However, the intent of simple tools like these is to raise discussions to a higher level and filter out the garbage. Again, Twitter has already spent this kind of engineering effort building anti-spam tools.

But if users are already building solutions, why not explore making them official? It wouldn’t be the first time Twitter folded in functionality that was first conceptualized by users (hashtags and retweets both started as user conventions).

People could curate hate-lists, but the truth is that they already do that.

A nice mix of lists.

There are already third-party tools that address this issue, but building it into the platform would do wonders for streamlining things. The existing import/export block list functionality is limited and could be expanded upon.
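For instance, merging a friend’s exported list into your own is the kind of thing a short script handles today but the platform could do natively. The sketch below assumes the export is a one-column CSV of account IDs, which is roughly what Twitter’s current export looks like; the file names are made up.

```python
import csv

def load_block_list(path):
    """Read an exported block list, assumed to be one account ID per row."""
    with open(path, newline="") as f:
        return {row[0] for row in csv.reader(f) if row}

# Merge and de-duplicate two exported lists, then write a combined file.
merged = load_block_list("my_blocks.csv") | load_block_list("friends_blocks.csv")
with open("merged_blocks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for account_id in sorted(merged):
        writer.writerow([account_id])
```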

Example of a tool that can be used versus troll accounts.

A real name behind the anonymity

A more radical adjustment would be to ask for real names behind the curtain during registration. I’m on the fence about this one, especially given various governments’ propensity to snoop on what “troublesome” users and activists might be up to. If Twitter has the means to protect user identities from governments interested in them for less-than-savory purposes, this could be a valid solution.

The “don’t give them attention/just ignore them” argument

Oftentimes we hear arguments like “Just ignore them!” or “That kind of attention means you’re successful.” These sidestep the point of simple tools like these.

By engineering solutions that give users more control, you are giving them additional power to ignore their abusers. If people can’t use a service because they are being flooded with hate, gore, or whatever other unintelligent drivel through easily addressed means, they are effectively silenced.

Basically, Twitter has to take a stance on whether or not they care about giving marginalized groups a platform to share their opinions.

What’s the point?

All of these methods attempt to give the user more tools to empower themselves, rather than leaving the responsibility to Twitter to police them. The goal isn’t to scrub the network clean — it’s to make it a little more difficult for people to ruin each other’s days.

The other point I’d like to make is that by taking some basic steps to acknowledge harassment, Twitter can throw some real weight behind its pledges to promote diversity and to “make inclusiveness a cornerstone of our culture.”

As a designer, I’d welcome any constructive feedback on why these methods would or wouldn’t be a good idea. Keep in mind the rule of edge cases: if someone can only circumvent a proposed system by spending three hours sidestepping it, that doesn’t necessarily mean the system is a failure.

I’d also like to invite anyone to share other tools that have been successful at filtering harassment, on Twitter or any other online platform.

Thanks for reading.
