What It Looks Like On The Dark Side of the Web

Talking troll culture, online harassment and the state of the media with media studies scholar Whitney Phillips

© Wikimedia, CC BY 2.0

Whitney Phillips, an assistant professor at Mercer University, is one of the leading experts on trolls, web culture and ambivalent online behaviours. She has written two books on trolls, their motivations and their effects.
In this interview, Whitney talks about the origins of trolling, what we can do about it and why our culture is part of the problem.

Whitney, the last year has been a big one in terms of online harassment, hate speech and tech companies taking different measures to tackle these problems. The internet seems to have become a hostile place. How do you think we ended up here?

I recently worked on a paper where I talk about all of the current issues in terms of coastal redwoods’ root systems. Bear with me (laughs). Coastal redwoods are interesting because their roots provide nutrients and even structural support to the other redwood trees in the grove. Older trees basically raise the young trees by feeding them, which is a beautiful metaphor for interconnectivity. However, that also means that if one tree gets sick, they all get sick.

It’s helpful to think of journalism in terms of a similar process, alongside seemingly separate institutions like advertising and of course social media. Journalism currently privileges sensationalist framings because it is advertising-based, and so needs to predicate itself on outrage and emotional reactivity. That feeds into social media hyper-reactivity, which is problematised further by algorithms that float the most reacted-to stories to the top of trending lists and people’s social media feeds, around and around. Everything is feeding into itself, and the major issue is that all of our capitalistic systems are actually working together beautifully.

In combination, this creates circumstances that are actually not very good for human communication. All our current systems are working together to minimise thoughtfulness and reflection and empathy more broadly. Certain kinds of communication are valued and encouraged, it’s just that those kinds of communication tend to be negative.
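To make that feedback loop concrete: the engagement-ranked feeds described above can be reduced to a toy model. The sketch below is purely illustrative, with an invented Story record and a bare reaction-count score; it is not any platform’s actual ranking algorithm, but it shows why the most reacted-to content floats to the top regardless of its quality.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    reactions: int   # likes, shares, angry comments: all counted alike
    quality: float   # careful-reporting score; never consulted below

def rank_feed(stories: list[Story]) -> list[Story]:
    # Engagement-only ranking: the score is blind to whether reactions
    # are outrage or appreciation, so outrage wins whenever it clicks.
    return sorted(stories, key=lambda s: s.reactions, reverse=True)

feed = rank_feed([
    Story("Measured policy analysis", reactions=40, quality=0.9),
    Story("Sensationalist outrage bait", reactions=4000, quality=0.2),
])
print([s.headline for s in feed])  # the outrage bait floats to the top
```

Any real ranking system is vastly more complicated, but as long as the optimisation target is reactions, the dynamic described here holds.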

© Wikimedia, CC BY 2.0

It’s not just one factor, but a combination of factors…

Yes, exactly. In order to talk about how we got here we have to consider the ways that all of these things feed into themselves and make already big problems even bigger. It’s not just a question of “Twitter has bad moderation policies” or “Facebook sucks at taking down offensive content.” Those things are true but you can’t really talk about Twitter’s failings and Facebook’s failings without talking about failings in newsrooms and failings in the broader culture to cultivate diversity and inclusiveness, and so many other issues. In a nutshell, it’s everything. Everything humans have ever done has brought us to this moment and now we have to figure out what to do next.

You are one of the leading experts on trolls and troll culture, with your research culminating in “This is Why We Can’t Have Nice Things”. How did you end up studying trolls in the first place and, more importantly, how did you do it?

Well, you have to have a strong stomach (laughs). I came to study trolls for two reasons. First, my brother was a troll and he kept trying to get me to go to 4chan. This was in 2007 and my reaction was “I’m not doing that.” What I didn’t understand at the time, and didn’t realise until much later, was that he was trying to troll me by making me go to 4chan.

Concurrent to that, it was an accident of history. I’d been accepted to a PhD program in the late spring and didn’t have “shit to do” in the meantime. I moved down to where my PhD program was and then I was just like “I don’t know, I guess I’ll just…go online.” That was in the run-up to the 2008 election, when Obama was running against John McCain.
 
I started seeing a lot of things online that were very interesting to me. I had gone into the PhD thinking I wanted to study political humour. So, of course, I was drawn to the humorous exchanges that I was seeing on these different political sites. However, I soon started noticing that there was something else, a kind of tone that I couldn’t quite put my finger on.

Around that time my brother also finally convinced me to go to 4chan. And lo and behold, 4chan seemed to be the originator of many of the weird jokes and conversations that were then referenced elsewhere. Or at least it was creating and helping spread a lot of the visual content that ended up in these other places; it was difficult to tell exactly what was happening. There was just a lot of content overlap I wasn’t expecting, and that’s what I found most interesting: that it wasn’t just confined to this one site. That ended up basically foreshadowing the next ten years in popular culture.

Could you give us an example?

Well, they shared many of the same inside jokes and memes (as I came to understand the term; I hadn’t heard the word before then) about the different candidates, often to mock the opposing side, depending on whether they were playing with Obama or McCain. And again, the tone was similar. While this play was clearly political on these partisan political sites, on 4chan it was something a little different, not quite political in any kind of obvious or straightforward way, but still creating all this content that would then get sucked up into these more straightforward political spaces. I ended up writing a thing about the Obama Joker/Socialism image; that was a big one.

© Henry Jenkins

Something told me that there was something here, something more broadly cultural. That was when I started intently observing 4chan, for days and days and weeks and months on end, nonstop, which answers the second question about how one studies trolls: You just have to sit in front of a computer for thousands of hours and look at it all.

Of course, it’s not only that. Apart from a lot of time and patience it is important to have a bounded sense of what you’re trying to look at. Also, figuring out the contours of the community you are trying to study and how it fits into the broader online ecosystem is really critical.

Since you wrote your book, the internet has seemingly gotten a lot messier. Is it still possible to study trolls in the same way as you did?

It’s certainly trickier now. Around 2008, trolling subculture was really clearly bounded: you could look at something with a particular tone or aesthetic and know that trolling was afoot, so to speak. That made it actually quite easy to study trolls online, because you knew what you were seeing when you were seeing it. It was also still easy to pinpoint trolls in public offline, because the jokes were so specific and interior to those communities.

You could really tell who was who. It even got to a point, back when I was still a graduate student, where I could spot trolls in my classes based on their word choices and inside jokes, which they thought were “sooo funny” because no one in the room would know what gross thing they were actually saying. But I knew. And when I would respond with a similarly coded reference they’d turn red, like how did I know.

What started happening over the years is that trolling culture became more and more mainstream, popularised and more integrated into the broader online lexicon. The big watershed moment was in 2011 when “Know Your Meme” came around.

Know Your Meme, the online dictionary for, well, memes — © Screenshot

How come?

Suddenly there was a database for non-trolls to have access to this information and understand troll culture. More and more non-participants were able to engage in those conversations. After about 2012 it became almost impossible to spot a troll. Suddenly, if someone was using these highly insular trolling references it could mean that they were a troll, or it could simply mean that they had been on Facebook.

Plus, post-2012 journalists started covering trolling more and more, in ways that extended beyond the subculture as it had developed on 4chan. To be honest, I don’t really know any longer what people mean when they use the term “troll”. It’s now essentially meaningless because it can refer to so many different behaviours.

You already preempted my next question but maybe it’s worth asking it nonetheless: “Troll” and “Trolling” are terms so commonly used these days that they have, as you pointed out, lost their meaning. Still, can you give me a brief definition of what a troll is or what it used to be?

How far back would you like me to go (laughs)?

Trolling first emerged in an online sense in the 80s and 90s on Usenet, an early distributed discussion network, and people used the term essentially as a social corrective. They would accuse other people of being trolls, or accuse them of being “trollers”, when they were disruptive to the community and were basically pains in the ass.

It was a way of signalling displeasure with someone else’s behaviour. It also sent the message “You are ruining our community”. Concerns about identity deception and community disruption played a big role in the development of the term “troll”. In the early 2000s, when the term troll had already been around for ages, participants on 4chan, particularly on the /b/ or “Random” board, adopted it and started using it as a way of self-identification. Up until that point there really hadn’t been a cohesive word that encapsulated all the behaviours of trolling, but now there was.

4chan and its (in)famous /b/ “Random” board — © Screenshot

What did they call it before then?

When I was doing my research for “This is Why We Can’t Have Nice Things” I talked to some of the slightly older trolls and asked them: “What did you call it in the past?” Their response was mostly: “Oh, we don’t know, there was no word.” Sometimes they would just call it “messing around” or “being a dick”, but there was no unifying sense of identity around these kinds of antagonistic, playful behaviours. The behaviours existed, there was just no linguistic framework for them.

So “trolling” originally stood very broadly for behaviour that was not socially acceptable in online communities?

Sort of. The way people understood it was essentially as disrupting other people’s emotional equilibrium for their own personal enjoyment, or as amusement in the face of someone else’s distress. Of course, those efforts corresponded with extraordinarily offensive, obscene and outrageous behaviour. If the goal is to get a reaction out of someone you won’t achieve that by being thoughtful and gentle in your speech.

You only get a reaction out of someone by being an extreme version of whatever it is that you’re doing. That’s what trolls did and that sense of the term as a kind of antagonistic emotional play has persisted over time.

What do you think of the ubiquitous usage of the term these days?

Well, it creates problems when you take the rigid subcultural framing of trolls and trolling and apply it, for instance, to Nazis. In these cases it minimises dehumanising behaviour by placing it in the context of play, which is the one sense of the term that has persisted over the last decade: that there’s an element of performance or mischief, of purposeful messing with people. Words can be extremely damaging, and applying this more playful framework to explicitly destructive, antagonistic, identity-based harassment has a tendency to minimise the emotional human impact of online antagonism.

That’s why I tend not to use the word very much these days. At best, it complicates conversation about a topic such as online harassment. At worst, it minimises the experience of people who’ve been traumatised.

People often seem to think that trolls are just — excuse my sloppy language — dumb people trying to shoot others down on social media. However, in your book, you mention both Socrates and Arthur Schopenhauer’s “The Art of Controversy” as huge influences for many trolls. Does that mean that we have to rethink how we usually think about trolls?

In a way, sure, at least when thinking about the broader cultural context for this sort of behaviour. An important point is that the trolls I worked with, the subcultural trolls, were always the first people to say “Socrates was a troll you know” or “Schopenhauer was a troll”. They were the ones bringing philosophical frameworks into the conversation.

And it’s not hard to see why. They would then be able to place themselves in this great lineage of Western thought. They had a vested interest in this framing; it made them bigger. To be fair to them, they weren’t wrong. The strategies that Socrates used and the strategies that Schopenhauer described absolutely map onto a lot of trolling behaviours.

So yes, there is a lineage, and the trolls I worked with for the book certainly took a sense of self-satisfaction out of that lineage, getting to claim it and connect themselves to this broader historical or philosophical tradition. That was actually what got me thinking about how trolling rhetoric is embedded in broader mainstream norms.

Schopenhauer and Socrates, figures of inspiration for many trolls — © Wikimedia, CC BY 2.0

Trolling not as a separate element, but something we are all complicit in?

However they define the term, people tend to place trolls on the periphery of the culture and say “These are the bad guys. They’re doing abnormal things. They are not like us.” For many, this “They’re the bad guys and we’re the good guys” dichotomy is very reassuring. But the fact of the matter is that you actually can situate trolling rhetoric within this broader philosophical lineage that people in the West are culturally steeped in their whole lives.

For example, the boundary policing of emotional expression, which is what subcultural trolling basically boils down to, is at the centre of the Western worldview. Logic, rationality and cool, calm collectedness are rhetorical strategies explicitly privileged in the Western tradition. And those qualities also happen to be undeniably male gendered.

The converse, really any kind of emotional expression or sentimentality, is gendered female. And not just gendered female but pathologised, because it’s not idealised male-gendered speech, and is therefore something to push back against. This is normalised in our culture: male-gendered communication is expected to look one way, female-gendered communication another. And male-gendered communication is what’s valued culturally. Trolling rhetoric fits into this cultural bias towards a certain kind of communication, one that values antagonism, that values the delegitimisation of emotional expression and that, based on existing cultural norms, explicitly devalues women.

That’s the core of the problem, and why it’s so difficult to deal with trolls. If trolls really were the bad guys, the cultural outliers so to speak, things would be easy. But what the hell do you do if trolling is actually central to Western thought? What do you do then?

Would you argue that we can learn something from trolls for ourselves?

I think that there’s a lot that we can learn from thinking about how they are more like us than we might like to admit. If trolls are so terrible — and we all agree that we need to find a way to get rid of trolls — what do we do when we’re doing a lot of the same things? Especially in this particular moment when everything is on fire, trolls can help us to question and reassess our mainstream assumptions and norms.

We’ve now talked a great deal about what trolling is, how it has changed over the last couple of years and what it might mean for us. On a more practical note, many people wonder what they should do if they are being attacked by a troll. Is there a best practice behaviour?

© The MIT Press

It’s a tricky question and unfortunately, the answer is that it’s a case-by-case response. The standard perspective that people employ is “Don’t feed the trolls!”. The problem with that framing is that it is embedded in a logic of victim blaming: “You should’ve just ignored it, but instead you let yourself get trolled, don’t you know how to internet.”

In general, I’m very wary of any kind of behavioural injunctions that place the onus on the person being attacked — especially when those attacks are aimed so frequently at historically underrepresented populations. Attackers shouldn’t attack. Period. It’s the same thing when we’re talking about sexual assault or issues of sexual violence. It’s never the victim’s fault. The only person who is responsible for the things that abusers do is the abuser. The same with trolling. The troll is responsible for the choice to troll others. That’s what they should not be doing.

That’s the first point. The other point is that the stakes are especially high right now, especially for marginalized communities, and the variables are hard to wrap your head around. How somebody can respond (or even should respond, though I don’t like imperative-type framings because that implies there’s one right way to do it) depends a lot on why the person is being attacked, who’s attacking them, whether or not the target or the attacker has a major social media or even news platform, whether or not the attack is likely to generate news coverage, all kinds of variables.

Depending on the circumstances, including basic personality stuff related to the people involved, either responding or not responding can be equally dangerous for the person being targeted. And there are a lot of ways a person can do both things; there’s a whole menu of options for responding, and for choosing not to. Approaches that might be the best option for one person wouldn’t be best for another; they could even be catastrophic.

No “gold standard” for reacting to trolls then?

It would be great if there were a universal way of responding. What I support is targets getting to make whatever choices they’re comfortable with in that situation. If not responding feels like the right thing, then a person gets to do that, and we don’t get to say they should have done it differently. If responding feels like the right thing to do, then we don’t get to tell someone that that’s the wrong thing either. We don’t get to tell people how they should react or how they should feel. The right thing to do is to support people under attack, and not add to their burden by holding them even partially responsible for their harasser’s choice to harass them.

One development that we have seen recently in several countries is that many advocate for platforms to do more to address the issues of online harassment and hate speech. But is there something social media platforms can or should do? And would it be effective?

This is a very difficult point. Because from a purely business-minded perspective, these companies don’t have much immediate financial incentive to fully moderate harassment and other problematic behaviour. It’s expensive to moderate, and the more riled up people get, the more people react to things — including harassment, as well as news stories about that harassment — the easier it is to commoditise those spaces. Really putting a dent in the problem would really put a dent in their bottom line.

That’s not to say that people who work at Twitter or Facebook somehow enjoy harassment. This point isn’t about them as people, one way or the other. The point is that, when we’re thinking about the capitalistic structures at play, capitalism, which isn’t interested in ethics or the lives of people, says “You are a business and therefore you have to maximise your profit potential.” Unfortunately, online abuse and harassment, just like sensationalist news content, happens to align with that objective. We’re already at an impasse because it’s good for business, even if the people within that business hate it and wish they could do something more substantive about it. Economic pressures make everything more complicated, is what it comes down to.

© Marcin Wichary, CC BY 2.0

At the same time, many tech companies say a lot of the right words about harassment; Twitter, for instance, has announced an expansion of its moderation policies, and Reddit has banned subreddits that glorify or incite violence. In both cases, they say that they are focused on context as a determinant of whether something is OK to post or whether it violates their terms of service.

That’s great, except for the fact that establishing context online is one of the most difficult things you can do, something my co-author and I have talked about a lot. In order to establish context, you have to know the actors and you have to know the relationship between the actors. Basically, you have to be able to discern tone of voice between all these actors and then you have to figure out what the impact is once something leaves this initial sort of community.

That is not something that’s easy to accomplish just by looking at an exchange, especially when you consider the human labour issues involved. Moderators won’t have three days or even three hours or maybe even three minutes to fully research what somebody’s joke really meant, there’s too much content to sift through, too much conversation. Too much context, really, to figure out the context.
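A hypothetical illustration of the point: the toy filter below, with invented example posts, flags content by keyword alone. The same sentence can be friendly banter or targeted abuse depending on who is speaking to whom, which is exactly the contextual information a keyword match, or a moderator with three minutes per case, does not have. This is a minimal sketch, not a description of any real platform’s tooling.

```python
BLOCKLIST = {"troll", "idiot"}

def flag_post(text: str) -> bool:
    # Context-free moderation: all the filter ever sees is the string.
    return any(word in text.lower() for word in BLOCKLIST)

# The same phrase in two very different situations:
banter = "haha you're such a troll, see you at dinner"  # friends joking
abuse = "you're such a troll, nobody wants you here"    # targeted harassment

print(flag_post(banter), flag_post(abuse))  # True True: indistinguishable
```

Judging the difference would require knowing the actors, their relationship and their history, none of which lives in the post itself.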

But the tech companies must know that content moderation based on context is a Herculean task. What, then, are their interests in promoting this solution?

Talking about context is a clever way of signalling: “Don’t worry, we’re not going to violate your free speech. Don’t worry, we’re going to take into account whether or not it’s satire.” But I don’t understand how that’s enforceable, especially given their history of only selectively getting involved when there is a bad publicity campaign.

© Giuseppe Milo, CC BY 2.0

For instance, when actress Leslie Jones was attacked they were suddenly all about it, because the press was so bad. But in other cases, such as Milo Yiannopoulos, they didn’t get involved for a long time. Their behaviour is already super inconsistent.

All this is vexing because these issues, of course, feed into problems in journalism and problems in search and problems in representation more generally. All of this is part of the root system I described at the beginning. Frankly, I don’t know how one could solve it just within the platforms. I don’t know if they can. I don’t know if they want to, I don’t know if they will even try.

Apart from whether platforms can or want to do something, there is also the question of whether they should, and whether we should give platforms the power to decide what is and what is not allowed.

Yes, but I do think that it’s important to keep the “should” on the table. The ethical imperative is that we need to find ways to create the most robust democratic spaces that we can and in order to do that we must make sure that extremist fringes are not minimising everybody else’s speech.

I know that there is a “slippery slope” argument as to whether we really want Twitter, Facebook and Co making decisions about what we can and can’t talk about. I understand why people have those concerns. But I also think it’s really important not to lose sight of the fact that if you believe in democracy, you have to contend with the fact that democracy is not a free-for-all.

For example, even in the US, which has extremely vigorous free speech protections, there are limits to those protections. There is a basic recognition that people can’t just do and say exactly what they want at all times. That’s something even the staunchest proponents of free speech tend to acknowledge.

There has to be some limit, at some point, in order for democracy to even function, let alone flourish. That’s literally why laws exist. I believe in democracy and I’m committed to democracy and sometimes in a democracy that means that you’ve got to intervene. There’s no way out of that. Even though it creates all these problems that make people nervous around free speech and censorship, the alternative of doing nothing is worse.

What do you personally think of a proposal such as “no anonymity on the internet”, which politicians often put forward as an ideal solution? Personally, I don’t buy this argument, but I’d be interested in your opinion. Apart from the question of whether it’s actually enforceable, would it be effective?

Two points in response to that. The first is that the Charlottesville rally is a really good example of people being willing to identify with extremist positions without any hoods, without any obscuring of their identities. People are capable of being terrible racists and bigots, violent and hateful, under their own names.

Nazis during the Charlottesville rally — © Anthony Crider

The lesson of history is that people don’t need to be anonymous in order to dehumanise and commit atrocities against fellow human beings. That’s the first point.

The second point is that there are a lot of assumptions about the role that anonymity plays in the situation more broadly. It’s a very depressing and distressing assumption that if you give a person a mask, they’re just two clicks away from engaging in the worst kinds of behaviour imaginable. And to an extent, especially seeing how many terrible things are undertaken by anonymous actors online, that seems intuitively true.

A counter-protester at the Charlottesville rally gives a white supremacist the middle finger. The white supremacist responds with a Nazi salute — © Evan Nesterak

However, the research doesn’t actually support that thesis. There is no overwhelming evidence that anonymity directly causes bad behaviour. More significant is the role group norms play in influencing behaviour, bad or otherwise. A fascinating study from 2012 shows that if the norm of a particular group is to behave in destructive, malignant and violent ways, anonymity will enhance that because of the reduced social risk of online environments. But if the group’s norms are to be conciliatory and generous and compassionate, then anonymity is going to enhance that instead.

That means the solutions are not going to be tech solutions, because we are dealing with cultural problems. We have to think about groups, norms and the values of communities, and those things don’t have easy solutions.

So we should rather start with ourselves, our culture and how we treat others, instead of seeking quick technological fixes?

Yes, technology can’t be the solution because the problem is cultural. Many technologies are reflective of the culture, which creates an additional feedback loop, but the solution cannot be in tech. It’s a question of education more than anything.

We’re in trouble but it’s not because of technology. We’re in trouble because of deep-seated cultural biases, myopia around diversity and representation, and internalized supremacist inclinations. That’s what got us here, and thinking through that is going to be the only thing that gets us out. Unfortunately, there’s just no button for that.


Felix Simon is a journalist and regularly writes for the “Frankfurter Allgemeine Zeitung”, “Die Welt”, the “Telegraph” and other outlets. He holds a BA in Film and Media Studies from Goethe University Frankfurt and an MSc in Social Science of the Internet from the University of Oxford, where he works as a research assistant at the Reuters Institute for the Study of Journalism. He tweets under @_FelixSimon_.