How We Unlocked a Whole Other Level of “Troll”

terri harel
OnlineSOS
Aug 22, 2019 · 15 min read

Trolling hasn’t peaked, and we’re headed for another contentious election cycle. Can we turn things around? How a disturbing trend is sending society, security and democracy into the deep end.

Illustration by Alex Chen

Just over five years ago, Gamergate ushered in a new era of online harassment, manipulation and misinformation. The explosive campaign changed the way abusers staged attacks on individuals and empowered them to manipulate national conversations. This played out poorly for targeted individuals, of course, and also for the country after the 2016 presidential election. That legacy persists for both today.

Conversation about the campaign’s impact was reignited recently by waves of tragic violence driven by white supremacy. Tried and tested tactics of individual harassment (like impersonation or mob harassment) have been applied to larger audiences, which can radicalize individuals, reshape national conversation and eventually provoke real-life violence. Campaigns against individuals can spiral, damaging not only the individual’s life but also the lives of anyone who comes to the person’s defense or is labeled an “enemy” by the abuser(s), and eventually entire groups or populations.

Just this week a man named Eric Lin was arrested on charges of making threats. Lin, a Nazi sympathizer, had sent a Hispanic woman threats of violence and other gruesome messages; he also voiced support for a race war and for wiping out non-white people. What we’ve found in our analysis of hundreds of news articles and cases is that the harassment of individuals can serve as the nucleus of a much more antagonistic organism that aims to provoke, erode social cohesion or inflict damage on society. An attack on an individual sets the stage for broader nefarious action. Gamergate also played out this way. What began as an attack on one individual became an attack on an entire group of people abusers labeled “social justice warriors.”

To target the group(s), one tactic Gamergate abusers used was impersonation. Abusers set up fake accounts posing as activists and/or people of color and then gamed various Twitter hashtags to harass, intimidate and spread misinformation. For anyone who doubts the relevance of such campaigns today, consider that just this week, activists identified sockpuppet accounts impersonating Jews to promote anti-Semitism and sockpuppet accounts impersonating trans people to associate trans people with white nationalism. Both campaigns were discussed on 4chan message boards. We’ll discuss this trend, its origins and its impacts in more detail in the following passages.

As we enter what is shaping up to be an even more contentious and divisive 2020 race, there is no more urgent time to address online harassment. We have a unique window of opportunity to avoid repeating what we wish hadn’t been part of the 2016 cycle. We have a deeper understanding of the tactics and, most importantly, of how they can play out at scale, as they did in 2016.

It’s also worth noting that the communities often targeted by campaigns like Gamergate are marginalized (women of color like Shireen Mitchell, Sydette Harry, I’Nasah Crockett, and Shafiqah Hudson, for example), and their members have long been warning platforms and the media about the abuse, violence and danger online harassment poses. If there is any other lesson to be learned from history, it’s that targeted individuals not only bear the brunt of abusers’ intentions to silence and exclude them, but have also played a critical and prescient role in identifying harassment tactics and describing their potential impacts. Let’s finally listen. Let’s finally change the internet for the better.

This post is an abridged and edited version of the first part of our report, Into 2020: The State of Online Harassment and Opportunities for Collaboration. In this excerpt, we describe how we got to where we are today and why it’s critical to learn from the past. We chronicle the evolution from “flaming” in 1994 to the next-level “trolling” we see today.

In the coming weeks, we’ll release lessons learned from the report in a series of posts. Follow us for updates. You can also download the full report here.

A Brief History of Online Harassment

Online harassment isn’t new. As soon as networked computers popped up, people found ways to abuse these new forms of communication and community. An article published in the New Yorker in 1994 details its author’s first “flame,” and, no, it’s not the romantic kind. The writer, naive and excited about the new world wide “net,” has his cyber-bliss rudely interrupted by a vile email supposedly sent by a fellow journalist. Newly “flamed,” the writer sets off to find out what other bad behavior might exist on the net. To his horror, he discovers “flame wars” (exchanges of insulting messages) on message boards and websites around the net. He’s frightened by what he finds, and his once bright outlook on the wild, wild web turns suspicious and dark.

This kind of flaming or “trolling” didn’t always have its roots in malicious intimidation. But it soon morphed into a tactic of more harmful intent. Researcher Ben Radford argued that trolls saw themselves as digital clowns, exposing the foibles and folly of a community. Whatever noble view trolls held of themselves, however, they still acted with the intent to create an unpleasant experience for others online. Importantly, the provocation often wasn’t targeted toward individuals.

An Example of Trolling
A troll joins an astronomy discussion forum, posing as a genuinely interested user. They then vehemently assert that the Earth is flat in order to provoke an emotional and verbal reaction from community members.

The OnlineSOS team developed a four-level progression to describe how online harassment tactics have evolved over time:

Level 1: Trolling for the Lulz
Level 2: Trolling for Individual Attacks, defined by Gamergate
Level 3: Trolling for Ideological Attacks, defined by the fallout from Gamergate like attacks on “social justice warriors” and encouragement of men’s rights advocates
Level 4: Trolling for Large-Scale Manipulation, defined by the 2016 election and widespread disinformation and misinformation campaigns

Illustration by Alex Chen, text by Allison Gauss

As we detailed in a previous blog post, the catch-all term “trolling” can be broken down into much more specific tactics. Naming and defining those tactics of harassment can help us identify what’s happening to an individual and develop an appropriate response. Let’s track the tactics used within each level.

Level 1 — Trolling for the Lulz

This early type of trolling was not unlike someone who comes upon a peaceful pond and throws rocks in it to make waves and disturb the ecosystem. It even seems like a victimless crime, if you don’t account for all the creatures that live in the pond.

Tactics of “trolling for the lulz,” as this was known, were easily translated into behaviors with malicious intent, including:

  • Sending messages that range from rude remarks to encouraging suicide
  • Cyberstalking, an invasion of privacy and source of intimidation that can also involve identity fraud and financial hacking
  • Doxxing, releasing private information to the public, which puts targets at risk for stalking, violence, and physical intimidation
  • SWATting, sending police to a target’s home, which puts targets (and innocent bystanders) in harm’s way
  • Coordinating mob harassment, which can silence and discredit, especially journalists, activists, and oppressed groups
  • Spreading false information, which damages targets’ professional reputations and their ability to work

In this period, spamming, doxxing (when information considered private, like a home address or social security number, is published or broadcast online), and non-consensual distribution of intimate images were used to threaten, intimidate, and silence specific targets. Examples of high-profile cases and targets include Kathy Sierra (2007), Anita Sarkeesian (2012 and 2014), Caroline Criado-Perez (2013), the spamming case at Occidental College (2013), and Jennifer Lawrence (2014).

These cases inflicted psychological trauma and distress on targeted individuals, and had a serious impact on their personal and professional lives. Blogger and software designer Kathy Sierra, fearing for her own safety and that of her family, disappeared from online spaces for an extended period of time and moved across the country. Anita Sarkeesian needed to leave her home. Caroline Criado-Perez received so many threats of rape and violence on Twitter that she was unable to function. In response, Twitter, which had been around for seven years by then, added a “Report Abuse” button.

Level 2 — Trolling for Individual Attacks

The Gamergate Period: 2014–2016

Unlike “trolling for the lulz,” this level of trolling turned familiar tactics on specific individuals. Instead of inciting arguments within a community for the sake of entertainment, a troll might attack an individual with whom they disagree. Old tactics such as impersonation and outrageous claims would be used to harm a target.

This can be compared to someone throwing rocks at another individual. The tools are the same, but by turning them on an individual, the troll is able to do a great deal of harm.

Example: A troll impersonates their ex-partner and posts offensive content to draw criticism and negative attention to the target. The goal is to cause harm specifically to the ex-partner, in the form of emotional suffering, reputational damage, and further harassment.

By 2014, distinctions between life on and offline had gotten blurry. iPhones and other smartphones had been around for almost a decade, and most people in America happily carried around their digital lives in their pockets. Twitter and other social media platforms were not only ubiquitous for fun, but also became more important to people’s professional lives.

But this increased access to online communities and platforms was a two-way street. It also gave malicious actors greater reach. They could find and target individuals 24 hours a day, 7 days a week. It became increasingly easy to ramp up the intensity of abuse, too, using platforms like Reddit or 4chan as staging grounds. Pew Research’s 2014 report on online harassment found that 40 percent of Americans had been harassed online by that point and that 73 percent had witnessed some form of harassment, ranging from “efforts to purposefully embarrass someone” to more severe forms like sexual harassment and stalking.

Zoë Quinn, an independent game developer, began receiving online harassment after releasing a game in 2013. Misogyny had been present in online spaces, especially within the gaming community, since the advent of the internet, so this was not surprising. However, the end of her short relationship with a man named Eron Gjoni sparked an online harassment campaign that lasted years. Its effects still reverberate.

During Gamergate, men’s rights advocates (MRA groups) and other anti-feminists used platforms like 4chan to coordinate their attacks on specific individuals. This marked a turning point in coverage of online harassment, particularly mob harassment.

Even months and years after “The Zoe Post,” Gamergate worked like a hurricane that sweeps up and destroys everything in its path. Anyone who critiqued sexism in gaming, voiced support for targets, or was deemed an SJW (“social justice warrior,” a term created by harassers to describe anyone promoting progressive views, including feminism) could be targeted.

What’s more, 4chan denizens uninvolved in the original events of Gamergate were easily recruited and egged on to become harassers. “Lot of support, and a ton of people are picking up the self-chastising when people start getting insulting. It took a few days of 4–5 of us doing it but it’s taking off,” wrote one user involved in organizing IRC chats, according to an Ars Technica report. In another transcript, a user wrote, “i couldnt care less about vidya [Quinn’s game], i just want to see zoe receive her comeuppance.”

Gamergate even consolidated other campaigns in its wake, making it a Category 5 storm of ongoing mob harassment. Historically, trolls had often posed as members of an existing community so that their content would be heard and received by targets. But this behavior was weaponized for ideological purposes, as was the case with the #endfathersday campaign (a hashtag that trended on Twitter in June 2014), which was taken up by Gamergate. The campaign intended to create discord and backlash within the feminist community. Harassers created sockpuppet accounts (identities used to deceive and/or maintain anonymity) designed to look like politically active feminists and posted content to disrupt communities and weaken one side of a debate.

Gamergate sparked new models for harassment and left many victims and targets in its path. For example, it capitalized on the manufactured narrative that the controversy had nothing to do with Quinn personally and everything to do with “ethics in gaming journalism.” It became a successful experiment in creating unfounded ideological strife.

Level 3 — Trolling for Ideological Attacks

Whereas many earlier cases of harassment sought to harm targeted individuals, during and after Gamergate harassers began using digital tools and tactics to weaken an ideological group or community (although Gamergate isn’t the first such case, it brought these tactics to the mainstream). These coordinated campaigns may also promote a specific ideology. Harassers may target individuals who are part of the opposing group or use misinformation to turn public opinion against the targeted group.

Going back to the rocks analogy, this form of harassment can be visualized as someone throwing rocks at a group of people based on their identities or ideologies. The aggression may come from a lone troll who skillfully uses trolling tactics (rocks) or from a coordinated attack by like-minded trolls. The goal is to weaken or intimidate the other side of a debate.

Example: A group of anti-feminists uses bots, dogpiling (overwhelming a target with questions, threats, insults and other tactics meant to discredit, silence, or shame a target), and other tactics to threaten and intimidate leaders of a women’s rights group. The goal is to silence these activists and discourage others from getting involved in the cause.

It’s important to note that the concerns about such organized harassment that came into sharp relief during and after Gamergate had been voiced loudly, but were mostly ignored by the general public and platforms. Campaigns against Black women, in particular, were present long before Gamergate, but little had been done to stop them. For example, Black women on Twitter had for years raised concerns about harassment and pointed out campaigns to discredit and silence them.

During the Gamergate period, Shafiqah Hudson (@sassycrass), @so_treu, and other Twitter users revealed that the #endfathersday campaign was a coordinated effort by trolls posing as feminists. Over 200 deceptive accounts were identified and documented, under the hashtag #yourslipisshowing, in a Storify in 2014 to help people identify and block such accounts. (Unfortunately Storify shut down in 2018, but the Tweets were preserved by Twitter user @kazamacat in Twitter Moments.) ∎

Activists have been instrumental in highlighting the danger of such abuse and many warned that Gamergate and the tactics used to organize and perpetuate it would have far-reaching consequences for campaigns of harassment, manipulation, and hate. Those who have raised the issue of online harassment early and often have been, themselves, members of targeted marginalized communities. Shireen Mitchell, a founder, speaker, author, and activist, has pointed this out extensively, as has Jamia Wilson, the director of Feminist Press, among many others. ∎ ∎

To further illustrate this, in 2018 NPR tweeted about an investigation into how the alt-right recruited members on gaming networks. The replies, linked under Figure 1.2B, offer a deep look into how far back such efforts and tactics reach. Replies included, “Anybody who’s used these platforms has been aware of this and saying it for YEARS” and “no surprise there.”

Gamergate and its copycat campaigns had a profound impact on how abusers and bad actors learned to organize for future campaigns, including the ones that targeted American voters in the 2016 presidential election.


Online Harassment in a Post-2016 Election World

The same tactics used in Gamergate, like sockpuppeting and mob harassment, were used by Russian trolls in the run-up to the 2016 presidential election. Posing as American voters, these trolls spread polarizing and often false content on social media platforms and provoked extremism among unsuspecting citizens.

Such actions aren’t limited to paid campaigners. Groups like the alt-right and the anti-vaccination movement are also taking their cues from Gamergate and paid trolls to harass and silence targets.

Recently, anti-vaccination activists have targeted medical professionals who support vaccination in publications and online forums. The Guardian reported that “networks of closed Facebook groups with tens of thousands of members have become staging grounds for campaigns that victims say are intended to silence and intimidate pro-vaccine voices on social media. The harassment only exacerbates an online ecosystem rife with anti-vaccine misinformation, thanks in part to Facebook’s recommendation algorithms and targeted advertising.” These tactics, including staging mob attacks and spreading misinformation, harken back to Gamergate.

Platforms’ engagement-based models enable violent extremism, white supremacy, and other hateful content. This allows a relatively small group of people to have an outsized voice in discourse.

Facebook, Twitter (which no longer sorts posts only chronologically), Reddit, YouTube, and Google Search all rely on specific indicators to determine what’s shown to people. In general, algorithms pick up on trending topics, combinations of engagement indicators (likes, retweets, comments, upvotes), and personal preferences to determine what reaches the tops of users’ newsfeeds. These models have encouraged misinformation to spread and also enabled hateful comments or content to become more visible.
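To make that mechanism concrete, here is a minimal sketch of engagement-based ranking. The signal names and weights are hypothetical stand-ins, not any platform’s actual algorithm, but they show how content that provokes reactions can outrank measured content regardless of accuracy.

```python
from dataclasses import dataclass

# A toy model of engagement-based feed ranking. The signals and weights
# below are hypothetical; real platforms use far more complex systems.

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    matches_user_interests: bool  # stand-in for personalization signals

def engagement_score(post: Post) -> float:
    """Collapse engagement indicators into a single ranking score."""
    score = 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments
    if post.matches_user_interests:
        score *= 1.5  # personalization boost
    return score

def rank_feed(posts: list[Post]) -> list[Post]:
    """Show the most 'engaging' posts first; accuracy never enters into it."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=40, shares=2, comments=5,
         matches_user_interests=True),
    Post("Outrage-bait conspiracy post", likes=30, shares=60, comments=200,
         matches_user_interests=False),
])
print([p.text for p in feed])  # the outrage-bait post ranks first
```

Nothing in a score like this measures truthfulness or harm, so content optimized to provoke reactions, including harassment and misinformation, naturally rises to the top.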

Far-right actors frequently game Twitter’s trending topics feature to amplify certain stories or messages. And YouTube gives a platform to conspiracy theorists and fringe groups who can make persuasive, engaging videos on outrageous topics.

Online Harassment and Disinformation

At first glance, online harassment may not seem part and parcel of the disinformation campaigns we now associate with Russian “trolls.” However, when the strategy to sway public sentiment, mold an electorate, and play into distinct political and geopolitical goals includes coordinated, sophisticated action meant to incite anger and fear, we’re talking about classic trolling.

Moreover, when the goal is to sway public opinion, online harassment is an incredibly potent tool. It’s also hard to control once unleashed. The exact tactics and tools used by early trolls and later adapted to vicious online harassment campaigns are tailor-made to silence opposing viewpoints. In turn, they make it much more difficult for a regular person to discern fellow citizens from paid agents, or to discern individual opinions from facts or campaigns of propaganda.

Level 4 — Trolling for Large-Scale Manipulation

Today, harassment tactics have been further weaponized to manipulate the wider population. While the perpetrators’ intent remains waging and winning an ideological battle, these campaigns encroach on politics, society, and democracy when absorbed by larger audiences. In addition, while harassment for ideological reasons may consist of direct attacks from like-minded individuals, this newer, broader manipulation may be financed by a leader or organization. Tactics may be amplified by paid agents or bots to make an issue or viewpoint seem much greater than it is. Coordinated trolling may be used to manipulate people who have no direct connection to the issue or political mission at hand. (A toy calculation after the example below illustrates how lopsided this amplification can be.)

This could look like a powerful individual or organization mobilizing rock-throwing mercenaries. The rocks are not aimed at individuals but are more numerous and cause fear and mistrust in the population being bombarded. The powerful individual might then claim they are the only leader who can protect the population from rocks.

Example: A think tank organizes a campaign to publicize any crimes committed by undocumented immigrants. Their paid trolls and bots spread stories far beyond their usual audience and share rumors and false reports. The goal is to convince the population that immigrants are a threat to the country.
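To put rough numbers on that amplification, here is a toy back-of-the-envelope calculation. Every figure below is invented for illustration; the point is the ratio, not the specific values.

```python
# Toy arithmetic: how a small, coordinated group can dominate the apparent
# volume of opinion on an issue. All numbers are hypothetical.
genuine_users = 10_000        # ordinary accounts
bot_accounts = 200            # paid sockpuppets, about 2% of all accounts
genuine_posts_per_day = 0.1   # most real users rarely post on any one issue
bot_posts_per_day = 50        # sockpuppets post around the clock

genuine_volume = genuine_users * genuine_posts_per_day  # 1,000 posts/day
bot_volume = bot_accounts * bot_posts_per_day           # 10,000 posts/day

bot_share = bot_volume / (bot_volume + genuine_volume)
account_share = bot_accounts / (bot_accounts + genuine_users)
print(f"{account_share:.1%} of accounts produce {bot_share:.0%} of posts")
# -> 2.0% of accounts produce 91% of posts
```

To someone scrolling a hashtag, the manufactured viewpoint looks like an overwhelming majority, which is exactly the outsized voice paid campaigns are designed to create.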

In 2015, Russia’s Internet Research Agency (IRA) started a campaign with the hashtag #ColumbianChemicals. The fabricated backstory was that a chemical plant in Louisiana had exploded. Sockpuppet accounts of “local concerned citizens” and “eyewitnesses” started to document the horror by using the hashtag, leading to a deluge of misleading reports.

According to an investigative report by the New York Times, a user named @EricTraPPP tweeted to a New Orleans-based reporter named Heather Nolan, “Heather, I’m sure that the explosion at the #ColumbianChemicals is really dangerous. Louisiana is really screwed now.”

In another example, sockpuppet accounts, also managed by Russian agents, used Instagram as the battleground to target African-American voters. A report, as summarized on ABC News, said that the IRA “created an expansive cross-platform media mirage targeting the Black community, which shared and cross-promoted authentic Black media to create an immersive influence ecosystem.” This network then exploited existing racial tension in America to sow discord among American voters.

A Guardian article highlighted this behavior in an interview with Theodore Johnson, a Brennan Center for Justice fellow. In the interview, Johnson said, “Equally important was putting black activist language out on social media in order to scare white citizens into thinking their nation was changing,” pointing to posts that falsely showed Black Lives Matter activists with guns and claimed they planned to exercise their Second Amendment rights.

These were not grassroots efforts by independent actors, but hired agents of a political power. Within the current functionality of platforms like Twitter and Facebook, a mass-produced bot or puppet account can be indistinguishable from a legitimate user. Paid labor can now produce a chorus of voices tuned to the buyer’s message.

In this way, the 2016 election season brought online harassment tactics and behavior out of “niche” circles like tech, gaming, and activism and into the forefront of American politics, culture, and discourse.

Heading Into 2020

It is clear that the same tactics used for harassing individuals are now being weaponized in large-scale, sophisticated attacks that undermine our democracy. As we learn more about the social and cultural power of online engagement, we also understand its use as a political and social organizing tool. This is creating rapid changes in accepted norms of online behavior that appear as, or are recognized as, harmful, but to what extent remains unknown.

While these issues won’t be resolved by 2020, they indicate a concerning normalization of harassment as an effective tool for ideological warfare. More strategies, tactics, and avenues exist today than ever to weaponize the internet, its current infrastructure, and the people who use it. Learning from previous real-life examples of online harassment can help identify patterns of behavior and create better outcomes in the future.

This post is an abridged and edited version of the first part of our report, Into 2020: The State of Online Harassment and Opportunities for Collaboration. Download the full report here.

Allison Gauss contributed to this report.

∎ Thanks to Shafiqah Hudson for unearthing this Twitter Moment.
∎ ∎ More of Shireen’s work can be found on her website at digitalsista.me

Originally published at https://onlinesos.org.
