Hate Groups and the Online World: How Extremist Groups Utilize the Web as an Apparatus of Violence
A gun is only as dangerous as the one who wields it.
December 9th, 2019
By Lee Torrejon-Zuniga
Growing up as the only child of two young, working-class parents in the early 2000s, I made the internet my playground. My access to technology was limited to bits and pieces of time here and there, but on the rare occasion that the family computer was unoccupied, you bet I was on it. It was in these brief moments that I was able to tap into a community bigger than the small suburb I lived in, and to meet people I never would have dreamed of meeting in my conservative neighbourhood. A tool that allows you to build your own worlds and travel between them! My child self wanted nothing more. While isolated from my predominantly white, upper-middle-class community, I felt right at home online with the friends I had made. I owe much of the formation of my identity as a queer, bi-gender BIPOC to the internet, and cannot imagine my experience without it.
Yet, when I see how the internet has been co-opted and used in this day and age by extremist groups, I can’t help but feel afraid of something that once felt so familiar.
While social media is a tool that can be used for good, ultimately it is an impartial tool, and in the wrong hands it can cause great harm. We will explore two of the ways that hate groups weaponize social media against vulnerable groups both online and in real life: by using the internet as a platform to organize, and by using social media to indoctrinate and radicalize vulnerable individuals. We will do so by discussing their modus operandi, their use of platforms as inciters and extensions of real-life violence, and the psychology behind radicalization.
First, we must ask ourselves a couple of questions: who are these groups, and where do they come from? It can be hard to trace the sources of hatred in a time fraught with political strife, public-health crises, and poverty. According to the Southern Poverty Law Center, as of 2018 there were 1,020 hate groups active in America (SPLC, 2018). Upon further observation, it is apparent that the overwhelming majority of these groups are white (with some exceptions, unfortunately) and aligned against most protected groups (people of colour, immigrants, LGBTQIA2S* folk, etc.), under the guise of protecting their First and Second Amendment rights.
Many of these groups, like the Proud Boys for instance, consist entirely of cis white men. While the class, and even racial, identities of these individuals may vary, ultimately their motives align with one another: to protect whiteness, at any and all costs. One example is the well-known American Identity Movement (formerly Identity Evropa), a group of college-educated men in their 20s with a singular goal: to enable a white supermajority in North America (Cooney, 2018). Interestingly enough, in real life, AIM is deeply concerned with appearance and public image; in the Center for Public Integrity’s piece on white male radicalization, members of AIM were reportedly participating in a community park clean-up, something seemingly incongruent with the group’s history of hate.
So how is it that groups like the American Identity Movement are able to exist as hate groups while seemingly acting in the interest of their community at large? The answer is simple: the online world. Within the safety of online anonymity and hidden forums, hate groups can act on the fullest extent of their agendas, while controlling how their narrative is portrayed to the public by presenting innocuously in real life.
With this established, we must then ask: how does this use of social media as an organizational tool assist these groups in attacking vulnerable groups and infiltrating safe(r) spaces?
Well, take for instance the alt-right’s response to Drag Queen Story Hour. Drag Queen Story Hour is a program run all across America where, true to its name, drag queens read stories to children in schools, libraries, and other public institutions. In addition to reading children’s classics, the program also seeks to encourage children and parents alike to engage in conversations about gender fluidity, gender identity, and sexuality through LGBTQIA2S* children’s literature (Drag Queen Story Hour, 2019).
In response to this event being hosted at a library in Spokane, Washington, former U.S. congressional candidate and known white nationalist Paul Nehlen issued a call to arms on the app Telegram (a popular channel of communication for white supremacists and terrorists), in the form of a plan called “DOXX TRANNY STORY HOUR”. Nehlen urged his followers to photograph “all of the degenerate filth taking their children to this event”, as well as their license plates. Nehlen’s call to arms spawned much discussion amongst self-identified members of the “alt-right”, with the most notable comment coming from an anonymous user, who wrote,
“Anyone online who has said a word of support of this deserves to be hung, drawn, and quartered.” (Miller, 2019)
As a result, on the day of the event (June 9th, 2019), over 200 alt-right protestors showed up with the express purpose of harassing the attendees and disrupting the event (at which they, quite satisfyingly, failed). If not for the combined efforts of the 400 counter-protestors who attended to oppose them and the presence of armed officers, it is likely the event would have been shut down and more grievous harm done to the attendees.
This instance is a direct demonstration of how hate groups are able to utilize the “online world” to target groups of different ideologies, and the harm that this does to vulnerable parties. As a result of the violence and harm professed online against Drag Queen Story Hour, a space that was intended to be safe for LGBTQIA2S* folks, many people now feel afraid to attend, for fear that they or their children may be harmed in some way. Furthermore, because these online threats and calls for violence have no barriers to who can access them, they have emboldened, and will likely continue to embolden, individuals who would not otherwise take action without the backing of a group.
This leads to our next point: the radicalization process. Before we explore the ways in which online radicalization harms vulnerable groups, we must understand a bit of the psychology behind it. First and foremost, what is radicalization? According to the Centre for the Prevention of Radicalization Leading to Violence (Montreal, Canada), radicalization is defined as:
“a process whereby people adopt extremist belief systems — including the willingness to use, encourage or facilitate violence — with the aim of promoting an ideology, political project or cause as a means of social transformation.” (Centre for the Prevention of Radicalization Leading to Violence, 2019)
The CPRLV goes on to identify four types of radicalization: right-wing extremism, politico-religious extremism, left-wing extremism, and single-issue extremism (CPRLV, 2019). For the sake of relevance and page count, we will focus on the type of radicalization most pertinent to the discussion of oppression and violence against protected minority groups: right-wing extremism. Right-wing extremism is often associated with fascism, racism, and nationalism, and is characterized by “violent defense of racial/ethnic/pseudo-national identity, and hostility and violence against protected minorities and left-leaning political groups” (CPRLV, 2019). Groups like the previously mentioned American Identity Movement, the Proud Boys, and the Ku Klux Klan are textbook examples of right-wing extremist groups.
So, what is causing this surge in white male radicalization? As previously discussed, it is hard to narrow it down to a single motivating factor, especially in a time of difficulty that spans political, cultural, and economic lines.
Extremist group researcher and author J.M. Berger (Jihad Joe: Americans Who Go to War in the Name of Islam) offered some insight into this at a 2017 conference at the International Centre for Counter-Terrorism, in a talk titled “Identity Versus Self: Tensions Between Group, Radicalization and Individual Violence”. Berger sorted these sources of radicalization into two distinct categories: personal issues (financial instability, loss, violence, relocation, mental instability) and social issues (war, rapid shifts in demographics, political uncertainty, shifts in social and cultural attitudes and beliefs, etc.) (Berger, 2017). Regardless of the specific issues in question, these two categories encompass the various factors that can contribute to the state of vulnerability that right-wing extremist groups seek to exploit in individuals.
With this established, we can now explore how hate groups use social media to radicalize and recruit. By using internet platforms, these groups are able to extend the reach of their influence and connect directly with vulnerable people who are easy to misinform. By constructing a narrative that holds protected groups responsible for major social, cultural, and economic changes, hate groups lay the groundwork for individuals already struggling under a capitalist society to redirect their frustrations towards the groups’ cause.
This narrative is further perpetuated by more innocuous forms of social programming, such as the exploitation of online recommendation algorithms. In an interview with NPR, writer and mother Joanna Schroeder shared a disturbing instance of this social programming at play. Schroeder described encountering hateful content in her son’s Instagram feed and, in an attempt to understand more, discovered an entire world of racist, sexist, homophobic, and transphobic content in the related-videos tab. Schroeder speculated that interspersing hateful content amongst other popular media online contributes to the normalization of attitudes that target protected minority groups.
This social programming occurs at such a level that the content can appear almost innocuous to an inattentive parent. At a time when disenfranchised groups are seeking to shift the conversation around equity, having a narrative constructed to blame them for societal issues only serves to damage the progress we have made towards change.
Even in the “online” world, one cannot find solace from the hatred and vitriol that have seeped into everyday life; supposedly impartial spaces like social media platforms are ultimately shaped by the intentions of their users. For all the credit it receives for ushering in a new social “golden age”, with no systems in place to moderate and protect its users from hate groups, social media only continues to parallel the systemic violence its users enact upon protected groups in real life.
With all that being said, I don’t mean to write off all of the technological developments we’ve made over the last couple of decades. The internet certainly offers us a lot as a society: information, community, unlimited access to whatever and whoever we want…the list goes on. However, the consequences of this unlimited access are of great concern to me. Right-wing extremists’ use of social media as an extension of the real-world violence they enact upon disenfranchised groups occurs in a number of ways. They use social media platforms as a means of organizing targeted attacks on protected groups, as well as of spreading their influence to disenchanted, vulnerable parties who are easy to misinform. Furthermore, they exploit online algorithms to subtly shape and normalize hateful behaviours in mainstream media as a means of accruing more influence.
Ultimately, if we want to use the internet as a tool for positive and transformative change, we need to become more vigilant as we enter an era of information warfare and privacy breaches.