The past few years have seen the growth and popularization of a dangerous ultra-conservative movement known as the “Alt-right.” This group is organized and technologically savvy, using social media and technology platforms to its advantage and growing in part due to inadequate moderation from Facebook, Twitter, and YouTube, as well as more niche platforms like Reddit and 4chan.
By providing fringe movements with a platform, radicalizing new members through algorithms, and carrying far-right extremism from the keyboard to the polling station, Big Tech companies are complicit in the rise of the Alt-right extremist movement. The online rise of this movement has translated into a global policy shift, and while we all have a responsibility to voice our views in the democratic process to keep policy clear of extremism, if Big Tech is responsible for creating these movements, it also has a responsibility to counter them.
What is the Alt-Right?
First, it’s valuable to define what exactly distinguishes the recent Alt-right movement from other political movements. Though it existed on the fringes for several years, the movement came into public focus alongside the candidacy of now-President Donald Trump. The Alt-right movement is primarily characterized as an effort to unify various elements of the American far right and push them into the mainstream, particularly on issues of race, gender, sexuality, and class.
The movement is predominantly young, white, and male, and reflects a broader push within American conservatism to cater to a sense of helpless hegemony under attack by scapegoated groups. The movement is also blamed for radicalizing members into violent acts, and far-right extremism has spiked sharply in recent years.
Another key characteristic of the Alt-right movement is its use of technology to grow. These groups have used major platforms like Facebook and Twitter to organize and to post content that demonizes target groups and can incite violence. While these platforms have community guidelines, the rules are often murky to navigate and difficult to enforce. For example, the Counter Extremism Project (CEP) studied several far-right pages on Facebook. After monitoring them closely for two months and checking them against Facebook’s terms of service, CEP reported several violations, such as hate speech. Facebook removed only a small handful of the pages.
The Role of Social Media
Leniency on the part of large social media platforms has not only allowed these movements to grow unhindered, but has also given their members the confidence to push the envelope without being held to account.
Recently, after intense public backlash, large social media platforms have begun to take enforcement against these groups seriously, resulting in the removal of several of the largest offending groups and pages. While the limited action that platforms like Facebook and Twitter have taken is welcome, there is always another platform on the internet that welcomes these groups. For example, Reddit, 4chan, and 8chan have all continued to host them.
These platforms promise the Alt-right a space where free speech is close to absolute, and where the “marketplace of ideas” has a high tolerance for Alt-right ideologies. They use various network-building tools (friend suggestions, confined private groups and pages, grouping posts by similar content) to connect users with similar interests, and they even offer tools for pseudo-anonymity, such as faceless and nameless profiles. Finally, the bargain includes a grey area of acceptable content; cross the line, and you simply move your network to another platform.
YouTube is one of the largest players, if not the largest, when it comes to using algorithms to radicalize viewers. In an effort to keep viewers on the platform longer (and generate more revenue through additional advertisement views), YouTube uses an algorithm to determine which video autoplays next. That algorithm has learned that more extreme content is an effective means of keeping viewers engaged, and so it transitions seamlessly from traditional, popular conservative views into far-right, anti-establishment themes like racism and bigotry.
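The incentive described above can be illustrated with a deliberately simplified toy model. To be clear, this is not YouTube's actual system: the catalog, the "extremeness" scores, and the similarity window are all invented for illustration. The sketch only shows how a recommender that always autoplays the most engaging "similar" video will, if engagement correlates with extremeness, walk a viewer step by step toward the extreme end of the catalog:

```python
# Toy model only: a hypothetical autoplay recommender, not YouTube's
# actual system. Videos get an invented "extremeness" score in [0, 1],
# and (per the article's premise) predicted engagement rises with it.

# A catalog of 100 hypothetical videos, evenly spread from mild to extreme.
catalog = [{"id": i, "extremeness": i / 99} for i in range(100)]

def predicted_engagement(video):
    # Assumption from the text: more extreme content keeps viewers engaged.
    return video["extremeness"]

def autoplay_chain(start, steps):
    """From each video, autoplay the 'similar' candidate (within a small
    extremeness window) with the highest predicted engagement."""
    chain = [start]
    current = start
    for _ in range(steps):
        candidates = [
            v for v in catalog
            if v["id"] != current["id"]
            and abs(v["extremeness"] - current["extremeness"]) <= 0.15
        ]
        current = max(candidates, key=predicted_engagement)
        chain.append(current)
    return chain

# Start from a mild video; each hop is only a small "similar" step,
# yet the chain drifts steadily toward the most extreme content.
chain = autoplay_chain(catalog[10], steps=8)
print([round(v["extremeness"], 2) for v in chain])
# [0.1, 0.24, 0.38, 0.53, 0.67, 0.81, 0.95, 1.0, 0.99]
```

The point of the sketch is that every individual recommendation looks innocuous, since each hop stays within a narrow similarity window; the drift is only visible over the whole chain, which is part of what makes it hard to spot from any single suggestion.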
While it’s dangerous enough to radicalize viewers who are already consuming political content, crucially, YouTube’s algorithms don’t just link overtly political videos to each other. It’s one thing to push an existing partisan believer to the fringes of their ideology, but these algorithms also pull traditionally non-political content into the political sphere, and then push on toward its extreme ends. For example, someone interested in video games might consume strictly non-political content from a trusted creator. YouTube’s algorithm will then suggest a similar creator who employs subtle political language or ‘nods’ to radicalized viewers, familiarizing the audience with extremist messaging. As viewers become primed to hear far-right language, they become more susceptible to the more extreme auto-played videos that follow. As Hunter College professor Jessie Daniels wrote in 2018:
“People have adopted [far right] rhetoric, sometimes without even realizing it. We’re setting up for a massive cultural shift. Among White supremacists, the thinking goes: if today we can get “normies” talking about Pepe the Frog, then tomorrow we can get them to ask the other questions on our agenda: “Are Jews people?” …it is fair to say that White supremacists are succeeding at using media and technology to take their message mainstream.”
Because these ideas originate outside the normal political arena, they can masquerade as non-partisan worldview statements, accessible to viewers who haven’t traditionally been politically engaged. If viewers aren’t careful, especially young or easily impressionable consumers whose interests align with subjects targeted by far-right groups (such as the gaming example above), they can easily find themselves pushed deeper into extremist content. Before they know it, they are immersed in far-right and white nationalist themes.
The Alt-Right Outside the Internet
The Alt-right does not exist only at the keyboard; the movement has grown large enough to influence political discourse and real-world policy. These policies are steeped in harsh social conservatism and negatively impact already marginalized groups.
This real-world policy shift occurred when the ideas championed by the Alt-right became larger than the movement itself, pushing their way into mainstream conservatism. As Daniels again writes:
“algorithms, aided by cable news networks, amplify and systematically move [Alt-right] talking points into the mainstream of political discourse.”
As such, a movement that began on social media has evolved into a network in larger society, and members newly exposed to that network have their individual actions shaped by the social group. This influence extends to purchasing decisions, voting, and other major choices. The Alt-right has grown so large that many say it has nearly taken over traditional establishment conservatism, and the algorithm data backs this up: Twitter has been reluctant to deploy algorithms to block Alt-right content because the algorithm cannot distinguish it from traditional Republican rhetoric.
Because of the actions of this larger network, social media’s part in the growth of the Alt-right extends far beyond the content on its platforms, into real-world social and financial policy, and even economic trends.
What Happens Now?
To slow, and potentially stop, the further radicalization of future Alt-right supporters, big tech companies need to address their role in providing spaces for these groups to form, and the role of their algorithms in serving extremist content to new viewers. Doing so may require politically unpopular decisions. For example, the algorithm Twitter developed to find and block Alt-right content could be deployed, despite the political fallout from blocking establishment and mainstream conservative voices. Additionally, YouTube could tune its algorithms to steer away from potentially radicalizing content, or even use them for good to promote deradicalizing content and pull potentially radicalized viewers back toward the mainstream.
However, both of these options, like most steps technology platforms could take to deter the Alt-right, are unfavorable for the platforms because they would hurt the bottom line. These platforms initially developed practices that either supported or were indifferent to extremist movements because the growing, engaged viewership generated consistent advertising dollars. Now that the Alt-right has expanded beyond a fringe movement, it represents a considerable user base for these platforms.
At the end of the day, technology and social media platforms need to decide, or be legislatively required, to value a reduction in extremism over the financial benefit of allowing the Alt-right to flourish.
Justin Draper is a Canadian fiction and non-fiction writer who focuses on themes of politics and culture. He is currently completing his Masters degree in Communication and Technology at the University of Alberta.
Follow Justin on Twitter at @JustinDraperYEG