Social Media Companies Must Crack Down on Far-Right Hate

If they won’t, the government needs to step in

Paris Marx
Radical Urbanist
5 min read · Mar 16, 2019


Photo by Szabo Viktor on Unsplash

How many wake-up calls do we need before we start taking far-right, white-nationalist terrorism seriously?

I can’t say I was surprised by what happened in New Zealand on Friday, March 15, 2019, when a white supremacist attacked two mosques in Christchurch, killing 49 Muslims as they gathered for weekly prayers. Not that it wasn’t an absolute tragedy that probably could have been avoided, but as the years pass, far-right terror only seems to become more common as it spreads to ever more countries.

Yet, despite the fact that every extremist killing in the United States in 2018 had a link to right-wing extremism, the threat posed by the growing far-right and white-nationalist movements around the world is not being taken seriously. Racist, Islamophobic, anti-Semitic, homophobic, transphobic, and otherwise bigoted language is all over social media, is frequently broadcast by major media networks, and even comes from the mouths of prominent political figures — including the most powerful one in the world.

YouTube, Twitter, Facebook, and other major social media platforms are complicit in spreading and reinforcing hate and conspiracy theories

Some of the most privileged people in our societies are complicit in spreading narratives of hate that motivate and affirm the far-right views held by a vocal minority and normalize them for many more. The media must start doing a better job of recognizing these hateful narratives, challenging them, and excluding the people who spread them from their programs. But, perhaps even more importantly, we need to start holding social media companies accountable.

It’s no secret that YouTube, Twitter, Facebook, and other major social media platforms are complicit in spreading and reinforcing hate and conspiracy theories, but they’ve largely escaped the serious scrutiny they deserve. Lawmakers have been looking into Facebook since the 2016 U.S. presidential election, making it ever harder for the company to deny that it has sorted people into echo chambers where they see a stream of self-reinforcing news and memes, and where extreme right-wing content flourishes. But while lawmakers have become obsessed with Russia, there’s a more serious problem that needs to be addressed.

YouTube’s algorithms are designed to push people toward more extreme content to keep them watching and engaged. It’s great for advertisers, who get more eyeballs on their products, but terrible for society. Zeynep Tufekci, author of Twitter and Tear Gas, has written that “YouTube may be one of the most powerful radicalizing instruments of the 21st century.” The platform pushes users toward videos that promote extremist views and conspiracy theories, which can send them down a rabbit hole where they consume more and more to the point where they begin to believe it. YouTube is literally using radicalization algorithms to earn more ad dollars, ignoring the social harm, and politicians aren’t doing anything about it.
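
To see how an engagement-only objective can slide toward extremity, here is a deliberately simplified toy simulation (not YouTube’s actual system; every number, score, and function name is hypothetical). It sketches a recommender that ranks purely by predicted watch time and, under the stated assumption that slightly more extreme content holds attention a little longer, keeps serving videos just beyond wherever the viewer currently is:

```python
# Toy simulation of an engagement-maximizing recommender (hypothetical, not
# YouTube's real system). Each video carries a made-up "extremity" score; the
# assumed engagement model is that viewers watch longest when content sits
# slightly beyond where they currently are.
import random

random.seed(0)

# Hypothetical catalogue of 1,000 videos with extremity scores in [0, 1].
CATALOGUE = [{"id": i, "extremity": random.random()} for i in range(1000)]

def predicted_watch_time(video, viewer_extremity):
    """Assumed engagement model: predicted watch time peaks when a video is
    a little more extreme than the viewer's current position."""
    gap = video["extremity"] - viewer_extremity
    return max(0.0, 1.0 - abs(gap - 0.1) * 4)

def recommend(viewer_extremity, catalogue, slate_size=20):
    """Rank a random slate purely by predicted engagement; return the winner."""
    slate = random.sample(catalogue, slate_size)
    return max(slate, key=lambda v: predicted_watch_time(v, viewer_extremity))

def simulate_session(steps=15):
    viewer = 0.1  # the viewer starts on fairly mainstream content
    for step in range(steps):
        video = recommend(viewer, CATALOGUE)
        # Watching nudges the viewer toward whatever was served.
        viewer = 0.7 * viewer + 0.3 * video["extremity"]
        print(f"step {step:2d}: served {video['extremity']:.2f}, viewer now at {viewer:.2f}")

simulate_session()
```

In this sketch, nothing tells the system to prefer extreme content. The drift comes entirely from the objective: the recommender simply chases whatever is predicted to hold attention a little longer, and the viewer’s position ratchets upward step by step.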

And what about Twitter? The platform is a hotbed of hate, where fascists know they can get away with almost anything because CEO Jack Dorsey refuses to address it. Instead of taking action to reduce the level of abuse directed at women and minorities on the platform, Dorsey has preferred to go on a media tour assuring conservatives that Twitter isn’t biased against them, has reportedly intervened to stop Alex Jones from being banned, has apologized for labeling a far-right activist as what she was, and has appeared on Joe Rogan’s podcast, which has been called a gateway to the alt-right.

YouTube is literally using radicalization algorithms to earn more ad dollars, ignoring the social harm, and politicians aren’t doing anything about it.

It shouldn’t come as a surprise, then, that Dorsey won’t take action against Twitter’s far-right users when he seems either unable to form an opinion about them or somewhat sympathetic to what they’re doing. But that doesn’t mean he can’t. Twitter knows who the fascists are — they’re blocked in Germany, where social media companies are required by law to stop the spread of hate. Dorsey could order Twitter staff to crack down on them elsewhere in the world as well, but he’d rather protect their ability to spew the worst kinds of hate and bigotry.

Over the past few years, there’s been a growing campaign to allow hateful narratives to be spoken openly under the guise of “free speech,” often rallying around cases where hateful speakers are excluded from college campuses. Powerful people with insidious agendas have seized on this idea and successfully spread it through the media, using the positive notion of free speech to legitimize their bigoted campaign.

I won’t lie: a few years ago I was sympathetic to the liberal notion that hateful speech, as terrible as it is, was best expressed publicly, where it could be countered with better ideas. The years since have taught me how wrong I was. Bigotry and hate have no place in public fora, and the people pushing such views need to be silenced. We have plenty of evidence that deplatforming works to shut down fascists and strip them of their influence — it needs to be used more often.

That’s exactly why college and university campuses have become the primary battleground: they fit neatly into the “free speech” talking point used by far-right figures, but they’re also home to the youngest and most progressive members of the population — people who won’t stand by while their peers are abused and dehumanized by powerful people whose dated (at best) or hateful (at worst) views have no place in the modern world.

The social media platforms are playing into the hands of resurgent far-right groups when they act as though they must be impartial conduits of communication. They have a responsibility to their users and to broader society not to spread hateful messages and conspiracy theories that radicalize more people and put already marginalized groups at greater risk. If YouTube, Twitter, Facebook, and others aren’t going to address the far-right content and users on their platforms, it’s time for the government to step in and use its regulatory powers.
