Twitter’s Problem with White Nationalism

John Slough
Sep 14, 2019

As political tensions rise and divisions deepen, the growth of white nationalism has posed a particular problem for those who moderate and write policy for social media giants. Most of the issue comes down to how to limit and take down hateful speech without sweeping up the many users who are acting innocently within the bounds of policy. Twitter, for example, has recently come under heavy fire for not taking enough action against white nationalism on its platform. The company is constantly torn between limiting the free expression of its users (which makes Twitter itself less appealing) and allowing hateful and divisive speech to slowly build a destructive environment (which would lead to the same end).

Twitter, by its simple format, allows anyone to share any idea quickly and have it seen by almost all of their followers. This has led to the spread of what some might call “fake news.” The same thing is happening across almost every social media platform, allowing divisive and demonstrably false speech to run rampant across a medium that was meant to bring people together. Many have grown indignant at the way Twitter has handled the situation. The company has not only failed to follow through on its hateful conduct policy, which is meant to protect identifiable groups from targeted violent or hateful speech, but its platform was even used to spread the Islamophobic messages of the eventual Christchurch shooter. A medium like Twitter being used to spread hate and false information is a problem both within Twitter and for the country in general, creating an atmosphere of polarization and hate between people of different races and religions, incited by white nationalists.

Part of the reason is that many well-known, self-identified white nationalists are left to post freely on Twitter simply because they tweet just ambiguously enough to stay off the radar of any algorithm that searches for hate speech. Context is the core difficulty: a friend jokingly telling another friend to “kill yourself”, whether or not that joke is in poor taste, cannot be distinguished from a genuine threat by an algorithm that matches on words alone, as the short sketch after this paragraph illustrates. Another problem arises from users who are infuriated by the thought of banning or limiting any type of speech, often citing the First Amendment. Although Twitter is a private company that can set its own rules and policies, it, like many other social media giants, is afraid to alienate any part of its audience, no matter how harmful or disingenuous that part’s views may be.
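To make the context problem concrete, here is a minimal Python sketch of a naive keyword filter. Everything in it (the phrase list and the example tweets) is invented for illustration; Twitter has not published how its detection actually works.

```python
# Hypothetical sketch: a naive keyword filter of the kind described above.
# The phrase list and example messages are invented for illustration.

FLAGGED_PHRASES = ["kill yourself", "go back to your country"]

def is_flagged(tweet: str) -> bool:
    """Return True if the tweet contains any flagged phrase."""
    text = tweet.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# A joke between friends and a genuine threat look identical to the filter:
joke = "lol you lost to me again? kill yourself my dude"
threat = "people like you should kill yourself before we make you"

print(is_flagged(joke))    # True -- flagged, though it is banter
print(is_flagged(threat))  # True -- flagged, and actually hateful

# Meanwhile, a deliberately ambiguous dog-whistle slips through untouched:
dog_whistle = "we all know who is really behind this, wake up"
print(is_flagged(dog_whistle))  # False -- no banned phrase, so no flag
```

The filter flags the banter and the threat identically, while the deliberately ambiguous post sails through, which is exactly the gap that carefully worded white nationalist accounts exploit.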

So what can Twitter do to fix this growing problem? It has, in fact, already taken some action against the influence of white supremacist groups: Twitter recently banned over 2,000 accounts that took part in spreading extremist ideology. But there is more it can do, and Facebook, which is wrestling with the same problems, has already implemented versions of some of these policies. First, Twitter could ban the prominent white supremacists who choose to build an audience on its site. Next, it could limit the number of people reached by hateful content; since not even followers see everything someone posts on Twitter, the ranking of the timeline itself is a lever (a rough sketch of the idea follows below). Finally, posts with a hate-filled message that have made it through the filters should still be taken down, no matter who posted them, the president included. Implementing even one of these changes would not only change the platform’s atmosphere but set a precedent for other companies to follow suit. Overall, this would create safer and more meaningful discourse on Twitter and, hopefully, on other platforms as well.
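As a purely hypothetical illustration of that second idea, limiting reach through ranking, the Python sketch below folds a hate-classifier score into a timeline score. The field names, weights, and scores are all invented and are not Twitter’s actual ranking system.

```python
# Hypothetical sketch of the "limit reach" idea: since followers already
# see only a ranked subset of tweets, a platform could fold a classifier's
# hate score into that ranking. All names and numbers here are invented.

from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    engagement_score: float  # baseline ranking signal, e.g. predicted likes
    hate_score: float        # 0.0 (benign) .. 1.0 (clearly hateful)

def timeline_rank(tweet: Tweet, penalty_weight: float = 5.0) -> float:
    """Downrank a tweet in proportion to how hateful the classifier thinks it is."""
    return tweet.engagement_score - penalty_weight * tweet.hate_score

tweets = [
    Tweet("check out my new puppy", engagement_score=3.0, hate_score=0.02),
    Tweet("ambiguous dog-whistle post", engagement_score=4.0, hate_score=0.65),
]

# Sort the timeline so penalized tweets reach fewer people:
for t in sorted(tweets, key=timeline_rank, reverse=True):
    print(f"{timeline_rank(t):6.2f}  {t.text}")
```

In a scheme like this, the hateful post still exists but reaches fewer timelines, which fits the point above that outright removal is not the only tool available.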
