Alternative Social Media Platforms
Are they just a free pass to publicly post hateful content?
The U.S. election season not only brought about the circulation of false information but also instilled a deeper fear of having one’s speech censored. This has led to a “splintering” of social media, with users moving to alternative platforms such as Parler, MeWe, and Gab.
As a result, we’ve seen an increase in people migrating to “free speech”-centric platforms, where, under the guise of free speech, hate speech prevails. Moreover, these platforms base their usage policies on the U.S. First Amendment, or at least advertise them as such. But what does the First Amendment actually say?
“The First Amendment of the United States Constitution protects the right to freedom of religion and freedom of expression from government interference” (“First Amendment”, n.d.).
Freedom of expression is mainly associated with freedom of speech. However, the matter is not completely black and white. There are two categories of speech: “protected” and “unprotected”. Protected speech is speech the Supreme Court has determined the government cannot prohibit. Unprotected speech is speech that may incite a breach of the peace or violence, and it may be subject to government restriction. The forum in which the speech takes place determines the level of protection.
Merely citing the First Amendment in their terms of service doesn’t exempt these platforms from scrutiny. Parler’s founding premise was to host any conversation that could lawfully take place on the streets of New York. MeWe was founded as an alternative to Facebook, and Gab as an alternative to Twitter. All three claimed “free speech” as their differentiating factor from mainstream social media platforms. But how do you contextualize free speech?
For instance, Parler is often associated with posts that would not meet the standard for protected speech.
This is just one example of the 15 that Amazon listed in its court filing. Parler’s community guidelines appear to strongly condemn violent behavior, as the image shows. How, then, does such content prevail? This brings us to the question: are these guidelines just for show?
MeWe advertises itself as a “Next Generation Social Media App”.
MeWe’s main selling point is privacy, which sets it slightly apart from Parler and Gab. However, by shifting the focus to privacy, the platform seems to ignore the hate speech and conspiracy theories being spread by many of its members.
The CEO, Mark Weinstein, does claim to have stricter content moderation policies and a 100-member team solely focused on monitoring such content, but to what end? Ironically, in a recent interview with OneZero, he was dubious about the team’s ability to moderate 15 million users effectively.
Finally, there is Gab, one of the most controversial websites still active on the internet. It met a fate similar to Parler’s as a result of its association with the Pittsburgh synagogue shooting. The shooter, Robert Bowers, opened fire inside a Pittsburgh synagogue, killing 11 congregants while shouting anti-Semitic slurs. Further investigation revealed that he was an active Gab user who often posted anti-Semitic content on the website. Gab is known to be rife with such demeaning content.
Gab’s content moderation policies have been extremely lax from the outset, under the claim that “everyone is welcome”. These lax policies also make it nearly impossible to hold individuals accountable for kick-starting conspiracy theories or spreading hateful content. The founder doesn’t seem inclined to change the site’s policies either. Moreover, since the website runs on a “forked” version of Mastodon, it can continue operating beyond the reach of bans from Google and Apple. It’s safe to say Gab truly supports “free speech”.
Much of the recent migration to these apps traces back to the “Twexit” movement, a campaign started by Parler urging people upset with Big Tech censoring then-President Trump’s tweets to move to its platform. The movement was mainly targeted at Twitter, hence the name “Twexit” (i.e., Twitter exit), but it has since broadened to all of Big Tech.
It’s difficult to take a stance and declare who’s right. However, it’s evident that by invoking “free speech”, people seek to get away with posting the most vicious content. We need to foster free speech without letting it descend into hate speech. But how do we do so?
We need to strike a balance in which the right to freedom of speech is not questioned and, at the same time, a healthy environment is maintained. On the platforms’ part, the primary issue with moderation is that the public is confused about its bounds and how it works. This could be resolved by clearly stating the rules of a community and defining the type of content acceptable on the platform in the terms of service. Moreover, giving insight into the moderation process would help the public understand what they’re seeing, why they’re seeing it, and the expectations behind a safer online community.

On the users’ part, there needs to be a degree of accountability as well. Hateful content is often posted anonymously. Platforms need a way to retain a user’s basic contact details while that user remains publicly anonymous, improving the traceability of those actively propagandizing hateful content.
Transparency is the key to preventing people from feeling they’re being “muzzled”.
First Amendment. (2020). Cornell Law School. Retrieved January 31, 2021, from https://www.law.cornell.edu/wex/first_amendment.

Response to Motion — #10 in Parler LLC v. Amazon Web Services Inc. (2021). Courtlistener.com. Retrieved January 31, 2021, from https://www.courtlistener.com/recap/gov.uscourts.wawd.294664/gov.uscourts.wawd.294664.10.0_1.pdf.