Should governments be involved in regulating online hate speech?

Maite Laris
17 min read · Jan 20, 2020


Read and tell me what you think…

Introduction

Recent events in Christchurch were living proof of the capacity of online hate speech to transform into real-life violence; and unfortunately they remain just one of many such cases around the world. The internet is a global computer network that provides information and communication tools via cyberspace. This space has enabled fast communication, global interconnectedness, anonymity and immediacy of information. People of all ages, backgrounds and contexts can access it, with different goals and audiences of various sizes. Knowledge has been globalized, and the internet has become a medium of democratization, as it has permitted people to participate more often in a wider variety of issues. By March 2019, approximately fifty-six percent of the world's population had access to the internet, that is, more than four billion people; this makes the internet both very powerful and incredibly useful (Internet World Stats, 2019).

Indeed, the very nature of the internet, namely its intractability and its accessibility, has also made it an ideal space for the propagation of questionable content like hate speech. According to the Council of Europe, hate speech "covers many forms of expressions which spread, incite, promote or justify hatred, violence and discrimination against a person or group of persons for a variety of reasons" (ECRI, 2019). The Council also holds that hate speech can lead to physical violence and that it is a threat to the cohesion of society. This leads us to ask: who should play a role in curbing online hate speech? There are several potential actors: private companies, individuals, and the government.

Some believe that public regulation of online hate speech is misguided and even dangerous (Yaraghi, 2018). Many argue that the internet should not be regulated because regulation implies the violation of free speech, and because the internet is "extraspacial", a space in which for "ethical and practical reasons the government should not intervene" (cited in Tsesis, 2001, p6). Moreover, many also believe that opening the door to public regulation of cyberspace can lead to greater political manipulation and even less privacy online. Nonetheless, I argue that online information can affect human lives, and that the authorities therefore have a responsibility to regulate it. More particularly, hate speech poses a threat to egalitarian values and social cohesion. In other words, the absence of government intervention in social media content does more harm than good.

In this article I argue that national governments need to play an active role in regulating online hate speech, particularly on social networking sites. The first part of the article is dedicated to a discussion of hate speech on social media and its implications. It then shows why public regulation is necessary to tackle online hate speech, through five different but interconnected arguments. Finally, I suggest some methods for government intervention in social networking sites, as their participation remains a delicate matter.

Discussing Hate Speech

Social media is a very popular tool used by people all over the world to communicate and access new information: in 2019, the number of social media users worldwide was expected to exceed two and a half billion (Statista, 2019). Social media has been defined by scholars in different ways. Boyd and Ellison describe it as a "platform to create profiles, make explicit and traverse relationships" (Boyd and Ellison, 2007). Kietzmann, McCarthy, Hermkens, and Silvestre state that "social media employ mobile and web-based technologies to create highly interactive platforms via which individuals and communities share, co-create, discuss, and modify user-generated content" (Kietzmann et al, 2011, p241); they argue that social media is composed of "building blocks" (Kietzmann et al, 2011, p242). The main social media platforms include Facebook, Instagram, LinkedIn and Twitter, but there are thousands more, including less common ones like 8chan and Gab. These platforms allow individuals to generate their own content, with non-physical interaction taking place in a boundary-free technological space. The broadness of the online space, and its massive capacity for transmitting information, has also allowed people to express themselves in an unethical way. By unethical, I mean that the comments and thoughts expressed online are not morally acceptable, given their implications for others.

Hate speech, which is in itself considered unethical by nature, has been rising on social media and on the internet in general; this can be seen in the growing concern of the public, but mainly in the increase in private online regulation. Hate speech can be understood as the expression of hatred for a group or person defined by their race, culture, sexual orientation, gender, and/or ethnicity. Matsuda, Lawrence, Delgado, and Crenshaw (critical race theorists) describe it as "words that are used as weapons to ambush, terrorize, wound, humiliate, and degrade" (Matsuda et al, 1993, p56). Some examples of hate speech that Jeremy Waldron gives are "islamophobic blogs, cross-burning, racial epithets, bestial descriptions of members of racial minorities" (Waldron, 2012), as well as racist cartoons, anti-Semitic symbols, politically incorrect jokes, sexist statements, etc. Hate speech automatically involves two parties: the first, generally a minority, is aggrieved, while the other provokes. According to an article by Sean McElwee cited by Udoh-Oshin: "hate speech aims at two goals. First, it is an attempt to tell bigots that they are not alone. The second purpose of hate speech is to intimidate the targeted minority, leading them to question whether their dignity and social status is secure" (Udoh-Oshin, 2017, p8).

This diffusion of hatred on social media has many consequences for society that could be avoided. Hate speech can cause indirect harm to individuals, affecting their long-term wellbeing and performance. This can be seen in a Johns Hopkins study published in 2013, which concluded that "being exposed to online racism can lead to high blood pressure and stress among African Americans" (Zeltner, 2013). Moreover, spreading hate online can also cause unnecessary rivalry among people who naturally had no problem with each other. Indeed, Calvert writes that "hateful messages cumulate, creating an ambiance that categorizes and, consequently, causes harm; they create and maintain power structures" (Calvert, 1997). Hate speech only contributes to segregating society, and is therefore counterproductive when the aim is peaceful cohesion in the world; it fuels polarization. Richard Delgado and Jean Stefancic also agree that hate speech is leading the United States to a place of less equality and more hatred.

As we can see, hate speech is a big and multifaceted problem that needs a solution if we are to advance as a technological society, peacefully and fairly. Discussing the debate over public content moderation on social media is therefore very important. Private initiatives have already been taken by big tech companies like Facebook and Google, but a greater legal effort must be made by the authorities, so that social media becomes a "safer" and more trustworthy environment for all kinds of people. In the next section, I give the reasons why the government should intervene legally in regulating hate speech on social media.

Curbing online hate speech

National government regulation of social media content, more particularly of speech containing hate towards a group of people, is a modern debate about the extent of the government's role in cyberspace. Indeed, the role of the government needs to evolve along with the technological era. Just as offline hate speech came to be regulated some years back, online hate speech can and should be regulated as well, for several reasons.

First and foremost, such a massive, commonly used space as social media cannot work without "regulation" that could make the platforms a "safer" space to use. Even though complete impartiality is impossible, active participation from all parties involved (individuals and companies), including the government, can lead to more neutral regulation. Indeed, the accessibility and openness of the internet allow people of all ages to visit virtually any site and be exposed to any kind of information. Consequently, young people may see or read daily comments or messages, on Facebook or any other platform, that are considered harmful, without their guardians actually knowing. For this reason, Alexander Tsesis argues that: "the Internet provides an extensive forum to persons intent on directing outgroup oppression, so instead of tolerating this antisocial activity, Internet hate speech, which poses a substantial threat to egalitarian democracy and its constituents, should be prohibited" (Tsesis, 2001, p7).

Certainly, people are convinced, influenced and biased by certain pages and comments on social media, as a massive quantity of information is being shared and commented on. As Zack Doffman explains:

“the sheer volume of content being removed from Facebook is staggering. Over a six-month period last year, the company removed 12.4 million pieces of terrorist content, 99% of which was automatically flagged. There are similar statistics for graphic and violent content and for hate speech” (Doffman, 2019).

Indeed, every day people post, most commonly in groups, words with "racist and homophobic statements, such as calling non-whites 'vermin' and gay people 'degenerates'" (Doffman, 2019), as well as pictures of the Nazi swastika, and even comments degrading people of different ethnicities. For example, a comment posted on Facebook on September 15th in Myanmar saying "today Maung Daw has been announced as a Bengali-free zone", where "Bengali is regularly used as a slur for Myanmar's largely Muslim Rohingya population and Maung Daw was the site of some of the military's most egregious crimes", has been one of the grave causes of the continuing violence in that country against the Muslim Rohingya population (Gilbert, 2018). David Gilbert writes that "the post [was] shared alarmingly well, piling up 13,500 shares, 22,000 reactions, and over 2,000 comments. And unfortunately, it's just one of thousands of hate-fueled messages that have polluted Facebook in Myanmar, transforming the social media platform into 'a beast' that facilitated ethnic violence in Myanmar, according to the U.N" (Gilbert, 2018). This is just one example of the tragic real-life consequences of allowing all kinds of speech on social media, consequences the government could prevent by regulating. Thus, regulation should not be seen as the enemy here; online chaos and unchecked freedom should be.

Secondly, hate speech on social media should be regulated by the government because, rather than impeding formal regulation, greater effort should go into making sure that when these regulations take place, they are implemented correctly and ethically. Historically, all other media have been regulated; why wouldn't this modern type of mass media be a legitimate object of regulation? This does not minimize the fact that there are many problems and risks to overcome; it only suggests that if regulations have always been the guideline of a peaceful and cooperative society, why wouldn't they work on this occasion? Blayne Haggart and Natasha Tusikov go as far as saying that there is no way the government won't regulate social media, because it already does, just not openly: "in other words, governments are regulating speech, but not through democratic channels. Online platforms and internet service providers are regulating speech based on self-interested terms of service. That is, until the moment they decide to drop the banhammer" (Haggart and Tusikov, 2019). Considering that governments are already regulating content on social media, democratizing the question of online regulation becomes all the more important, so that the debate is not so much about whether or not to regulate hate speech on social media, but rather about "who should draw a line that is inherently subjective, and that changes over time and across societies" (Haggart and Tusikov, 2019).

Thirdly, leaving regulation to commercial platforms isn't enough; there needs to be a joint effort from private companies and governments for it to be effective. Today, platforms like Facebook, YouTube and Twitter have all put more effort into regulating content and comments considered harmful to others. In fact, "to help enforce its policies, Facebook has developed and deployed artificial-intelligence tools that can spot and remove content even before users see it" (Romm and Dwoskin, 2019). But, as they point out, the technology isn't perfect, especially when it comes to hate speech. To fill this gap, governments could catch the hateful comments that are missed by private technology. As Julian Knight, a British Conservative politician, asked a Google senior policy counsel: "why has your self-regulation so demonstrably failed, and how many chances do you need?" (Hendrix, 2018). Indeed, YouTube, the major video platform owned by Google, has designed algorithms to keep people engaged with their screens so they keep watching, which at times produces bad outcomes: "critics and independent researchers say YouTube has inadvertently created a dangerous on-ramp to extremism by combining two things: a business model that rewards provocative videos with exposure and advertising dollars, and an algorithm that guides users down personalized paths meant to keep them glued to their screens" (Roose, 2019). Even if YouTube's top executives have decided to tighten their content policy, their main purpose is profit-driven, which makes it difficult for them to actually ban all hateful comments and videos. This is where government policy could and should step in, to prevent people, especially those who are easily manipulated, from watching and being "brainwashed" (Roose, 2019) by extremist content or other biased narratives.

Moreover, the lack of transparency of private companies like Facebook, and the fact that they are in charge not only of providing the platform used for content publication but also of limiting that content, is in itself controversial. This raises doubts about the credibility of the process of filtering information, as the process could be biased, especially considering how secretive the companies keep it from the public. Involving the government in these kinds of decisions could make the process more transparent and therefore more effective. In fact, as Frank Sesno (Director of the School of Media and Public Affairs at The George Washington University) explains: "they fear that they are going to be held to account for the content that they say they are merely facilitating and not producing" (Hendrix, 2018).

Similarly, Chris Hughes, co-founder of Facebook, recently published an article on the "monopolistic" role of the company. He says that the company has enormous influence and even "control over speech"; indeed, CEO Mark Zuckerberg can single-handedly decide how to run the company's algorithms that determine what information people read and see, and "they just don't do it properly" (Hughes, 2019). As Reed summarizes Hughes's argument: "Facebook has responded to many of the criticisms of how it manages speech by hiring thousands of contractors to enforce the rules that Mark, and senior executives develop. In other words, responsibility for establishing censorship rules and enforcing them on Facebook cannot be entrusted to the management and staff of Facebook and must be turned over to the government" (Hughes cited in Reed, 2019). Facebook has also failed to take down pages and profiles that share "harmful" content. The company permitted "a notorious Russian neo-Nazi" to promote his white nationalist clothing for at least three years, giving him a place to propagate "white supremacist" comments and ideals and to "spout hate to thousands of followers" (Robins-Early, 2018). In this sense, it is necessary for the government to intervene on social media hate speech alongside private companies.

Finally, it is important to differentiate between censorship and regulation, in order to overcome the fear of government regulation of social media content. Fear and distrust of the regulation process are understandable, as it places in the hands of the authorities a great deal of power that could hardly be taken away. Nonetheless, distinguishing between the two terms remains very important in order to focus on what the real problem is.

Censorship is described by Anastaplo in Britannica as: "the changing or the suppression or prohibition of speech or writing that is deemed subversive of the common good. Whereas it could once be maintained that the law forbids whatever it does not permit, it is now generally accepted, that one may do whatever is not forbidden by law" (Anastaplo, 2019). Instead of seeing regulation as a limitation on freedom of expression, we should understand the process as a step towards better social cohesion, because "hate speech is speech, no doubt; but not all forms of speech or expression are licit" (Waldron, 2012, p14). The challenge is not whether regulation should be acceptable, but rather how to define its terms and conditions. Governments would not be censoring, but rather monitoring content with the aim of making the respective country, and even the world, more secure. As Natalie Alkiviadou explains: "the issue of hate speech regulation is usually presented by academics, civil society and international organisations as a balancing exercise between free speech and other freedoms and values such as freedom from discrimination and human dignity" (Alkiviadou, 2018, p25). It is an exercise in balance, not a restriction of freedom of speech. Thus, it is by maintaining a legal regulatory framework that an ethical approach to regulation can be achieved, without limiting freedom of speech. "By combining legal intervention with technological regulatory mechanisms — monitoring, ISP user agreements, user end software and hotlines — the harm caused by online hate can be diminished" (Banks, 2010, p239). This idea is also backed by two Indian scholars who believe that "with the help of combined effort from the government, the Internet Service Providers (ISPs) and online social networks, the proper policies can be framed to counter both hate speech and terrorism efficiently and effectively" (Chetty and Alathur, 2018, p108).

About freedom of speech

The debate over the regulation of social media by the authorities has two main sides: those who believe it would contribute to censorship and therefore limit freedom of speech, and those who trust that government regulation would merely provide legal frameworks to make social media safer. One of the main arguments on the freedom-of-speech side concerns government abuse of power. Yet governments do not need laws in order to block or filter content on social media; many of them, if not all, already use these kinds of platforms either for information or for political manipulation, so the argument that a regulatory framework for social media would lead to greater government manipulation, and less privacy, lacks evidence. Perhaps more scrupulous regulation of social media would even allow political manipulation to decrease, thanks to greater transparency and accountability.

Indeed, the topic is so controversial that, unless action is taken, there will never be enough theoretical knowledge to prove that something works until it has been experienced, especially in the domain of cyberspace. For such a regulatory framework to work, several considerations are needed. It is very important for the regulation process, and therefore the policy, to be transparent, but also very clear and detailed, in order to avoid abuse of power and conflicts of interest among all parties involved, and to hold the government accountable for its actions online and offline. A clear and comprehensive policy can serve as a solution to the "definitional ambiguity" (Keats Citron, 2017, p4) that is one of the main problems. Indeed, "clarity in the definition, meaning, and application of both terms would help constrain censorship creep" (Keats Citron, 2017, p5).

Our responsibility, then, as individuals who use social media daily, is to demand proof, to criticize, to stay alert and informed about what governments do and how, and to ask for transparency reports that put real pressure on the established laws. This has been happening, though it remains to be improved, in the case of the European Union. The European Union released a "code of conduct on countering illegal hate speech online" whereby, through a "common agreed methodology", organisations located across the territory "test how the IT companies are implementing the commitments in the Code" (European Commission, 2019). Even though it is not a written law, it is a step towards promoting regulation at a national level in the European Union:

“following the publication today of a study on laws and practices of 47 member states on blocking, filtering and removal of Internet content, the Council of Europe Secretary General Thorbjørn Jagland urged European governments to ensure that their legal frameworks and procedures in this area are clear, transparent and incorporate adequate safeguards for freedom of expression and access to information in compliance with Article 10 of the European Convention on Human Rights” (Council of Europe, 2016).

For these reasons, the case for government regulation of social media turns out to be stronger and better sustained.

Conclusion

To sum up, the rise in social media usage has had a massive impact on society today. These platforms have allowed people to share opinions more widely than ever and have provided an electronic space of near-complete freedom. Nevertheless, these benefits have been accompanied by drawbacks, such as the propagation of hate speech, which causes great social damage in real life. In order to avoid social harm both online and offline, national governments need to play an active role in regulating online hate speech, particularly on social networking sites, for three reasons: it will help make social media platforms a safer environment; outsourcing regulation to social media platforms alone is not enough; and it will demystify the regulation of cyberspace, so that people stop fearing it and instead see that regulation is not the same as censorship. Future work could extend this focus to a broader question: even if governments do decide to limit hate speech online, what evidence is there that this will actually reduce real-life violence and social polarization?

References

Alkiviadou, N. (2018). Hate speech on social media networks: towards a regulatory framework? Retrieved from: https://search-proquest-com.ezproxy.lib.uts.edu.au/docview/2203232328/C43A5789F3EF4954PQ/20?accountid=17095

Anastaplo, G. (2019). Censorship; Encyclopedia Britannica. Retrieved from: https://www.britannica.com/topic/censorship

Banks, J. (2010). Regulating Hate Speech Online. International Review of Law, Computers & Technology Vol. 24, №3, 233 –239. Retrieved from: http://web.a.ebscohost.com.ezproxy1.library.usyd.edu.au/ehost/pdfviewer/pdfviewer?vid=1&sid=026fbc5a-a7ec-444f-ad76-6d1c1f7763d1%40sdc-v-sessmgr03

Boyd, D and Ellison, N. (2007). Social Network Sites: Definition, History and Scholarship. Retrieved from: https://academic.oup.com/jcmc/article/13/1/210/4583062

Chetty, N and Alathur, S. (2018). Hate speech review in the context of online social networks. Retrieved from: https://www.researchgate.net/publication/324955141_Hate_speech_review_in_the_context_of_online_social_networks

Council of Europe. (2016). Council of Europe Secretary General concerned about Internet censorship: Rules for blocking and removal of illegal content must be transparent and proportionate. Retrieved from: https://www.coe.int/en/web/tbilisi/-/council-of-europe-secretary-general-concerned-about-internet-censorship-rules-for-blocking-and-removal-of-illegal-content-must-be-transparent-and-prop?desktop=false

Doffman, Z. (2019). Facebook attacked for refusing to remove neo-nazi content even after Christchurch; Forbes. Retrieved from: https://www.forbes.com/sites/zakdoffman/2019/03/25/facebook-attacked-for-refusing-to-remove-neo-nazi-content-even-after-christchurch/#106a846732d6

European Commission. (2019). The EU code of conduct on countering illegal hate speech online. Retrieved from: https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/countering-illegal-hate-speech-online_en

European Commission against Racism and Intolerance (ECRI). (2019). Hate speech and violence. Retrieved from: https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/hate-speech-and-violence

Gilbert, D. (2018). Hate speech is still going viral on Facebook in Myanmar, despite Zuckerberg's promise. Retrieved from: https://news.vice.com/en_us/article/8xdw83/zuckerberg-says-facebook-is-taking-its-myanmar-problem-seriously-activists-say-thats-bs

Haggart, B and Tusikov, N. (2019). Stop outsourcing the regulation of hate speech to social media; the conversation. Retrieved from: https://theconversation.com/stop-outsourcing-the-regulation-of-hate-speech-to-social-media-114276

Hendrix, J. (2018). The age of unregulated social media is over; Just Security. Retrieved from: https://www.justsecurity.org/52346/age-unregulated-social-media/

Internet World Stats. (2019). Internet Users in the World by Regions — March 2019. Retrieved from: https://www.internetworldstats.com/stats.htm

Keats Citron, D. (2017). What to Do about the Emerging Threat of Censorship Creep on the Internet. Retrieved from: https://object.cato.org/sites/cato.org/files/pubs/pdf/pa-828.pdf

Kietzmann, J et al. (2011). Social Media? Get Serious! Understanding the functional building blocks of Social Media, p241–251. Retrieved from: https://www.researchgate.net/publication/227413605_Social_Media_Get_Serious_Understanding_the_Functional_Building_Blocks_of_Social_Media

Matsuda, M. J., Lawrence, C. R., Delgado, R., & Crenshaw, K. W. (1993). Words that wound. Boulder, CO: Westview.

Reed, K. (2019). Co-founder Chris Hughes calls for government break-up of Facebook. Retrieved from: https://www.wsws.org/en/articles/2019/05/16/hugh-m16.html

Robins-Early, N. (2018). Facebook Let A Notorious Russian Neo-Nazi Profit Off Its Platform for Years; The HuffPost. Retrieved from: https://www.huffingtonpost.com.au/entry/facebook-nazi-white-rex-nikitin_n_5b4f949ee4b0de86f4894b80

Romm, T and Dwoskin, E. (2019). Facebook says it will now block white-nationalist, white-separatist posts; the Washington Post. Retrieved from: https://www.washingtonpost.com/technology/2019/03/27/facebook-says-it-will-now-block-white-nationalist-white-separatist-posts/?utm_term=.547de7ba1b36

Roose, K. (2019). The making of a Youtube radical; The New York Times. Retrieved from: https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html

Statista. (2019). Number of social media users worldwide from 2010 to 2021 (in billions). Retrieved from: https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/

Tsesis, A. (2001). Hate in Cyberspace: Regulating Hate Speech on the Internet. Retrieved from: https://lawecommons.luc.edu/cgi/viewcontent.cgi?article=1289&context=facpubs

Udoh-Oshin, G. (2017). Hate Speech on the Internet: Crimes or Free Speech? Retrieved from: https://digitalcommons.liu.edu/cgi/viewcontent.cgi?article=1009&context=post_honors_theses

Waldron, J. (2012). Why Call Hate Speech Group Libel? In The Harm in Hate Speech (pp. 34–64). Cambridge, Massachusetts; London, England: Harvard University Press. Retrieved from http://www.jstor.org.ezproxy1.library.usyd.edu.au/stable/j.ctt2jbrjd.5

Yaraghi, N. (2018). Regulating free speech on social media is dangerous and futile. Retrieved from: https://www.brookings.edu/blog/techtank/2018/09/21/regulating-free-speech-on-social-media-is-dangerous-and-futile/

Zeltner, B. (2013). Racism and high blood pressure link suggested in research in black patients. Retrieved from: https://www.cleveland.com/healthfit/2013/08/racism_and_high_blood_pressure.html


Maite Laris

Passionate about international relations, natural-born debater and defender of human rights.