Rebalancing Regulation of Speech: Hyper-Local Content on Global Web-Based Platforms¹

There is too much harmful speech online, yet too much harmless opinion is being censored. There is more big data on harmful speech than ever before, but little information on what the data means. This is the conundrum of our times.

Online public and private discourse in India is mediated through web-based platforms.² The platforms are global, with names that are familiar to people in small towns, even villages, across the world.³ Someone might live her whole life in one neighborhood but still clock hours of public and private engagement on the global Facebook, Twitter, WhatsApp, Snapchat, Google and Instagram. These global platforms hold, enable and carry so many local conversations between local actors. Some of these conversations have local consequences.

The web-based platforms are mostly US-based⁴ and mediate an unfathomable quantity of speech between local actors around the world. Some of this speech is harmful⁵, which means that data on harmful speech worldwide has been accumulating with global platforms.

As public discourse moves over to web-based platforms, the usual local modes of intervention and study lose their footing. Online harmful speech therefore has global and local dimensions, and this piece attempts to tease these intertwined questions apart so that researchers may attempt to answer them. In the process, it also discusses how web-based platforms’ misguided reactions to government pressure to regulate harmful speech threaten freedom of expression.

This essay begins by discussing the elusive data on harmful speech online, and moves on to outline how the Internet has disrupted local regulation of harmful speech and how regulators are pushing back. Finally, it describes the effect of this unfortunate situation on freedom of expression online and offers a constructive way forward.

Data on harmful speech online

We have more data on harmful speech online than we have ever had on harmful speech offline. For example, YouTube tracks the number of times a video is viewed as well as individual user preferences.⁶ Instagram, Facebook and Amazon are able to target advertisements of books and clothes disturbingly intelligently.⁷ These platforms’ algorithms are sophisticated enough to track viral speech⁸, harmful or otherwise.

The same algorithms are also sophisticated enough to track receptiveness to harmful speech. They already curate content to create a personalized experience based on whether one tends to search for ‘violence against women’ or ‘fake rape allegations’.⁹ This personalized content-curation creates a ‘filter bubble’¹⁰ around each of us, showing us content, incendiary or otherwise, based on our own personal vulnerabilities.

The algorithms target users based on data about individual preferences, responses and interests. However, the datasets lie beyond our reach. They are held by social media companies, media houses and law enforcement agencies. This data is the key to many of the mysteries and questions looming over harmful speech online. Yet it is almost never shared with researchers who may be able to extract useful findings from it, outside tightly controlled company-funded studies that do not ask the difficult questions¹¹.

It is not clear whether the companies are able to flag, categorise or unpack all of the data. For example, it may be easier for most content reviewers, wherever they might be based, to grasp the meaning and implications of the word ‘negro’ as opposed to the word ‘chamar’. ‘Chamar’ has significant but local baggage in India.¹² This is apparent from news reports from villages, of upper caste men attacking the Chamar community with weapons and burning their homes.¹³

Harmful speech can be hyper-localised, significant in just one village or district, such that even people from the same state might not understand it. This is the sort of thing that local police and local journalists might notice and address but a global corporation might miss completely.

Local regulation of harmful speech

Regulation of harmful speech has traditionally been the responsibility of local governments. Most countries have laws restricting dangerous speech¹⁴ like incitement to violence. This part of the essay uses India’s regulation of speech as an illustration of the strategies used by states to cope with harmful speech.

A close look at hate speech law in India suggests that certain regulatory strategies are used commonly to regulate speech. These include criminalization, censorship, removal of content from circulation and regulation of the mass media.¹⁵

Indian law criminalizes many kinds of potentially harmful speech, ranging from incitement to violence, to ‘insulting’ members of specified castes or tribes.¹⁶ These laws are medium neutral and apply to spoken words as well as media content. There are also laws that the state can use to prevent the circulation of harmful content: the government may seize books, ban their import and even prevent public speeches that might incite violence.¹⁷

In addition to general law, India regulates the mass media with medium-specific regulation. Films must obtain prior approval for public screening¹⁸, and cable television channels can be forced off the air for failing to comply with content guidelines¹⁹.

It is evident that the Indian government is accustomed to a great degree of control and many modes of intervention in coping with harmful speech. Unsurprisingly, it views the Internet as a highly disruptive medium.²⁰ After attempting to criminalise and block speech online for a few years, the Indian government has added network shutdowns to its repertoire of strategies to cope with harmful speech online.²¹ In 2017, there were 70 documented instances of Internet shutdowns in India.²²

Reports of similar shutdowns around the world²³ suggest that many countries are responding in this manner to what they see (rightly or wrongly) as harmful speech online. In most cases, this is likely to be a disproportionate response.²⁴ However, the shutdowns, in combination with other threats to platforms’ operations, seem to have affected the platforms’ business, leading them to react in ways²⁵ that may damage public discourse.²⁶

Freedom of expression and harmful speech

Freedom of expression is eroded by global platforms’ reactions to the threats to their businesses in markets around the world.²⁷ The threats are delivered through shutdowns, potential data localization laws and other means. The platforms are reacting by self-regulating in a manner that imperils free expression while failing to truly address the problem of hyper-local harmful speech.

For example, when protests erupted over the killing of popular rebel leader Burhan Wani in Kashmir in 2016, Facebook restricted and removed speech about the incident on its platform.²⁸ US- and UK-based academics reported that their profiles were disabled and their posts removed when they posted information about the incident or criticized the armed forces’ conduct in Kashmir.²⁹ Similarly, the Kashmir Solidarity Network’s page was removed and its administrator’s account disabled.³⁰

This is an illustration of the threat to liberty and freedom of expression contained within Facebook’s reaction. Kashmir is the classic complex case in which freedom of expression matters more than ever, although there are legitimate concerns about harmful speech and violent extremism. No one would argue that extremism is not of serious concern in Kashmir, especially in the wake of a violent incident. However, it is equally clear that the use of excessive force and the violation of the human rights of the Kashmiri people are of equal, arguably greater, concern.³¹ This is why information and news from Kashmir are critical: there is little else to hold the government and armed forces accountable. Given that the press is frequently banned from reporting from the region³², the speech on web-based platforms becomes critical. Facebook failed to navigate this balance in Kashmir and will very likely fail to do so elsewhere under similarly complex circumstances.

In the context of some kinds of harmful speech, such as online misogyny, companies like Twitter and Facebook have been working with experts.³³ In principle this is the beginning of a useful feedback loop. However when one considers the diversity of jurisdictions involved and the fact that the experts cannot possibly have local knowledge of these jurisdictions, it becomes apparent that this is far from simple. The platforms, to their credit, are beginning to appreciate and acknowledge this.³⁴

Platforms’ efforts to censor, apart from the freedom of expression criticism they elicit, may not even be effective. Hutu extremists used the phrase ‘cut down the tall trees’ during Rwanda’s genocide.³⁵ It is unclear how human content moderators sitting in another country can be expected to understand the local significance of such a phrase or image. This is worrying if one considers how communal violence has been triggered by an image of the Prophet and the Kaaba Sharif in Baduria³⁶, and by rumours about cow slaughter in Jharkhand³⁷. In both cases, the violence resulted in death.

Way forward

Without information, it is difficult for scholars to develop a body of scholarship to cope with harmful content online. It would be progress if experts started developing a range of strategies to achieve the delicate balance of protecting freedom of expression while addressing harmful content online³⁸. Since this is a global problem, with different local and hyper-local dimensions in different parts of the world, the platforms need to engage with a diversity of experts.

It is unwise to respond to serious harmful speech problems by trial and error, or by testing one strategy after another. It would be more productive to evaluate a spectrum of strategies and consider their relative costs and benefits.³⁹ At this stage I must confess that little is known about whether the global platforms did in fact conduct such a consolidated evaluation of potential strategies for handling online harmful speech.

What is clear is that the data is available, and experts around the world are both available and willing to come together to work with the data. The data may help us understand patterns of hate, harm and fear and ways to undo them in different local contexts. However, it is important to note that this data may also present its own threats. It can be used to violate privacy, and profile and target political dissenters. Even algorithmic flagging of videos is dual-use technology and can be used to flag controversial journalism or political dissent as extremist content. This is why platforms’ working with governments directly, outside the influence of constitutional courts, creates human rights risks.

It would appear that the global platforms have erred in waiting until they found themselves backed into knee-jerk reactions to harmful content. They may do better, and we may all do better, if they start sharing their data with researchers. This will permit experts to develop strategies that offer the most promising chance of coping with harmful content without endangering the right to freedom of expression. It may emerge that the technology that created the filter bubble is also capable of helping us track and respond to harmful speech without compromising individual privacy or liberty.

[1] Based on my talk at a workshop on the intersection of algorithms and human behavior held by the Berkman Klein Center for Internet & Society at Harvard University in July 2017. I wish to thank Daphne Keller for her feedback on my draft, and the Berkman Klein Center community for all the conversations that have shaped my work.

[2] See International Telecommunication Union, Measuring the Information Society Report, Volume 1 (2017).

[3] We Are Social, Digital in 2017: Southern Asia (2017) as quoted in Statista India: social network penetration 2016 | Statistic, https://www.statista.com/statistics/284436/india-social-network-penetration/.

[4] Id.

[5] Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Human Rights Council, U.N. Doc. A/HRC/23/40 (April 17, 2013) (by Frank La Rue) available at http://undocs.org/A/HRC/23/40.

[6] See Google Support, YouTube Analytics basics — YouTube Help, https://support.google.com/youtube/answer/1714323?hl=en&ref_topic=3025741 [Accessed 17 Jan. 2018]; Google Support, Sharing report — YouTube Help, https://support.google.com/youtube/answer/1733022?hl=en [Accessed 17 Jan. 2018]; and Google Developers, YouTube Analytics and Reporting APIs, https://developers.google.com/youtube/analytics/ [Accessed 17 Jan. 2018].

[7] Amazon.com, Amazon.com: Advertising Preferences, https://www.amazon.com/adprefs [Accessed 17 Jan. 2018]; Facebook Business, Choose your audience, https://www.facebook.com/business/products/ads/ad-targeting [Accessed 17 Jan. 2018]; Facebook IQ, Audience Insights, https://www.facebook.com/iq/tools-resources/audience-insights [Accessed 17 Jan. 2018]. See also Madsbjerg and Rasmussen, Advertising’s Big Data Dilemma, Harvard Business Review, available at: https://hbr.org/2013/08/advertisings-big-data-dilemma; and Zeynep Tufekci, We’re Building an Artificial Intelligence-Powered Dystopia [video], TED Global (2017), https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads/transcript [Accessed 21 Jan. 2018].

[8] Justin Cheng et al, Can cascades be predicted?, Proceedings of the 23rd international conference on World wide web, April 07–11, 2014, Seoul, Korea, https://dl.acm.org/citation.cfm?doid=2566486.2567997 [Accessed 21 Jan. 2018].

[9] See Mostafa M. El-Bermawy, Your Filter Bubble is Destroying Democracy, Wired, November 18, 2016, https://www.wired.com/2016/11/filter-bubble-destroying-democracy/.

[10] Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You, 2011.

[11] See for example, Demos, Counter-Speech Examining Content that Challenges Extremism Online, (2015) available at: https://www.demos.co.uk/wp-content/uploads/2015/10/Counter-speech.pdf which focuses on counter-speech instead of asking whether other strategies might be more effective.

[12] Swaran Singh and others v. State and another, (2008) 8 SCC 435.

[13] Priya Ramani, Love and Loss in the Time of Lynching, The Wire, 2017 available at https://thewire.in/caste/love-loss-time-lynching.

[14] Susan Benesch, Dangerous Speech: A Proposal to Prevent Group Violence (2013), available at https://dangerousspeech.org/wp-content/uploads/2018/01/Dangerous-Speech-Guidelines-2013.pdf .

[15] Discussed in detail in Chinmayi Arun and Nakul Nayak, Preliminary Findings on Online Hate Speech and the Law in India. Research Publication №2016–19. The Berkman Klein Center for Internet & Society, (2016).

[16] Id.

[17] Id.

[18] The Cinematograph Act 1952, Section 5B.

[19] The Cable Television Act, 1995, Section 20.

[20] Govt. Studying Report on Online Abuse, The Hindu, September 29, 2017 available at: http://www.thehindu.com/news/national/govt-studying-report-on-online-abuse/article19772279.ece [Accessed 21 Jan. 2018]; Madhuparna Das, Social Media Posts Trigger Seven Communal Riots in a Month in West Bengal, The Economic Times, July 8, 2017 available at: https://economictimes.indiatimes.com/news/politics-and-nation/social-media-posts-trigger-seven-communal-riots-in-a-month-in-west-bengal/articleshow/59496771.cms [Accessed 21 Jan. 2018].

[21] Discussed in detail in Chinmayi Arun and Nakul Nayak, Preliminary Findings on Online Hate Speech and the Law in India. Research Publication №2016–19, The Berkman Klein Center for Internet & Society, 5 (2016).

[22] Software Freedom Law Centre, Internet Shutdowns in India, https://www.internetshutdowns.in [Accessed 21 Jan. 2018].

[23] See Access Now, #KeepItOn, https://www.accessnow.org/keepiton/ [Accessed 21 Jan. 2018].

[24] Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Human Rights Council, U.N. Doc. A/HRC/35/22 (March 30, 2017) (by David Kaye) available at https://undocs.org/A/HRC/35/22.

[25] Sam Levin, Tech giants team up to fight extremism following cries that they allow terrorism, The Guardian, June 26, 2017 available at: https://www.theguardian.com/technology/2017/jun/26/google-facebook-counter-terrorism-online-extremism.

[26] Danielle Citron, What to Do about the Emerging Threat of Censorship Creep on the Internet, Cato Institute, Policy Analysis, Number 828 (2017).

[27] Id.

[28] Abhishek Saha, Burhan effect: Facebook blocks page of Kashmir magazine, deletes cover of issue, Hindustan Times, July 9, 2017 available at http://www.hindustantimes.com/india-news/burhan-effect-facebook-blocks-page-of-kashmir-magazine-deletes-cover-of-issue/story-FWxU7PnATmZc52jClkkl0O.html [Accessed 18 Jan. 2018]; Vidhi Doshi, Facebook under fire for ‘censoring’ Kashmir-related posts and accounts, The Guardian, July 19, 2016, available at https://www.theguardian.com/technology/2016/jul/19/facebook-under-fire-censoring-kashmir-posts-accounts [Accessed 18 Jan. 2018].

[29] Id.

[30] Anuj Srivas, The Facebook-Kashmir Episode Blocks: Invisible Censorship, The Wire, July 31, 2016, https://thewire.in/55156/the-facebook-kashmir-blocks-technical-errors-editorial-mistakes-and-invisible-censorship-galore/ [Accessed 18 Jan. 2018]; Piyasree Dasgupta, Abandoning Nuance, Facebook Is Deeming Posts On Kashmir ‘Terror Content’, July 31, 2016, Huffington Post India, http://www.huffingtonpost.in/2016/07/31/abandoning-nuance-facebook-is-deeming-posts-on-kashmir-terror_a_21440637/ [Accessed 18 Jan. 2018].

[31] International Commission of Jurists, Human Rights in Kashmir (1995) available at: https://www.icj.org/wp-content/uploads/1995/01/India-human-righst-in-Kashmir-fact-finding-mission-report-1995-eng.pdf [Accessed 21 Jan. 2018].

[32] Toufiq Rashid, Kashmir Newspapers Raided, Printing Banned for 3 days to ‘Ensure Peace’, Hindustan Times, July 17, 2016, available at: http://www.hindustantimes.com/india-news/j-k-govt-bans-publication-of-newspapers-in-valley-for-3-days-to-ensure-peace/story-VbUcLgOQOScIxCMmHdYrgP.html [Accessed 21 Jan. 2018]; Rayan Naqash, #Mediagag in Kashmir: Journalists unite to protest the ban on Kashmir Reader, The Scroll, October 5, 2016, https://scroll.in/article/818254/mediagag-in-kashmir-journalists-unite-to-protest-the-ban-on-kashmir-reader [Accessed 21 Jan. 2018].

[33] Twitter, Twitter Safety Partners, https://about.twitter.com/en_us/safety/safety-partners.html [Accessed 18 Jan. 2018]; Facebook Help Centre, What is the Facebook Safety Advisory Board and what does this board do?, https://www.facebook.com/help/222332597793306/?ref=sc [Accessed 18 Jan. 2018].

[34] Monika Bickert, At Facebook We get Things Wrong — but We Take our Safety Role Seriously, The Guardian, May 22, 2017, available at https://www.theguardian.com/commentisfree/2017/may/22/facebook-get-things-wrong-but-safety-role-seriously [Accessed 22 Jan. 2018].

[35] Andrew Harding, Rwanda Tries to Heal its Wounds, BBC News, August 30, 2003, available at: http://news.bbc.co.uk/2/hi/programmes/from_our_own_correspondent/3191489.stm [Accessed 18 Jan. 2018].

[36] Sweety Kumari, Looking Back, Bengal fights ‘Communal’ Tag: Basirhat Riots Lead the Pack, Social Media ‘Menace’ under Police Scanner, The Indian Express, 31 December 2017.

[37] Saurav Roy, After Lynchings, Jharkhand to Expel People Sharing Rumours on Social Media, Hindustan Times, July 10, 2017, available at: http://www.hindustantimes.com/india-news/after-lynchings-jharkhand-to-expel-people-sharing-rumours-on-social-media/story-iHkjSLRyuJ58q9G8Wzc0WL.html [Accessed 22 Jan. 2018].

[38] Nathan J. Matias, The Benefits of Massively-Scaling Platform Research and Accountability, Berkman Klein Center for Internet and Society, Perspectives on Harmful Speech Online (2017) available at: https://medium.com/berkman-klein-center/the-benefits-of-massively-scaling-platform-research-and-accountability-b969a58b5d5.

[39] Id.