“Trends in Content Regulation in Africa and Beyond,” Report from the GNI Session at FIFAfrica

Global Network Initiative
Published in The GNI Blog
Oct 9, 2020

On September 28, GNI hosted a session on “Trends in Content Regulation in Africa and Beyond” as part of the Forum on Internet Freedom in Africa (FIFAfrica). Speakers from business, civil society, and academia shared their insights on current approaches to content regulation in various countries in Africa, identifying issues and possible multistakeholder solutions. This session built upon a series of consultations on content regulation around the world, including events focused on regulatory proposals in the EU, Pakistan, India, and the UK.

The session began with a presentation of GNI’s content regulation policy brief, which provides practical guidance to governments on how to implement content regulations that are effective, fit for purpose, and protect and enhance fundamental rights. GNI’s Policy Director summarized the key learnings distilled in that brief, reflecting on the importance of human rights principles such as legality, legitimacy, necessity, and proportionality.

Subsequently, Facebook’s Public Policy Manager for Human Rights in Africa, Jeanne Elone, outlined the company’s ongoing approach to content moderation. She emphasized that the company is trying to balance the safety and security of its users with freedom of expression and other rights, and that content regulation inevitably impacts this effort. Facebook has published a white paper that outlines key principles for content regulation, including consultation, transparency, and accountability.

After this intervention, panelists weighed in on different content regulation initiatives from across the continent. Speakers included: Berhan Taye, Africa Policy Manager and Global Internet Shutdowns Lead at Access Now; Charlie Martial Ngounou, Founder of AfroLeadership; Molly Land, Associate Director of the University of Connecticut Human Rights Institute; and Muthoki Mumo, Sub-Saharan Africa representative at the Committee to Protect Journalists (CPJ).

The panelists noted that many African countries are concerned about the proliferation of fake news, misinformation, and hate speech on the Internet. Misinformation related to COVID-19 has also been on the rise, posing a threat to public health. Citing the dangers these online trends pose to democratic governance and national security, several African countries, including Kenya, Tanzania, Ethiopia, and Nigeria, have introduced new laws or amendments to existing ones. While the cited rationales for these efforts are understandable, the legislation has often been rushed (coinciding in several cases with key national elections) and includes flawed provisions that restrict freedom of expression and privacy, stifle the press, and suppress dissenting voices.

One alarming trend common across these initiatives is a lack of definitional clarity. For instance, Kenya’s Computer Misuse and Cybercrimes Act, 2018, contains provisions that criminalize “false publication” and the publication of “false information” without clearly defining what constitutes “fake news.” Ethiopia’s Hate Speech and Disinformation Prevention and Suppression Proclamation, 2020, adopts a broad definition of disinformation, i.e., “speech that is false, is disseminated by a person who knew or should reasonably have known the falsity of the information and is highly likely to cause a public disturbance, riot, violence or conflict….” As GNI’s content regulation brief stresses, where laws fail to provide sufficient definitional clarity and guidance, companies and users cannot properly assess what content is permissible. This can result in over-removal by companies and a chilling effect on user speech.

Vague terminology leaves room for violations of digital rights, as well as misuse of the laws by governments or individuals. Indeed, there is already evidence of these impacts. In Ethiopia, journalists have been targeted for COVID-19-related reporting, and Kenyan authorities have arrested social media users for spreading “false information” since the start of the pandemic.

In addition, panelists noted, and GNI’s brief also points out, the challenges that stem from content regulation efforts that take an overly broad approach to the range of content they attempt to address. Tanzania’s Electronic and Postal Communications (Online Content) Regulations, 2020, establish a sweeping set of prohibited content categories, ranging from content that “causes annoyance” or leads to “confusion about the economic condition in the country” to anything that “harms the prestige or status” of Tanzania. Similarly, the Social Media Bill introduced in Nigeria’s Senate prohibits not only false statements, but also speech that might affect the security of Nigeria, affect Nigeria’s relationship with other countries, influence the outcome of an election to any office in a general election, or cause enmity or hatred toward a person or group of persons.

Several laws also impose disproportionate penalties for violations. In Tanzania, anyone convicted of defying the regulations faces a fine of at least 5 million Tanzanian shillings (roughly $2,200), a minimum of 12 months’ imprisonment, or both. Bloggers and owners of online forums are also required to register with the government and pay a $900 licensing fee, restricting the growth of businesses and content creators.

Similarly, Ethiopia’s law grants the government the authority to fine and imprison citizens for their social media activity. It also includes provisions that increase punishment for individuals and online groups with more than 5,000 followers, an arbitrary threshold and a metric that is often outside users’ control. In Ethiopia, speech by government officials, activists, and others has spread through social media and fueled violent conflict. However, vague terminology and excessive punishment do not adequately address this problem. As Access Now has noted, the legislation was crafted without proper evidence and research into the harmful impacts of hate speech and disinformation, and does not “consider whether existing provisions in the criminal code addresses the root causes [of the violent conflict].”

Tech companies are also struggling to respond to a rapidly changing news and information environment. For instance, while some social media companies have made important advances in their transparency reporting, the metrics are not always sufficiently granular. In particular, they are not broken down by geography or language within a given country or context, which makes it difficult to assess impact. Moreover, machine-learning tools, whose utility is already limited where the interpretation of content is context-dependent, are often even more limited in their ability to interpret certain languages. This makes it all the more essential to employ human content moderators from diverse demographic and language backgrounds. Content reporting could also be offered in more languages, and companies should ensure that citizens can report harmful content via mobile devices.

These challenges are exacerbated by content regulations that require intermediaries to judge what content is illegal, or that otherwise create confusion as to when intermediaries may be liable for user content. Under the Ethiopian hate speech law, social media platforms are obliged to take down content that is deemed false or harmful within 24 hours of being notified. However, the legislation is not clear on who provides this notification: no authority is named, meaning that platforms could be expected to take down any content that an individual or government entity reports to them.

Navigating these challenges will prove essential. Fake news, misinformation, and hate speech pose threats to individual users, but as this discussion demonstrated, so too does overly broad or misaligned content regulation. Panelists advised that there are small steps stakeholders can take now. Seeking to understand policymakers’ incentives for legislation (in some cases, determining whether legislation is intended to stifle critical voices) and building a space of trust are helpful starting points. Legislators should also look beyond individual pieces of content, toward the root of many risks for users of digital communications services and platforms: coordinated efforts, such as troll farms, “net centers,” and groups with some affiliation or relationship with governments, that disproportionately result in greater harms.
