Can the EU regulate digital platforms without limiting freedom of speech and free exchange of thoughts?

FreeW1ll
Mar 11, 2022


Disinformation, hate speech, content moderation, and algorithmic decision-making are among the central issues of digital communication. Twenty years after the entry into force of Directive 2000/31/EC on e-commerce, two major and long-awaited legislative proposals destined to reshape the legal ecosystem of platforms are in the making: the Digital Services Act (DSA) and the Digital Markets Act (DMA). But there is an intense debate: worrying infringements of freedom of expression (Article 11 of the EU Charter of Fundamental Rights) have been reported, so further steps towards fairness and the protection of human rights are needed. The conflict between digital platform regulation and free speech stems partly from divergences between Member State laws and EU law, and partly from the lack of clear rules and protection mechanisms for cases such as content generated by whistleblowers or by civilians in wartime. The following list of policy proposals was drawn up in February 2022 during the preparations for the European Student Assembly, at a time when the European Commission aims to add hate speech to the list of ‘EU crimes’.

Content moderation

1. To prevent over-repressive moderation practices by platforms and to ensure that creators are not harmed by such practices, civil liability (indemnities) should extend to content moderation. These liability mechanisms could be invoked directly before independent bodies: for example, a user whose creative content has been removed could lodge a complaint with the Digital Services Coordinators and receive reparations.

2. Content moderation in exceptional situations (e.g., wars or disasters) should be clearly regulated to address cases such as victims’ voices being suppressed (e.g., victims who stream crimes being committed against them). Any such regulation must remain in line with human rights and must never be used to discriminate against groups or to undermine EU values.

3. Digital platforms should be held accountable for the bias of unwritten content moderation practices. Discriminatory practices need to be investigated through standardized tests, conducted regularly and reported by independent bodies, to ensure that all content is treated equally regardless of color, race, gender, beliefs, or political views (a minimal sketch of such a test follows this list).

4. The right to lodge a complaint against providers of intermediary services with the Digital Services Coordinator of the Member State concerned should remain affordable for the plaintiff, with the same financial and logistical support as for other legal proceedings. The settlement bodies proposed in the DSA should ensure equality before the law, not become a big new business for law firms.
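To make proposal 3 concrete, here is a minimal sketch of what one standardized disparity test could look like. Everything in it is a hypothetical illustration: the group labels, the counts, and the 1% significance threshold are invented for the example; a real audit body would define its own protocol.

```python
import math

def removal_rate_disparity(removed_a, total_a, removed_b, total_b):
    """Two-proportion z-test: is content from group A removed at a
    significantly different rate than content from group B?"""
    p_a = removed_a / total_a
    p_b = removed_b / total_b
    # Pooled proportion under the null hypothesis of equal treatment.
    p = (removed_a + removed_b) / (total_a + total_b)
    se = math.sqrt(p * (1 - p) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical audit data: equivalent test posts submitted on behalf of
# two demographic groups, and how many were removed by moderation.
p_a, p_b, z, p_value = removal_rate_disparity(
    removed_a=130, total_a=1000,   # group A: 13% removed
    removed_b=90,  total_b=1000,   # group B:  9% removed
)
print(f"removal rates: {p_a:.1%} vs {p_b:.1%}, z={z:.2f}, p={p_value:.4f}")
if p_value < 0.01:
    print("Disparity unlikely to be chance: flag for the independent audit body.")
```

One established way to generate such data is a paired audit: submitting near-identical pieces of content that differ only in the attribute under test, then comparing removal rates across the pairs.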

Transparency

5. The right to a safe exit: providing users with a copy of their administrative data (including IDs, passwords, and important documents) in the event of account suspension.

6. The right to transparent metadata and user-adjustable personas. News feeds and ads are governed by algorithmically created user personas and metadata. Users should have the right to 1) adjust their digital personas, 2) access the metadata of sponsored content, and 3) learn about all parties that have accessed their metadata (see the sketch after this list).

7. The right of lay users to stay adequately informed, especially about what they consent to and who accesses their hidden data. This entails standardized transparency reports covering all communications between digital platforms and any institutions (e.g., government agencies).
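As a rough illustration of proposal 6, the sketch below shows what a user-facing persona object could look like, with adjustable interest weights and a log of metadata accesses. All names and fields are hypothetical; no existing platform API is implied.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DigitalPersona:
    """Hypothetical user-facing view of an algorithmic persona."""
    user_id: str
    # Interest weights the recommender infers; exposing them lets the
    # user inspect and adjust what drives their feed (proposal 6.1).
    interest_weights: dict[str, float] = field(default_factory=dict)
    # Who accessed this persona's metadata, and when (proposal 6.3).
    access_log: list[tuple[str, datetime]] = field(default_factory=list)

    def adjust_interest(self, topic: str, weight: float) -> None:
        """User-initiated correction of an inferred interest."""
        self.interest_weights[topic] = max(0.0, min(1.0, weight))

    def record_access(self, party: str) -> None:
        """Every third-party read of the metadata is logged for the user."""
        self.access_log.append((party, datetime.now(timezone.utc)))

persona = DigitalPersona("user-42", {"politics": 0.9, "sports": 0.2})
persona.adjust_interest("politics", 0.3)   # the user tones down political content
persona.record_access("ad-broker-xyz")     # a sponsor queries the persona
print(persona.interest_weights, [p for p, _ in persona.access_log])
```

The design choice here is that the access log lives with the persona itself, so that transparency is a property of the data structure rather than an after-the-fact report.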

Preventive measures

8. Protection mechanisms for free-speech practitioners. Whistleblowers and journalists are being prosecuted and extradited for revealing facts to the public (e.g., WikiLeaks and Julian Assange); some journalists have been fired merely for expressing their views online in a private capacity. Actionable protocols are needed to protect freedom of expression, and independent bodies could be created at the EU level to protect free-speech practitioners.

9. Preventive measures should include education, the promotion of inclusive discourse, and proactive, pedagogical policies that combat hate speech and disinformation at the source, while also raising awareness of the risks associated with the use of social platforms.

10. Digital service providers should offer mental-health support mechanisms to content moderators; failing that, Member States should ensure the inclusion of such mechanisms in their social security frameworks.

Credits to/ Approved by: Ahmad Hammoudeh, Ana Galdamez Morales, Brice Bai, Ghanwa Altaf, Amber Mun, Louis Ryz, Alexandra Mihalari, Hanus Patera, Teresa Calogera, Sirine Ben Hadj Khalifa, Özge Arabaci Urgenç, Alexandra Soulioti, Corneliu Marina-Roxana, Laura Fotulová, João Rui Tanoeiro, Bram Michielsen, Radoslaw Stanislawiak, Chiriac Nicolae, Julie Kleinhans, Aleks Dorohin, Charlotte Grimberg.
