More accountability and less fear — the case for bottom-up harmful content policies

Stefania Koskova
Read, Write, Participate
3 min read · May 31, 2019
Political Accountability and New Technologies — POINT Conference 8.0 in Sarajevo

Prompted by outbreaks of inter-communal violence, terrorist attacks, and state-sponsored mass killings with links to social media abuse, online content policy and regulation to prevent future tragedies are high on the agenda of lawmakers and company executives. Just this month, governments and companies pledged to intensify the search for solutions in the Christchurch Call.

But while social media’s role in fueling real-world violence is increasingly recognized, it is still not well understood. There is even less understanding of the impacts (both positive and negative) of measures deployed so far to curb online abuse, ranging from taking down individual content and accounts, blocking and filtering, reducing visibility or slowing distribution, to communication campaigns and prosecutions. Discussions of effective technology regulation and content policies needed to keep communities safe are often informed by fear and pressure to ‘do something’, rather than empirical evidence.

Coinciding with the Christchurch Call meeting in Paris was POINT, South Eastern Europe’s leading annual civic tech conference, held in Sarajevo on 16–18 May. There I moderated a discussion titled Technology, Fear and Accountability, about experiences and lessons from responses to online hate speech and disinformation in Sri Lanka and Myanmar. Sanjana Hattotuwa of the Center for Policy Alternatives in Sri Lanka, a special advisor to the ICT4Peace Foundation, cautioned against “mono-causal and single-sourced explanations for violent conflict”, which often manifest as blaming Facebook for its failures to prevent the growth of violence and hate on its platforms. He suggested focusing instead on studying complex media ecologies. Victoire Rio of the Myanmar Tech Accountability Network highlighted the importance of local context and the ability to calibrate responses to the evolving situation on the ground, which requires content moderation decisions to factor in feedback from the grassroots.

The abundance of rainbow flags displayed prominently during the conference, in support of the first-ever pride parade in Bosnia and Herzegovina, to take place in September, was a reminder that the risk of violence is dynamic: since the parade was announced, there has been a marked increase in online vitriol, resulting in physical violence against members of the LGBTQ community.

Based on Sanjana’s account, last year’s anti-Muslim violence in Sri Lanka led to improved collaboration with Facebook and Twitter, and to a better response to the disinformation and hateful content that spread on social media after the Easter Sunday terrorist attacks. Cauterizing content and conduct with the potential to spark violence, while taking into account signals and data from various sources on the ground, might sound like an elusive goal, but honing this practice shows promise. Outrage, blame, and pointing out failures are necessary to garner attention for the gravest mishaps and violations when holding companies (and governments) accountable. But the way forward is to demand more collaboration in testing and iteratively developing solutions that work.

Further, these solutions should be designed for and piloted in countries with a history of recent violent conflict and weaker institutions.

Despite industry- and government-driven platforms and networks like the Global Internet Forum to Counter Terrorism and the EU Internet Forum, engagement and collaboration — in particular with grassroots civil society in non-Western countries — remain ad hoc and reactive. Regulatory models developed in and for liberal democracies might be misused by governments less committed to human rights. Safeguards must therefore move beyond the rhetorical, and potential consequences and risks must be considered in advance.

In the Balkans and in South Asia, there is a great appetite to contribute and to work constructively and collaboratively on issues of safety and security. There is a vision of a way forward: smart, rights-protecting, security-risk-reducing legal and policy frameworks for tackling online harms in order to prevent offline tragedies. It is this bottom-up, iterative, collaborative approach that needs to be tapped into and scaled.

You can watch the talk here.

A summary by the ICT4Peace Foundation can be found here.
