Digital Platforms and Extremism: Are Content Controls Effective?

It’s easy to agree on the need to fight the spread of terrorism, violent extremism, and hate through digital media. With each horrific act of terror, the demand for better solutions grows louder. The Christchurch Call to Action, signed on May 15, 2019, is the latest response: an action plan joining governments and companies in an intensified effort to eliminate such content online. The issue is receiving serious attention, yet malicious actors remain one step ahead. How do we effectively stop them?

Paris Peace Forum
Jun 21

What’s the Problem?

The internet and social media have become vital tools for spreading extremist views. They are now a dominant battleground for ideology and a sophisticated recruitment platform for causes ranging from religious radicalism to hardline political agendas. Extremist groups and individuals can develop high-quality content at low cost and easily reach a worldwide audience to spread fear and ideological propaganda, or even, as the March 2019 Christchurch attack in New Zealand showed, amplify terror through livestreaming. Society, governments, and technology providers know that we need to get ahead of this and neutralize the qualitative edge of extremists and their online agendas. But success has been elusive.

Be Smart

Digging for these needles in the haystack is not only a volume challenge. Algorithms still misfire when categorizing content. Engineers are working intensely to improve the efficacy of automated filters, but errors still occur too often.

Isn’t there a responsibility for moderates to match the organization and dedication of religious extremists?

Annette Young, France 24

Moderates don’t have the resources to compete. El Karoui called on the tech platforms not only to tighten their community standards but also to actively help moderates promote a counter-narrative, for example by adjusting their algorithms to feature moderate speakers more prominently in search results.

What’s at the End of the Tunnel?

Two themes emerged from the Forum’s discussion. First, the need for more action on the technical side from the platform providers, whether that be detection and removal, or actively boosting moderate voices. Second, content removal strategies can only be part of the solution. The battle for ideas is much broader and requires actively disseminating a counter-narrative, beyond the digital arena.

The iDove Project

We are losing the fight against extremism. Most approaches are top-down, military, and border-centric (a city or country at most), not really involving civil society or youth. Solutions are often very traditional and don’t include technology.

Eiman Kheir, African Union

While leaving space for dialogue, Kheir emphasized that counter-narratives need an aggressive push. She applauded efforts to amplify moderate voices and increase awareness of alternatives to the jihadist interpretation of Islam. She encouraged all actors to ramp up these efforts, adding the nuance that they should also be better contextualized by language, region, and social setting. The ideological battle transcends borders, and so must solutions. Specifically, she warned that more attention is needed in Africa and Asia. For tech companies, that means more contextual awareness feeding into their filtering analysis and more nuance in what they review.

Who’s on the Hot Seat?

Most of the attention on this issue gravitates to the platform providers — stricter controls, unambiguous removal of dangerous content, smarter technology to catch abusive content, and much more financial investment in this battle. Rightly so, given their gateway role.

As long as you are not using violence, there is space for dialogue. As long as there is space for dialogue, there is space for changing this narrative.

Eiman Kheir, African Union

What’s Next

These days, the major tech companies have defined clear standards for removing a broad swath of content that explicitly or implicitly incites violence or endangers others. Most of them are working aggressively to tackle the challenge. Still, they need to become much more nimble and less ambiguous in their review and removal of abusive content, even while defending freedom of expression and the foundational principles of a free and open internet. Extremist elements are ever more adept at maneuvering within the digital space, and their threshold for success is much lower. But even with the most effective content controls, the ideology that drives discontent and opens the door to radicalization will remain. Strategies to combat extremism and radicalization need to look more comprehensively beyond the digital arena.



Insights from the Paris Peace Forum

For advancing global governance
