What can we do about online extremism?

Elena Yi-Ching Ho
Reset Australia
3 min read · Aug 24, 2021
[Image: a megaphone blasting out a screenshot of a social media post containing the word "extremism".]

We recently made a submission to the Federal Government’s inquiry into online extremist movements and radicalism. This blog post summarises some of our key thoughts.

The rise of extremist movements has been identified by the federal government as a growing concern in Australia. And in the absence of mechanisms to deal with extremism online, tech giants are playing a role in the dissemination of potentially harmful content.

What’s wrong with digital platforms?

Digital platforms such as Facebook, Instagram, Twitter, TikTok, Google and YouTube have forever changed how we consume information and interact with others.

As individuals, we tend to spend time and energy on things we are most curious about and interested in, and digital platforms have learnt how to use this to their advantage.

When you go online, the content you click, the websites you visit, and even the friend requests you make all become part of massive datasets that companies keep about you. These datasets are like filing cabinets full of information, which platforms use to calculate what content to show you to keep you on their apps.

When that content is cat videos and photos of your family, digital platforms can be a source of connection and joy. But when that content is dangerous or extreme, we risk falling down a rabbit hole of more and more extremist content.

Humans are endlessly intelligent, but our attention is finite. And digital algorithms are using all of the data they have on us to try to control as much of our attention as possible. Why? Because it’s good for their business.

What has this got to do with extremism?

“Extremists” refers to people who hold strong political or religious views and support violent, illegal and extreme actions to promote their ideologies.

And although digital platforms weren’t designed to fuel extremism, their business model, based on capturing our attention, has allowed for the creation of online echo chambers that reinforce extreme views.

Social media has become a common tool for far-right radicalisation, and this shouldn’t come as a surprise. As we have written before, algorithmic curation systems drive users to whatever content is most engaging, playing on our cognitive biases and pushing users further and further down ideological rabbit holes.

Put simply: once a digital platform thinks you’re interested in extreme content, you are likely to see more and more of it over time.
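To make that feedback loop concrete, here is a toy simulation we wrote for this post. It is a sketch built on invented assumptions, not a description of any real platform’s system: a pretend recommender ranks content purely by predicted engagement, and the user’s inferred interest is updated from whatever they click.

```python
# A toy simulation of an engagement-driven feedback loop. This is an
# illustration invented for this post, not any platform's actual code;
# every number and name here is an assumption.
import random

random.seed(42)

# Each piece of content gets an "extremity" score from 0.0 (benign) to 1.0.
CATALOGUE = [i / 100 for i in range(101)]


def predicted_engagement(item, inferred_interest):
    """The platform's guess at how engaging an item is for this user:
    items close to the user's inferred interest score highest."""
    return 1.0 - abs(item - inferred_interest)


def build_feed(inferred_interest, size=20):
    """Rank the whole catalogue purely by predicted engagement."""
    ranked = sorted(CATALOGUE,
                    key=lambda item: predicted_engagement(item, inferred_interest),
                    reverse=True)
    return ranked[:size]


interest = 0.10  # the user starts out only mildly curious
for step in range(15):
    feed = build_feed(interest)
    # Assumption: the most provocative item in the feed gets the click
    # most of the time (a stylised stand-in for engagement bias).
    clicked = max(feed) if random.random() < 0.8 else random.choice(feed)
    # The platform updates its picture of the user from clicks alone.
    interest = 0.7 * interest + 0.3 * clicked
    print(f"step {step:2d}: inferred interest drifts to {interest:.2f}")
```

The details are made up, but the dynamic is the point: nothing in the loop is trying to radicalise anyone. Each step just chases the next click, and the drift towards more extreme content is a side effect.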

What do we recommend?

The government has acknowledged that extremist movements are a growing threat in Australia, and it has attempted to tackle the problem through stringent content moderation rules. But this approach is insufficient, because it relies heavily on platforms’ self-regulation and self-reporting.

That’s why government regulators need to be empowered to look under the hood of these algorithms, to make sure that big tech platforms are held accountable for the harms they cause.

This level of public transparency could also arm researchers with the tools to better understand the radicalisation process, so that more coordinated efforts to stop radicalisation can be made.

To deal with online extremism and radicalism, it is not enough just to raise users’ awareness of what information they are consuming and why certain content is being served to them. Both tech giants and the government should also build mechanisms that reduce the risk, such as algorithmic transparency and safety measures backed by independent oversight.
