How bad is COVID misinformation and what can we do about it?

Dylan Williams
Reset Australia
Jan 29, 2021

Australia has just started rolling out a multi-million dollar public health campaign to encourage public trust in the COVID vaccine.

And after a year of hearing weird stories about 5G towers, Bill Gates, and drinking bleach, there are fears that misinformation has become so widespread that it has made its way deep into the lives of our friends and families.

As we move towards the rollout, medical misinformation is becoming a serious public health risk — threatening to undermine community trust and uptake of the vaccine.

So what is misinformation?

Simply put, digital misinformation is any false, misleading, or harmful content that spreads online.

It isn’t a new phenomenon. But the massive growth of social media use over the last decade has given extreme fringe voices a direct route to billions of phones and laptops around the world.

This growth has fuelled an explosion of medical misinformation. A report by AVAAZ last year found that medical misinformation had been viewed more than 3.8 billion times online.

And whilst media diversity, quality education, socio-economic background, and trust in institutions all play a role in why a person is susceptible to medical misinformation, there is one factor that is undeniably amplifying the problem: the business model of social media companies.

Facebook, Twitter, and Google all have the same goal: to keep you using their apps for as long as possible, serve you as many ads as possible, and squeeze as much money out of advertisers as possible.

Extreme and sensational content is the most effective at keeping you glued to the screen.

The logic makes sense. Think about the Facebook posts you remember most vividly from the pandemic.

Chances are you don’t remember the dry posts from the Department of Health. But you do remember the video of the 5G tower being burnt down, or the video your aunt sent you of a doctor questioning the vaccine.

This is one of the reasons that public health information campaigns are struggling to combat the rise of misinformation — they just aren’t equipped to keep track of and respond to vaccine misinformation that is circulating online.

Well, what can we do about it?

Right now, governments around the world are focused on taking down and deleting misinformation.

Because posts are usually only taken down after they have already reached a significant audience, this approach alone will never be enough.

But there are simple, common-sense interventions that would equip researchers, governments, and public health workers with the information they need to effectively disarm medical misinformation.

1. A live list of COVID-19 content trending on social media

For public health campaigns to effectively counter misinformation, experts need to know exactly what is trending.

That’s why Reset Australia has formed a coalition with public health experts to call on the federal government to establish a live list of COVID-related websites being shared on social media.

The Live List policy was developed in consultation with academics and researchers who are trying to disarm misinformation in the community, but who find it increasingly difficult to do so when there is no way to know which misinformation narratives are emerging.

At the moment, only the social media platforms have access to this data. Imagine how beneficial it could be to public health officials.

The live list solves the first piece of the misinformation puzzle — how do we know what is out there?

2. An enforceable misinformation code

Once we know what misinformation is out there, we need an effective approach to deal with it.

That means governments need to set the rules for defining, assessing, responding to and, where necessary, removing misinformation online.

Right now, the tech giants are in the process of developing their own disinformation code. But the code is opt-in and unenforceable, and a similar version in the EU has already failed to slow the spread of misinformation.

Why? Because that code gave governments no power to enforce it and no access to the misinformation the platforms had identified. Self-regulation will never work because it places no accountability on the platforms.

For a misinformation code to work, it needs to be operated in the public interest by a government regulator with the power to enforce the code and penalise the big tech companies when they break the rules.

Reset Australia recently wrote in depth about the issues with the proposed disinformation code, which you can read here.

Final thoughts

So how bad is COVID misinformation? The short answer to this question is: we don’t know.

Only the big tech giants have a bird’s-eye view of just how much misinformation is circulating online.

But, is there more that governments can be doing right now to start fixing the problem? Absolutely.

This is why policies like the Live List are so important in this moment.

To effectively counter misinformation we need to know exactly what is out there, who is seeing it, and who is engaging with it.

Without this information, we’re left in the dark: focusing on misinformation narratives that have already reached the mainstream, and defenceless against new ones as they emerge.

This blog post is a part of Reset Australia’s Explained series. Every month we explain and break down a question submitted by readers like you.

Reset Australia is a non-profit organisation advocating to prevent digital threats to democracy. You can learn more about us at au.reset.tech.
