I have an expert of my own — the tactics of misinformation spreaders

Danil Mikhailov
Mar 3, 2020


The COVID-19 epidemic looks set to challenge the global order in unprecedented ways. As Wellcome’s Director, Jeremy Farrar, has pointed out, it is not just a public health issue but an economic and political one, and it is happening against a backdrop of a general undermining of trust in national and international institutions.

As the virus spreads across the world, it is accompanied by its shadow, the viral spread of fear and misinformation. Even if — we can all hope — COVID-19 itself does not end up anywhere near as harmful as some of the worst-case scenarios, the shadow virus of misinformation can, on its own, disrupt the social fabric of our communities and cause the many economic and political harms that we fear.


In my last blog post I gave examples of isolated incidents of xenophobia against Asian communities, and of flare-ups of violent protest, all based on the spread of misinformation about COVID-19, sometimes intentional, sometimes accidental. The spread of this shadow virus will make things worse unless serious efforts are made to counter it. The WHO, to its credit, understood the risk of fear and misinformation from the start. Most national governments, however, are late to the game in dealing with the actual virus, and have not even considered that they must also focus on the shadow virus that accompanies it.

It is important to realise that in fighting misinformation, we engage in an asymmetric fight. A single YouTube video or Tweet produced in someone’s bedroom, if it goes viral, can generate enough fear to undermine million-dollar public information campaigns. This is because the online environment, as explained in my last blog post, is uniquely permissive of small actors.

This is not to say that all spreaders of misinformation online are small actors. They include well-organised and well-funded networks dedicated to politicised causes, such as climate change scepticism and anti-vaccination campaigns. As a piece of investigative journalism by the BBC has shown, many such communities, each dedicated to its own issue, are also fertile ground for disinformation about COVID-19. If you believe one conspiracy theory, you seem to be more likely to believe another.

This happens because fears raised by the epidemic feed into a general mistrust of institutions and an anti-expert narrative amongst these communities: the view that established experts look after their own interests, that they do not tell the truth, that they suppress opinions other than their own, and that they ultimately look down on the rest of the public. In my research I called the driver behind such thinking democratic levelling: the desire to cut the “arrogant” expert down to size so that they listen to the majority opinion.

One common tactic used by misinformation spreaders in pursuit of democratic levelling is to put forward an “expert” of their own, who contradicts the established expert view, thereby helping undermine it.

These alternative experts — alt-experts for short — fall into one of four categories:

  1. They are a legitimate expert in the field but hold a minority position
  2. They are an expert, but in a tangentially related (or completely unrelated) area
  3. They were an expert but have since been discredited in some way
  4. They are a bogus expert

In the first category, the problem is usually not with the expert. After all, paradigm shifts in research often start with a minority position. The problem, rather, is with the way their research is used, often out of context, by the spreaders of misinformation. In my research I have found many examples of selective quoting of sources in online debates, where something is cherry-picked to make a point. Since the advent of publicly available search engines such as Google, it has become easy for members of the public to find, among the millions of academic papers published, one that supports any given position when taken out of the context of the academic debate. Many information scholars decry the way search engines bias attention towards what is most findable rather than what is most correct (see Chapter 4 of my thesis for a fuller discussion and references).

The second category is common, and not just online. As an example, take the case of Lord Lawson, a former Chancellor of the Exchequer (Finance Minister) in the UK, offering his opinions on climate change. Lord Lawson has a legitimate claim to be an expert on matters connected to finance, economics and politics. He does not, however, have a noticeable track record in climatology or any other discipline that might inform an expert position on whether climate change is happening and whether it is caused by human activity. That did not stop Lord Lawson becoming a regular talking head in media interviews about climate change over the past ten years, representing the climate-sceptic point of view, despite repeated criticism by climate scientists.

Perhaps the cardinal example of the third category is Andrew Wakefield, who has become a figurehead of a number of anti-vaccination communities spreading misinformation online. Originally a doctor, Wakefield published an academic paper that (wrongly) purported to link the MMR vaccine to autism. His research was eventually debunked by other academics, his original paper was retracted by the journal, and he was accused of fraud (the full background of the Wakefield case is well summarised in this piece by Rao and Andrade).

There are unfortunately many examples of the fourth category, but one of the most famous is the case of a Wikipedia editor who worked under the user name Essjay, becoming an influential member of that community on the basis of supposedly having doctorates in philosophy and in law. Essjay was eventually exposed as a twenty-four-year-old with no relevant qualifications and was thrown out of Wikipedia’s editor community (this case has been covered by a number of scholars, e.g. see Mathieu O’Neil’s book “Cyberchiefs: Autonomy and Authority in Online Tribes”).

Academics looking at these examples often find it impossible to understand how such individuals become so influential within online communities and why the wider public so readily believes their views and opinions.

The first part of the explanation is that the majority of the public engage with experts in a symbolic sense only, giving them respect due to their perceived status, in a process Pierre Bourdieu called the creation of symbolic capital. Often this is based on external markers of distinction, like being a Doctor or a Professor. Unless you are a researcher in the same discipline yourself, you simply do not have the tools or the inclination to check the credentials or claims experts make and to judge whether they deserve their status. This makes expertise easier to falsify.

The second part of the explanation is that when experts engage with the public, they do not do it in the field of academic research. The engagement happens over mass media or online, and these fields operate by very different rules.

For example, the field of mass media communication prioritises getting eyeballs on its material. Media outlets also care about being accurate and authoritative, but it is symbolic capital that they select for. They are not set up to vet the expertise of all their interviewees and often believe it is their role to be impartial witnesses to the debate between rival experts. In this way it becomes possible for Lord Lawson, a charismatic expert from the wrong academic field (economics) who is also a great communicator, to gain a platform to spread climate change scepticism.

The online world is different in another way. Technology platforms, as I have shown in my research, have certain characteristics which collectively make misinformation harder to identify as such. One is the speed with which information is shared, which means the time we have to engage with each piece of information and decide whether, for example, to spread it further by retweeting or forwarding it is very short. Research shows this affects our ability to identify misinformation. Another is the anonymity of contributors on many platforms and the ease with which false accounts can be set up and bios falsified, as the Essjay case above amply demonstrates.

Unfortunately, experts too often play into the hands of misinformation spreaders by acting in ways that undermine the things that could distinguish them from alt-experts. For example, one common way to check the credentials of an expert is to see whether their research has been supported, cited or reproduced by other experts. However, we all know how big an issue the reproducibility crisis is in many academic disciplines. As a recent Wellcome survey of researchers shows, we have developed a very problematic research culture, one which pressures academics to publish more, going for quantity over quality and taking the kind of shortcuts (such as p-hacking) that undermine the whole expert enterprise. This makes it more difficult to argue against an alt-expert who produces unscientific work: p-hacking and reproducibility problems blur the distinction. The more suspect the behaviour of established experts, the easier it is to convince the public that “they are all the same.”
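As a rough illustration of why p-hacking is so corrosive, here is a minimal simulation sketch (a hypothetical example using Python with numpy and scipy, not drawn from any specific study): if a study measures twenty unrelated outcomes and reports whichever one clears the conventional 0.05 significance threshold, it will turn up a “significant” finding in roughly two-thirds of cases, even though every effect is pure noise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_EXPERIMENTS = 2_000  # simulated "studies" (illustrative number)
N_TESTS = 20           # outcomes measured per study
N_SAMPLES = 30         # participants per group
ALPHA = 0.05           # conventional significance threshold

false_positive_studies = 0
for _ in range(N_EXPERIMENTS):
    p_values = []
    for _ in range(N_TESTS):
        # Both groups are drawn from the same distribution: there is no real effect.
        a = rng.normal(size=N_SAMPLES)
        b = rng.normal(size=N_SAMPLES)
        p_values.append(stats.ttest_ind(a, b).pvalue)
    # The "p-hacked" study reports a positive result if any one test looks significant.
    if min(p_values) < ALPHA:
        false_positive_studies += 1

print(f"Studies with at least one 'significant' result: "
      f"{false_positive_studies / N_EXPERIMENTS:.0%}")
# Prints roughly 64%, matching the expected 1 - 0.95**20 ≈ 0.64,
# despite every underlying effect being noise.
```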

Now, I would always defend the enterprise of research itself, and I have specifically focused on trying to improve it, for example by helping to co-found the Research on Research Institute, which seeks to create the evidence base we need to drive changes in research culture. To me, research is a bridge spanning the gap between ignorance and knowledge. When I see cracks in that bridge due to the negative behaviours of some scientists and the wrong incentives, I try to fix it. The alt-experts, and the communities of misinformation spreaders backing them, instead want to take dynamite to the bridge and blow it up. The two are not the same.

However, there is a reason why communities such as the anti-vaxxers talk so much about the reproducibility crisis. They know that it is a self-inflicted wound that established experts have allowed to fester for far too long. It is also a pressure point they will exploit and attack mercilessly. Leaving it open is something we simply cannot afford as COVID-19 raises the stakes for everyone.

The other thing we cannot afford to do is expect correct information to filter out to the public on its own, just because it is true. Experts need a good science communications team to train them to be better communicators themselves: less bogged down in detail, and able to draw analogies from the audience’s everyday experience.

Better still, experts need to learn the lessons of how online communities use symbolic capital and learn to deploy it effectively themselves. They need more powerful symbols. One shortcut is to choose surrogate spokespeople who can most effectively deliver the message. In my research I have encountered great examples of research organisations, such as CRUK, a UK charity funding cancer research, using celebrities as surrogates. In those cases experts craft the message to make sure it is right. The celebrities then lend their significant symbolic capital, as well as the social capital of their networks of followers online, to the cause and the message they are delivering, so that the crucial information cuts through. Experts can be reluctant to do this, seeing it as a marketing gimmick. And it is, but marketeers do it because it works.

Against the spreaders of misinformation we need all the tools we can muster. In a crisis situation, which COVID-19 certainly is, researchers, doctors and experts urgently need to learn new skills and adopt new tactics.
