How spreaders of misinformation acquire influence online

Danil Mikhailov
8 min read · Feb 23, 2020


Experts fighting the COVID-19 coronavirus epidemic are increasingly concerned not only about the virus itself, but also about the spread of misinformation about it online. The potential side-effects of such misinformation can be very serious: mistrust of authority, discrimination against minorities, fear and panic. This can be seen in two incidents from opposite sides of the world: protesters attacking a bus of evacuees in Ukraine because of an unfounded rumour of infection among them, and racist flyers stigmatising the Asian community being handed out in California.

For me, this hits home for a number of reasons, some of them personal, through my many connections to China, as I recounted in a previous blog post. But there is also a professional connection. During my doctoral research, I studied online communities competing against established scientific experts in producing information, including misinformation. This has since become a major topic of debate, but when I started in 2010 it was a niche area to focus on.

In light of what is happening, I have revisited the data and theory in my thesis. Here I present a very high-level summary of some salient points, which I hope will be useful in countering some of the misinformation campaigns we are observing.

Copyright Wellcome

Online communities competing with experts fall into three broad categories. Some are a sophisticated alternative source of knowledge production, not always agreeing with establishment experts, but always making an effort to get the facts right. The best example would be the community of Wikipedia editors.

Another group of communities are best described as activists, using knowledge production as a means of achieving a specific goal rather than as an end in itself. This second type sometimes makes honest attempts to produce new knowledge, but at other times produces misinformation, consciously or not, if that is judged to be a more effective tactic.

Finally, a third group of online communities has come into sharp relief in the last five years. These are, in effect, online “shells” or “fronts” for clandestine activity by offline state and non-state actors.

These three groups often overlap and cooperate, producing a complicated network in cyberspace. What unites them is how effective they are in challenging established experts online. Those of us who are worried about the spread of misinformation online need to understand how these communities are established, grow and gain influence. Above all, we need to be able to tell the difference between them. Some need to be exposed and shut down; others should be engaged with and turned into potentially powerful allies in helping the public make informed decisions.

During my doctoral research I borrowed quite a bit from the theoretical work of Bourdieu. Following him, I analysed the different types of capital each side — the experts and the online communities — possessed and how effectively that capital was deployed in competition to be seen by the wider public as the authority over a given area of knowledge.

I took three types of capital directly from Bourdieu — economic, social and cultural capital — and added two new types of my own, which are particularly effective in the online environment: time capital and algorithmic capital. Let me quickly define what I mean by each one, before I go on to use them to break down how misinformation can be spread so effectively, and why it is so difficult for experts to root it out.

“E” for Economic Capital: it is what it sounds like, the financial resources each side possesses.

“S” for Social Capital: the advantage bestowed by your network of friends and connections, whether the social circle you have in the real world or the followers you have online.

“C” for Cultural Capital: your knowledge and expertise, particularly where formally recognised through degrees, honours etc. The online version of this is similarly important: it denotes your knowledge and skill in deploying the rules and etiquette of your online community.

“T” for Time Capital: this denotes not only the time you have to devote to an issue online, but also your freedom of reaction, i.e. whether you can respond fast enough. The insight is that immediacy of response is hugely important in the fast-moving stream of updates on many social media platforms. During my research I came across multiple examples of expert institutions losing control during a social media crisis because they were unable to respond and correct misinformation fast enough.

“A” for Algorithmic Capital: this denotes the advantage gained by those bits of information that are more findable on Google, or more viral on Twitter or Facebook. On most platforms this is largely determined by the invisible influence of algorithms, which pick out information matching certain criteria and display it more prominently in search results or on a user’s timeline.

Using these building blocks, let’s examine how they interact and reinforce each other. One common pathway I have identified through my research is the way charismatic amateurs can become influential sources of misinformation.

The starting point is realising that the online environment is a uniquely permissive system, one that enables individuals to compete with much more resource-rich institutions on near-equal terms. You do not need to set up physical premises, you do not need to hire lots of staff, and there are plenty of free resources you can tap into to get your message across. So, if we take an individual with enough time capital to devote to pushing a piece of misinformation within such a permissive system, the pathway looks something like this:

Copyright Danil Mikhailov
  1. Investment of time capital T in a permissive system leads to an increase in social capital S, as others like and follow the individual on social media.
  2. The increase in S + T leads to an increase in algorithmic capital A, as search engines and social media algorithms favour content that is repeatedly liked and shared by a growing social circle.
  3. Algorithmic capital can act as an accelerant, so an increase in A catalyses a further increase in S, as the two types of capital enter a positive feedback loop (think of the misinformation going viral).
  4. Rapidly increasing S + A can lead to a nascent community being created, or an already existing community being mobilised. This is an important evolutionary step, as it gives the misinformation longevity beyond a particular event or news cycle.
  5. One of the hallmarks of such communities maturing is the establishment of their own rules and sub-culture; charismatic individuals who exploit these gain cultural capital C within the community.
  6. An increase in C in online communities often triggers an increase in economic capital E (think donations, book sales, lecture fees, corporate sponsorship, etc.).
  7. Just as algorithmic capital is an accelerant, so is economic capital, meaning that all three types of capital S + A + E are now in a positive feedback loop with each other. This rapid compounding underpins the ability of the purveyor of misinformation to make sure their version of the facts lands with the public, rather than that of the established experts.
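
To make the compounding dynamic concrete, here is a minimal toy simulation in Python of the pathway above. It is a sketch of my own, not code from the thesis: the growth rates, the community-formation threshold and the update rules are all illustrative assumptions, chosen only to show how coupled positive feedback loops between the capitals can produce rapid, superlinear growth.

```python
# Toy simulation of the capital feedback loop described above.
# All parameters (growth rates, thresholds) are illustrative assumptions,
# not empirical values from the thesis.

def simulate(steps=20, time_per_step=1.0):
    S, A, C, E = 1.0, 0.0, 0.0, 0.0   # social, algorithmic, cultural, economic capital
    community_formed = False
    history = []

    for step in range(steps):
        # 1. Investing time in a permissive system grows social capital.
        S += 0.5 * time_per_step

        # 2-3. Social capital plus time feeds algorithmic capital, which in turn
        #      accelerates social capital (the "going viral" feedback loop).
        A += 0.2 * (S + time_per_step)
        S += 0.1 * A

        # 4-5. Past an (assumed) threshold, a community forms and the individual
        #      starts accruing cultural capital within it.
        if not community_formed and S + A > 20:
            community_formed = True
        if community_formed:
            C += 0.3

        # 6-7. Cultural capital converts into economic capital (donations, sales),
        #      which then accelerates both social and algorithmic capital.
        E += 0.5 * C
        S += 0.05 * E
        A += 0.05 * E

        history.append((step, round(S, 1), round(A, 1), round(C, 1), round(E, 1)))

    return history

if __name__ == "__main__":
    for step, S, A, C, E in simulate():
        print(f"step {step:2d}  S={S:6.1f}  A={A:6.1f}  C={C:5.1f}  E={E:6.1f}")
```

Running it shows the capitals accumulating slowly at first and then compounding quickly once the algorithmic and economic loops couple, which is the kind of acceleration established experts find themselves chasing.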

This quick illustration is, of course, just one pathway, and over-simplifies the mechanisms of capital transmission and feedback (there is plenty more detail in the original thesis). But it helps make clear why it is so difficult for establishment experts to compete with this rapid build-up of capital in the permissive system that is the online world.

Establishment experts unfamiliar with the affordances of technology constantly underestimate the effectiveness of this misinformation transmission online. Interviewing experts from different fields, I regularly encountered incredulity that “evidently” unverified and unsupported information could have greater penetration with the public than rigorously peer-reviewed official data.

There are many things established experts get wrong, but this is the most common: as a way of getting the correct information across, they over-rely on their own cultural capital, assuming their authority as world-leading experts will cut through online. My research has shown that, at the same time, they undervalue the other types of capital that would be more effective: social, time and algorithmic capital.

On the other hand, my research also uncovered examples of expert institutions getting things right. Here is one. In 2012, the Royal Society wanted to make sure its policy advice on fracking, a scientific area of significant controversy, cut through online more effectively than information from anti-fracking activist groups. The Royal Society’s experts decided to collaborate with Wikipedia’s editor community to get the right facts onto the Wikipedia pages devoted to fracking, instead of spending resources increasing readership of its own official report. This was based on the realisation that the only information source on fracking with higher readership and higher placement in Google search results than the anti-fracking activists’ websites was Wikipedia. In essence, the Royal Society borrowed Wikipedia’s much greater social capital online to make sure its information was more visible to the public (more information on this case study, including references, can be found in Chapter 4 of my thesis).

If COVID-19 becomes a global pandemic, spreading widely outside of China, governments and public health experts will need to raise their game in how they deal with the inevitable upsurge in damaging misinformation online.

A key lesson from my research is that it is important to understand the mechanics of how spreaders of misinformation online become so effective at reaching the public. This will allow a more precise strategy for countering the misinformation.

In some cases that may mean targeting algorithmic and social capital by getting social media companies to remove from their platforms posts and groups producing dangerously misleading claims about the epidemic.

In other cases, it may be more effective to target the economic capital (where is the money coming from?) or the cultural capital (who actually makes the rules, and do they themselves follow those rules authentically?), which could expose clandestine support by outside actors for seemingly autonomous, self-organising communities, or for specific members within them. Often, simply exposing such duplicity can prompt online communities to self-police and expel fake members who represent outside interests without the knowledge of other members.

Another key lesson, as illustrated by the Royal Society example above, is that not all online communities producing knowledge and information outside the traditional circle of experts are the enemy. Many, like Wikipedia’s editor community or the many online communities of practice (think online gaming communities, or patient-run communities focused on lived experience of a particular health issue), could be important allies in the fight against misinformation, able to lend their enormous social capital to the experts trying to get the message out. However, it is important to understand the internal rules and sub-culture of each community the established experts want to co-operate with, in order to establish trust.

What’s certain is that, whatever happens with COVID-19, misinformation online is an unavoidable phenomenon of our times, and experts in every field need new skills and new approaches to deal with it. That means social media and digital skills, behavioural science and data science skills, and skills in public engagement and communication. It’s a new world out there. Experts and institutions cannot afford to be left behind.


Danil Mikhailov

Anthropologist & tech. ED of data.org. Trustee at 360Giving. Formerly Head of Wellcome Data Labs. Championing ethical tech & data science for social impact.