Q&A: Misinformation and Coronavirus

Berkman Klein Center
Jan 30 · 7 min read

Alongside worry about the spread of coronavirus are concerns about the presence and spread of misinformation about the virus across the Internet. We asked members of Berkman Klein’s Misinformation Working Group their thoughts about misinformation and the virus.

Illustration of person whispering, with virus nodes coming out of mouth
Graphics by Milky-Digital Innovation and Gan Khoon Lay/The Noun Project.

What makes coronavirus, and pandemics in general, ripe for misinformation?

Oumou Ly: Pandemics — as we’ve seen in recent years with the Ebola and Zika viruses (and more generally on health topics such as vaccinations) — are ripe for disinformation for a few reasons. First, they are high-interest topics for the public. This can be due to heavy media reporting, interest from public figures, or other means. Second, pandemics reasonably create fear, which can be capitalized upon by actors seeking to sensationalize the pandemic. This tends to hold particularly true when pandemics are reported by media in conjunction with information about death tolls or conjecture about future transmission rates. And third, in the past few years, little-known diseases have become full-blown pandemics. Generally, the public engages with the topic from a place of little to no knowledge about the ailment.

Mary Minow: Fear is a powerful emotion that grabs attention. Eyeballs are the new economy.

Mason Kortz: I think there are several factors at play here. A few that come to mind are urgency and fear. In a pandemic, the situation on the ground may change quickly, so news providers may rush to push out updates without careful vetting. Even if sources rebut misinformation spread in this manner, the pandemic may be over and out of the public eye by the time such a rebuttal is made. And contagious illness is common enough that there is a fear that it “could happen to me” but is uncommon enough that people are not inured to its occurrence. In the context of the coronavirus outbreak and Western media, there’s also an element of xenophobia amplifying this fear. And when people are afraid, they may value sources that confirm those fears over sources that minimize them — even if the sources that confirm their fears are less reliable. Other factors that come to mind are mortality and lack of lay expertise in medicine.

Dariusz Jemielniak: Fear.

Dimitra Dimitrakopoulou: When there is a pandemic, or another public health threat, people have a lot of questions and a serious lack of information. These voids, usually accompanied by fear, are filled with whatever information is available; sadly, in most cases that is misinformation, which is abundantly available online and leads people to ill-informed decisions.

Media outlets like the BBC and Buzzfeed are tracking examples of misinformation about the virus. What are your thoughts on this tactic? Is it enough?

Oumou Ly: This is good to do insofar as it raises awareness that dis/misinformation is being spread online and thereby, that there’s a risk one may encounter it. It certainly is not enough to combat the spread of misinformation on this topic (or any other topic) due in large part to the sheer volume of false or misleading content out there.

Dariusz Jemielniak: Definitely not; misinformation is a much less hot topic than a deadly epidemic.

Mary Minow: It’s helpful, but not enough. Most people who encounter misinformation will not go to BBC and Buzzfeed. These lists are very helpful, even essential, for those who do take the time to check out a story.

Mason Kortz: I think this tactic is helpful. It may only help convince people at the margins who are already skeptical of information they receive online, but even if it only persuades 1% of readers, it helps to stem the flow of misinformation. It’s easy to say that it’s “not enough” because misinformation is obviously still spreading about coronavirus, but I don’t think that’s the correct metric. I think the correct metric is “does it do more harm than good,” and in this case, I think the answer is no.

Dimitra Dimitrakopoulou: Definitely helpful, but definitely not enough. More media organizations need to step up and try to trace misinformation. Also, public health institutions need to provide accurate information in a publicly accessible and comprehensive way.

Facebook, Google, and Twitter are also taking steps to combat misinformation. What do you think of their responses?

Oumou Ly: It is good practice to ensure that authoritative health information from the right sources is prioritized in search results for hashtags and keywords — and certainly that it is prioritized above other sources — as the linked article indicates that Twitter has done. However, the response could be stronger across platforms.

Dariusz Jemielniak: Their core business is clickability; I doubt they can be really effective when being effective means undermining that core business.

Mary Minow: I think Facebook could run public service ads telling users to connect with their public library to check out anything they question.

Mason Kortz: On one hand, I think it is dangerous for powerful private platforms to be making one-off decisions about what information is too dangerous to spread. For example, why does Facebook think misinformation about coronavirus is dangerous enough to require intervention, but misinformation in political ads is not? I’d much rather see a generalized policy about misinformation than domain-specific decisions like this. On the other hand, I recognize that sometimes general policies evolve from specific instances. Maybe this experience will help Google and Facebook develop general misinformation policies. I’m not holding my breath though.

Dimitra Dimitrakopoulou: It’s a helpful initiative, but there is still a lot to be done in this direction.

From your perspective, how can platforms better tackle misinformation?

Oumou Ly: Platforms should work to minimize the spread of disinformation not just on universally accessible online spaces like timelines and search result pages, but also on the more insulated areas of the platform/product. For instance, Facebook might consider strategies for monitoring the spread of health disinformation in groups and develop a plan for moderating false content circulated in those groups.

Dariusz Jemielniak: A community-driven, Wikipedia-like approach should be tested: users with different levels of privilege and trust evaluate news, and their reputation is built on the accuracy of their evaluations.

Mary Minow: Everyone has access to a library, whether a school, academic, or public library. Libraries today make it really easy to ask questions — by online chat, email, telephone, social media… Yet many people don’t know they can ask their local library for help. Not only do libraries help with sleuthing (say, turning up the Buzzfeed and BBC sites for library users), but they can also show people how to evaluate sites for themselves.

Mason Kortz: I think having publicly accessible policies about misinformation (including a definition of misinformation, a statement as to whether misinformation is prohibited, and a description of the process for designating content as misinformation) would go a long way. As mentioned before, platforms making ad hoc decisions about when to allow or disallow content, without a public governing policy, makes me wary.

What are some ways people can detect misinformation?

Dariusz Jemielniak: Clicking through to the source information and reading scientific reports rather than garbage.

Mason Kortz: The simplest piece of advice I can think of is to actually click through links and read references. If a post doesn’t have links or references, that’s a warning sign in and of itself.

Mary Minow: The library community has an infographic that is a good starting point. (For those who wish to learn more, see the American Library Association’s guide to Evaluating Information at https://libguides.ala.org/InformationEvaluation)

“How to Spot Fake News” Infographic from the International Federation of Library Associations and Institutions

What advice would you give someone as they navigate social media and see posts and stories about the virus?

Oumou Ly: At a minimum, ensure that posts and stories you read are from authoritative sources on the topic.

Mason Kortz: In addition to the advice I mentioned earlier about following through on links and references, I would encourage people to do their own research off of social media and to diversify their follows/news sources on social media.

Dariusz Jemielniak: I’d only click the reputable sources. Although we know now that the source does not matter, I still think that educating about the hierarchy of sources is important.

Mary Minow: The first thing is to take a step back and take a breath. Whenever there’s an emotional response, especially fear, but also vociferous agreement, take a moment to look at the information more critically. Use the tools you already know to check the source of the news, the sourcing within an article, and whether it’s from a fake or brand-new source. If unsure, take a few more minutes and connect with your local librarian for help.

Dimitra Dimitrakopoulou: Always check the source, and check with your doctor about the validity of the information.

Looking ahead, what are some ways to ameliorate the impact of misinformation surrounding such outbreaks?

Dariusz Jemielniak: We need accessible information and good news coverage from scientists, universities, the CDC, and others.

Mary Minow: For pandemics specifically, a hotline run by a credible group would be helpful. For a broader range of misinformation topics, the library can be consulted (including library and online classes on how to check sources for oneself).

Mason Kortz: I think there are two ways to approach this. One option is to reduce the creation of misinformation. This might be achievable with better overall media literacy or with greater subject-matter expertise about disease outbreaks among the general public. I’m not an expert on educational campaigns, but from my lay experience, it seems that they are not often successful and, when they are, they can take years of effort — much longer than the cycle of an outbreak. Therefore, this might not be the most effective tactic, at least in the short term. The other option is to mitigate the spread of misinformation. This could be achieved by media outlets (both traditional and social) intentionally slowing the rate at which (mis)information about an outbreak is published/shared. However, this gets back to the problem of private companies making one-off decisions about what information should or should not be propagated.

Responses have been lightly edited for clarity and/or brevity.

Many thanks to our contributors:

Dimitra Dimitrakopoulou, Visiting Assistant Professor, MIT Media Lab

Dariusz Jemielniak, Professor and Head of MINDS (Management in Networked and Digital Societies) dept. at Kozminski University, BKC faculty associate

Mason Kortz, Clinical Instructor at the Harvard Law School Cyberlaw Clinic

Oumou Ly, Assembly: Disinformation Staff Fellow

Mary Minow, Library Law Consultant, BKC affiliate

Berkman Klein Center Collection

Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.)

Written by

Berkman Klein Center

The Berkman Klein Center for Internet & Society at Harvard University was founded to explore cyberspace, share in its study, and help pioneer its development.
