Why #sciencecommunication is dead — or at least more difficult than before…

Victor Galaz
Nov 26, 2019 · 5 min read


The rapid expansion of social media and alternative media outlets, and the increasing automation of news and online behavior through “bots”, are changing the media landscape. The misinformation and defamatory material circulated before, during, and after the launch of the EAT-Lancet “planetary health diet” teach us all some very hard lessons.

Social media plays a fundamental role in the exchange of information and debate on all types of topics. As Williams and colleagues note, the interactive and participatory nature of social media gives it a central role in shaping individual attitudes and behaviors (Williams et al. 2015). As other studies have shown, policy makers, conventional media, and the general public are highly attentive to the sentiments and agendas circulated online, and social media use can even underpin large-scale social mobilization and protests “off-line” (Steinert-Threlkeld et al. 2015, Barberá et al. 2019).

Unfortunately, this new digital information landscape is plagued by the intentional spread of misinformation, including “bullshit” (Benkler et al. 2018), “hoaxes, conspiracy theories, click-bait headlines, junk science” (Shao et al. 2018), and semi-automated messaging designed to amplify polarization and social conflict (Dredze et al. 2018, Starbird 2019).

Our new study “EAT-Lancet vs. yes2meat: Understanding the digital backlash to the ‘planetary health diet’”, just published in The Lancet (together with David Garcia and Stefan Daume), unpacks this issue in more detail and shows the clear dangers of “hashtag science communication” (or #sciencecommunication). By this I mean that scientists and prestigious scientific outlets appear highly vulnerable to loosely coordinated counterattacks intended to spread misinformation, rumors, and defamatory material on social media.

Tweet from discussions on Twitter about the EAT-Lancet report. The bottom user has been anonymized by the author of this blog post.

Based on my understanding of the material we published, three features of the evolving digital media ecosystem make science communication increasingly difficult.

The first is that scientists’ ambition to be open about our research work on social media, and to engage with the public, also allows skeptics to prepare and coordinate in advance. This is a clear result from our study of the EAT-Lancet report on social media, where early public announcements of the pending report and launch events allowed for intensive counter-mobilization at least a week before the official release of the article. The response from scientists cannot be to stop communicating what we do, where, and through which collaborations. But we need to realize that our work is followed not only by friendly colleagues and an interested general public, but also by deniers, skeptics, and (apparently) digitally savvy opponents.

Photo by Elena Koycheva on Unsplash

The second is that science communication engagements on social media take time and energy. This might sound obvious, but the fact that scientists active on social media have to deal with growing amounts of content pollution, and are expected to respond to junk science claims on a regular basis, can be intimidating to anyone watching this play out online. Part of the success of the #yes2meat movement was (as we describe in detail) its ability to flood online discussions on Twitter about the EAT-Lancet report with alarmist visuals, aggressive replies and retweets, and continuously shared critical and often poor-quality online material. The visualization below, showing reply activity on Twitter over time (based on David Garcia’s and Stefan Daume’s analysis included in our article), says it all.

Source: The animation shows the number of replies over time in online discussions about the EAT-Lancet report. Red shows replies by users in the “yes2meat” community, blue the “pro-EAT-Lancet” community, yellow an ambiguous community, and green a vegan community. Based on “EAT-Lancet vs. yes2meat: Understanding the digital backlash to the ‘planetary health diet’”; GIF video from http://dgarcia.eu/EAT-Lancet-TwitterReplies.html

Lastly, the odds are stacked against scientists. Recent studies of political opinion and polarization on social media indicate that people who are exposed to information that conflicts with their own opinions can in fact become more (and not less) convinced of their original position, thereby creating “backfire effects that exacerbate political polarization” (Bail et al. 2018). As Kate Starbird and colleagues also note (2019), dis- and misinformation does not need to be correct to be effective. It is more than enough that it “undermine[s] the integrity of the information space and reduce human agency by overwhelming our capacity to make sense of information. They therefore strike at the core of our values” (p. 2).

The media landscape is rapidly changing. Science communication must change too.

Note: The Stockholm Resilience Centre (Stockholm University), where I am employed as Deputy Director, is a science partner to the EAT Foundation. The study “EAT-Lancet vs. yes2meat: Understanding the digital backlash to the ‘planetary health diet’” referred to here has not received any support from the EAT Foundation. It is an independent study conducted under the research initiative “AI, People & Planet”, funded through the Beijer Institute of Ecological Economics and led by myself and colleagues not affiliated with EAT.

References

Bail, C. A., et al. (2018). “Exposure to opposing views on social media can increase political polarization”, Proceedings of the National Academy of Sciences, 115(37), 9216–9221.

Barberá, P., Casas, A., Nagler, J., Egan, et al. (2019). “Who Leads? Who Follows? Measuring Issue Attention and Agenda Setting by Legislators and the Mass Public Using Social Media Data”, American Political Science Review, 883–901. https://doi.org/10.1017/S0003055419000352

Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.

Dredze, M., Broniatowski, D. A., AlKulaib, L., et al. (2018). “Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate”, American Journal of Public Health, 108(10), 1378–1384. https://doi.org/10.2105/ajph.2018.304567

Starbird, K., Arif, A., & Wilson, T. (2019). “Disinformation as Collaborative Work: Surfacing the Participatory Nature of Strategic Information Operations”, PACMHCI, CSCW. Preprint online.

Starbird, K. (2019). “Disinformation’s spread: bots, trolls and all of us”, Nature, 571(7766), 449. https://doi.org/10.1038/d41586-019-02235-x

Shao, C., Ciampaglia, G. L., Varol, O., et al. (2018). “The spread of low-credibility content by social bots”. Nature Communications, 9(1), 4787. https://doi.org/10.1038/s41467-018-06930-7

Steinert-Threlkeld, Z. C., Mocanu, D., Vespignani, A., & Fowler, J. (2015). “Online social networks and offline protest”, EPJ Data Science, 4(1), 1–9. https://doi.org/10.1140/epjds/s13688-015-0056-y


Victor Galaz

Associate Professor, Stockholm Resilience Centre (Stockholm University), and Programme Director, Beijer Institute of Ecological Economics (Royal Swedish Academy of Sciences)