Understanding Engagement with (Mis)Information News Sources on Facebook

Posts from misinformation news sources got six times more engagement than posts from factual news sources in the lead-up to the 2020 U.S. elections

Cybersecurity for Democracy
Sep 14, 2021

by Laura Edelson, Minh-Kha Nguyen, Ian Goldstein, Oana Goga, Tobias Lauinger, and Damon McCoy.

This summary is based on a forthcoming paper accepted to the ACM Internet Measurement Conference 2021 (November 2–4, 2021).

Background

Researchers from New York University, Université Grenoble Alpes, and the French National Centre for Scientific Research (CNRS) conducted a system-level analysis of user engagement with misinformation from news sources on Facebook, in a new peer-reviewed study covering a five-month period from August 10, 2020 to January 11, 2021. The researchers evaluated how Facebook users interact with content from 2,551 news sources that had been rated for their reputation for misinformation and partisanship. They studied user engagement from a number of angles to build a holistic picture of how news (mis)information content performs on Facebook.

Top findings

  • Across the political spectrum, posts from news sources that regularly traffic in misinformation have a statistically significant and large engagement advantage, by a factor of six, over posts from news sources that have a record of factualness.
  • In absolute terms, however, misinformation sources account for a majority of overall engagement only in the Far Right category, where they made up 68% of engagement with news sources. This is because in other partisan categories, misinformation sources make up a much smaller share of publishers.
  • The researchers found no evidence that Facebook’s algorithms reward partisanship or bias by news sources.

Data sources: The study relies on two third-party data providers, NewsGuard and Media Bias/Fact Check, to rate news sources by partisanship and by misinformation and factualness. Data on Facebook user engagement was drawn from CrowdTangle, a Facebook business analytics tool made available to researchers.
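At a high level, the analysis amounts to joining those source-level ratings onto post-level engagement records and aggregating by group. Here is a minimal sketch of that join in Python; the data, column names, and schema are invented for illustration and do not match the actual NewsGuard, Media Bias/Fact Check, or CrowdTangle exports:

```python
import pandas as pd

# Hypothetical ratings and post records for illustration only.
ratings = pd.DataFrame({
    "domain": ["example-news.com", "example-rumors.net"],
    "is_misinformation": [False, True],
    "partisanship": ["center", "far_right"],
})
posts = pd.DataFrame({
    "domain": ["example-news.com", "example-rumors.net", "example-rumors.net"],
    "likes": [500, 2_000, 1_200],
    "comments": [40, 900, 700],
    "shares": [60, 1_500, 800],
})

# CrowdTangle reports interaction counts per post; engagement here is
# the sum of the interaction types.
posts["interactions"] = posts[["likes", "comments", "shares"]].sum(axis=1)

# Label each post with its source's ratings, then aggregate by group,
# mirroring the study's partisanship-by-misinformation comparisons.
labeled = posts.merge(ratings, on="domain", how="inner")
summary = (labeled
           .groupby(["partisanship", "is_misinformation"])["interactions"]
           .agg(["count", "sum", "mean"]))
print(summary)
```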

Misinformation: This study defines misinformation as any information that is false or misleading, regardless of the intent of the author or distributor. Misinformation includes disinformation, which refers to false or misleading content communicated with the intent to deceive; false information communicated in error or out of ignorance also falls under the umbrella of misinformation.

Examples of news sources on Facebook that are known to publish misinformation, as analyzed by NewsGuard and Media Bias/Fact Check.

Analysis hampered by lack of transparency from Facebook

The researchers gathered data on user engagement with news content on Facebook using CrowdTangle, a business analytics tool that provides access to data on non-paid public content (often referred to as "organic" content, as opposed to paid content, or ads). However, Facebook provides only limited data via CrowdTangle:

  • Researchers are able to see user interactions, namely reactions (such as "like" and "angry"), comments, and shares, which together form one useful measure of whether content captures users' attention.
  • However, Facebook does not make “impression” data available — the number of times a piece of content enters a person’s screen.
  • Facebook recently began publishing public reports on "reach," which represents the total, unique number of people who see a given piece of content, but these reports include only tables of the top content seen across all users.
  • While this study sheds light on what is happening (increased engagement with content from misinformation sources), Facebook has not provided the data transparency needed for the researchers to make concrete recommendations to address the problem. They are hampered by the lack of impression data: without knowing how many people have seen a post, it is impossible to say whether a post's engagement is high because Facebook's algorithm promoted it more or because users simply engage more with content from misinformation publishers, as the toy calculation after this list illustrates. The researchers attempted to use video view data as a substitute for impression data, but found that, for a number of reasons, this is not a workable solution.
  • In sum, Facebook does not provide the data necessary for researchers to disentangle the effects of Facebook's algorithms from how enticing particular types of content are and from user behavior.
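To make the identification problem concrete, consider a toy calculation (all numbers invented): observed interactions are roughly the product of impressions and a per-impression engagement rate, and CrowdTangle exposes only the product.

```python
# Two very different explanations produce identical observable engagement.
# Scenario A: the algorithm distributes the post widely; users engage at a low rate.
impressions_a, rate_a = 1_000_000, 0.002
# Scenario B: modest distribution; users engage at a high rate.
impressions_b, rate_b = 100_000, 0.02

print(impressions_a * rate_a)  # 2000.0 interactions
print(impressions_b * rate_b)  # 2000.0 interactions, indistinguishable without impression data
```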

Researchers studied engagement measured in different ways

The researchers asked three questions when evaluating engagement with news sources on Facebook.

1. What share of overall engagement with U.S. news sources is taken up by misinformation providers?

  • Overall, non-misinformation news sources enjoy more engagement than misinformation news sources across the political spectrum. Looking at total engagement with news sources across Facebook, 236 misinformation sources generated 1.9 billion interactions, compared to 5 billion interactions for 2,315 non-misinformation sources. Misinformation providers thus accumulated a sizable quantity of engagement overall, but less than non-misinformation news sources.
  • When looking at engagement by news source political alignment, the story gets more complex: small numbers of misinformation sources can drive disproportionately large engagement at the far ends of the political spectrum. On the far right, 109 misinformation publishers account for almost 1.2 billion interactions, more than twice the total engagement that the 154 far-right non-misinformation news sources enjoy. In a similar but less pronounced pattern, the 16 misinformation pages on the far left generate almost 56% of the total engagement of their 171 non-misinformation counterparts. (A rough consistency check of these shares appears after this list.)
  • Photo posts by misinformation news sources get the most traction. On the far left, for example, photo posts contribute 73% of engagement with misinformation sources, as opposed to only 17.1% for non-misinformation sources.
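As a rough consistency check, the engagement shares above can be reproduced from the quoted totals. The far-right non-misinformation total is not quoted directly, so the sketch below infers it from the "more than twice" comparison; treat that figure as an approximation:

```python
# Overall share of engagement going to misinformation sources.
misinfo_total, non_misinfo_total = 1.9e9, 5.0e9  # interactions, as quoted above
print(misinfo_total / (misinfo_total + non_misinfo_total))  # ~0.28

# Far right: ~1.2B interactions for misinformation sources, reported as more
# than twice the non-misinformation total, so assume roughly 0.55B for the latter.
far_right_misinfo, far_right_other = 1.2e9, 0.55e9  # 0.55e9 is our inference, not a quoted figure
print(far_right_misinfo / (far_right_misinfo + far_right_other))  # ~0.69, in line with the 68% above
```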

2. How well do individual sources of misinformation news engage with their audiences, compared to more factual news outlets?

  • On average, misinformation news providers generate more interactions per follower. The researchers found that, on average, misinformation news providers on Facebook enjoyed more interactions per page follower (3.48) than non-misinformation news providers did (2.98). In this category, there is evidence that a small number of misinformation news providers may enjoy exceptionally high engagement; in other words, there are outliers that are particularly high-performing when it comes to generating user interactions.
  • There is wide variation by news source, misinformation and non-misinformation alike, in how effectively they engage their audiences, and the audience engagement distributions for the two groups overlap. This means that even though one group performs better than the other on average or at the median, many individual pages from that group perform worse. There are far-right non-misinformation pages that engage their audiences better than the median misinformation page, for instance, and some center misinformation pages do better than the median center non-misinformation page, even though most do not. (The toy computation after this list shows how group medians can differ while distributions still overlap.)
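To illustrate how a higher group median can coexist with overlapping distributions, here is a toy per-follower computation over made-up pages; the column names are assumptions for illustration, not CrowdTangle's schema:

```python
import pandas as pd

# Invented pages: three misinformation, three non-misinformation.
pages = pd.DataFrame({
    "page": ["A", "B", "C", "D", "E", "F"],
    "is_misinformation": [True, True, True, False, False, False],
    "interactions": [7_000, 120_000, 9_000, 50_000, 4_000, 30_000],
    "followers": [2_000, 30_000, 3_500, 15_000, 2_000, 10_000],
})
pages["per_follower"] = pages["interactions"] / pages["followers"]

# The misinformation group's median (3.5) exceeds the other group's (3.0),
# yet non-misinformation page D (3.33) still beats misinformation page C (2.57).
print(pages.groupby("is_misinformation")["per_follower"].median())
print(pages.sort_values("per_follower"))
```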

3. How do users engage with content from misinformation sources when compared to other news content?

The researchers explored what types of posts get the most engagement, divorced from the specific news sources that created them.

  • Posts from misinformation news sources tend to receive more engagement than those from non-misinformation sources. On average, posts from misinformation news sources received 4,653 interactions, compared to 773 for posts from non-misinformation sources, whatever the political leaning of the source; this ratio is the study's headline factor of six (see the arithmetic just below).
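The headline factor of six falls straight out of those per-post averages:

```python
avg_misinfo, avg_non_misinfo = 4_653, 773  # average interactions per post, as quoted above
print(avg_misinfo / avg_non_misinfo)       # ~6.02, the "six times more engagement" figure
```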

Glossary

Engagement: User interactions with a given piece of content or Facebook page, including the number of people who "liked," commented on, or shared a piece of content.

Impressions: The number of times any content from a page or about a page entered a person's screen.

Reach: The unique number of people who saw any content from a page or about a page. (If one person sees the same post three times, that counts as three impressions but adds only one to reach.)

About the researchers

Laura Edelson is a PhD Candidate in Computer Science at NYU’s Tandon School of Engineering. She is co-founder of Cybersecurity for Democracy, a research-based, nonpartisan, and independent effort to expose online threats to our social fabric — and recommend how to counter them.

Minh-Kha Nguyen is a second-year Ph.D. student at Grenoble Alpes University, working with Oana Goga. His focus is on analyzing news dissemination on social media and investigating its negative impacts.

Ian Goldstein is a second-year Ph.D. student at NYU Tandon, working with Damon McCoy.

Tobias Lauinger is a research assistant professor at the New York University Tandon School of Engineering. His research focuses on transparency and policy enforcement efforts of online social networks, and other aspects of Internet security, privacy, and cybercrime.

Oana Goga is a tenured research scientist at the French National Centre for Scientific Research (CNRS) and the Laboratoire d’Informatique de Grenoble (LIG). She investigates how social media systems and online advertising can be used to impact humans and society negatively. She is the recipient of a young researcher award from the French National Research Agency (ANR). Her recent research has received several awards, including the Honorable Mention Award at The Web Conference in 2020 and the CNIL-Inria Award for Privacy Protection 2020.

Damon McCoy is an assistant professor of Computer Science and Engineering at the New York University Tandon School of Engineering. He is co-founder, with Edelson, of Cybersecurity for Democracy.

Cybersecurity for Democracy is a research-based effort to expose online threats to our social fabric — and recommend how to counter them. We are part of the Center for Cybersecurity at NYU.

Would you like more information on our work? Visit Cybersecurity for Democracy online to see how tools, data, investigations, and analysis are fueling efforts toward platform accountability.
