Big Data, Social Media and Politics: Who Shares Fake News on Facebook and More with Joshua Tucker

Did Russian Twitter accounts play a part in Trump’s 2016 victory? Did hate speech increase on Twitter during Trump’s 2016 campaign?

Yara Kyrychenko
NYU Data Science Review
7 min read · Mar 26, 2022


Photo by dole777 on Unsplash

From the Arab Spring to the Jan 6 insurrection, social media has shaped modern politics. Accordingly, social media data has helped researchers across the social and behavioral sciences answer questions they couldn’t have before. Big data, together with the data science tools to deal with it, continue to open new avenues of scientific inquiry. But these technologies have also put democracy to a new test.

In a talk for the NYU Community, Professor Joshua Tucker answered some research questions centered around social media and Trump’s 2016 presidential campaign¹.

Joshua Tucker is a Professor in the departments of Politics, Russian and Slavic Studies, and Data Science at New York University. He is also the Director of the Jordan Center for the Advanced Study of Russia and Co-Director of the NYU Center for Social Media and Politics (CSMaP). He has published multiple papers on social media and democracy and co-edited Social Media and Democracy: The State of the Field. Below are the results of three studies that Tucker and his team at CSMaP ran, as well as some of his thoughts on data access.

Contents

  • Did hate speech on social media increase throughout Donald Trump’s 2016 election campaign?
  • Who shares Fake News on Facebook?
  • What was the impact of Russian trolls on Twitter in the 2016 US Presidential Elections?
  • Thoughts on Data Access and Takeaways

Did hate speech on Twitter increase?

Media and scholars have emphasized how Donald Trump normalized hate speech and white nationalist rhetoric during the 2016 campaign. But can we find evidence of an increase in such speech throughout Trump's 2016 campaign and the election's aftermath? Joshua Tucker and his colleagues set out to determine whether such an "explosion" had taken place on Twitter. They scraped and analyzed over 750 million election-related tweets and almost half a billion tweets from randomly sampled Americans.

Surprisingly, the team found no evidence that hate speech increased on Twitter over the June 2015–2017 period². Similarly, there was no significant increase in white nationalist rhetoric over that time frame. Instead, hate speech on Twitter appeared to be "bursty," i.e., marked by spikes that fall just as rapidly as they rise.

Photos by Jon Tyson (left) and Mika Baumeister (middle & right) on Unsplash

Who shares Fake News on Facebook?

Conventional wisdom might suggest that the generation that can’t stay away from their phones for a minute is to blame for the dramatic spread of disinformation on social media that started around Trump’s campaign. But it turns out that the people who are the least likely to use the internet were most likely to share fake news on Facebook during the 2016 presidential campaign.

Tucker's lab ran a multi-wave survey during the campaign. Participants answered demographic and social questions and could voluntarily share their Facebook and Twitter accounts with the researchers³. About 1,300 people agreed to share their Facebook data. The researchers then took a list of known fake news websites and counted how many times each participant shared a link to one of those sites on Facebook.
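The counting step can be sketched in a few lines of Python. This is only an illustration: the domain list and function name below are hypothetical, while the actual study used a curated list of fake news sites compiled from fact-checking work.

```python
from urllib.parse import urlparse

# Illustrative stand-in for the curated fake news domain list used in the study.
FAKE_NEWS_DOMAINS = {"denverguardian.com", "abcnews.com.co"}

def count_fake_shares(shared_urls):
    """Count how many of a participant's shared links point to a known
    fake news domain (normalizing away the 'www.' prefix)."""
    count = 0
    for url in shared_urls:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in FAKE_NEWS_DOMAINS:
            count += 1
    return count
```

Running this per participant yields the per-person share counts whose distribution the study then examined.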

Over 91% of people in the sample did not share any fake news stories. A small minority, however, shared many links each, producing a power-law distribution. In his talk at NYU, Tucker said that:

This pattern, a power law, is something you see over and over again when you study the internet and politics, and, especially, nefarious behavior¹.

Power law distribution of fake news shares on Facebook (left). Figure from Guess et al., 2019 ³
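One way to see what a power-law pattern means in practice is to ask what fraction of all shares the most active users account for. The sketch below uses made-up toy counts, not the study's data, to illustrate the concentration measure.

```python
import numpy as np

# Toy per-user share counts mimicking a heavy tail: most users share nothing,
# a handful share a lot (illustrative numbers only, not the study's data).
shares = np.array([0] * 91 + [1] * 5 + [2, 3, 8, 40])

def top_share_fraction(counts, top_pct):
    """Fraction of all shares contributed by the top `top_pct` of users."""
    sorted_counts = np.sort(counts)[::-1]
    k = max(1, int(len(sorted_counts) * top_pct))
    return sorted_counts[:k].sum() / counts.sum()
```

With a heavy-tailed distribution like this, the top 1% of users contribute a majority of all shares, which is exactly the "over and over again" pattern Tucker describes.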

On average, Republicans and independents shared more links to fake news websites than Democrats. This might have been because most fake news articles during the 2016 Trump campaign were pro-Trump.

The second significant finding was that people over 65 years of age, on average, shared about seven times as many links to fake news as the youngest demographic in the sample (those aged 18 to 29)³.

Photo by Joseph Chan on Unsplash

What is the impact of Russian Bots and Trolls?

Russia has long been known for conducting foreign influence campaigns through social media in the West and in Ukraine, but the world woke up to their scale only after the 2016 Trump campaign. As Tucker put it:

In the aftermath of the 2016 elections, we found out that all these Twitter accounts pretending to be American citizens were being controlled out of, basically, one big office building in Saint Petersburg, Russia¹.

Tucker's team used the Russian troll tweets dataset released by Twitter to analyze the impact the trolls had on political outcomes. They also used the same survey data as in the Facebook study above to connect personal and demographic information to the Twitter profiles of about 1,500 people⁴. They then scraped the Twitter feed of every account that someone in the study followed, for a total of 1.2 billion tweets. Because the survey was conducted in April and October, the researchers could test whether the presence of Russian troll tweets in a person's feed correlated with a change in attitude. In particular, they were interested in whether exposure correlated with an increased preference for Trump.

By October, the average person was exposed to about 5 Russian troll tweets per day. On election day, that number jumped to 14. This does not mean people actually saw all of those Russian tweets, only that the tweets appeared somewhere in their Twitter feeds. Moreover, exposure to regular news media or US politicians dwarfed any exposure to the Russian trolls, peaking at around 200 and 70 tweets per day, respectively.

Following a power law once again, 1% of the sample accounted for 75% of the exposures, and 10% accounted for around 90%. Who was exposed? Predominantly male Republicans⁴.

Finally, did Russian bots and trolls have any impact on the vote in the 2016 presidential election? The evidence suggests not⁴. Moreover, polarization, measured by differences in support for issues like Obamacare or building a border wall, did not go up.

This is all correlational research, so we cannot make causal claims. But the data do not support the idea that Russian trolls affected the election vote or polarization in the US in 2016. The Russian interference appears to have been unsuccessful; even so, it cast doubt on the legitimacy of the election results.

Photo by Joshua Hoehne on Unsplash

Thoughts on Data Access

It's an exciting time to be doing political science research. Most of the data that political scientists traditionally use are administrative data, like records that states or schools create, or data that researchers accumulate by running surveys and experiments. But today, much of the relevant political data is created by a tiny number of giant, wealthy, and powerful companies.

In some ways, it’s the best of times and the worst of times to be doing social science research. It’s the best of times because we have more information about how people behave and interact with politics than ever before. Orders and orders of magnitude. But we also have to deal with these giant companies. They get to decide whether we can access the data¹.

To hold these companies accountable and minimize harm, it is critically important that outsiders, such as academic researchers, can analyze their data and communicate the findings to the general public and policymakers.

Takeaways

  • Hate speech and white nationalistic rhetoric did not increase on Twitter during and right after Trump’s 2016 campaign².
  • People over 65 years and non-Democrats are more likely to share Fake News on Facebook³.
  • It doesn’t look like Russian bots and trolls had any significant impact on the outcome of the 2016 election, even though there was some exposure to their tweets⁴.
  • We need transparency and data access to keep advancing science and informing the public in the digital information era¹.

References

  1. Tucker, J. A. (unpublished, edited for clarity). NYU CAS Presidential Honors Scholars Talk on Social Media and Democracy, Feb 2, 2022.
  2. Siegel, A. A., Nikitin, E., Barberá, P., Sterling, J., Pullen, B., Bonneau, R., … & Tucker, J. A. (2021). Trumping hate on Twitter? Online hate speech in the 2016 US election campaign and its aftermath. Quarterly Journal of Political Science, 16(1), 71–104.
  3. Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science advances, 5(1), eaau4586.
  4. Eady, G., Paskhalis, T., Zilinsky, J., Stukal, D., Bonneau, R., Nagler, J., & Tucker, J. A. (forthcoming). The limited impact of Russia's election interference on Twitter in the 2016 US election.


PhD candidate at Cambridge. Ukrainian. I love using data science to answer questions in psychology. github.com/yarakyrychenko