The State of Disinformation on Social Media

CDS researchers contribute to new review of disinformation on online platforms

NYU Center for Data Science
Apr 23, 2018


CDS-affiliated faculty members and researchers from NYU’s SMaPP lab have recently produced a report reviewing current research on how fake news, rumors, and other deliberately incorrect information (collectively, “disinformation”) spread on social media, and how this online phenomenon affects political life.

How exactly does disinformation spread on social media? How effective is it? Can we stop it?

These are the central questions the authors tackle in the review. Doctoral candidates Sergey Sanovich and Denis Stukal from the SMaPP lab, for example, examine the tactics and characteristics of disinformation campaigns. Key tactics include selective censorship, often used by the Chinese government; hacking and leaking sensitive information, as in the email hack of the Democratic National Committee; manipulating search algorithms with popular keywords and circuitous links; and deploying bots and trolls to spread information directly.

It’s common knowledge that disinformation proliferated on social media during the 2016 U.S. election, but researchers are only now beginning to understand the full scope of the impact and the conditions that allowed such disinformation to flow. Sanovich and Stukal’s review includes a report that 400,000 bots posted 3.8 million tweets during the final month of the 2016 election. And during the final three months, users engaged more with the top twenty fake news stories than they did with legitimate ones.

But the U.S. election is only one of many affected worldwide: Russian bots also interfered in German, British, Catalan, French, and Russian domestic votes. Another report found that seven out of ten stories about Angela Merkel circulating ahead of the 2017 German parliamentary elections were false.

Emerging research demonstrates the startling degree to which bots can deceive users. One study found that 30–40% of automated texts on factual topics deceived ordinary users and 15–25% deceived experts; for non-factual topics, ordinary users were deceived by 60% of automated texts and experts by 30%. The study also showed that information disliked by the crowd has a 10–15% higher deception rate for both ordinary users and experts, making politicized disinformation especially potent.

Humans are clearly having a hard time identifying bots—so could automated systems do it?

The answer is yes—sort of.

Bot detection algorithms are typically trained to classify bots within a particular domain, and they can struggle to identify bots outside of it. Furthermore, according to a 2017 study, their accuracy can degrade by as much as 20% within a year as bots evolve.
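The review does not prescribe a particular detection method, but the domain-transfer problem is easy to see in miniature. The sketch below uses made-up account features and synthetic data (none of it from the report): a simple classifier is trained on crude, high-volume bots from one “domain” and then evaluated on a second domain where bots have shifted toward more human-like behavior.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_accounts(n, bot_tweet_rate, bot_follower_ratio):
    """Synthetic account features: [tweets per day, follower/friend ratio].
    Humans come from one fixed distribution; the bot parameters stand in for
    a 'domain' (e.g. a particular election, platform, or language)."""
    humans = np.column_stack([rng.normal(5, 2, n), rng.normal(1.0, 0.3, n)])
    bots = np.column_stack([
        rng.normal(bot_tweet_rate, 2, n),
        rng.normal(bot_follower_ratio, 0.3, n),
    ])
    X = np.vstack([humans, bots])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = bot
    return X, y

# Domain A: crude, high-volume bots (the kind the classifier is trained on).
X_train, y_train = make_accounts(2000, bot_tweet_rate=40, bot_follower_ratio=0.2)
X_in, y_in = make_accounts(1000, bot_tweet_rate=40, bot_follower_ratio=0.2)

# Domain B: later-generation bots that mimic human posting behavior more closely.
X_out, y_out = make_accounts(1000, bot_tweet_rate=10, bot_follower_ratio=0.7)

clf = LogisticRegression().fit(X_train, y_train)
print("in-domain accuracy:    ", accuracy_score(y_in, clf.predict(X_in)))
print("out-of-domain accuracy:", accuracy_score(y_out, clf.predict(X_out)))
```

The same logic explains the decay over time that the review cites: a model frozen at training time keeps catching yesterday’s bots while newer ones drift away from its decision boundary.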

Sanovich and Stukal identify dependence on ad revenue and engagement optimization algorithms as the two key characteristics that make social media inherently susceptible to disinformation campaigns. The U.S. government does not regulate who can and cannot advertise on social media. Consequently, one report says Twitter offered the Russian state-supported media network RT 15% of its U.S. election advertising for $3 million; another says Facebook avoided screening advertisers and allowed ads to be paid for in Russian rubles. Engagement optimization algorithms further exacerbate the problem by rewarding sensational images and headlines.
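The incentive problem does not require anything sophisticated to reproduce. A toy feed ranked purely by a predicted-engagement score (the headlines and numbers below are invented for illustration) will push the most clickable item to the top regardless of its accuracy:

```python
# Hypothetical headlines and engagement counts, invented for illustration only.
stories = [
    {"headline": "City council passes routine budget amendment", "clicks": 40, "views": 10_000},
    {"headline": "SHOCKING leak reveals what they don't want you to see", "clicks": 900, "views": 10_000},
    {"headline": "Fact-check: viral claim about the candidate is false", "clicks": 120, "views": 10_000},
]

def engagement_score(story):
    # Rank purely by click-through rate, with no notion of accuracy or source quality.
    return story["clicks"] / story["views"]

for story in sorted(stories, key=engagement_score, reverse=True):
    print(f"{engagement_score(story):.3f}  {story['headline']}")
```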

So what can be done? Sanovich and Stukal consider ways for sites to verify stories through automated or manual methods, but they fear that verification could suffer from errors and bias, and could have unintended consequences, including censorship. The problem of disinformation on social media platforms remains very much an open one, a fundamental concern of contemporary society whose answer may lie in data science.

By Paul Oliver
