Recognizing Disinformation Campaigns Within Social Conflict

SIS Disinformation Research Team
Jun 28, 2020

Historically, disinformation campaigns have been most successful when they exploit pre-existing divides within society. Given recent events in our country, the Disinformation Research Team considered it important to condense common indicators of disinformation at work in the context of social conflict. The team examined multiple past instances of disinformation within social conflicts to summarize how adversaries either amplify existing divisive material or introduce content of their own. The team also identified potential warning signs that such campaigns are underway, so that researchers and other interested parties can recognize them. For more details, you can read the complete report below:

Following the death of George Floyd in police custody in May 2020, large numbers of Americans have protested to express grievances related to injustice, racism, and police brutality. While the online content and protest activity stem from Americans’ genuine grievances (and there is no evidence so far of foreign actor manipulation), it is important to recognize previous efforts by Russia, China, and Iran to exacerbate existing social conflicts through disinformation. Examining recent disinformation campaigns that targeted social conflict makes it possible to anticipate or identify future campaigns and to develop countermeasures.

Common Disinformation Techniques Used to Exploit Social Conflict

Amplifying pre-existing and organic content on opposing sides

o Multiple threat actors have used bots to amplify divisive messaging regarding social conflict. The bots link to, retweet, and repost material that was authentically created by real users but represents a fringe position within the broader audience. This gives fringe material more prominence than it would naturally receive and draws the audience away from attempts to mediate the conflict (see the detection sketch after this list).

o During 2014, the #BlackLivesMatter and #BlueLivesMatter movements emerged in social media discussions surrounding Michael Brown’s death. Twitter accounts associated with the Internet Research Agency (IRA) disseminated retweets and links to both mainstream news sources and more inflammatory, conspiracy-promoting sources in an effort to normalize and promote more extreme views.

o In 2015–2016, the IRA purchased advertising on Facebook that was geographically targeted to areas experiencing protests and that either supported or opposed Black Lives Matter, depending on where the material would be displayed.

o Potential Indicators: An increase in bans of inauthentic accounts, particularly on the social media platforms most widely used in the United States (most commonly Facebook, Twitter, and YouTube).
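As a rough illustration of the amplification pattern described above, the sketch below estimates what share of a post’s amplification comes from accounts with bot-like traits. The data layout, field names, and thresholds are illustrative assumptions rather than values from the report; a real analysis would rely on platform data exports and far more robust bot-detection criteria.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Account:
    created_at: datetime   # when the account was registered
    total_posts: int       # lifetime post count

def looks_bot_like(acct: Account, now: datetime,
                   max_age_days: int = 30,
                   min_posts_per_day: float = 100.0) -> bool:
    """Crude heuristic: the account is very new or posts at an unusually high rate."""
    age_days = max((now - acct.created_at).days, 1)
    return age_days <= max_age_days or acct.total_posts / age_days >= min_posts_per_day

def bot_amplification_share(amplifying_accounts: list[Account], now: datetime) -> float:
    """Fraction of the accounts retweeting or sharing a post that appear bot-like.

    A share far above the baseline for comparable posts suggests the content
    is being artificially amplified rather than spreading organically.
    """
    if not amplifying_accounts:
        return 0.0
    flagged = sum(1 for acct in amplifying_accounts if looks_bot_like(acct, now))
    return flagged / len(amplifying_accounts)
```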

Creating and promoting inauthentic content

o Multiple threat actors have demonstrated the ability to create original social media content regarding social conflicts. This tactic can be used to introduce new themes of conversation and more extreme positions. Threat actors can lend these themes an appearance of authenticity by promoting them across multiple social media platforms.

o Russia-linked actors ran multiple online groups with hundreds of thousands of followers and used those groups to disseminate memes and other novel short-form content intended to provoke outrage among supporters of both major candidates in the 2016 US presidential election. They also used those platforms to organize rallies and protest events across the political spectrum that drew thousands of attendees.

o Potential Indicators: Multiple accounts creating or posting similar messaging at nearly the same time in an inauthentic manner. This is especially telling when accounts that are normally dormant for long stretches suddenly become highly active around a single topic or message, in concert with other accounts exhibiting the same pattern (a rough detection sketch follows below).
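To make that indicator concrete, the sketch below flags groups of accounts that break a long silence to post near-identical text within the same short window. The dormancy period, time window, group size, and input format are illustrative assumptions; any real pipeline would need to tune these against platform data and distinguish this pattern from legitimate coordination such as organic hashtag campaigns.

```python
from collections import defaultdict
from datetime import timedelta

def normalize(text: str) -> str:
    """Collapse whitespace and case so near-identical messages bucket together."""
    return " ".join(text.lower().split())

def coordinated_burst_groups(posts,
                             dormancy=timedelta(days=90),
                             window=timedelta(minutes=10),
                             min_accounts=5):
    """
    posts: iterable of (account_id, previous_post_time_or_None, post_time, text).
    Returns groups of accounts that broke a long silence to post the same
    message within the same short time slot.
    """
    buckets = defaultdict(set)
    for account_id, prev_time, post_time, text in posts:
        # An account counts as dormant if it had no prior post, or its last
        # post was at least `dormancy` before this one.
        was_dormant = prev_time is None or (post_time - prev_time) >= dormancy
        if not was_dormant:
            continue
        # Bucket by normalized message content and a coarse time slot.
        slot = int(post_time.timestamp() // window.total_seconds())
        buckets[(normalize(text), slot)].add(account_id)
    return [accounts for accounts in buckets.values() if len(accounts) >= min_accounts]
```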

Threat actors will continue to exploit social conflict within the United States to prevent attempts at moderate dialogue or reconciliation and to degrade the United States’ ability to effectively address international concerns or carry out its foreign policy. These campaigns can be countered more effectively by monitoring for early indicators and implementing countermeasures at their onset.

This product was created by a team of graduate students from American University’s School of International Service. The work herein reflects the team’s research, analysis, and viewpoints.
