Current Private Sector Responses to Disinformation Provide a Baseline for Future Actions

SIS Disinformation Research Team
Jun 28, 2020

Over the last decade, social media platforms have become leading vehicles for the dissemination of disinformation by malign online actors. In response, major social media firms have adopted a variety of measures designed to mitigate the spread of disinformation. After examining actions taken over the last several years by major social media firms, the Disinformation Research Team concluded that key measures should have a positive impact. However, the team also determined that the evolving tactics of malign actors, coupled with inconsistent standards across platforms for what constitutes disinformation-related activity, present challenges for addressing the issue in its entirety. For more details on notable actions taken by social media firms, read the complete report below:

Executive Summary

Social media companies have taken steps over the past two years to prevent their platforms from being used in disinformation campaigns. These preventative actions have the potential to disrupt the efforts of malign actors and may force them to modify their tactics and techniques. However, the effectiveness of these actions may be diminished by narratives that undermine the legitimacy of precautionary measures by claiming political bias, and by a lack of common responses or synchronization between companies and platforms. While at this time there are no completely reliable mechanisms to counter malign online actors, these preventative measures provide an initial counter to disinformation that should continue to be built upon ahead of the 2020 elections.

Current Disinformation Countermeasures

Facebook
o On June 4, 2020, Facebook announced it would begin blocking state-controlled media outlets from purchasing advertising in the US. It will also introduce labels to give users transparency on posts from state-controlled outlets.

o In its 2019 analysis of the major social media platforms, NATO found Facebook to be one of the best at blocking inauthentic account creation, employing sophisticated anti-automation systems built into the structure of the platform. Facebook uses a blend of cyber tools and manual investigations to disrupt emerging campaigns.

TikTok
o On June 10, 2020, TikTok met with the EU Commission to discuss countering disinformation. TikTok has signed the EU’s Code of Practice to further this effort. The US lacks a similar agreement or enforcement measure, but this step indicates TikTok’s willingness to work within the frame of government regulation outside of its origins in the PRC.

o TikTok is investing heavily in technology and review teams. The platform has introduced in-app features such as a reporting function for suspicious content. It promotes trusted information from authoritative sources and is developing policies to prevent the spread of false information. TikTok’s video-based content makes automated review more difficult.

Instagram
o In November 2018, Instagram announced its crackdown on fake accounts and conveyed that the platform devotes “significant resources” to stopping this behavior. The platform disables millions of fake accounts every day, yet NATO’s 2019 analysis found Instagram to be the easiest and cheapest social media platform to manipulate.

o On December 16, 2019, Instagram announced that it would counter disinformation by “working with third-party fact-checkers in the U.S. to help identify, review, and label false information.” When false information is identified, the content is labeled with a warning and Instagram reduces its distribution and visibility. Instagram uses a combination of user feedback and technology to determine which content to review.

Twitter
o On May 11, 2020, Twitter updated its approach to misleading information. The platform introduced new labels and warning messages for tweets containing disputed or misleading information related to COVID-19.
o To promote “informed discussion,” Twitter is developing a feature that encourages users to open a link before retweeting content.

The effectiveness of these social media platforms in countering disinformation is inconsistent because their measures rely on human subjectivity to identify objectionable material while lacking a common agreement on what constitutes such material. The platforms have also chosen not to apply the same control standards to content posted by certain elected officials or world leaders, potentially leaving significant room for exploitation or manipulation by threat actors, some of whom actively falsify such content. While these steps are not sufficient to reliably counter all malign actors, they represent a commitment from the private sector to prevent their platforms from being used for disinformation; they also highlight areas for improvement while illustrating contrasting approaches.

This product was created by a team of graduate students from American University’s School of International Service. The work herein reflects the team’s research, analysis, and viewpoints.
