Social media companies, tech scholars, and government officials have already begun preparing for the spread of disinformation in 2020. But will their efforts keep 2020 from being a repeat of 2016?
It has been nearly five months since the release of the Mueller Report, which, among other things, detailed the extent to which Russian “specialists” at the Internet Research Agency (IRA) took to social media to sow confusion and influence the 2016 election. At the time of the report’s release, The Federalist’s New York correspondent, David Marcus, called the Russians’ social media tactics “the bombshell” of Mueller’s findings and, though much was already known at that point, went on to state, “The section on Russia’s informational campaign using social media to influence the election is damning and shocking.” Indeed it was.
Russian “specialists,” posing as Americans, created thousands of fake accounts and numerous pages, such as “United Muslims of America,” “Don’t Shoot Us,” and “Secured Borders,” each of which had hundreds of thousands of followers. All told, IRA-controlled accounts made over 80,000 posts that reached anywhere from 29 million to north of 100 million users. As is clear from the names of the pages above, the IRA targeted men and women across demographics and political leanings, and on several occasions tweets posted by these accounts were quoted by traditional media outlets and attributed to Americans.
Given the scale and scope of the Russian operation in 2016, Silicon Valley’s major social media companies, the federal government, and various tech scholars have already begun preparing, a full fourteen months before the 2020 election, against another round of intrusions, this time not only by Russia but potentially by Iran, North Korea, and even China.
Last week, representatives from Facebook, Google, Twitter, and Microsoft met with government officials at Facebook’s headquarters to discuss ways to prevent a repeat of 2016. Among those at the meeting was Facebook’s head of cybersecurity policy, Nathaniel Gleicher, who stated, “Improving election security and countering information operations are complex challenges that no organization can solve alone.” Gleicher went on to say that Facebook had “developed a comprehensive strategy” to fight disinformation threats, though he did not elaborate on what that strategy entailed.
However, in the absence of details about what Facebook’s strategy might look like, other entities have offered comprehensive plans of their own. Earlier this month, Paul Barrett of the NYU Stern Center for Business and Human Rights published Disinformation and the 2020 Election: How the Social Media Industry Should Prepare. The 28-page report highlights what Barrett views as the biggest threats to the upcoming election, chief among them “deepfake” videos.
Barrett notes that Hollywood has long had the money, time, and technology to create altered videos, but argues that “open source AI [has] democratized video fabrication.” Indeed, Barrett wonders why more deepfakes haven’t already been unleashed on American politics. He cites another tech scholar, Danielle Keats Citron, who testified before Congress this June about the unique threat deepfake videos pose to democracies: “Imagine that the night before the 2020 election, a deepfake showed a candidate in a tight race doing something he never did.” A deepfake deployed in this way could wield outsized influence, altering the outcome of an election, and as Citron went on to note, “Elections cannot be undone.”
Of course, the use of a deepfake video right before an election isn’t the only danger posed by this relatively new threat. The very existence of deepfakes could further erode public trust in media institutions, a trust already in decline. Furthermore, the inevitable debate over whether a given video is a deepfake might distract from the issue at hand, and those confronted with incriminating videos or audio clips could conceivably cry deepfake even when the material is authentic.
As part of his recommendations, Barrett suggests social media companies employ AI technology to screen for potential deepfakes, ramp up human review of videos, and “remove deepfakes before they can do much damage.” For its part, Facebook has already been doing this and more. To test deepfake detection methods and tools, the company is producing deepfakes of its own. In total, Facebook plans to spend $10 million on deepfake detection development, and it has launched the Deepfake Detection Challenge, replete with leaderboard and awards, to spur industry development of usable detection technology.
But deepfakes aren’t the only threat Barrett cites in his report. Whereas Facebook, YouTube, and Twitter are usually recognized as the main vehicles Russian “specialists” used to disseminate disinformation in 2016, Barrett notes that Instagram played a bigger role than is commonly recognized and could do so again, especially via memes with fabricated quotes.
And while many are concerned that Russia will seek to repeat its disinformation campaign in 2020, Barrett cautions against overlooking Iran, China, and even domestic groups, noting that, while foreign actors who purposely spread false information get most of the press, the majority of intentionally false content is produced domestically. Additionally, for-profit firms specializing in disinformation campaigns will undoubtedly seek to sell their services to the highest bidder. Barrett relates how Jigsaw, a Google-affiliated think tank, tested the waters of the disinformation economy by paying one such firm, SEO Tweet, $250 to run a two-week disinformation campaign against a dummy website.
By way of solutions, Barrett recommends that social media companies take various measures, from hiring content overseers to supporting proposed legislation like the Honest Ads Act, which would bring online political advertising rules in line with those for television, radio, and other media. Some, however, have questioned whether the Act would actually prevent foreign actors from buying ads and whether it would infringe on individuals’ right to privacy.
While his proposed solutions are, as a whole, a good starting point for conversation, Barrett largely ignores the interplay between social media and traditional media, and how the former is often used to spread the latter. To be sure, Barrett calls for improved social media literacy, but he does not detail how harmful sharing unread articles from questionable sources can be in the war against disinformation. Richard Stengel, former Time editor and Under Secretary of State for Public Diplomacy and Public Affairs, endorses a more holistic approach to media literacy in his forthcoming book, Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It, advocating news trustworthiness rating systems and consortiums like The Trust Project and NewsGuard (Stengel sits on NewsGuard’s board). As bad actors continue to deploy sophisticated, deceptive methods, any recipe for truth protection and election security must surely include ways for users to verify that the news they are reading or watching is reliable.
Though fourteen months might seem early to begin talking about elections, social media, and disinformation, it is anything but. Last week, Democratic candidate Beto O’Rourke sought answers from social media companies regarding an unsubstantiated claim from a Twitter user that the gunman in the Odessa, TX shooting had a Beto sticker on his truck. The tweet racked up 11,000 retweets and 15,000 likes, and according to Texas officials there is no connection between the shooter and the O’Rourke campaign.
That being said, another challenge for social media companies as they seek to root out disinformation from their platforms is ensuring that they don’t stifle the voices of those whose opinions may be out of sync with Silicon Valley but are nonetheless protected speech. Conservatives have long wondered whether their posts are being “shadow banned,” and earlier this year Senate Republicans held a hearing on technology censorship to address the issue. Most recently, four Republican senators sent a letter to Facebook on behalf of the pro-life group Live Action, which claimed that Facebook had labeled ads the group ran on the site “false news.” Most disturbing in this case was that Facebook relied on two abortion providers to “fact check” the video and give it a false rating, resulting in the group being stripped of its ability to run ads on the platform.
It is important to rid social media platforms of disinformation weaponized in an attempt to sway elections, but it is equally important that free speech be preserved. Managing that balancing act may prove to be the greatest challenge for tech executives in the months ahead.
As the election nears, the proliferation of disinformation such as deepfakes and misleading tweets is almost certain. And while many have begun in earnest to fight disinformation, whether the proposed strategies will be enough to combat it successfully in 2020 is far from certain.
John Thomas is a freelance writer. His writing has appeared at The Public Discourse, Christianity Today, and The American Conservative. He writes regularly at medium.com/soli-deo-gloria.