DNC Tech Team
Social Media’s Misinformation Mismatch

Applying COVID-19 anti-misinformation policies to political misinformation

Social media companies have long resisted calls to more responsibly moderate misinformation, often stating their reluctance and sometimes inability to be the “arbiters of truth” for content on their platforms. As a COVID-19 “infodemic” of hoaxes, scams, and dangerous “miracle cures” has followed the spread of the virus, platforms have finally started to change their tune.

In response to unprecedented levels of misinformation surrounding COVID-19, social media companies are taking a far more aggressive approach to content moderation and information quality.

Facebook has created a “COVID Information Center” to provide authoritative information to its users, increased its support for fact-checkers, and removed thousands of dangerous and misleading posts — including one from Brazil’s president Jair Bolsonaro. Twitter has expanded its definition of harm, and begun working with authoritative sources to adjudicate the accuracy of information. The change has led to the removal of misleading posts from leading conservatives like Laura Ingraham, Rudy Giuliani, and Turning Point USA’s Charlie Kirk. YouTube has removed thousands of videos with “dangerous or misleading” coronavirus information, including many peddling bogus COVID “miracle cures.”

This aggressive response is a welcome step for social media companies often eager to shirk responsibility for content on their websites, but their actions raise the question: why is COVID misinformation any different from other kinds of misinformation?

If the answer is that COVID misinformation can cause “real world harm,” we’ve seen countless examples where the same has been true of political misinformation. In reality, there’s no reason why social media companies couldn’t employ their “infodemic” playbook against political misinformation. Their failure to do so thus far reflects a lack of will, not ability.

Political misinformation and material harm

In 2016, bizarre lies and conspiracy theories targeting Hillary Clinton on major social platforms inspired a Virginia man to open fire inside a pizza parlor in Washington, DC — deeply traumatizing patrons and ruining his own life in the process. Similar conspiracy theories prominent on social media have inspired users to kill family members and block bridge traffic using an armored car. Social media misinformation clearly played a role in the radicalization of Cesar Sayoc, who mailed pipe bombs to prominent opponents of Donald Trump in the fall of 2018. Outside the US, political disinformation (disinformation is misinformation spread intentionally) on major social media sites has helped elect a regime accused of thousands of extrajudicial killings and even enabled genocide.

The material harm that some political misinformation can cause is often just as severe as the harm caused by health misinformation. Social media companies distinguishing between the two often do so more out of political calculation than principle.

‘Pizzagate’ gunman Edgar Maddison Welch surrendered to police in December 2016.

Tackling political misinformation

Fortunately, steps social media companies have taken in response to COVID-19 misinformation can provide a guide to tackling political misinformation and preventing the material harm it can cause:

Promoting credible, authoritative sources of information

One of the main strategies social media companies have employed to combat coronavirus misinformation has been to place authoritative information sources prominently within their products.

Both Facebook and Twitter have placed links to authoritative sources like the WHO and CDC prominently in users’ feeds and search results. Facebook has created a “COVID Information Center” and is sending notifications that direct users who interact with COVID misinformation to authoritative sources (Facebook’s recently announced “Voting Information Center” — which will seek to promote authoritative voting information — is another example of this approach). YouTube has added links to authoritative sources like the CDC under all coronavirus-related videos on its site.

Social media companies should employ a similar approach to news content. While companies might contend that authoritative sources of health information are easier to identify, there are also objective ways to identify credible, authoritative sources of news. News rating company NewsGuard evaluates news sources on nine objective criteria of credibility and transparency. Google’s news algorithms, integrated into YouTube’s platform, use objective assessments of content producers, such as the originality of content and the credentials of authors, combined with objective signals of community trust, to determine source authority. Signals like these could be integrated into Facebook’s News Feed (especially its Pages product) and Twitter’s home timeline to make sure users are consistently getting high-quality news.

Facebook and Twitter have themselves developed news products that rely on human curation to elevate authoritative sources of news, although these products have so far played only a minor role on the platforms. While some of their curation decisions have been questionable, the products do tend to surface credible news for the users who see them. Giving these credible sources a larger role on their platforms, as the companies have during the pandemic, would be a win in the fight against political misinformation.

Imposing costs on misinformation

Social media companies have also been far more willing to impose costs on publishers of COVID misinformation than they have on publishers of political misinformation. Thousands of posts with COVID misinformation have been removed from major platforms over the past several weeks, including some from world leaders, whom companies have been hesitant to penalize in the past. Facebook, which uses certified third parties to fact-check content on its site, has boosted funding for the program.

Political misinformation, by contrast, has largely been given free rein on major platforms — confusing and misinforming voters in democracies around the globe. While removing political misinformation may be a step companies are unwilling to take, neither Twitter nor YouTube has any mechanism for imposing costs on repeat publishers of political misinformation. Facebook’s third-party fact-checking efforts, while laudable, lack the size and scale necessary to combat the deluge of misinformation on its site.

Example of Twitter post misinforming voters about the general election

To combat the spread, Twitter and YouTube should develop means of imposing costs on misinformation published to their sites (Twitter’s experiments with community moderation seem promising, but are a long way off). Funding increases for third-party fact-checkers on Facebook should continue after the pandemic is defeated. Facebook’s fact-checking exemption for politicians and Twitter’s world leaders policy, which enable politicians to spread misinformation in ways regular users aren’t allowed to, should be rescinded.

Leaked mock-up of Twitter’s community moderation system (as reported by NBC News)

Removing misinformation that could lead to material harm

Social media content that encourages users to try bogus health remedies is as dangerous as posts that encourage conspiracy theorists to “take matters into their own hands” against invented enemies. Yet even as YouTube cracks down on videos with COVID-19 misinformation, a newly released 77-minute “documentary” paying homage to the “Pizzagate” and “QAnon” conspiracy theories remains live on the site. QAnon conspiracy theorists openly spin falsehoods and discuss the “coming storm” against the “deep state” in Twitter feeds, Facebook Groups, and popular YouTube channels. Social media companies should heed a 2019 FBI warning that identified these groups as a domestic terror threat, and curtail their ability to organize on their platforms.

At the DNC, we’ve developed a scorecard that encourages social media companies to learn from their peers and improve their posture against political misinformation. The companies’ response to the COVID-19 ‘infodemic’ has shown that many of these improvements are easily implemented if companies choose to do so.
