How Digital Media Companies Combat Misinformation and Disinformation

Yike Yang
Marketing in the Age of Digital
4 min read · Apr 5, 2020

Digital media companies are under increased scrutiny for their mishandling of misinformation and disinformation on their platforms. There are two ways to view a digital media platform. On one hand, we can see platforms as technologies that merely enable individuals to publish and share content, a figurative blank sheet of paper on which anyone can write anything. On the other hand, one can argue that social media platforms have evolved into curators of content. I argue that these companies should take some responsibility for the content published on their platforms, and I suggest a set of strategies for dealing with disinformation and misinformation.

Where people get online news in the US, 2019. Source: Pew Research Center, “How Americans Encounter, Recall, and Act Upon Digital News,” February 9, 2017.

The State of Digital Media

From the beginning, digital media companies like Facebook and Google positioned themselves as holding no accountability for the content published on their platforms. At the same time, users increasingly rely on these platforms as their primary source of information. As of 2019, 93 percent of Americans say they receive news online. Twitter Moments, which offers a brief snapshot of the day’s news, is a prime example of how a platform edges closer to becoming a news medium.

As social media platforms practically become news media, their responsibility for the content they distribute should increase accordingly.

As the underlying technology has developed, there have been several ominous developments. Rather than using digital tools to inform people and elevate civic discussion, some individuals have taken advantage of social and digital platforms to deceive, mislead, or harm others by creating or disseminating misinformation and disinformation.

The United States saw apparently organized efforts to disseminate false material during the 2016 presidential election. A BuzzFeed analysis found that the most widely shared fake news stories were about “Pope Francis endorsing Donald Trump, and the FBI director receiving millions from the Clinton Foundation.”

Fake content was widespread during the presidential campaign. A post-election survey of 3,015 American adults suggested that it is difficult for news consumers to distinguish misinformation from real news. Such content can distort election campaigns, affect public perceptions, and shape human emotions.

The Risk of Regulating Companies

It is not always clear how to identify objectionable content. While information advocating violence or harm to other people is relatively easy to define, hate speech or “defamation of the state” is far less clear-cut. What is “hateful” to one individual may not be to someone else.

What’s more, overly restrictive regulation of internet platforms in open societies sets a dangerous precedent and can encourage authoritarian regimes to continue and expand censorship. This restricts global freedom of expression and generates hostility to democratic governance. Democracies that place undue limits on speech risk legitimizing authoritarian leaders and their efforts to crack down on basic human rights.

Approaches Adopted

Currently, digital media companies have adopted three approaches to fighting misinformation. The first is to block such content outright: Pinterest bans anti-vaccination content, for example, and Facebook bans white supremacist content. In rare instances, the most appropriate response is to remove the content without hesitation, as with posts that incite violence or invite others to commit crimes.

The second is to provide corrective information alongside content that contains falsehoods, so that users are exposed to accurate information. This approach, implemented by YouTube, encourages users to click through to verified, vetted sources that debunk the claims made in fake or hateful content.
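Here is a minimal sketch of that approach, assuming a hypothetical topic-to-links lookup. The topics and URLs below are placeholders for illustration, not YouTube’s actual system:

```python
# Sketch of the "corrective information" approach: when a post matches
# a known misinformation topic, surface links to vetted sources alongside it.
# Topics and URLs here are hypothetical placeholders.

DEBUNK_LINKS = {
    "anti-vaccination": ["https://www.who.int/health-topics/vaccines"],
    "flat earth": ["https://example.org/debunks/flat-earth"],
}

def corrective_panel(post_text: str) -> list:
    """Return vetted links to display next to a post, if any topic matches."""
    text = post_text.lower()
    links = []
    for topic, urls in DEBUNK_LINKS.items():
        if topic in text:
            links.extend(urls)
    return links

print(corrective_panel("New video PROVES the flat earth theory!"))
# ['https://example.org/debunks/flat-earth']
```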

The third, used by Twitter, relies on the community of users to flag questionable content; human moderators then review flagged items within 24 hours to determine whether they actually violate the terms of use. Meanwhile, technologies for analyzing images and videos are advancing quickly: Yahoo, for example, has recently made its algorithms for detecting offensive and adult images public.
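The flag-and-review flow can be illustrated with a short sketch. The flag threshold, queue structure, and escalation logic below are assumptions for illustration, not Twitter’s actual implementation; only the 24-hour review window comes from the description above:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

FLAG_THRESHOLD = 3                    # assumed number of user flags before escalation
REVIEW_WINDOW = timedelta(hours=24)   # review deadline described above

@dataclass
class Post:
    post_id: str
    flags: int = 0
    queued_at: Optional[datetime] = None

review_queue = []

def flag(post: Post) -> None:
    """Record a user flag; escalate to human review once the threshold is reached."""
    post.flags += 1
    if post.flags >= FLAG_THRESHOLD and post.queued_at is None:
        post.queued_at = datetime.now(timezone.utc)
        review_queue.append(post)

def overdue(now: datetime) -> list:
    """Queued posts whose 24-hour review window has already elapsed."""
    return [p for p in review_queue if now - p.queued_at > REVIEW_WINDOW]
```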

Suggestions for Companies to Combat Misinformation and Disinformation

Digital media companies can take several measures against falsehoods and disinformation. The ideas below represent solutions that combat misinformation and disinformation without endangering freedom of expression.

  1. Digital media companies should invest in technology to find misinformation and flag it for users through algorithms and crowdsourcing. As an example, several media platforms like Wikipedia have instituted “disputed news” tags that warn readers and viewers about contentious content (see the sketch after this list).
  2. These companies should not make money from misinformation manufacturers, and should make it hard to monetize hoaxes. Like all clickbait, false information can be profitable through ad revenue. Indeed, during the 2016 presidential campaign, trolls in countries such as Macedonia reported earning substantial sums by disseminating erroneous material.
  3. Strengthen online accountability through real-name policies and enforcement against fake accounts. Real-name registration makes it easier to hold individuals accountable for what they post or disseminate online, and stops people from hiding behind fake names when they make offensive comments.
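To make the first two suggestions concrete, here is a minimal sketch in which content disputed by enough independent fact-checkers both receives a “disputed news” tag and is excluded from ad serving. The threshold, identifiers, and data model are illustrative assumptions, not any platform’s actual system:

```python
from dataclasses import dataclass, field

DISPUTE_THRESHOLD = 2   # assumed number of independent fact-checkers required

@dataclass
class ContentItem:
    item_id: str
    disputed_by: set = field(default_factory=set)   # IDs of disputing fact-checkers

    @property
    def disputed(self) -> bool:
        """Suggestion 1: show a 'disputed news' tag once enough checkers object."""
        return len(self.disputed_by) >= DISPUTE_THRESHOLD

def eligible_for_ads(item: ContentItem) -> bool:
    """Suggestion 2: withhold ad serving (and thus revenue) from disputed items."""
    return not item.disputed

item = ContentItem("post-123")
item.disputed_by.update({"factcheck-a", "factcheck-b"})
print(item.disputed, eligible_for_ads(item))   # True False
```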
