How do you deal with a problem like “fake news?”

Robyn Caplan
Data & Society: Points
6 min read · Jan 5, 2017
CC BY-SA 2.0-licensed photo by Marco Paköeningrat.

Last month, Adam Mosseri, a VP at Facebook in charge of changes to the News Feed, announced steps Facebook is taking to prevent the spread of fake news and hoaxes on the platform. The solution — a mix of user engagement, third-party fact checkers, user-experience design, and disrupting financial incentives — is interesting because of its multi-modal approach. Facebook should be lauded for engaging with the problem head-on, but there are still significant weaknesses in the steps that have been suggested by the company.

As this issue continues to unfold, and as more countries deal with the prospect of fake news and hoaxes influencing their own political process, we should take stock of the potential fallout of Facebook’s new approach.

Facebook will rely on users to report fake news despite evidence that users have a difficult time assessing or identifying it. Teens seem to be especially vulnerable to fake news. A recent study by researchers at Stanford found that middle and high school students have a difficult time distinguishing fake news from real news, or detecting bias in tweets and Facebook statuses. Pew’s recent study on fake news found that individuals are often confident in their ability to identify fake news (around two-thirds of those surveyed), but we don’t know whether that confidence translates into an actual ability to do so. The same study found that a fourth of respondents admitted to sharing fake news on at least one occasion. If Facebook’s solution is to rely, in part, on users to report fake news through a system of flagging, users need to be trained in this type of identification.

Facebook’s fake news software may have unintended consequences for the news media industry and reporting. In addition to relying on reports from users, Facebook is deploying its own software to help identify fake news through patterns in user behavior. Reports of how Facebook’s software would determine what counts as fake news have been unclear. Following an interview with Mosseri, Recode reported that Facebook would use data available to it to determine whether a story had been read and then shared, as an indicator of truthfulness. The presumption Mosseri and his Facebook colleagues are working under is, per Peter Kafka at Recode, “If I click on/read a story and then don’t share it with my friends…that’s a sign it may be ‘problematic content.’” The obvious problem with this measurement of truthfulness is that there are a great many variables contributing to why a user would read content and not share it.
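
To make the heuristic concrete, here is a minimal sketch of the kind of read-versus-share signal Kafka describes. This is an illustration only, not Facebook’s actual system; the `StoryStats` fields, the `min_reads` cutoff, and the `share_rate_floor` threshold are all assumptions invented for this example.

```python
# Hypothetical sketch of the "read but not shared" signal described above.
# Not Facebook's actual system: the field names, minimum-read cutoff, and
# share-rate threshold are assumptions made purely for illustration.

from dataclasses import dataclass


@dataclass
class StoryStats:
    story_id: str
    reads: int    # users who clicked through and read the story
    shares: int   # users who shared it after reading


def flag_if_problematic(stats: StoryStats,
                        min_reads: int = 1000,
                        share_rate_floor: float = 0.02) -> bool:
    """Flag a story for fact-checker review when many people read it
    but very few go on to share it."""
    if stats.reads < min_reads:  # too little data to judge
        return False
    share_rate = stats.shares / stats.reads
    return share_rate < share_rate_floor


# Example: a story that is widely read but almost never shared gets flagged.
print(flag_if_problematic(StoryStats("story-123", reads=50_000, shares=400)))  # True
```

Even in this toy form, the weakness is visible: a dry but accurate investigative piece that readers finish without sharing would trip the same threshold as a hoax.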

And the logic of clickbait — content that is highly personal or includes calls to action is more likely to be shared — makes Facebook’s changes worrisome because they could lead to the production of more share-worthy news and fewer hard-hitting investigative pieces exploring the often dry and less shareable nuances of domestic and foreign policy. As we’ve seen in the past, because Facebook has become so central to the news media industry, changes like these can have drastic effects on newsroom operations and reporting, affecting variables like referral traffic that have become so powerful in generating ad-based revenue.

Third-party fact checkers will need to find a sustainable business model to continue this work long-term. The third-party fact checkers or “partners” (in Facebook’s language) will not be receiving any compensation from Facebook for contributing labor to this cause. Currently the list includes ABC News, The Associated Press, PolitiFact, FactCheck.org, and Snopes, which have agreed to vet potentially fake news identified by users or by Facebook’s software (Facebook puts these stories into a list that is available to the fact-checking organizations on the back end for verification). ABC News has said its team will include about a half dozen journalists who were fact-checking during the election and who will be redirected by the organization to this work.

Though this system could have indirect financial benefits for news organizations — e.g., fake news gets less traction, leaving more room for real news — it’s still a fairly significant investment to make. (And it presumes that people trust fact checkers themselves, which isn’t always the case.) It also relies on the notion that the flow of reports from users and software will be manageable, when content moderation is often an incredibly large undertaking. In a 2014 story for Wired, Adrian Chen estimated the number of moderators for social media sites (often working in areas like the Philippines) in the hundreds of thousands. It’s also unclear how this model will translate internationally. U.S.-based fact checkers may not have the context, knowledge, or bandwidth to address fake news and hoax claims now entering the online ecosystems of Germany, France, Myanmar, Brazil, India, and other countries.

Facebook will continue to restrict ads on fake news, disrupting the financial incentives for producers. This is something that Facebook and other platforms, like Google, had already committed to doing over the past two months. For Facebook, this currently means that it will not “integrate or display ads in apps or sites containing content that is illegal, misleading, or deceptive.” However, until Facebook changes its own financial model, which prioritizes content that is easily shared, there is little hope of disrupting the current norms affecting the production of fake news or misleading content. While these policies do inhibit fake news producers from generating money on their own sites, Facebook still benefits from the increased traffic and sharing on the News Feed. It’s unclear how Facebook will reduce its own reliance on easily shareable content, which has influenced the spread of fake or misleading news.

This proposal rests on the assumption that “fake news” will be easy to spot and define. At its base, “fake news,” the concept and practice, is clickbait, manufactured in part by the incentives baked into how organizations and individuals gain attention over social media networks (through clicks, shares, and likes). It’s a term that has come to refer to a wide range of media practices that build upon clickbait logics. In more black-and-white cases, fake news refers to intentionally made-up stories hosted on hastily built websites (e.g., The Denver Guardian or The Baltimore Gazette). These types of fake news websites are built by teenagers in Macedonia or by citizens in the U.S. to make money (or just lulz) off the circulation of outrage, fear, or anxiety. But most fake news is very gray. It consists of misleading headlines, deceptive edits, consensus-based truthmaking in communities like reddit or 8chan (e.g., pizzagate), or the absorption of fake news by political figures, like Donald Trump, who have the power to make fake news newsworthy.

Facebook and other platforms may also soon have to consider how they would actively prevent misleading news, as well as hate speech, from trending over their networks, at least in certain areas of the world. Legislation recently proposed by Thomas Oppermann of Germany’s Social Democratic Party would require Facebook to maintain a German-based team dedicated to fielding reports of fake news and hate speech, and to remove fake news items or be subject to a $522,575 fine for each post that is not removed.

How countries seek to limit the spread of hate speech and misinformation will continue to be an important counterweight to efforts proposed by companies based in the U.S. (where media laws are much less stringent), as those companies seek to maintain their market share within jurisdictions attempting to extend the media regulations and protections of the print, broadcast, and cable eras to newly dominant information and news distributors like Facebook.

Robyn Caplan is a Research Analyst at Data & Society and a PhD Candidate at Rutgers University’s School of Communication and Information.

Points/spheres: In “How do you deal with a problem like ‘fake news’?” Robyn Caplan weighs in on the potential — and pitfalls — of efforts to curb Facebook’s fake news problem. This piece is part of a batch of new additions to an ongoing Points series on media, accountability, and the public sphere.
