11 steps Facebook should take to battle fake news in 2017

Facebook’s news feed algorithms wield real influence on politics and public discourse. Facebook acknowledged this power, and an implicit civic responsibility to address it, in its two-month-old announcement on its efforts to fight fake news and in a recent post on community. How many fake news stories have come and gone in between…

Considering the extent of the problem and Facebook’s ability to take additional, meaningful action in response, the company’s efforts so far are not enough. The platform must continue to evolve to meet its responsibilities, maintain the trust of its users, and become a stronger force for news literacy and civic discourse.

Developing and reporting on clear metrics of success

Facebook has not yet acknowledged the extent of fake news on its platform. Nor, complex as the problem is to evaluate, has it proposed ways of measuring it. As such, its proposed interventions may not address the core issues at stake, and outside groups have no way to assess the long-term value of Facebook’s product updates.

Meaningful metrics.

At a minimum, Facebook should quantify the amount of traffic it sends to fake news via its platform. This allows for easy benchmarking of its successes over time. Detailed reporting could be prepared for issues of key public interest.
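
As a minimal sketch of what that benchmark could look like, assuming a click log and a set of flagged domains (the data, field names, and domains are all hypothetical):

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical outbound click events and fact-checker-flagged domains.
clicks = [
    {"user_id": 1, "url": "https://example-news.com/story"},
    {"user_id": 2, "url": "https://fakesite.example/shocking"},
    {"user_id": 3, "url": "https://example-news.com/analysis"},
]
flagged_domains = {"fakesite.example"}

def fake_news_traffic_share(clicks, flagged_domains):
    """Share of outbound clicks that land on flagged domains."""
    counts = Counter(
        urlparse(click["url"]).netloc in flagged_domains for click in clicks
    )
    total = counts[True] + counts[False]
    return counts[True] / total if total else 0.0

# Reported every month, one number like this would let outside
# groups benchmark progress.
print(f"{fake_news_traffic_share(clicks, flagged_domains):.1%}")  # 33.3%
```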

It would be more valuable to assess, through key indicators, how Facebook affects knowledge, attitudes and beliefs, and behavior as a result of the content that users share and consume on its platform. How many people believe in the Bowling Green Massacre after engaging with relevant content on Facebook?

This is complex to measure. However, by amplifying quality content, and not just demoting the visibility of the most questionable content, Facebook has an opportunity to encourage consumption of media that lifts our discourse and improves our understanding of current events. Facebook’s ad team would also benefit from an integrated approach to understanding the real-world impact of links on the site.

Public accountability.

Without accountability to clear indicators of success, Facebook stands to lose credibility as a platform. Through regular, public reporting on its efforts, Facebook would demonstrate its thoughtfulness and diligence, offer transparency about how and why its decisions are made, and reinforce that it is an active and positive contributor to civil society.

Weighting quality content in user news feeds

Facebook has a core mission to “give people the power to share,” but that mission gives no comparable weight to the experience of receiving content. The platform relies on signals around the kind of post, the amount and nature of content attached to it, and engagement (clicks, likes, shares) within a user’s network to decide whether content keeps getting boosted in news feeds. So how should Facebook diminish the popularity of fake news within this context? And how else could it use new mechanisms to improve the general quality of content on the site?
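
To make that status quo concrete, here is a toy scoring function in the spirit of those signals. The weights and signal names are invented for illustration, not Facebook’s actual formula:

```python
# Toy feed score combining the signals described above: post type,
# richness of attached content, and network engagement.
# All weights here are assumptions, not Facebook's real values.
def feed_score(post_type_weight, attachment_richness, clicks, likes, shares):
    engagement = clicks + 2 * likes + 3 * shares  # assumed relative weights
    return post_type_weight * (1 + attachment_richness) * engagement

# A link post with a rich preview and healthy engagement:
print(feed_score(post_type_weight=1.0, attachment_richness=0.5,
                 clicks=40, likes=10, shares=5))  # 112.5
```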

User rating.

At one point, Facebook could only infer the nature of linked content from superficial user actions, like a click or a like. With the introduction of features like Facebook Instant Articles, it gained access to richer information about how users actually engage with that content. Now, Facebook also encourages users to provide binary feedback on whether shared content is fake news.

Facebook still makes no systematic effort to ask users about the quality of content when they return to their news feed. By introducing random surveys for readers, with questions that touch on areas like number of sources, tone, and complexity, Facebook could add further signals to its algorithm about the quality of news on the site. This process would proactively invite user feedback for all content and build a more comprehensive dataset for content evaluation.
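
A rough sketch of such a survey pipeline, with an assumed sampling rate and invented question dimensions:

```python
import random
from collections import defaultdict

SURVEY_RATE = 0.02  # assumed fraction of returning readers who get a survey

def maybe_survey():
    """Randomly decide whether a returning reader sees a quality survey."""
    return random.random() < SURVEY_RATE

# Responses score assumed dimensions (sourcing, tone, complexity) on a
# 1-5 scale; per-link averages become one more signal for the feed algorithm.
responses = defaultdict(list)

def record_response(link_id, sourcing, tone, complexity):
    responses[link_id].append((sourcing + tone + complexity) / 3)

def quality_signal(link_id):
    """Average survey score for a link, or None if nobody was surveyed."""
    scores = responses[link_id]
    return sum(scores) / len(scores) if scores else None

record_response("link-123", sourcing=4, tone=3, complexity=5)
record_response("link-123", sourcing=2, tone=3, complexity=4)
print(quality_signal("link-123"))  # 3.5
```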

What’s more, Facebook could proactively encourage a subset of its users to rate content by promoting these links as a public service via its existing advertising placements. As a bonus for Facebook’s business, this would train those viewers to look more often at these spots on the page.

Additional powers for third-party fact checking.

Facebook now allows reputable, third-party partners to assess whether specific links contain fake news. Facebook has committed to penalizing the promotion of these links, though to an as-yet-undetermined extent. However, this does not yet create a systemic incentive for publishers to reduce fake news.

Facebook can build on this capability by assessing each site’s overall ratio of fake links and penalizing traffic to the top offenders on the platform. It is uncontroversial that sites investing in detailed fact-checking deserve to be rewarded. Conversely, sites that skip fact checks or purposely promote fake news are only likely to change their behavior when Facebook can meaningfully reduce page traffic and ad revenue across all of their links.
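
One illustrative way to fold such a site-level penalty into ranking; the tallies, threshold, and penalty curve below are all invented:

```python
# Hypothetical per-domain tallies of fact-checked links.
domain_checks = {
    "example-news.com": {"checked": 200, "fake": 2},
    "fakesite.example": {"checked": 50, "fake": 30},
}

PENALTY_THRESHOLD = 0.25  # assumed share of fake links that triggers a penalty

def domain_penalty(domain):
    """Multiplier applied to the feed rank of every link from a domain."""
    stats = domain_checks.get(domain)
    if not stats or stats["checked"] == 0:
        return 1.0  # no evidence either way: no penalty
    fake_ratio = stats["fake"] / stats["checked"]
    if fake_ratio < PENALTY_THRESHOLD:
        return 1.0
    # The worse the ratio, the smaller the multiplier on feed ranking.
    return max(0.0, 1.0 - fake_ratio)

print(domain_penalty("fakesite.example"))  # 0.4: heavily demoted
print(domain_penalty("example-news.com"))  # 1.0: unaffected
```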

Media literacy rating.

In 2012, Renée Loth prepared a study at the Shorenstein Center on the low media literacy levels of high school students. While troubling, this is not a new phenomenon, nor have schools systematically solved the problem of ensuring that students understand how to assess key characteristics of the news they consume, including its goals, audience, and quality.

Facebook has an opportunity to take a leadership role. It can assess each user’s media literacy based on engagement with content on the platform and answers to the content surveys suggested above. It might then weight the promotion of the content each user shares based on the strength of his or her media literacy score, as sketched below. Perhaps just as valuable, Facebook can share a variety of information directly with users regarding their media literacy. More on this below.
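
A sketch of that weighting under loose assumptions: the literacy score blends survey performance with the quality of what a user shares, and the blend weights are invented:

```python
def literacy_score(survey_accuracy, shared_quality):
    """Blend of survey performance and average quality of shared links,
    both on a 0-1 scale. The 60/40 split is an assumption."""
    return 0.6 * survey_accuracy + 0.4 * shared_quality

def share_weight(base_rank, sharer_literacy):
    """Shares from high-literacy users travel a little further."""
    return base_rank * (0.5 + 0.5 * sharer_literacy)

score = literacy_score(survey_accuracy=0.9, shared_quality=0.7)
print(round(score, 2))                    # 0.82
print(f"{share_weight(100, score):.1f}")  # 91.0
```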

Displaying content and information to users

At the extreme, the worst fake news shouldn’t get a trial run on Facebook; it should go straight into the dustbin. Reddit “ghost bans” users who violate its terms of use: even if those users keep posting links, no other user can see them, and the poster may never know that their content never “left the launchpad.”
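
A minimal sketch of that ghost-ban visibility logic, with a hypothetical set of banned account ids:

```python
ghost_banned = {"spammer42"}  # hypothetical set of banned account ids

def visible_to(post_author, viewer):
    """A ghost-banned author's posts render only for the author."""
    if post_author in ghost_banned:
        return viewer == post_author
    return True

print(visible_to("spammer42", "spammer42"))  # True: the poster still sees it
print(visible_to("spammer42", "reader7"))    # False: never "left the launchpad"
```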

In light of this approach, Facebook’s early statement that fake news “may also appear lower” in the news feed seems anemic. In contexts where this isn’t black and white, though, Facebook can signal the quality of a post through expanded reporting and promotion of strategic content.

Summary of users who engage with content.

A specific link may enjoy disproportionate support among members of a party or an ideological wing of a party; among people of a certain level of education; among people who “like” a specific type or mix of pages; or among people who live in areas with specific characteristics. In outlier cases, surfacing this information (perhaps as text immediately above or below the displayed content) could substantially inform media literacy efforts as users consider whether to click on content, and how to think about it. For instance, an article might be labeled as especially popular with rural Democrats with less than a high school education, or with Republicans over the age of 65.
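
One illustrative way to detect such outlier audiences is to compare each segment’s share of a link’s clicks against that segment’s share of overall traffic. The segments, counts, and threshold below are all invented:

```python
# Invented engagement data for one link, by audience segment.
link_clicks = {"rural_dem_no_hs": 400, "gop_over_65": 350, "everyone_else": 250}
# Invented share of overall platform traffic for each segment.
baseline_share = {"rural_dem_no_hs": 0.05, "gop_over_65": 0.10,
                  "everyone_else": 0.85}

OUTLIER_RATIO = 3.0  # assumed over-representation threshold

def outlier_segments(link_clicks, baseline_share, threshold=OUTLIER_RATIO):
    """Segments whose click share far exceeds their baseline traffic share."""
    total = sum(link_clicks.values())
    flagged = []
    for segment, clicks in link_clicks.items():
        observed = clicks / total
        if observed / baseline_share[segment] >= threshold:
            flagged.append(segment)
    return flagged

# Segments returned here could drive a label above the link preview.
print(outlier_segments(link_clicks, baseline_share))
# ['rural_dem_no_hs', 'gop_over_65']
```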

Alerts after news has been rated as fake.

Facebook knows if users have clicked on a piece of content. Accordingly, if that content is subsequently judged to contain fake news, Facebook could send a notification to that effect to any user who had previously visited the link. It is important that people are given information to challenge popular fake news, and in aggregate, this will only improve Facebook’s reputation as an advocate for the user.
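
A minimal sketch of that retroactive alert, assuming a click log keyed by link; the notify function is a placeholder for Facebook’s real notification system:

```python
# Hypothetical log of which users clicked which links.
click_log = {
    "link-123": ["alice", "bob"],
    "link-456": ["carol"],
}

def notify(user, message):
    """Placeholder for the real notification system."""
    print(f"to {user}: {message}")

def alert_previous_readers(link_id, click_log):
    """Once a link is judged fake, tell everyone who already clicked it."""
    for user in click_log.get(link_id, []):
        notify(user, f"A story you read ({link_id}) has been rated as fake news.")

alert_previous_readers("link-123", click_log)
```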

Proactive reports to users on a periodic basis.

Since Facebook has a good bird’s-eye view of the content each of its users engages with, it can also share regular, customized reports to help users understand the quality of news they are consuming, how they compare, favorably and unfavorably, with their peers, and how they stack up against a broader user base in their area or in the country at large.
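
The core of such a report is a simple comparison against peers; a sketch with invented quality scores on a 0-1 scale:

```python
from bisect import bisect_left

def quality_percentile(user_score, peer_scores):
    """Where a user's news diet falls relative to peers in their area."""
    ranked = sorted(peer_scores)
    return 100 * bisect_left(ranked, user_score) / len(ranked)

peers = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # invented peer scores
print(f"You read higher-quality news than "
      f"{quality_percentile(0.65, peers):.0f}% of your peers.")
```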

Expanded media profiles with engagement data and independent assessments.

As drivers of the Facebook information ecosystem, news organizations with profiles on the site could also be held accountable for the quality of information they publish. Specifically, each verified media outlet profile could feature prominent information that lets Facebook users see the extent to which the outlet publishes fake news, and which demographic and psychographic profiles its content most appeals to.

Information for readers with low media literacy scores.

When users have demonstrated low levels of media literacy through the content they read and share, or through their evaluation of that content, Facebook could actively promote tips, articles, and other content to improve media literacy in a neutral and nonpartisan way.

User consent regarding the media they would like to consume.

Facebook could also nudge its users to accept voluntary controls around media consumption, just as Google News encourages its users to personalize their mix of news. After letting users set goals for the balance and quality of the content they would like to consume, it could then remind them, before they click on content that is fake, highly partisan, and so on, that it does not align with their stated goals.
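
A sketch of that pre-click check, with an invented goal, consumption data, and content label:

```python
goals = {"max_partisan_share": 0.2}    # assumed user-set goal
recent_mix = {"partisan_share": 0.35}  # assumed observed consumption

def pre_click_reminder(link_label, goals, recent_mix):
    """Return a gentle nudge if the link conflicts with stated goals."""
    over_budget = recent_mix["partisan_share"] >= goals["max_partisan_share"]
    if link_label == "highly_partisan" and over_budget:
        return "This story is highly partisan, above the mix you asked for."
    return None

print(pre_click_reminder("highly_partisan", goals, recent_mix))
```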
