Let’s Fix It: Facebook’s Fake News Problem

Disclaimer: I am a Democrat, and a fairly liberal one at that. Fake news is a problem that affects people across the political spectrum, and those on the left and right spread and share fake news. Fake news is equally bad, whether it’s from those on the left, right, or center.

Fake news is a major problem in American society. It spreads like a metastasizing cancer, particularly on social media platforms like Facebook. And it undermines real discussion of politics and political outcomes.

If people can agree on underlying facts (for example, that the earth is experiencing climate change because of man-made causes like industrial pollution), then we can accept that there’s a problem and work to find the best solution, even if we have different solutions in mind. But if many people take in false information, we can’t even agree that voter fraud is not widespread, that Russia tried to interfere in our elections, or that Social Security is headed toward insolvency, let alone agree on what to do about those things.

This guy fell for a lot of fake news.

Fake news is particularly a problem on Facebook. This was seen recently in the election, where both liberal and conservative sites spread information that was partially or completely inaccurate, often with bold, vituperative headlines not supported by the body of the article.

Here’s why fake news is a particular problem on Facebook.

Facebook has 1.79 billion — that’s billion — monthly active users. This is orders of magnitude more people than read the New York Times or watch Fox News. It’s larger than the population of India.

Many Facebook users don’t actually read the stories linked in posts. They see headlines shared in their news feeds, take them in as part of a stream of information, and assume those assertions are factual. There’s a big difference between an article from a site like the New York Times or Fox News, where actual journalists vet sources and fact-checkers double-check their work, and a misleading headline from a site built by a Macedonian teenager solely to collect ad revenue. The LA Times might occasionally get a story wrong or have a biased reporter, but they’re generally trying to get it right and report the facts. And when they do get something wrong, they issue a retraction or correction for the record.

Facebook is designed around virality and engagement. A few years ago, you saw every post from every friend and from every company and brand you liked. Depending on how old you are or how long you’ve been on Facebook, you may remember Zynga exploiting this by spamming your wall with Mafia Wars requests and building a huge audience cheaply. Facebook fixed the feed problem by decreasing the reach posts get, partly to reduce spam and partly to increase revenue. Companies now reach a small percentage of their audience with each post and can pay to “boost” that reach. This has been great for Facebook’s bottom line, but not so great for the spread of factually correct information.

With reach reduced, the content that spreads is whatever earns the most shares, likes, and comments. An image of giraffes wearing neckties gets more shares than your friend’s baby photos. A poorly researched post about Donald Trump voters inspires a lot of anger and comments and goes viral, far more than an actual article about possible conflicts of interest at the Clinton Foundation.

BuzzFeed analyzed engagement data and found that fake news spreads further than real news: in the final months of the election, the top fake election stories generated more Facebook engagement than the top stories from major news outlets.


If you get bad information and then use it to form a political opinion, it can lead to poor outcomes. For example, if Hillary Clinton is essentially truthful (https://public.tableau.com/views/WhoLiesTheMost-2016PresidentialElectionv92_0921_02/WhoLiesMostDashboard?:embed=y&:display_count=yes&:showVizHome=no), and if she has not in fact been charged with a crime and no charges are forthcoming, then she doesn’t deserve to be locked up and could arguably be trusted.

Unfortunately, Mark Zuckerberg doesn’t think fake news on Facebook is that big of a problem:

Mark Zuckerberg: “Not my problem, suckas!” (not an actual quote from Mark Zuckerberg)


We should all agree that looking at fake news as a percentage of posts is a bad metric. Facebook content follows a Pareto curve: a small percentage of content accounts for the bulk of the comments, shares, and likes. A better metric would be how often fake news is shared on the site. Better still would be how often fake news is shared compared to other content, particularly real news.
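
To make the difference between those metrics concrete, here is a minimal sketch in Python with invented numbers: a couple of viral hoaxes can be a tiny fraction of posts while accounting for nearly all of the sharing.

```python
# Hypothetical post data; the numbers are invented purely to illustrate
# the difference between "percentage of posts" and "percentage of shares".
posts = [
    {"fake": True,  "shares": 50_000},   # a couple of viral hoaxes...
    {"fake": True,  "shares": 80_000},
    {"fake": False, "shares": 200},      # ...buried among ordinary posts
    {"fake": False, "shares": 150},
] + [{"fake": False, "shares": 10} for _ in range(96)]

# Metric 1: fake news as a percentage of posts.
pct_of_posts = sum(p["fake"] for p in posts) / len(posts)

# Metric 2: fake news as a percentage of total shares, i.e. what people actually see.
total_shares = sum(p["shares"] for p in posts)
fake_shares = sum(p["shares"] for p in posts if p["fake"])
pct_of_shares = fake_shares / total_shares

print(f"Fake news as % of posts:  {pct_of_posts:.1%}")   # 2.0%
print(f"Fake news as % of shares: {pct_of_shares:.1%}")  # ~99%
```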

Facebook doesn’t necessarily have a business obligation to stop the spread of fake news. But with nearly two billion people on the platform, and with newspapers, magazines, and other publishers relying on Facebook for distribution and audience building, I would argue that Facebook’s scale gives it a moral obligation to stop known garbage from spreading. And if Facebook becomes known as a place that prefers quality content over whatever is most popular, that may help its bottom line as well.

So how do you stop the spread of fake news?

There are several tactics.

Facebook’s Secret Algorithm

Much like Google with PageRank, Facebook doesn’t share the algorithm it uses to decide what content users see. After all, it doesn’t want posters to game the system. That secrecy also means it would be relatively easy to make behind-the-scenes changes that de-prioritize fake news. Facebook could flag known fake sites and penalize them; it could give a bonus to credible news organizations like Fox, CNN, and the Washington Post. Users wouldn’t notice these changes, but posts from non-credible sources would be shown less and spread less.
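
As a rough illustration (not Facebook’s actual system, which isn’t public), a credibility signal could be folded into an existing ranking score as a simple multiplier. The domain lists, weights, and scoring function below are all assumptions.

```python
# Toy news-feed scoring sketch. The domain lists and weights are invented;
# the point is only that a credibility multiplier can be folded into an
# existing engagement score without any visible change for users.
KNOWN_FAKE_DOMAINS = {"endingthefed.com", "denverguardian.com"}
CREDIBLE_DOMAINS = {"foxnews.com", "cnn.com", "washingtonpost.com"}

def credibility_multiplier(domain: str) -> float:
    if domain in KNOWN_FAKE_DOMAINS:
        return 0.1   # heavy penalty: shown far less, spread far less
    if domain in CREDIBLE_DOMAINS:
        return 1.5   # modest bonus for vetted news organizations
    return 1.0       # unknown sources are left alone

def feed_score(engagement_score: float, domain: str) -> float:
    """Combine the usual engagement-based score with the credibility signal."""
    return engagement_score * credibility_multiplier(domain)

# A viral hoax with huge engagement still ends up ranked below solid reporting.
print(feed_score(9_000, "denverguardian.com"))    # 900.0
print(feed_score(4_000, "washingtonpost.com"))    # 6000.0
```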

Badging and Social Proof

Facebook could develop badges that appear alongside news stories and posts that share a webpage. For example, known sources that follow good journalistic practices could carry a “NEWS” or “VERIFIED” tag, while sketchy sources like so-called “parody” news sites could get an “UNVERIFIED” label. Humans could build the database of trusted and untrusted sites, and users could be allowed to flag a story as untrue or fake news, prompting a human review once the flags pass a certain threshold.
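
Here is a minimal sketch of what that badge-and-flag workflow could look like; the labels, domain lists, and review threshold are assumptions for illustration, not any real Facebook system.

```python
# Badge-and-flag sketch; labels, domains, and the threshold are assumptions.
VERIFIED_SOURCES = {"nytimes.com", "foxnews.com", "latimes.com"}
UNVERIFIED_SOURCES = {"some-parody-news-site.example"}
FLAG_REVIEW_THRESHOLD = 100   # user flags before a human review kicks in

flag_counts: dict[str, int] = {}   # post_id -> number of user flags so far

def badge_for(domain: str) -> str:
    """Look up the badge the human-curated database assigns to a source."""
    if domain in VERIFIED_SOURCES:
        return "VERIFIED"
    if domain in UNVERIFIED_SOURCES:
        return "UNVERIFIED"
    return ""   # no badge for sources the database hasn't classified yet

def flag_post(post_id: str) -> bool:
    """Record a user flag; return True once the post should go to human review."""
    flag_counts[post_id] = flag_counts.get(post_id, 0) + 1
    return flag_counts[post_id] >= FLAG_REVIEW_THRESHOLD
```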

Facebook did get in trouble over a supposed bias against conservative stories on its former news curation team, and there’s a danger of backlash if it lends credence to some sources but not others. But that’s better than simply letting propaganda and “parody” news spread misinformation because it’s engineered for viral propagation.

Machine Learning And Automation

Machine learning is a term that gets tossed around a lot, often as a semi-magic solution to all of life’s ills. But computers could be trained to tell reputable sources from disreputable ones and deployed to help sort stories as they get posted. That signal could feed into how content is prioritized in your news feed or how it’s categorized.
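
As a sketch of the idea, a basic text classifier could be trained on headlines humans have already labeled, then used to score new posts. The toy training data below is a stand-in; a real system would need a large, carefully labeled corpus and far richer features.

```python
# Toy classifier sketch using scikit-learn; the four labeled headlines are a
# stand-in for the large, human-labeled corpus a real system would need.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Senate passes budget resolution after lengthy debate",      # reputable
    "City council approves funding for new transit line",        # reputable
    "SHOCKING: candidate secretly arrested, media hiding it!!!",  # fake
    "You won't BELIEVE what this politician was caught doing",    # fake
]
labels = [0, 0, 1, 1]   # 0 = reputable, 1 = likely fake

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Probability that a new headline is fake; a score like this could feed the
# news-feed ranking or flag the post for human review.
new_post = ["BREAKING!!! The shocking secret the media won't tell you"]
print(model.predict_proba(new_post)[0][1])
```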

Or you could automate in other ways. Here’s a solution a group of college students came up with in three days:

The college kids probably didn’t look like this.


The students’ Chrome extension cleverly checks the source of Facebook posts, cross-checks other sources to attempt to verify the story, and labels the post “Verified” or “Unverified.” If a story is unverified, it even provides links to other, better sources of information.
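
The extension itself is JavaScript, but the gist of the logic might look something like this Python sketch, where `search_trusted_outlets` is a hypothetical stand-in for whatever news-search API a real implementation would query.

```python
# Python sketch of the verification logic described above; the actual
# extension is JavaScript. `search_trusted_outlets` is a hypothetical helper.
TRUSTED_OUTLETS = {"reuters.com", "apnews.com", "bbc.com"}

def search_trusted_outlets(headline: str) -> list[str]:
    """Hypothetical: return trusted-outlet domains covering the same story."""
    raise NotImplementedError("wire this up to a real news-search API")

def verify_story(headline: str, source_domain: str) -> tuple[str, list[str]]:
    """Return a 'Verified'/'Unverified' label plus suggested alternative links."""
    related = search_trusted_outlets(headline)
    if source_domain in TRUSTED_OUTLETS or len(related) >= 2:
        return "Verified", []
    # Unverified: offer the reader whatever trusted coverage of the topic exists.
    return "Unverified", [f"https://{domain}" for domain in related]
```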


Fake news is not the only problem with news on Facebook. The other large issue is the well-known “filter bubble”: the same forces that surface viral content like fake news ensure that your feed is largely filled with information that reinforces your worldview.

The Wall Street Journal has a great interactive demonstration of “red feed/blue feed” — or how Democrats and Republicans see completely different stories from different sources on the same topic — here:

One feed, two feed, red feed, blue feed

Here’s Wired’s take on why that’s a problem:


Essentially, if you only hear information that agrees with you, you’re missing half the picture and liable to come up with bad opinions or support bad policies. It’s good to have your opinions challenged from time to time to prevent your beliefs from becoming calcified.

So how can Facebook help change that? It could start by letting a percentage of differing posts, articles, and videos into our feeds. If I’m a die-hard Hillary fan, Facebook could show me some content from the Wall Street Journal and Fox News that is critical of her ties to Wall Street and her conflicts of interest with the Clinton Foundation while she was Secretary of State. If I’m a Trump supporter, I could see stories about the relative honesty of the various presidential candidates (Trump ranked lowest) or news about the women who have accused Trump of sexual assault.
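
One simple way to do that, sketched below with an assumed 20% mix-in rate, is to reserve a slice of each feed for stories from outlets the user doesn’t normally engage with.

```python
import random

# Feed-diversification sketch. The 20% figure and the "counter_perspective"
# pool are assumptions; the idea is just to reserve part of every feed for
# stories the user's usual sources wouldn't show them.
DIVERSITY_FRACTION = 0.2

def build_feed(usual_posts: list[str],
               counter_perspective: list[str],
               size: int = 20) -> list[str]:
    """Fill most of the feed as usual, but mix in differing viewpoints."""
    n_diverse = min(int(size * DIVERSITY_FRACTION), len(counter_perspective))
    feed = usual_posts[: size - n_diverse] + random.sample(counter_perspective, n_diverse)
    random.shuffle(feed)   # interleave rather than bury the differing views
    return feed
```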

As noted above, many people don’t read past the headlines on articles. Having a variety of opinions displayed in your feed would expose you to other viewpoints.

As Thomas Jefferson once wrote, “Whenever the people are well-informed, they can be trusted with their own government; …whenever things get so far wrong as to attract their notice, they may be relied on to set them to rights.” The converse of that is true as well. When the people are poorly informed — both from digesting information that largely agrees with their worldview and from being exposed to and believing false information — bad things happen. We can see examples of this in the propaganda arms of countries like the former U.S.S.R. or in the genocide in Rwanda (http://www.hscentre.org/sub-saharan-africa/media-tool-war-propaganda-rwandan-genocide/).

The Internet was supposed to bring about an information revolution. If Facebook doesn’t do something, unfortunately, much of the information it spreads will be false, to damaging effect.