Fake News, Bad Videos, and Sharebait: Facebook’s True Problem Is Pages

Why the algorithm changes may be about Pages, not news

To say that Facebook’s decision to change its algorithm to show less actual news in the News Feed caused a stir in the media would be an understatement. People freaked out.

In the initial announcement, Mark Zuckerberg noted that news would go from making up 5% of the News Feed to about 4%, which sounds small, but is significant. Dozens of media outlets rely on Facebook as a source of traffic, including many small, independent, and niche publishers. (And all of the big ones.)

The panic, to a degree, is warranted. Journalists have been grousing about Facebook’s chokehold on the media for years. An announcement that on its face seems to be deprioritizing journalism in favor of more content from friends and family, particularly following use of Facebook’s platform to spread misinformation that may have impacted the 2016 presidential election, is certainly alarming.

In the subsequent weeks, Facebook went on a bit of a PR offensive to mitigate the controversy, issuing public statements and giving interviews to explain the rationale behind the shift.

I think there’s a story that lies between what Facebook is saying publicly and what it can’t (or won’t) say: these changes are their effort to address core, underlying problems that originate from one of their signature products, Facebook Pages.

Facebook Pages were not supposed to be the point of Facebook. People roll their eyes when Facebook says that the point of the platform is connecting with friends and family (who largely do not publish to Pages), but it’s true; person-to-person connections are what makes Facebook valuable. But all the attention from publishers, brands, and politicians is on Pages. Pages suck up all of the oxygen in the room.


A big part of what I’ve done in my jobs is keep a close eye on what the rest of the internet is doing: which stories and story formats are resonating most with communities on Facebook, Twitter, and beyond.

There are roughly a dozen third-party tools I’ve used to do this, many of them devoted specifically to Facebook. The trends were not difficult to spot. After years of monitoring the ups and downs of Facebook’s News Feed algorithm, a few things became abundantly clear.

  1. Engagement was the key driving variable for links in the News Feed algorithm. Facebook’s algorithm skews towards stories that get likes, comments, and shares, not towards a page’s authority. That’s how fake news permeated the platform; it’s also what caused a massive influx of recycled engagement-bait content from disreputable publishers. (A rough sketch of what engagement-first ranking means in practice follows this list.)
  2. Publishers had figured out how to juke Facebook’s video-friendly algorithm to generate billions of video views. This was partly driven by Facebook counting a view after three seconds, a metric creators began to optimize their videos for. Many reputable publishers shifted their entire business models to creating videos optimized for Facebook’s News Feed. Original videos (with actual journalism!) performed a great deal worse than, say, any page’s short video summarizing or aggregating a news story over B-roll. What was succeeding was not the type of premium, high-quality product that draws advertisers. (Enter Facebook Watch.)
  3. Because the algorithm was being exploited, people were sharing random crap, not the stuff Facebook actually wants. You’ve seen it: your friends and family sharing memes, videos from places you’ve never heard of, and of course, partisan (and often fake!) news. They weren’t sharing status updates that could be mined for data, or pictures that could be used for facial recognition, or showing interest in brands and products so they could be served ads.
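
To make that first point concrete, here is a minimal, purely hypothetical sketch (in Python) of the difference between ranking on engagement alone and ranking that also weighs a source’s authority. Every weight, field name, and number below is invented for illustration; Facebook’s actual News Feed model is far more complex and not public.

```python
# Illustrative only: a toy "engagement-first" ranking score of the kind
# described in point 1 above. Facebook's real News Feed model is far more
# complex and not public; every weight here is an invented assumption.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    comments: int
    shares: int
    source_authority: float  # 0.0-1.0; hypothetical "is this a reputable source?" signal

def engagement_first_score(post: Post) -> float:
    # Likes, comments, and shares drive the score; comments and shares are
    # weighted more heavily because they signal stronger engagement.
    return post.likes * 1.0 + post.comments * 4.0 + post.shares * 8.0

def authority_aware_score(post: Post) -> float:
    # The same engagement signal, damped by source authority -- the kind of
    # weighting the engagement-first model lacked.
    return engagement_first_score(post) * (0.25 + 0.75 * post.source_authority)

# A low-authority page with big engagement beats a reputable one under the
# engagement-first model, but not once authority is factored in.
meme_page = Post(likes=20_000, comments=3_000, shares=5_000, source_authority=0.1)
reporting = Post(likes=9_000, comments=2_000, shares=3_000, source_authority=0.9)

print(engagement_first_score(meme_page) > engagement_first_score(reporting))  # True
print(authority_aware_score(meme_page) > authority_aware_score(reporting))    # False
```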

These were serious problems. Fake news was what concerned the media, bad videos could affect the bottom line, and the overabundance of terrible content was a looming existential threat. But all roads lead back to Pages.

I should also note that programming a Facebook Page is essentially free. Spending money on ads to grow the Page or reach a wider audience improves your odds, but success with a Page, for a while anyway, was possible without Facebook making a dime.

Is giving people the ability to create and publish to Pages for free really worth the headache of all the problems above? If you’re Facebook, how do you address this?


Here’s what I think they did.

The first step would be to kill the low-quality short-form videos that had taken over the platform. Not only were they not the type of content Facebook wanted on its platform, but Facebook’s heavy investment in premium content for Watch (i.e., not Pages) meant that these short videos were competing with Facebook’s own products.

This would not be the first time Facebook changed their algorithm to adjust for a specific type of content that was abundant on the platform. An earlier change infamously crippled the traffic for Upworthy, but the problem wasn’t necessarily Upworthy itself — it was all the copycats. (Google “Upworthy headlines,” and you’ll find many guides to replicating their success.)

In the 2017 version of this scenario, Upworthy’s role was played by NowThis, a company that makes good videos but whose formula was copied by what sometimes seemed like half the websites on the internet. Entire companies were formed to build software that streamlines the creation of NowThis-style videos.

The second step would be to reduce the preponderance of content from Pages in the News Feed. As I noted above, the fake news epidemic is less a “news” issue than a Facebook Page issue, in which anyone can create a page, publish something that’s going to get a reaction, and put money behind it. Rinse, repeat, until you can maintain a reliable audience that rivals (or exceeds) that of a publisher that employs journalists. It works for fake news, and it works for news aggregators trying to make a quick buck.

Downgrading the prominence of Pages shifts responsibility away from Facebook, which would otherwise need to moderate millions of Pages that anyone can run anonymously, and onto individual users, whom Facebook is not expected to police in the same way.


If truly anyone can do this, how do you distinguish between the good and the bad, the truthful and the untrue? Facebook, like many Silicon Valley companies, is data-driven, and would prefer not to introduce human (or editorial) judgment into their calculations.

It also serves them better to come up with a purely technological solution rather than rely on someone’s editorial judgment, because that judgment can’t be exercised without drawing accusations of political bias.

The best illustration of this was Facebook scrapping their editorially curated trending topics: once it was revealed those editors were “suppressing” conservative news outlets, the criticism and outrage from right-wing publications was significant, and the human curation went away. This was not something Facebook would be willing to go through again.

Being noncommittal about exercising editorial judgment, for fear of retribution and backlash, is likely one of the factors that contributed to the explosion of fake news. Once that inaction carried a visible cost, it was clear Facebook had to do something. But no matter what solution they introduced, Facebook would not take on the responsibility of being the arbiters; they wouldn’t fall into that trap again. The goal is to be apolitical.

Without Facebook taking that judgment into their own hands, how could they make the distinction between “quality news” and everything else? Easy: they’ve got two billion users who can help do that for them.

In Zuckerberg’s January 19 post, he wrote:

The hard question we’ve struggled with is how to decide what news sources are broadly trusted in a world with so much division. We could try to make that decision ourselves, but that’s not something we’re comfortable with. We considered asking outside experts, which would take the decision out of our hands but would likely not solve the objectivity problem. Or we could ask you — the community — and have your feedback determine the ranking.
We decided that having the community determine which sources are broadly trusted would be most objective.
Here’s how this will work. As part of our ongoing quality surveys, we will now ask people whether they’re familiar with a news source and, if so, whether they trust that source. The idea is that some news organizations are only trusted by their readers or watchers, and others are broadly trusted across society even by those who don’t follow them directly.
This update will not change the amount of news you see on Facebook. It will only shift the balance of news you see towards sources that are determined to be trusted by the community.

People have mocked and critiqued Facebook’s trustworthiness survey, which, as BuzzFeed reported, was just two questions long. But a survey that simple serves three purposes: it can be filled out millions of times with little friction, it can easily filter out newly created pages spreading misinformation, and, most importantly, it gives Facebook a new set of numbers to use in weighing a page’s placement in the News Feed.

With that data, the percentage point of “news” that is lost from the News Feed comes from the bottom, not from the top.
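
As a thought experiment, here is one way responses to a two-question survey like that could be rolled up into a per-Page trust score and used to trim distribution from the bottom. The field names, thresholds, and scoring rule below are my own guesses at a plausible mechanism, not anything Facebook has published.

```python
# Hypothetical sketch: aggregating two-question survey responses
# ("Do you recognize this source?" / "Do you trust it?") into a per-Page
# trust score, then trimming News Feed distribution from the bottom.
# The thresholds and formula are invented for illustration.
from collections import defaultdict

# Each response: (page, familiar: bool, trusted: bool)
responses = [
    ("Established Paper", True, True),
    ("Established Paper", True, True),
    ("Established Paper", False, False),   # unfamiliar answers are ignored below
    ("Partisan Meme Page", True, False),
    ("Partisan Meme Page", True, True),
    ("Brand-New Hoax Page", False, False), # almost nobody has heard of it
]

def trust_scores(responses, min_familiar=2):
    familiar = defaultdict(int)
    trusted = defaultdict(int)
    pages = set()
    for page, is_familiar, is_trusted in responses:
        pages.add(page)
        if is_familiar:
            familiar[page] += 1
            trusted[page] += is_trusted
    # Pages too new or obscure to be widely recognized never accumulate enough
    # familiar responses to earn a score; they fall to the bottom by default.
    return {
        page: (trusted[page] / familiar[page] if familiar[page] >= min_familiar else 0.0)
        for page in pages
    }

scores = trust_scores(responses)
# Rank pages by trust; the lowest-scoring ones are the candidates to lose
# distribution, i.e. the lost "percentage point of news" comes out of the bottom.
for page in sorted(scores, key=scores.get, reverse=True):
    print(page, round(scores[page], 2))
```

The property worth noticing is the familiarity gate: a freshly created Page pushing misinformation has no base of users who recognize it, so under a scheme like this it could never survey its way into the “broadly trusted” tier.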


Crippling the reach of the worst Facebook Pages would align with all of the public statements Facebook has been making: emphasizing “meaningful social interactions” (engaging with garbage content from faceless pages is not meaningful), reasserting their commitment to “quality news” (i.e., outlets deemed “trustworthy,” hopefully those committed to journalism), and doubling down on the idea that video is Facebook’s future by managing the quality of the content they control on Watch.

Pages run by the media, celebrities, and brands helped fuel Facebook’s growth, but they’ve since spun out of control. Responsibly diminishing their impact seems like a logical way to fix the platform.