Fake News and Rabbit Holes: radicalization via the recommendation engine

Renee DiResta
Published in The Graph · Nov 12, 2016


There’s a lot of talk about Facebook News, fake news, and Donald Trump happening at the moment. Zuck has said he doesn’t think “fake news” influenced the election results. Maybe that’s true; we don’t actually know what News engagement looks like across the Facebook ecosystem because there’s no data out there. But while it may be true in the literal sense, it’s a bit of a cop-out… because, like every other feature of Facebook, News functions as an input for developing a user’s profile and improving the recommendation engine. Clicks and likes on News or Trending News stories indicate something about you, and since Facebook is incentivized to maximize your time spent on the site, they’re incentivized to keep suggesting content that you’ll like. We’ve known for a long time that what we see online is tailored to us; Eli Pariser’s “filter bubble” talk about search results happened five years ago. In that time, recommendation engines have made social networks increasingly powerful, and the financial incentives of these companies (Facebook, Twitter, etc.) drive them to show people what they want to see, not necessarily what is true.

There are several big features that Facebook uses to improve their internal profiles of their users. There’s Trending, there’s News Feed, there are Groups and Pages and Friends and Likes and Search. No one outside Facebook has any real insight into how these signals are weighted when coming up with a profile of a given user. So Zuck’s claim that “fake news” was a small piece of things may be absolutely true. BUT, it is also somewhat disingenuous. A user’s interaction with any given piece of fake news is another data point in developing that persona (for the purpose of more accurately targeting them with ads), and the persona is used when pushing other content, such as suggesting Groups and Pages.
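To make that mechanism concrete, here is a minimal, purely illustrative sketch of how clicks and likes on individual stories could be folded into a per-topic interest profile that later drives both ad targeting and content suggestions. This is not Facebook’s actual system; every signal name and weight below is an assumption.

```python
from collections import defaultdict

# Hypothetical signal weights; the real weighting inside Facebook is not public.
SIGNAL_WEIGHTS = {"click": 1.0, "like": 2.0, "share": 3.0, "group_join": 5.0}

def update_profile(profile, topic, signal):
    """Fold one interaction (e.g. a click on a story tagged 'vaccines')
    into a running per-topic interest score."""
    profile[topic] += SIGNAL_WEIGHTS.get(signal, 0.5)
    return profile

def top_interests(profile, n=3):
    """The same profile can then be read off for ad targeting or for
    deciding which Groups and Pages to suggest next."""
    return sorted(profile, key=profile.get, reverse=True)[:n]

profile = defaultdict(float)
for topic, signal in [("election", "click"), ("vaccines", "like"),
                      ("vaccines", "group_join"), ("chemtrails", "click")]:
    update_profile(profile, topic, signal)

print(top_interests(profile))  # ['vaccines', 'election', 'chemtrails']
```

The point of the sketch is simply that every interaction, including one with a fake news story, nudges the profile, and the profile feeds everything downstream.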

I know a handful of people who have Facebook accounts that are used exclusively for research purposes. These accounts have no friends and never directly interact with other users. They have location data by default, based on the IP address they signed up with, and perhaps the minimal A/S/L information required for sign-up, so Facebook has some idea of what they might want to see from the get-go. The accounts exist to observe what Facebook serves in terms of Pages, News, and Group recommendations when the individual’s direct social graph is kept to a minimum.

Groups appear to be incredibly important. If you join a Facebook Group for a particular topic, it will naturally serve you other Groups, Pages, and news content related to that topic. Join a couple more, and it’ll look at the people who are common to those groups, decide that you are probably something like them, and then suggest other Groups based on the groups that they are in. So even if you’ve never directly interacted with them, what you see is influenced by what people who share this interest with you want to see. I’ve looked at this as it pertains to pseudoscience: join a “vaccine hesitant” group, and your suggestions will quickly begin to include chemtrails, anti-GMO, flat earth, anti-fluoride, and homeopathy groups. This isn’t unique to Facebook; it’s just affinity marketing. It’s how every site with something to sell you tries to guess at what you might like. But on Facebook, the data set is the best in the world, and the recommendations are likely to include something that you’d be curious enough to click on. It’s fair to argue that Facebook is simply giving people what they would find on their own… but, anecdotally, it actually appears to be shaping what they want as it helps them discover new things.
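The co-membership logic described above amounts to a simple form of collaborative filtering. A minimal sketch, assuming nothing more than a table of who belongs to which group (the users and group names are invented for illustration), might look like this:

```python
from collections import Counter

# Toy membership data; names are made up for illustration only.
memberships = {
    "alice": {"vaccine_hesitant", "chemtrails", "anti_gmo"},
    "bob":   {"vaccine_hesitant", "flat_earth"},
    "carol": {"vaccine_hesitant", "homeopathy", "chemtrails"},
    "dave":  {"gardening", "anti_gmo"},
}

def suggest_groups(my_groups, memberships, n=3):
    """Find users who share groups with me, then rank the *other*
    groups those users belong to by how strongly they co-occur."""
    scores = Counter()
    for user, groups in memberships.items():
        overlap = len(my_groups & groups)
        if overlap == 0:
            continue
        for g in groups - my_groups:
            scores[g] += overlap  # weight by how similar that user is to me
    return [g for g, _ in scores.most_common(n)]

print(suggest_groups({"vaccine_hesitant"}, memberships))
# ['chemtrails', 'anti_gmo', 'flat_earth'] -- joining one group pulls in the rest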

The power of Groups is that they’re where people go to have conversations about specific topics. So, in the context of conspiracist or highly partisan communities, they can become incredibly powerful echo chambers; few people join a group to start challenging the prevailing opinion, and those who do typically get kicked out. There isn’t any data on how many people join the ancillary groups that Facebook suggests*, but what we know about radicalization or conversion in the real world is that it happens gradually and increasingly intimately, in places where people feel that they can let their guard down, where they develop close connections and trust the people they’re listening to. And as people invest more in Groups and group relationships and become True Believers, they’re more likely to pull from the fake news or conspiracy theories shared with the Group and post them where they might attract new followers from among their Friends. So, by pushing people down the rabbit hole, Facebook has an impact on how influential fake news can be in several different ways.

Now, of course some people seek out local groups and start down various strange paths regardless of the internet. And people have been reading weird news sources for decades — we used to call them “tabloids”. But the internet makes it easy to find your tribe, and having Groups and Pages and Trending News of dubious quality served up by Facebook on every login makes it easier still. So, the answer to whether the platform has facilitated the proliferation of nonsense and misinformation is clearly yes; I would also argue that as a result, it bears some responsibility for the downstream effects. The only question that remains is what it will choose to do about it.
