How Facebook Convinced Itself Fake News Isn’t a Problem

Judd Antin
6 min read · Nov 15, 2016


If you’ve used Facebook this election season, you might find it a little strange just how easily Mark Zuckerberg dismissed the idea that fake news on Facebook contributed to the election’s outcome. By now his announcement has been thoroughly dismantled, and there’s even word of a rogue group of Facebook employees working on fake news. And to be fair, it sounds like Facebook is, in fact, working on this problem.

No one knows the true effect of fake news, but Zuckerberg’s ready denial feels out of touch. I used to work at Facebook, though, and I’m not surprised. Facebook prides itself on being a data- and experimentation-driven company, but this is a story about how myopic data-driven thinking can lead you astray.

My goal isn’t to throw Facebook under the bus — this stuff is really hard, and I assume the well-intentioned and brilliant people there are doing the best they can. But there is a lot for the rest of us to learn about the power and limits of making decisions based on big data alone.

I think there are likely at least two fallacies at work here.

The Prevalence Fallacy

“Of all the content on Facebook, more than 99% of what people see is authentic. Only a very small amount is fake news and hoaxes. The hoaxes that do exist are not limited to one partisan view, or even to politics. Overall, this makes it extremely unlikely hoaxes changed the outcome of this election in one direction or the other.” — Mark Zuckerberg

Mark seems to suggest that things that happen less than 1% of the time can’t be obvious, important, or change outcomes. But it doesn’t take much thinking to debunk that idea. Indeed, rare events are sometimes the most influential, precisely because they’re out of the ordinary. As Rick Webb compellingly writes, the research suggests it’s not just possible but probable that the less-than-1% of Facebook stories that are fake could have affected the election.

Mark also seems to have forgotten his denominator, something that’s disturbingly easy to do when you focus only on descriptive statistics. Nearly 1.2 billion people use Facebook every day. If 1% of them see a fake news story, that means 12 million people see at least one fake news story each day. Here’s another cut. Let’s say the average FB user sees 100 unique News Feed stories in a day and 1% are fake news. Well, you do the math. These are hamfisted calculations, but you get the point.
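To make that back-of-envelope math concrete, here’s a rough sketch; the user count, the stories-per-day figure, and the fake-news rate are all illustrative assumptions, not Facebook’s actual numbers:

```python
# Back-of-envelope estimate of daily fake-news impressions.
# All inputs are rough, illustrative assumptions.
daily_active_users = 1_200_000_000   # ~1.2 billion people use Facebook daily
stories_seen_per_user = 100          # assumed average News Feed stories seen per day
fake_news_rate = 0.01                # "less than 1%" taken at face value

daily_impressions = daily_active_users * stories_seen_per_user
fake_impressions = daily_impressions * fake_news_rate

print(f"{fake_impressions:,.0f} fake-news impressions per day")
# -> 1,200,000,000 fake-news impressions per day
```

Even with these crude assumptions, “less than 1%” works out to something on the order of a billion fake-news impressions a day.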

This is the first way that data-driven decision-making may have gone astray at Facebook. Sitting atop the big machine, burdened by the need to make decisions that influence 1.7 billion people each month, you have to aggregate and abstract. Virtually everything that could happen on Facebook does happen, at least a little bit, and getting bogged down in any one of those things is a recipe for paralysis.

But a shift in perspective could shift the decision-making process. As other commentators have pointed out, if 1% of the articles the New York Times or the Guardian printed were wrong, that would be utterly unacceptable: a total failure of journalism. And like it or not, Facebook is a journalistic entity. How does Facebook’s perceived responsibility as a news organization operate in this situation?

This is entirely about how you use data to understand experience. If all you’re doing is looking at aggregates, it’s easy to lose sight of the scale of even rare things. The trick here is to humanize big data in the course of decision-making — what does it mean, and for whom? Few companies have data at the scale Facebook does, but every company can face a flavor of this problem. Using big data to inform decisions is the right thing to do, but contextualizing that data in the experiences of users and in the role and impact of events is something Zuck seems to have missed.

“What’s Measured is What Matters”

I know we’ve all heard this catchphrase, and this is a dramatic illustration of it. Facebook is fantastic at counting things with high precision, at large scale, in real time. The ad business runs on it. Decision-making relies on it. But this is a case where counting alone falls apart.

Mark seems to be focusing on frequency because it’s a known quantity. I saw this many times at Facebook — the winning argument is the one with a large N and many leading zeroes on the p-value. When something cannot be measured in this way, it falls out of the decision-making process. In my time at Facebook I observed some leaders who were actively hostile to other ways of measuring.

What Mark can’t know from his count data alone, though, is the impact of fake news. How upsetting is the average fake news story? What’s the distribution of outrage across reactions to fake news? What impact did fake news have on political perceptions and voting decisions?

This is the moment where rigor falls apart. Having carefully counted the incidence of fake news stories, Mark and his leaders seem to have applied their own intuition with a hefty dose of bias and self-interest. He jumped to conclusions about what it means and why, likely in the absence of good information about how fake news can matter. And the absence of information gets filled with dirt.

Granted, these are difficult questions. But let’s not pretend the answers are unknowable. Research is a large toolbox. They could consult the literature on the impact of rumors and misinformation online, and engage with outside experts. They could find people who have been exposed to a lot of fake news and understand their experiences. They could look through the feedback channels Facebook has at scale for comments about fake news and systematically analyze them. They could survey Facebook users in all kinds of clever ways to understand fake news’s potential effect. None of these approaches is interesting if you care only about prevalence; they matter only if you also care about impact.

And look, maybe Facebook has done all these things — if so, I hope they’ll come out and share it. I know the researchers who work on News Feed, and they’re brilliant. But I’m reacting to Mark’s quick disavowal, and to my knowledge of how Facebook operates. When data-driven becomes data-myopic, we all suffer. I worry that Facebook’s decision-making has lost its humanity. And that’s frightening given the central role it plays in our world.

This is one thing I’m deeply proud of about how we approach decision-making at Airbnb. Big data and experimentation are a crucial part of how we make decisions, but we try very hard to be data-informed rather than data-driven. We want to do the driving. So we rely on the sacred triumvirate of big data, rigorous multi-method research, and design/product vision. We actively seek to understand what our data means and to apply it with our goals and our mission in mind, and in the context of the everyday experiences of guests and hosts.

What Can We Learn?

I trust that the brilliant people at FB are working on this, and that they recognize their role. Even as he dodged responsibility, Zuck admitted as much. For the rest of us, I think there’s a big lesson here:

Importance = Prevalence × Experience Impact

Common things that are trivial can still, in aggregate, have a dramatic impact on user experiences. So can rare but dramatic things. Ultimately, a single-minded focus on counting things will leave you with only half the picture. Understanding impact usually requires that larger toolkit: full-blooded, multi-method, empathy-soaked research. Deep qualitative work focused on the how and the why, followed by surveys that help us understand prevalence from a different angle. This is the one-two punch we use at Airbnb.
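As a toy illustration of that formula (the prevalence and impact numbers below are made up purely to show the tradeoff):

```python
# Toy illustration of Importance = Prevalence x Experience Impact.
# A rare-but-severe experience can matter as much as a common-but-mild one.
def importance(prevalence, experience_impact):
    return prevalence * experience_impact

common_but_trivial = importance(prevalence=0.50, experience_impact=1)   # half of users, mild annoyance
rare_but_dramatic = importance(prevalence=0.01, experience_impact=50)   # 1% of users, severe experience

print(common_but_trivial, rare_but_dramatic)  # 0.5 0.5 -> equally important
```

Counting alone surfaces only the first term; the second takes research.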
