False News: No Easy Answers

danah boyd has written a brilliant new piece, “Google and Facebook Can’t Just Make Fake News Disappear.”

Her central point is that solving the fake news challenge — which I now call “false news” because the original term has been co-opted to mean any news someone does not like, even if it is true — is not really a technological problem. It’s a sociological and cultural concern. If people wish to construct alternate and competing realities that draw upon different conceptions of truth, there is no way to stop them. Any technological fixes we might consider, such as making it harder to share unsourced stories, can always be easily circumvented.

True. A corollary point, which boyd also makes, is that we do not have anything close to a shared understanding of what fake/false news means. Are we talking about blatant falsehoods, à la Pizzagate? Or do we mean stories that have a kernel of truth but are exaggerated and distorted as clickbait? boyd: “I have consistently found that without a precise definition or a clearly articulated problem, all that is achieved from drumming up conversations about the dangers of XYZ is spectacle.”

True. This point may seem obvious: if you don’t know what you are dealing with, how can you respond to it? The trouble, though, is that there is no credible way to arrive at a shared, universal definition of fake/false news. This is not math or chemistry, in which universal laws can be determined and shared. This is politics, in which language itself will always be the object of dispute. So while I agree with boyd that a shared understanding of what we are talking about would be preferable, it will never actually happen.

For me, this means circling back to my own definition of false news: blatant and ridiculous falsehoods, cooked up by conspiracy theorists. Anything with some degree of subtlety and nuance could be contested if branded as fake/false news. I am talking here about the “whoppers.”

Google and Facebook could, and should, make it harder to share and surface these kinds of stories. This is not censorship. Conspiracy theorists can still cook up conspiracies and spread them on their own email lists, on (yes) their own Facebook pages, and in tweets that link to their pages. But it would be harder to find this kind of content via a Google search, to share it through Facebook, or to retweet it. Not impossible, but more difficult.

The point of erecting such barriers is not that they would be a panacea that solves the whole challenge. As boyd notes, workarounds would quickly proliferate. The goal is symbolic, and important precisely for that reason: it would place our leading technology businesses on the side of truth and accuracy, a recognition that these platforms and tools have become modern media companies.

Even so, ridiculous falsehoods would still flourish online. Nobody is naive about that. The only true resolution is genuine engagement with people with whom we passionately disagree about how the world works: not in an effort to ridicule them, but in an effort to understand their point of view.

boyd: “The design imperative that we need to prioritize is clear: Develop social, technical, economic, and political structures that allow people to understand, appreciate, and bridge different viewpoints. Too much technology and media was architected with the idea that just making information available would do that cultural work. We now know that this is not what happened. So let’s put that goal at the center of our advocacy and development processes and see what we can build if we make that our priority. Imagine if VCs and funders demanded products and interventions that were designed to bridge social divides.” Hear, hear!