Is social media the cause of fake news — or the cure?
Irony alert: In order to get this online before Misinfocon wraps, I’m posting my first draft without review. Please leave your corrections or additions as annotations or responses to this post.
When people talk about fake news — and by “people”, I mean human beings other than Donald Trump — they often focus on the role of social media in disseminating stories from websites that traffic in rumor or deliberately manufactured falsehood. Facebook has come in for so much post-election scrutiny that Mark Zuckerberg just published a 6,000-word letter that tried to address the fake news problem.
The role of social media in the fake news phenomenon was front and center at Misinfocon, a mind-blowing gathering that I was fortunate to attend this weekend. Jointly hosted by the MIT Media Lab and the Nieman Foundation for Journalism, Misinfocon convened an eclectic and influential group of journalists, technologists, librarians, academics and others with a strong investment in the future of media. This group not only turned its attention to the current challenges to media credibility, but also rolled up its sleeves to start working on practical solutions.
In the twenty-four hours of Misinfocon that I was able to attend, I was fascinated by the many ways social media wound its way into our conversation. Yes, social media has been a key enabler of the fake news problem, but fake news itself is only one part of a larger ecosystem of misinformation that runs the gamut from unintentionally misleading content all the way to deliberate, outright lies.
And social media shows up all over that misinformation ecosystem — not just in the way it drives misinformation, but increasingly, in fighting it. This post is my attempt to capture the various ways that social media is implicated in the problem of fake news and misinformation. More importantly, it’s an effort to capture some of the exciting ways in which Misinfocon participants see the potential of social media and social software to help address that problem.
Social media as a driver of misinformation
Social media is at the very heart of the misinformation problem, but if we’re going to address that problem, we need to look at the very specific (if interlinked) ways that social media is driving fake news and misleading content.
Media “democratization” and the distribution of fake news: Ironically, social media has ended up hurting our democracies for the very reason it was once greeted with enthusiasm: because anyone can create a blog, post a YouTube video or send out a tweet, established media outlets no longer have a lock on creating or distributing the news. For a very brief moment, that seemed like an unalloyed Great Thing — and certainly, the accessibility of social media has diversified and democratized media creation. But it’s also the reason that a random group of Macedonian teenagers could become a fake news powerhouse. Social media has become the overwhelming distribution network for fake news and other forms of misinformation.
Audience fragmentation: Social media has contributed to the fragmentation of audience attention for reasons that are closely related to the phenomenon of media democratization. As the explosion in online media sources gave people more and more choices of where to put their attention, readers and viewers have increasingly gravitated to the specific sources and stories that appeal to their narrow interests and worldview. That created the opportunity for media creators to thrive (politically and/or financially) by serving people the specific news and commentary they wanted to see…making it all but inevitable that some folks would start to seize those political and financial opportunities without worrying about whether the content they created was actually true.
The all-powerful algorithm and the invention of filter bubbles: As if audience fragmentation wasn’t a big enough problem, social media platforms have ossified the divisions among Internet users. As our particular interests and preferences manifest in what we choose to view or share online, the algorithms driving major social networks take note of what we like, and what we avoid. To encourage us to spend more and more time on Facebook, Facebook’s algorithm shows us the kind of content it knows we like; Twitter, YouTube and every other social platform do the same thing. That leads to a situation in which many of us spend our time online in a “filter bubble”: an online conversation in which we only hear from people and publications that reinforce our pre-existing worldview. That in turn feeds the misinformation ecosystem, because we are less and less likely to get exposed to facts or perspectives that challenge us — which means that when we see stories that aren’t true, we may not see those stories get corrected.
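To make the feedback loop concrete, here is a minimal sketch of the kind of engagement-driven ranking the paragraph describes. All of the names, weights and data shapes here are hypothetical — real platform algorithms are vastly more complex and not public — but the core dynamic is the same: the more you engage with a topic, the more of it the ranker shows you.

```python
from collections import defaultdict

def rank_feed(posts, user_history):
    """Toy feed ranker: boost topics the user has engaged with before.
    (Hypothetical sketch; not any real platform's algorithm.)"""
    # Count how often the user has engaged with each topic.
    topic_affinity = defaultdict(int)
    for topic in user_history:
        topic_affinity[topic] += 1
    # Score each post as base popularity plus a personal-affinity bonus,
    # then sort highest-scoring first.
    scored = [
        (post["popularity"] + 10 * topic_affinity[post["topic"]], post)
        for post in posts
    ]
    return [post for _, post in sorted(scored, key=lambda s: -s[0])]

posts = [
    {"topic": "politics-left", "popularity": 5},
    {"topic": "politics-right", "popularity": 5},
    {"topic": "science", "popularity": 4},
]
# A user who has only ever engaged with one side sees that side first;
# that produces more engagement with it, and the bubble tightens.
feed = rank_feed(posts, user_history=["politics-left", "politics-left"])
```

The trouble is that nothing in this loop cares whether the boosted content is true; it only cares that it was engaged with.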
Shortening attention spans: Social media has contributed to the shortening of our attention spans. There is little doubt that digital distraction has made it harder and harder for people to pay sustained attention to long-form content. (Though I personally think we spend too much time worrying about all the ways that shorter attention spans are bad, and not enough time exploring the ways our evolving cognitive styles can be usefully exploited — but that’s an argument for another day.) As attention spans get shorter, news stories have to get simpler, even though our political, economic and social challenges are only getting more complex. In a media environment that rewards stories that can be quickly absorbed and shared, both depth and accuracy become frequent casualties.
Parody culture: While humor can be a very effective way of challenging dominant narratives or bringing attention to complex issues, news parodies can inadvertently serve to disseminate false information. If audiences don’t realize that what they’re watching, hearing or reading is intended as a joke, they can end up repeating or sharing it as truth.
The privileging of provocation: The social media economy of attention — measured as clicks, likes and shares — rewards provocative content. A balanced story is all well and good, but if you really want to explode on Facebook, write a strongly opinionated article with a polarizing headline. This creates incentives to commission or create provocative commentary (or biased reporting) rather than accurate, balanced journalism.
Meme nation: As First Draft’s Claire Wardle pointed out in her very helpful overview of the misinformation ecosystem, fake news doesn’t just come in the form of text. Images — like those ubiquitous social media memes — can be a powerful way of transmitting false information. Social media loves images, so once information is embedded in a shareable image, it spreads quickly and may be readily accepted as truth.
Social media and social software as a cure for misinformation
We may not be able to cure the problem of fake news through social media alone, but social media and social (collaborative) software are a necessary part of the fact-lover’s toolkit. There were a number of promising demos and nascent projects showcased at Misinfocon (and even more being birthed) that speak to the way social media can help address the problem it has created; there were also many conversations that suggested where and how we can demand more from our social networks and tools.
It’s worth noting that most of these social software approaches stand apart from the dominant social networks, in large part because Facebook, Twitter and other platforms don’t offer journalists and media organizations a whole lot of predictability or control over how they interact with their audiences. But there was also a palpable skepticism about how much we want to rely on the major networks to solve a problem that they have helped to create.
Rather than going tool-by-tool, I’m going to talk about the major areas of opportunity, offering some examples that were shared this weekend.
Participatory news gathering: One of the most exciting themes of Misinfocon was the multifaceted conversation about the relationship between facts, narrative and emotion in engaging media consumers. For all the focus on ensuring media accuracy, many journalists and media activists recognized that hearts and minds aren’t necessarily won over by an endless stream of facts; personal relationships and conversations are a crucial part of opening people’s minds to new information and ideas. To that end, it may be more effective to engage news audiences at an earlier stage in the news-gathering process by using online tools to invite input and conversation during the research and writing process. To see what that might look like, check out Hearken, which allows reporters to share their notes and pose questions to their audience as they work.
Social verification: The social media users who spread misinformation can also stop it. Social verification—including media consumers in confirming the veracity of accurate stories, or refuting false ones — is a key tactic for tackling misinformation. In its simplest form, you see social verification all the time on social networks: whenever you see a friend backing up (or debunking) a Facebook post by adding a credible link in the comment thread, that’s social verification. And social media data can be an important resource for assessing the validity of a story or source. But there are a number of projects that aim to make that process more consistent and rigorous. For example, Hypothesis is a collaborative annotation platform that lets media consumers annotate the content they’re viewing online — so they can comment on or correct what they’re reading or watching.
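One way to think about social verification at scale is as an aggregation problem: many people weigh in on a claim, and their verdicts are weighted by how credible each contributor has proven to be. The sketch below is purely illustrative — the function, field names and thresholds are my own invention, not the design of Hypothesis or any other platform mentioned above.

```python
def assess_claim(annotations):
    """Aggregate crowd annotations on a claim, weighting each verdict by
    the annotator's credibility. (Hypothetical sketch of social
    verification; thresholds and names are illustrative only.)"""
    score = 0.0
    for a in annotations:
        weight = a["credibility"]  # e.g. a track-record score from 0.0 to 1.0
        # Supporting annotations push the score up; refutations push it down.
        score += weight if a["verdict"] == "supports" else -weight
    if score > 0.5:
        return "likely true"
    if score < -0.5:
        return "likely false"
    return "unverified"

annotations = [
    {"verdict": "supports", "credibility": 0.9},  # a trusted fact-checker
    {"verdict": "refutes", "credibility": 0.2},   # a low-credibility account
]
verdict = assess_claim(annotations)
```

The interesting design question, much discussed at Misinfocon, is where those credibility weights come from — and how to keep the weighting itself from being gamed.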
Supporting media literacy: We can’t lay all the blame for misinformation at the doors of social media: it’s also a reflection of generally low levels of media literacy. Many people just don’t know how to judge the accuracy of what they watch, read or hear — and fair enough, since many of us came of age in a world that didn’t require the kind of media sophistication you need today. Happily, there are some amazing people and organizations who are hard at work on the challenge of supporting media literacy, particularly among young people, and they’re using social tools to do it. For example, The Lamp engages young people in media accuracy by challenging them to remix the news with their Media Breaker tool, and to share their results on a closed video sharing platform.
Fact checking goes social: If social media is the context in which many media consumers encounter fake news, then it can also be the context in which inaccuracies may be quickly corrected. There were a number of Misinfocon participants who were discussing or developing fact-checking tools, but if you want to take one for a spin right now, you can try the Washington Post’s fact-checking extension for Trump’s tweets.
Algorithms for accuracy: The algorithms behind social media platforms may be driving us into filter bubbles, but it doesn’t have to be that way. Some of the most interesting conversations at Misinfocon focused on how algorithms could be adjusted to eliminate (or at least reduce) fake news, inject a greater diversity of opinion and information into what people see on their favorite social networks, or offer users more transparency or control over what algorithms show and hide. I’m hopeful that we’ll see some exciting tools and experiments in this vein in the months and years ahead.
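To illustrate the “inject a greater diversity of opinion” idea, here is one simple approach: re-rank an engagement-sorted feed so that no single viewpoint monopolizes the top slots. This is a hypothetical sketch of one possible adjustment, not a description of anything a platform has actually shipped.

```python
def rerank_with_diversity(feed, max_per_viewpoint=2):
    """Re-rank a feed (already sorted by engagement score) so that no
    viewpoint fills more than `max_per_viewpoint` of the top slots.
    Posts over the cap are demoted, not removed. (Hypothetical sketch.)"""
    seen = {}
    diverse, overflow = [], []
    for post in feed:
        vp = post["viewpoint"]
        if seen.get(vp, 0) < max_per_viewpoint:
            diverse.append(post)
            seen[vp] = seen.get(vp, 0) + 1
        else:
            overflow.append(post)  # pushed below the diverse slate
    return diverse + overflow

# Three posts from one viewpoint would normally crowd out the third voice;
# with a cap of one per viewpoint, the dissenting post surfaces earlier.
feed = [
    {"id": 1, "viewpoint": "a"},
    {"id": 2, "viewpoint": "a"},
    {"id": 3, "viewpoint": "b"},
]
reranked = rerank_with_diversity(feed, max_per_viewpoint=1)
```

Even a crude cap like this shows the tradeoff platforms face: every diversity constraint costs some short-term engagement, which is precisely why users may need the transparency and control the paragraph calls for.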
Sharing solutions and tools: Last but not least, social platforms and networks are crucial channels for discussing the problem of misinformation and disseminating potential solutions. Misinfocon itself used Medium, Hackpad and Slack as key tools for publicizing the conference and collaborating over the weekend. And there’s an ever-growing number of social accounts and blogs that aim to track the fake news phenomenon. If you want to learn more about misinformation — and the role of social media in driving or addressing it — you can start by following the #misinfocon hashtag itself.