The Algorithmic Filtering Debate Desperately Needs More Nuance

Will Rinehart
Aug 24, 2015 · 7 min read

As we pass the one-year anniversary of the shooting of Michael Brown and the resulting protests in Ferguson, I am reminded of Zeynep Tufekci’s chronicle of the events, which ends as a tale of two social networks. She watched in horror via Twitter as the events unfolded in Missouri; the next day, she lamented that Facebook offered no immediate coverage:

This morning, though, my Facebook feed is also very heavily dominated by discussion of Ferguson. Many of those posts seem to have been written last night, but I didn’t see them then. Overnight, “edgerank” –or whatever Facebook’s filtering algorithm is called now — seems to have bubbled them up, probably as people engaged them more…

But I wonder: what if Ferguson had started to bubble, but there was no Twitter to catch on nationally? Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I cannot be sure.

My reaction is the same as it was then. Why would I go to Facebook for breaking news? I go to Twitter for that.

While there is much to criticize Facebook for, and Tufekci has done a great job of theorizing some of the key concerns, Facebook is a site for news articles, in-depth conversations, and family photos, not a source of breaking information. And I’m not alone in this. Pew survey data finds that Twitter is used nearly twice as much as Facebook for breaking news. And near its genesis, Twitter was populated by journalists who would often break news on the site. Even now, they remain prominent figures in the network.

Tufekci suggested Facebook’s algorithm slows down this dissemination. Yet network theory predicts differing outcomes based on the network architectures of Facebook and Twitter. Importantly, the relationships between users on Facebook and Twitter exhibit distinct characteristics. Facebook friendships are usually symmetrical; becoming friends indicates a two-way relationship. On Twitter, only about 22 percent of user pairs have a reciprocal relationship. Unlike Facebook, Twitter is littered with power users. Taylor Swift has 61.5 million followers and follows only 226 people herself. Each of her posts has thousands of retweets; in other words, each is a piece of viral content. In this kind of environment, information cascades can travel across the network far more quickly. A study of the lag in breaking news finds that Twitter takes about 2.36 hours to pick up an event, compared to nearly 10 for Facebook. One seminal paper in the literature even asks whether Twitter is better understood as a news medium, because it looks far more similar to one. If we agree with Pew that “Twitter’s great strength is providing as-it-happens coverage and commentary on live events,” it is due to the nature of the network, and not simply the lack of an algorithm.
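To see why architecture alone can produce this gap, consider a toy simulation. This is purely my own illustration, not drawn from the lag study or any other source cited here: the network size, degrees, hub counts, and the assumption that everyone reshares whatever they see are all invented for the sketch. It seeds a story at a power user, roughly analogous to a journalist breaking news, and counts rounds of resharing until half the network has seen it.

```python
# A toy diffusion model (invented parameters, not from any cited study):
# compare how many rounds of resharing it takes for a story to reach half
# of a symmetric, Facebook-like network versus a hub-heavy, Twitter-like one.
import random

random.seed(42)
N = 10_000  # hypothetical network size

def symmetric_network(n, degree=5):
    """Facebook-like: every tie is reciprocal, degrees are roughly equal."""
    audience = {i: set() for i in range(n)}  # audience[i] = who sees i's posts
    for i in range(n):
        for j in random.sample(range(n), degree):
            if j != i:
                audience[i].add(j)  # two-way tie: j sees i's posts...
                audience[j].add(i)  # ...and i sees j's
    return audience

def hub_network(n, degree=5, hubs=10, hub_reach=0.3):
    """Twitter-like: mostly one-way ties, plus a few power users whose
    posts reach a large share of the network."""
    audience = {i: set() for i in range(n)}
    for i in range(n):
        for j in random.sample(range(n), degree):
            if j != i:
                audience[i].add(j)  # one-way tie: j follows i
    for h in range(hubs):  # the first few accounts are power users
        audience[h] |= set(random.sample(range(n), int(n * hub_reach)))
    return audience

def rounds_to_half(audience, seed=0):
    """Rounds of resharing until a story reaches half the network,
    assuming everyone who sees the story reshares it once."""
    n = len(audience)
    seen, frontier, rounds = {seed}, {seed}, 0
    while frontier and len(seen) < n // 2:
        frontier = {f for node in frontier for f in audience[node]} - seen
        seen |= frontier
        rounds += 1
    return rounds

# Node 0 is an ordinary user in the symmetric network and a power user
# (a "journalist") in the hub-heavy one.
print("symmetric net:", rounds_to_half(symmetric_network(N)), "rounds")
print("hub-heavy net:", rounds_to_half(hub_network(N)), "rounds")
```

In runs of this sketch, the hub-heavy network typically saturates in noticeably fewer rounds than the symmetric one, even though ordinary users have a comparable number of ties in both. The power users do the work; no filtering algorithm needs to be slowing anything down for the gap to appear.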

What has been dismaying in the debate over algorithmic filtering is that it is becoming difficult to separate genuine criticism and well-founded worries from unfounded missives against technology, or what Adam Thierer calls technopanics. Though the boundaries are fuzzy, it is important to make the effort to distinguish between normative criticisms and descriptive claims.

The algorithm debate desperately needs more nuance. Commentators need to adopt a multidisciplinary approach that incorporates psychology, communication theory, network theory, economics, and data science. And journalists especially need to parse out the real effects of these new mediums from pre-existing social tendencies.

Rarely will you find a concerted effort to winnow agenda setting from framing effects, even though both have a long history in communication and journalism scholarship and clearly apply to this debate. Nor are you likely to hear a mention of the extensive literature on confirmation bias. Here is the quick and dirty version: everyone interprets evidence selectively and searches for evidence that reinforces their current beliefs. And it happens on Facebook too. Surprisingly.

These issues are not easy to unpack, to be sure, but we need a healthy perspective on the development of new communication technologies.

Consider Pew’s “Spiral of Silence” study released last August. The researchers begin by saying,

A major insight into human behavior from pre-internet era studies of communication is the tendency of people not to speak up about policy issues in public — or among their family, friends, and work colleagues — when they believe their own point of view is not widely shared. This tendency is called the “spiral of silence.”

What they found is that people were far less likely to post about Edward Snowden and his revelations of government surveillance if their online networks disagreed with their viewpoints, but were nearly twice as likely to post on Facebook if they felt their network agreed. I would have expected this outcome, and not because Facebook is a nefarious site, but because we are social animals who constantly groom our image. If you and I had a conversation and I learned that you didn’t agree with me on Keystone XL, it is unlikely I would bring up that issue again. One would expect that logic to apply to online spaces.

The spiral of silence concept was first explored by Elisabeth Noelle-Neumann, who continued to toil at the problem for many years after her first paper came out in 1974. As she once pointed out, it is important to distinguish between opinions that can be expressed without risking social opprobrium and opinions that have to be expressed in order to avoid isolation. It is no wonder that Facebook users were twice as likely to post if they felt their network agreed; perhaps they felt as though they had to express these opinions to avoid isolation.

Here is where it gets sticky. The Pew researchers found people were far less likely to share their views on Facebook or Twitter than in personal interactions. The study concluded with worries about the effect of social media on democracy:

An informed citizenry depends on people’s exposure to information on important political issues and on their willingness to discuss these issues with those around them. The rise of social media, such as Facebook and Twitter, has introduced new spaces where political discussion and debate can take place. This report explores the degree to which social media affects a long-established human attribute — that those who think they hold minority opinions often self-censor, failing to speak out for fear of ostracism or ridicule. It is called the “spiral of silence.”

A number of commentators have used this as evidence that social media is detrimental to deliberative democracy and encourages self-censorship. Yet that isn’t the only reading. For one, Americans changed their online behaviors in response to government surveillance, as Pew found in another survey. So, is the finding a reflection of the structure of online networked spaces, or is it a reaction to widely known government surveillance?

Let’s assume for a minute that the changes are due to the topology of the online spaces. danah boyd provides an anecdote to explain why this might be the case:

Consider the case of Stokely Carmichael, which Meyrowitz details in his book. Carmichael was a civil rights leader in the 1960s. He regularly gave speeches to different audiences using different rhetorical styles depending on the race of the audience. When Carmichael began addressing broad publics via television and radio, he had to make a choice. There was no neutral speaking style and Carmichael’s decision to use black speaking style alienated white society. While Carmichael was able to maintain distinct styles as long as he was able to segment social groups, he ran into trouble when broadcast media collapsed those social groups and with them, the distinct contexts in which they were embedded.

Networked publics force everyday people to contend with environments in which contexts are regularly colliding. Even when the immediate audience might be understood, the potential audience can be far greater and from different contexts. Maintaining distinct contexts online is particularly tricky because of the persistent, replicable, searchable, and scalable nature of networked acts.

Thus, in order to maintain these contexts, individuals will limit the kinds of controversial topics they post about. Pew has provided much in the way of description, but then it quickly makes the leap to a normative critique of the network without much explanation of the criteria involved.

But how does this affect democracy, exactly? The missing step in this logic is a prevalent model of democracy, the informed citizen, in which “every adult can arrive at an opinion on every moot topic.” Walter Lippmann, who penned that phrase nearly 100 years ago, continued: “it is not possible to assume that a world, carried on by division of labor and distribution of authority, can be governed by universal opinions in the whole population.”

While Pew rightly prides itself on being nonpartisan, that isn’t the same as saying they don’t have an ideology. Indeed, Pew has shown itself to be an Internet optimist in the past. This school of thought believes the Internet will help mobilize individuals for civic engagement and promote political participation of those with typically lower levels of political engagement. Supporting this theory is the populist conception that citizens must be informed for democracy to be robust. Many journalists, given their line of work, would also subscribe to the informed citizen model of democracy.

And yet, there’s a wrinkle in Pew’s final sentences. The USA Freedom Act, which halted the bulk collection that Edward Snowden unearthed, passed because a group of dedicated advocates used social networks and countless media outlets to bring about change. Interest group politics, an oft-derided version of democracy, can be counted as the source of change and democratic involvement here, as it has been for other policy issues. More nuanced theories of democratic governance explain the contours of reality far better than the simple informed-citizen model, while adding much-needed distance from the pessimistic tone many took up in response to this study.

In the coming weeks, I plan to write more extensively on this subject and highlight some of the key texts that inform this debate. Obviously, the changes occurring on the web with social media and algorithms need studying. Importantly, however, a critical view, not a pessimistic one, needs to be adopted.


Will Rinehart

Senior Research Fellow | Center for Growth and Opportunity | @WillRinehart