This is the fifth in an occasional series of posts that chew over the whole phenomenon of ‘fake news’ and try to set it in a broader context.
For too many of us, it’s become safer to retreat into our own bubbles… And increasingly, we become so secure in our bubbles that we accept only information, whether true or not, that fits our opinions, instead of basing our opinions on the evidence that’s out there. President Obama, Farewell Address
Following the EU referendum and US election there has been rising concern that many of us have become cocooned within online echo chambers and filter bubbles, in which we are only exposed to news and information that affirms our existing world view — even if it’s false. So acute is the anxiety that some have suggested these bubbles have the power to destroy democracy (link, link).
This all sounds very scary and — if proved true — it would be. Still, if we’re honest, we don’t yet know nearly enough about the dynamics and effects of online echo chambers to know what impact they are having on us. This is partly because social media is still so recent, but also because our ability to do research is severely restricted by limited access to data (the tech platforms keep their data closed for commercial and privacy reasons).
However, we do have many decades of social science research on how people’s attitudes develop in different circumstances, some of which we may be able to apply to social media. We know, for example, how people’s views evolve when they are exposed only to individuals of a similar mindset (group polarization). We know about people’s tendency to notice and remember information that confirms their worldview (confirmation bias). And we know that people tend to gravitate towards those of like mind (homophily).
Even without the benefit of data from Facebook and other platforms, we can apply this research to social media by looking at the way in which digital services are structured. This at least gives us a basis from which to start to make informed judgments as to the extent to which the structure of social media does or does not suggest that many of us are now enveloped within online echo chambers.
What do we know about the effects of group dynamics on people’s perspectives? Well, we know that when people’s views are reaffirmed and consolidated by those around them, these views tend to be strengthened and made more extreme. Conversely, when they are contradicted, they tend to be weakened. Social psychologists call this group or attitude polarization. Over two decades ago Robert Baron and his co-authors saw this across a variety of different non-political settings: when judging other people’s looks, in rating the comfort of a dentist’s chair, or in deciding how much to give to a particular charity (Baron et al, 1996). When you apply the same tests to politics, similar things happen. People get more extreme in their views after discussing them with like-minded people. Cass Sunstein, in a fascinating 2002 article on ‘The law of group polarization’, cites studies that found:
(a) A group of moderately pro feminist women will become more strongly pro-feminist after discussion. (b) After discussion, citizens of France become more critical of the United States and its intentions with respect to economic aid. (c) After discussion, whites predisposed to show racial prejudice offer more negative responses to the question whether white racism is responsible for conditions faced by African-Americans in American cities. (d) After discussion, whites predisposed not to show racial prejudice offer more positive responses to the same question.
Social corroboration, in other words, leads people to become more confident and less equivocal in their attitudes, and less willing to question their views or to compromise.
We also know that people tend to notice and remember information that confirms their existing worldview, a phenomenon commonly referred to as confirmation bias. In his extended review of the evidence for and against this phenomenon, Raymond Nickerson not only found it to be consistently true, but discovered that it held true in lots of different situations — including political. ‘We seem seldom to seek evidence naturally’ Nickerson writes, ‘that would show a hypothesis to be wrong’ (Nickerson, 1998).
It has also long been recognised, and shown, that people gravitate towards other people of like mind — a phenomenon called homophily, or love of the same. Ethan Zuckerman points to a reference to this way back in Aristotle’s Nicomachean Ethics:
Some define friendship as a kind of likeness and say like people are friends, whence come the sayings “like to like”, “birds of a feather flock together”.
Studies since the 1950s, Zuckerman shows, have demonstrated not only that people of similar circumstance, background and belief flock together, but also those with similar points of view (Zuckerman, 2013).
Structure of social media
If we look at the structure of social media, it appears highly likely to encourage these human tendencies.
Social media platforms allow us to choose our networks. We decide, in other words, who to follow, just as we decide whether or not to accept friend requests or whether to connect with someone. Given that social networks are not geographically constrained or reliant on circumstance (you may change jobs, but you keep your Facebook profile), the importance of selecting people on the basis of attitudes and points of view — homophily — is accentuated.
Having chosen to follow or connect with ‘people like us’, many of whom share similar views, we are more likely to be exposed to, and engage with, information and news that conform to this view. We are, in other words, likely to have our biases confirmed. This is heightened further by the aspiration of these platforms to give us more of what they think we want. Facebook’s EdgeRank, for example, filters the news and information you are exposed to in your feed based on what you — and your network — have most liked and shared.
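The publicly described version of EdgeRank combined three factors: your affinity with the author, a weight for the type of content, and a time decay. A minimal sketch of that idea in Python (the specific weights, categories and decay function here are illustrative assumptions — Facebook never published its actual values):

```python
from dataclasses import dataclass

@dataclass
class Story:
    author: str
    kind: str        # e.g. "photo", "link", "status"
    age_hours: float

# Illustrative values only, not Facebook's real ones.
EDGE_WEIGHT = {"photo": 1.0, "link": 0.6, "status": 0.4}

def edgerank_score(story: Story, affinity: float) -> float:
    """Toy version of the affinity x weight x time-decay formula
    publicly attributed to Facebook's early EdgeRank algorithm."""
    decay = 1.0 / (1.0 + story.age_hours)  # newer stories rank higher
    return affinity * EDGE_WEIGHT[story.kind] * decay

# Rank a tiny feed for a user who interacts with both authors equally.
feed = [Story("alice", "photo", 2.0), Story("bob", "link", 1.0)]
ranked = sorted(feed, key=lambda s: edgerank_score(s, affinity=1.0),
                reverse=True)
```

The key point for echo chambers is the affinity term: the more you interact with someone, the higher their stories score, so the feed mechanically amplifies whoever you already engage with most.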
If our online social networks are made up chiefly of those whose views are similar to ours, and as a group we like and share news and information that tends to confirm our pre-existing biases, then group polarization theory suggests that our views will be strengthened, and in some cases more extreme.
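To make that mechanism concrete, here is a deliberately crude simulation — my own illustrative toy, not any published model from the polarization literature. Agents hold opinions between −1 and 1, only ‘hear’ agents whose opinion is within a fixed distance of their own (homophily), and are nudged further from the centre whenever a majority of that bubble agrees with them (corroboration):

```python
import random

random.seed(1)

def polarize(opinions, steps=50, bubble=0.3, push=0.05):
    """Crude toy of group polarization. Each agent only sees opinions
    within `bubble` of its own; if most of those agree in sign, the
    agent's view is pushed `push` further from zero (clamped to [-1, 1]).
    Purely illustrative -- parameters are arbitrary assumptions."""
    for _ in range(steps):
        updated = []
        for o in opinions:
            neighbours = [p for p in opinions if abs(p - o) <= bubble]
            agreeing = sum(1 for p in neighbours if p * o > 0)
            if agreeing > len(neighbours) / 2:
                # The bubble mostly agrees: the view strengthens.
                o = max(-1.0, min(1.0, o + push * (1 if o > 0 else -1)))
            updated.append(o)
        opinions = updated
    return opinions

initial = [random.uniform(-1, 1) for _ in range(100)]
final = polarize(list(initial))
mean_before = sum(map(abs, initial)) / len(initial)
mean_after = sum(map(abs, final)) / len(final)
```

In this toy, opinions can only move away from the centre, so the average distance from zero grows over time — a caricature of the reinforcement loop described above, useful only for seeing the shape of the argument.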
Recent studies on online echo chambers and political views appear to bear this out, though they add some important detail and caveats. Jonathan Bright’s 2016 study looked at discussion networks around ’90 different political parties in 23 different countries’ and found echo chambers of varying cohesion, most pronounced at the ideological extremes. Similarly, a 2017 Demos study (which I helped with) found that echo chambers tend to be more coherent and cohesive the further they are from the political centre. Equally importantly, it found that the further from the centre they were, the more likely they were to share news from non-mainstream sources.
If you marry this with a recent study from the Media Insight Project, which found that people are more likely to believe news when it is shared by someone they trust — even if the publisher is previously unknown — then this has significant implications for the distribution of fake news and its effects. It suggests that people who share non-mainstream political views are more likely to be exposed to, and trust, news from unknown publishers, and that this and other similar news will reinforce their existing perspective and lead them to become more extreme in their views.
We don’t know
It needs to be re-emphasised, however, that there are many things we do not know about the dynamics of social media interaction. We don’t know how porous people’s social media networks are — both online and offline. We don’t know the extent to which people make an effort to expose themselves to diverse perspectives. We don’t know what tools most accentuate the phenomenon. Nor do we know which social media platforms are worse and why.
This is not to argue that the phenomenon is not a problem for democracy — far from it. It is to emphasise that the importance of this issue obliges us to find out more about what is happening and why. Unfortunately, we are significantly hampered in doing this by the tech companies. Unless they enable independent research on the dynamics of social media interaction and exposure on their platforms, we will be hamstrung in what we can and can’t conclude.
The next post in this series will be about sponsored content, native advertising and fake news.