Why Social Networks Promote Fake News and How to Fix Them so They Don’t
In the days immediately following the 2016 United States presidential election, concern rose about the role fake news had played in Donald Trump’s defeat of Hillary Clinton. Many analysts and commentators criticized Facebook and other social networking services for the way these companies addressed the spread of fake news on their networks, asserting that Trump’s campaign benefited greatly from the spread of misinformation on social media.
Given that some people get most of their news from Facebook and other social networking services, you have to wonder how many voters had fake news stories in mind when they went to the polls on November 8 last year.
But rather than debating the effects of fake news on the 2016 campaign, I want to (1) explain why fake news is able to spread through networks as efficiently as it does and (2) suggest how social networking services could address these issues to prevent fake news from spreading.
Let’s first take a look at some of the mechanisms of social networks and human psychology that promote fake news and other forms of misinformation.
Social Networks and the Principle of Homophily
Homophily is a principle that governs the structure of social networks. According to Easley and Kleinberg’s Networks, Crowds, and Markets, homophily is the idea that we are, in general, very similar to our friends. Your friends tend not to be a representative sample of the population; rather, they tend to be relatively similar in terms of age, race and ethnicity, geographic location, interests, opinions, and religious and political affiliations.
This shouldn’t be too surprising. After all, friendships are usually formed because individuals have something in common, whether it be their political affiliation, the year in which they were born, or simply the city in which they currently live. Overall, edges in a social network connect individuals who are relatively similar to one another.
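Easley and Kleinberg also give a simple test for homophily: if a fraction p of a network’s members belong to one group and a fraction q to another, random friendship formation would make roughly 2pq of all edges cross-group, so markedly fewer cross-group edges is evidence of homophily. Here is a minimal sketch of that test on a toy network (all names, group labels, and edges are invented for illustration):

```python
# Toy network: each node carries a group label (say, a political
# affiliation). All names and edges below are invented.
groups = {
    "ana": "A", "ben": "A", "cal": "A",
    "dee": "B", "eli": "B", "fay": "B",
}
edges = [
    ("ana", "ben"), ("ben", "cal"), ("ana", "cal"),  # within group A
    ("dee", "eli"), ("eli", "fay"),                  # within group B
    ("cal", "dee"),                                  # the lone cross-group edge
]

# With a fraction p of nodes in group A and q = 1 - p in group B,
# random mixing would make about 2*p*q of all edges cross-group.
p = sum(1 for g in groups.values() if g == "A") / len(groups)
q = 1 - p
expected_cross = 2 * p * q
actual_cross = sum(1 for u, v in edges if groups[u] != groups[v]) / len(edges)

# Far fewer cross-group edges than expected -> evidence of homophily.
print(f"expected cross-group fraction: {expected_cross:.2f}")
print(f"actual cross-group fraction:   {actual_cross:.2f}")
```

In this toy network the actual cross-group fraction (about 0.17) falls well below the 0.50 random-mixing baseline, which is exactly the signature of a homophilous network.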
This principle extends to social networking services such as Facebook. Scroll through your list of friends on Facebook, and you will very likely see people who are very much like you — people who live in the same city as you, people who went to the same university as you, people who work where you work, and so on.
Similarly, in your News Feed, you will likely see your friends like and share content that resembles the content you like and share. This is due, at least in part, to the fact that your friends share many of your interests and opinions. This is an important and concerning aspect of social networks because it often exposes users primarily to content that affirms the beliefs and opinions they already hold.
Additionally, some users may be exposed to a disproportionate amount of fake news simply because it has been shared or liked by enough of their friends. Moreover, these individuals may not be exposed to content created by other sources or shared by users who hold different opinions, allowing fake news to appear unquestionably true to these users.
Confirmation Bias

The term “confirmation bias” refers to the psychological tendency of individuals to agree with information that supports their existing beliefs and reject information that contradicts those beliefs. In his farewell address, President Obama highlighted the role of confirmation bias in the propagation of fake news.
“Increasingly we become so secure in our bubbles that we start accepting only information, whether it’s true or not, that fits our opinions, instead of basing our opinions on the evidence that’s out there.” — Barack Obama
One study conducted by researchers at Stanford University found that, even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs.” Other studies have demonstrated that humans experience a rush of dopamine as they process information that confirms their beliefs.
Confirmation bias is especially relevant to the issue of fake news on social networks because it allows individuals to have their beliefs reaffirmed by a potentially large number of other individuals in their network.
I’m from South Carolina, but I go to college in New York. The news articles and other content shared by my friends in South Carolina are usually very different from the news articles and content shared by my friends in New York, so I usually end up with a decent variety of opinion in my News Feed.
But this isn’t always the case. Often, as a consequence of the principle of homophily, the majority of the opinions users see in their News Feeds are opinions they hold themselves. As a result, these users constantly have their ideas and opinions reinforced by their friends. Differing opinions become severely underrepresented.
Elizabeth Kolbert illustrates the effects of echo chambers and misinformation:
If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views.
This is dangerous. Not only does this allow fake news and baseless opinions to spread through networks; it also allows us to isolate ourselves from those who disagree with us. It becomes too easy to silence people and push them out of our networks instead of trying to understand where they’re coming from and see things from their point of view. Without people to disagree with us, we become too comfortable with our own opinions.
I’ve laid out a few reasons why it’s easy for misinformation to spread through networks. So how can social networking services address these issues?
Allow Users to Flag Content as Potentially Fake
Perhaps an obvious approach to address fake news on Facebook and other social networking services would be to allow users to flag content as potentially fake. Similar to other means of reporting inappropriate content that have already been implemented by Facebook, this feature would allow users to raise issue with content that seems questionable.
After enough reports have been filed, a moderator — a Facebook employee trained to identify fake news — would need to review the content and approve its removal should it actually turn out to contain misinformation.
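A minimal sketch of this report-then-review flow might look like the following. The five-report threshold, the data shapes, and the function names are all assumptions made for illustration, not a description of Facebook’s actual mechanics:

```python
from collections import defaultdict

REPORT_THRESHOLD = 5  # assumed value, for illustration only

reports = defaultdict(set)   # post_id -> ids of users who reported it
review_queue = []            # posts awaiting a human moderator

def flag_as_fake(post_id, user_id):
    """Record a report; queue the post for human review once enough
    distinct users have flagged it."""
    reports[post_id].add(user_id)
    if len(reports[post_id]) >= REPORT_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)

def moderate(post_id, is_fake):
    """A trained moderator approves removal only if the content
    actually contains misinformation."""
    review_queue.remove(post_id)
    return "removed" if is_fake else "kept"

for n in range(5):
    flag_as_fake("post-123", f"user-{n}")
```

Counting distinct reporters rather than raw reports keeps any single user from pushing a post into the review queue on their own.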
It’s worth pointing out that the removal of fake news not only prevents users from seeing misinformation in their News Feeds, it also makes writing fake news a less lucrative profession — fake news becomes far less profitable if no one can see it or its advertisements.
Some may use the structure of social networks and the principle of homophily to argue that the individuals who would flag content as fake are exactly the individuals who would never see that content in the first place. Although friends in social networks are certainly quite similar to one another, networks do sometimes connect communities that have comparatively little in common.
Consider your friends. There is a decent chance you are friends with someone from another country. By extension, you are also connected with your friend’s mother, her friends, their friends, and so on — even if you are not actually friends with these individuals on Facebook. So occasionally you may, for example, come across content that was shared by your friend’s mother’s friend who lives in another country and speaks a different language.
This happens to me a lot on Facebook. I couldn’t tell you how many photos of weddings, vacations, and graduations I’ve seen posted by people I’ve never met. I don’t know these people and we may share relatively little in common apart from a mutual friend on Facebook. Yet, I see their photos, posts, and the content they share.
My point is that it isn’t too difficult for information, including misinformation, to spread through a network and even across networks. This illustrates the possibility that some amount of misinformation would indeed become visible to users who may be more inclined to flag it as fake.
Use Algorithms and Human Moderators to Assess Content
Another potential response to fake news would involve implementing algorithms to assess the credibility of content on Facebook. Machine learning algorithms could analyze articles and warn users of potentially untrustworthy content. These kinds of algorithms are already used by email providers to filter dangerous emails from users’ inboxes.
In the context of fake news, algorithms could use various factors including URL structure and domain names to make a decision about the validity of the content. If the algorithm decides that a particular article constitutes fake news, the content will be automatically hidden and placed in a queue for review by a human moderator. If the moderator determines the content is indeed misinformation, he or she will remove the content. If, on the other hand, the moderator concludes that the algorithm was mistaken, he or she will restore the visibility of the content.
The algorithm would be designed to flag only content that is almost unmistakably fake. It would assess articles solely on the basis of the factors listed above, rather than flagging content for expressing particular viewpoints.
The role of the human moderators in this situation would be to ensure the algorithm has not made a mistake. This would also eliminate the possibility of Facebook’s moderators removing content on the basis of their personal beliefs and opinions.
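As a sketch of this two-stage pipeline, the toy scorer below uses only URL and domain signals, as described above. The suspicious patterns, lookalike domains, and function names are invented for illustration; a real system would rely on a trained model rather than hand-written rules:

```python
import re

# Invented examples of lookalike domains and clickbait wording; a
# production classifier would learn such signals from labeled data.
SUSPICIOUS_TLDS = {".xyz.co", ".com.co"}
SUSPICIOUS_WORDS = re.compile(r"(shocking|you-wont-believe)", re.I)

def looks_fake(url: str) -> bool:
    """Rule-based stand-in for the classification step."""
    domain = url.split("//")[-1].split("/")[0]
    if any(domain.endswith(tld) for tld in SUSPICIOUS_TLDS):
        return True
    return bool(SUSPICIOUS_WORDS.search(url))

moderation_queue = []

def assess(post):
    """Hide flagged content and queue it for a human moderator."""
    if looks_fake(post["url"]):
        post["hidden"] = True
        moderation_queue.append(post)

def review(post, is_fake):
    """A moderator confirms removal or restores visibility."""
    moderation_queue.remove(post)
    if not is_fake:
        post["hidden"] = False

post = {"url": "http://abcnews.com.co/shocking-report", "hidden": False}
assess(post)  # hidden and queued until a moderator reviews it
```

Because flagged content is only hidden, not deleted, a moderator’s decision remains the final word in both directions.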
Promote Diversity in Content
In the 1927 Supreme Court decision Whitney v. California, Justice Louis Brandeis said, in effect, that the best response to bad speech is more speech. Similarly, the concept of “the marketplace of ideas” claims that the truth will emerge through the competition of ideas. Taken together, these ideas point to a means of addressing fake news through the promotion of diversity in content on social networks.
Facebook could, for example, promote content that is trending across the network as a whole and has not been flagged as suspicious — even if it is not trending in a user’s immediate network of friends. Related content from different media outlets could also be included as suggested content alongside content that is shared by friends. This would foster a greater diversity of opinion on each user’s News Feed, allowing users to make decisions themselves about what to believe with more information at their fingertips.
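The interleaving described above could be sketched like this; the data shapes, the mix-in rate, and the `diversified_feed` name are assumptions for illustration:

```python
def diversified_feed(friend_posts, trending_posts, mix_every=3):
    """Interleave one network-wide trending story after every
    `mix_every` friend-shared posts, skipping flagged stories."""
    safe = [p for p in trending_posts if not p.get("flagged")]
    feed, ti = [], 0
    for i, post in enumerate(friend_posts, start=1):
        feed.append(post)
        if i % mix_every == 0 and ti < len(safe):
            feed.append(safe[ti])
            ti += 1
    return feed

friends = [{"id": f"f{n}"} for n in range(6)]
trending = [{"id": "t1"}, {"id": "t2", "flagged": True}, {"id": "t3"}]
feed = diversified_feed(friends, trending)
```

Stories flagged as suspicious are excluded before mixing, so the diversity mechanism does not itself become a new channel for fake news.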
Social networking services have the potential to become stratified by political affiliation and other defining characteristics. Without immediate access to diverse content that challenges their opinions and convictions, users in these stratified networks may never encounter journalism that contradicts dubious claims, leaving them vulnerable to misinformation.
Due to human psychology and the structure of social networks themselves, misinformation is able to spread through networks rather efficiently. We’ve seen how social networks tend to be segmented into clusters of individuals who share similar opinions and identities and how humans are naturally very responsive to ideas that support their own beliefs. We’ve also seen that some users’ social networks contain very little diversity in content and opinion.
Each of these factors contributes to the spread of fake news and misinformation on Facebook and other social networking services. There are a number of steps these companies could take to prevent fake news from being promoted on their networks.
The suggestions I’ve laid out probably wouldn’t be able to fix the problem of fake news altogether — I understand that. They are, however, a few small steps in the right direction.