Why we lie to ourselves and others about misinformation

Brian Southwell
Trust, Media and Democracy
5 min read · Mar 28, 2018

Or, why we underreport our spread of misinformation — just as we do about other behaviors like smoking, drinking, and unprotected sex

A 2016 Pew Research Center poll suggests that even as Americans tend to view misinformation as an important problem for society, most do not see themselves as responsible for its spread. A majority of those polled were confident they could detect false information if they saw it, and fewer than a quarter believed they had ever shared a fabricated news story with others.

Despite this pattern, new research suggests that false information not only spreads online via social media but also tends to be shared more readily than accurate information.

Perhaps we are more vulnerable than we think, or at least than we report. Perhaps even if only a few of us are spreading misinformation, the sharing that occurs can nonetheless lead to widespread exposure. Perhaps we even share more misinformation than we admit or know. Perhaps this is a bit like self-reporting of smoking, drinking, or unprotected sex: we aspire to an ideal that we don’t always meet.

Finding new ways to help citizens curb their tendency to spread misinformation is the focus of a recently announced Rita Allen Foundation initiative. Generating ideas for remedies to the problem will require contributions from a wide range of researchers, practitioners, and citizens, as the circumstances that led to our current dilemma have many authors.

Why are we in this situation? One important explanation for the apparent discrepancy between general worry about misinformation affecting society and what people say affects them personally lies in a phenomenon that social scientists have documented for decades.

Researchers who measure public opinion or investigate the influence of media content on people’s beliefs have long been familiar with something known as the third-person effect: a cognitive distinction between what people think happens to other people when they encounter media content, such as an erroneous online report, and what they report happens to themselves when encountering similar content.

Some of that pattern might be attributable to ego defense, some to overestimation of effects on others, and some to an inability to detect subtle effects on oneself. Regardless, the tendency to view media content as powerful in shaping the minds of others but not one’s own has been an important observation for communication researchers.

What do we know about our interaction with demonstrably false information? As colleagues and I outline in a new book, Misinformation and Mass Audiences, the balance of recent empirical evidence suggests that we tend to accept information — and misinformation — at face value and only subsequently tag it as true or false. That process, described long ago by the philosopher Baruch Spinoza, leaves the door open for fatigue, emotion, and even distraction to interfere with the careful scrutiny and rejection we seem to think we always apply. That door, in turn, does not need to be open for long to allow us to click and share a story online.

Aside from what might be our underreported openness to false information, why should we worry if only a minority of people report sharing false information with other people?

Even if most of us were consistently perceptive and accurate in assessing what is false, and only a minority of people ever shared false information online, social network research suggests that some people, acting as network hubs, could still drive the spread of misinformation because of their social position.

Even if only one in 10 people willingly spreads false information, the social networks of that 10 percent could nonetheless be exposed to noise and falsehood via social media and everyday conversation, especially if those in the 10 percent are connected to many other people.
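To make that arithmetic concrete, here is a minimal back-of-the-envelope simulation, not from the article itself; the network size, the average of roughly 20 ties per person, and the random mixing of ties are all illustrative assumptions. It sketches how a 10 percent sharing minority can expose most of a network:

```python
# Illustrative sketch: a small sharing minority can expose most of a
# network. All parameters are assumptions chosen for illustration.
import random

random.seed(0)

N = 10_000          # people in the network (assumed)
MEAN_DEGREE = 20    # average number of social ties per person (assumed)
SHARER_RATE = 0.10  # "one in 10 people willingly spreads false information"

# Build a simple random network: each person initiates ties to
# MEAN_DEGREE // 2 random others, giving ~MEAN_DEGREE ties on average.
ties = {i: set() for i in range(N)}
for i in range(N):
    for j in random.sample(range(N), MEAN_DEGREE // 2):
        if i != j:
            ties[i].add(j)
            ties[j].add(i)

# Pick the 10 percent who share a false story.
sharers = set(random.sample(range(N), int(N * SHARER_RATE)))

# Everyone tied to at least one sharer is exposed to the story.
exposed = set()
for s in sharers:
    exposed.update(ties[s])

print(f"sharers: {len(sharers) / N:.0%} of the network")
print(f"exposed: {len(exposed) / N:.0%} of the network")
```

Under these assumptions, a person with about 20 ties avoids every sharer with probability of roughly 0.9^20, or about 12 percent, so nearly nine in ten people end up exposed even though only one in ten shares. Real networks are more clustered than this random sketch, and well-connected hubs would tilt the numbers further.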

What if we also share more misinformation than we realize? Importantly, people value information not just for its truth value but also as relationship currency.

Research suggests that people have a variety of motivations for everyday conversation. A person might choose to forward a meme that contains false information just because the political sentiment expressed aligns with a group identity the person wants to express. In other circumstances, people forward faked photographs along with sarcastic comments or share outlandish stories while criticizing the content, but in doing so they nonetheless also spread the original content, in a sense.

Imagine if I were to comment on a social media platform about a debunked story that claimed dandelion root can cure cancer. (The Independent reported on the spread of such a false story in 2017.) Even if I used the story to make a point about fraud, I nonetheless also would raise the salience of dandelion root as a possible medical treatment for an audience of my peers. Even using the example here introduces the topic and may encourage you to turn to a search engine to learn more. Colleagues and I found something similar in studying search engine use following news coverage of controversy about mammography: simply calling attention to a topic can encourage people to look for information online, some of which might be inaccurate or false.

What does all of this suggest about misinformation as a social concern? Are we correct in worrying about it? Or are we correct in saying only other people are the problem?

What seems most likely is that public opinion data on misinformation perceptions reveal our humanity: our ego defensiveness as well as our interest in social connection, our vulnerability to cognitive activation by attention-demanding stimuli but also our curiosity about the world and tendency to search for information.

Attempting to sanitize our information environments and eliminate falsehoods is an unrealistic quest, and one full of pitfalls, including the unintended consequence of opening the door to censorship. The possibility of misinformation, in other words, is perhaps a built-in consequence of our media system in the U.S. and elsewhere.

At the same time, collectively we also are responsible, in part, when demonstrably false information pushes other useful information off the public stage. Accepting the risk of misinformation’s appearance on our screens doesn’t mean we can’t also seek ways to combat its prevalence, undermine its credibility, and together invest in information sources dedicated to presenting accurate information that usefully predicts future outcomes rather than noise for the sake of drawing eyeballs.

Brian Southwell directs the Science in the Public Sphere program at RTI International. He also teaches at Duke University and the University of North Carolina at Chapel Hill and hosts a public radio show called The Measure of Everyday Life for WNCU (Durham, NC). Follow him on Twitter @BrianSouthwell.
