Spread of Misinformation (Solutions Ep. 2)

Grant Erwin
Published in Solutions
18 min read · Feb 17, 2021

(NEW) Join our Discord server: https://discord.io/solutions

Our second episode of Solutions is about misinformation: what is it, why does it spread, who spreads it, and what can be done about it? The problem of misinformation is perhaps more pressing now than it ever has been, as social media and divisive politics have reshaped the way that we interact with the world around us.

As was the case with our previous installment, this article works both as show notes for the podcast and as a standalone article. In this episode, we modified our research strategy, spending more time collaborating with each other during our preparation. A topic such as this requires a lot of diligence on our part, and we believe that this improved process allowed for a more thorough approach.

Listen: Spotify | YouTube | Apple Podcasts | RSS

Intro music: “Eternal Bonds” by Sha Nova. Check them out on Spotify and Bandcamp!

Timestamps

0:00 Introduction
1:04 Quick note about our modified research strategy
2:02 Defining “misinformation”
3:09 Are fact-checkers a good resource?
8:11 Corporations pushing misguided studies
20:18 “Independent” media
23:31 Which online sources can we trust?
26:22 Memes on social media and their origins
31:28 News organizations appealing to political demographics
34:03 Combating feelings of powerlessness
41:38 In-group preferences, effective communication strategies
45:08 Social media algorithms, echo chambers
51:04 Potential solution: decentralized social networks?
54:46 Section 230 and potential amendments
1:03:50 Conclusion: What can you do?

What is misinformation?

In today’s hyper-partisan political landscape, one man’s misinformation is another man’s political reality. Even producing an episode on this subject invites contention from those who may not agree with our conclusions. That contention often comes from a feeling that “misinformation” is a vague, politically loaded term with no real definition. This is a valid concern, as the term has certainly been used for political ends in some situations. However, social media echo chambers and spikes in political polarization have recently allowed content to spread that contains verifiably, unquestionably false information.

What do we mean when we say “verifiably, unquestionably false”? In order to do accurate research, we rely on a set of critical thinking methods that we will be exploring throughout this article, such as checking original sources, verifying specific details, looking for emotionally loaded rhetoric, and much more. Fact-checking organizations do all of this on a regular basis, and are an excellent resource for verifying claims.

But fact checkers are biased just like the rest of us, many will say. This is absolutely correct; no one is immune from cognitive bias. But compared to other media sources, fact checkers have more of an incentive to be correct, and a strong track record of doing so. A study published in the Journal of Political Marketing analyzed claims that were reported on by multiple fact checkers and found that they overwhelmingly agree on which ones are true or false. This isn’t to say that there are no issues with their methods of reporting, however. A subsequent study from Stanford University found that there is little overlap between different fact checkers in the statements that they cover, and that there are often minor disagreements about the degree to which some statements are partially true or misleading. So fact checkers must be held to scrutiny just like everything else. Fact checkers know this — that’s why they detail all of their sources and reasoning in the articles they publish, which are freely available for anyone to look through.

The fact is, if major fact-checking sites such as Snopes and PolitiFact were getting things wrong a lot of the time, we would start to see competitors spring up to do mass debunks of their articles. After all, there’s a huge market of people who distrust these websites for political reasons, and a publication like that would fulfill the demand. The fact that this hasn’t happened is further evidence that fact checkers tend to be a good resource for truth. As with any media you consume, always engage with it critically. Don’t fall back on vague statements of general distrust or baseless theories about where information might be coming from, but look through these articles and examine the logic for yourself. In our experience, fact checkers are remarkably reliable and objective in their research, deriving their analysis from official, publicly available information and using sound reasoning to arrive at conclusions.

How corporations spread misinformation, and why

Part of the problem is rooted in how corporations spread misinformation to protect their profits. Many of the subjects where misinformation spreads most prominently, such as climate change, have a history of corporate-sanctioned deception.

A primary example of this is the misinformation campaign led by ExxonMobil, which, according to a report by InsideClimate News, knew as early as 1977 that burning fossil fuels would lead to an increase in global temperature. In 1989, however, once climate change was becoming a more pressing public concern, the company created the Global Climate Coalition, which worked to sow doubt about the scientific consensus surrounding the issue. This was done through the propagation of rhetorical fallacies intended to obfuscate the consensus of climate science. The fallacies used by climate change deniers tend to consist of a mixture of cherry-picking data, relying on logical fallacies, relying on fake experts, and promoting conspiracy. For example, someone may try to disprove climate change by pointing out that a particular winter was especially cold, or that heating cycles have occurred before in our planet’s history. Both of these examples ignore the larger trends in the data, which show that average global temperatures have been increasing, and that we have never had a heating cycle as rapid as the one we find ourselves in now.

People who spread this misinformation also rely on the testimony of false experts. For example, an internet petition of “30,000 scientists” urged the US government to reject any kind of global warming protocols on the grounds that carbon output is not only harmless, but possibly good for the planet. There was just one small issue: the overwhelming majority of the signatories were not climatologists. While some of them are indeed scientists, many are not experts in the relevant subject matter. By flashing something that sounds official, like “scientists,” without getting into the details of who those scientists are, misinformation can spread under an official-sounding guise.
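
To make the cherry-picking fallacy concrete, here is a toy simulation in Python. The data is synthetic (a steady upward trend plus random noise), not real climate measurements; the point is simply that in any noisy upward series, you can almost always find a short window that appears to trend downward.

```python
# Toy illustration of cherry-picking: synthetic "temperatures" with a
# steady upward trend plus noise. Short windows can still trend downward.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1950, 2021)
# 0.02 degrees/year of warming plus year-to-year noise (synthetic data).
temps = 0.02 * (years - 1950) + rng.normal(0, 0.15, len(years))

def slope(x, y):
    """Least-squares trend in degrees per year."""
    return np.polyfit(x, y, 1)[0]

print(f"Full-record trend: {slope(years, temps):+.3f} deg/yr")

# Scan every 5-year window and report the most "convenient" one.
windows = [(y, slope(years[i:i + 5], temps[i:i + 5]))
           for i, y in enumerate(years[:-4])]
start, s = min(windows, key=lambda w: w[1])
print(f"Cherry-picked {start}-{start + 4}: {s:+.3f} deg/yr")
```

The full record shows warming, yet someone quoting only the most favorable five-year slice can truthfully report a cooling trend; the same trick works on real data.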

Many of these techniques come from the tobacco industry, which has spent a staggering amount of time and money trying to convince the world that cigarettes don’t cause lung illness. Tobacco companies have been fighting the scientific consensus for as long as there has been a consensus to fight against. An article from The Atlantic, “Contesting the Science of Smoking”, does a very good job of summarizing the tobacco industry’s attempts to obfuscate reality. As lung cancer was becoming the deadliest cancer in the United States during the early fifties, the heads of tobacco companies were working out how to counter the bad PR that was heading their way. The industry formed its own “independent” committee that would argue on its behalf against unfavorable science. This should sound familiar, because it is exactly what companies like ExxonMobil were doing in the 1980s. By 1964, the Surgeon General had put out a report linking smoking tobacco to cancer after examining more than 7,000 published articles on the subject. Companies like Philip Morris have since done everything in their power to promote the idea that some of their tobacco products are less harmful, much of which is supported by very bad science (for example, one of their studies, which concluded that switching to supposedly less harmful cigarettes led to lower nicotine levels, left out the control group that didn’t switch). Once the control group was accounted for, there was no significant difference between the types of cigarettes one consumed.

While that specific study was underwritten by Philip Morris, the industry has also outsourced this kind of work to for-hire scientific firms. Gradient Corp is one such firm, working on behalf of chemical producers, often to stall regulations that would affect its clients’ profits. Those clients are often adversely affected by rules like the air quality standards issued under the Clean Air Act in 1997. When not working for tobacco companies, Gradient Corp works to stall regulations on air polluters, arsenic, asbestos, and more by publishing its own studies in friendly journals such as Critical Reviews in Toxicology and Regulatory Toxicology and Pharmacology. It also relies on the same rhetorical techniques as those who engage in climate change denialism. The similarity of tactics is not accidental, as their objectives are fundamentally the same.

The goal of all of these corporations is to stall regulations that would affect their profits. By spreading enough misinformation to cast doubt on the scientific consensus around climate change, tobacco’s link to lung cancer, or any number of scientific truths, corporations are able to secure more profit for themselves at the expense of others. The solution is to advocate for strengthening agencies like the EPA and to support politicians who do not align themselves with industry interests. It is also important to examine who is funding a given study and to assess the material interest of those funders. A study funded by an industry interest isn’t automatically invalid, but the funding should make one more critical of the information.

Misinformation on social media: understanding its origins and causes

The internet revolution has democratized speech and lowered the barrier to entry for ordinary people to share ideas. This is a great thing, but it has come with a major caveat: it’s now easier than ever to pose as a reliable source and spread falsehoods in a convincing manner. According to a study published in the journal Nature Human Behaviour, Facebook referred users to misinformation 15% of the time when browsing the site in the lead-up to the 2016 election. Since then, under public pressure, social media sites have gotten better about handling their content, but the problem persists.

The underlying motive behind these small fake news publications, just like the large corporations mentioned earlier, is often monetary profit. Natural News, a site notorious for frequently publishing debunked and misleading information, claimed to be a “truly independent perspective” for discovering alternative medicine products, but it was later revealed that the owner had been financially involved with products reviewed on the site.

Within the last decade, we’ve seen an even more damaging type of misinformation dominate the internet to catastrophic effect: misinformation stemming from political interests. Much of it comes in the form of personally targeted memes intended to confuse and polarize people, some of which have been traced back to a troll farm calling itself the Internet Research Agency, operating in Russia. Its posts reached millions of people, and Facebook eventually removed over 100 accounts traced back to the group for being unverified and spreading fake news. As recently confirmed in a bipartisan Senate report, Russia exerted substantial influence over online narratives during the 2016 election. This is incredibly damaging to our democracy, and we need social media sites to be vigilant about tracking down troll farms like this going forward.

When sharing a piece of information online, it is crucial to be aware of the source it’s coming from and how reliable that source is. Our recommended way to check this is through Media Bias Fact Check, a nonpartisan organization that documents the political bias and factual accuracy of thousands of websites. You can look up anything from well-known publications to obscure alternative blogs, and it will present you with quickly digestible ratings, a detailed history, and a list of recent fact-checks of their content. For breaking news stories, there’s also Google’s Fact Check Explorer, which compiles claims that have been fact-checked by USA Today, PolitiFact, Snopes, and hundreds of other organizations. In addition to using tools like these, it’s also important to critically examine the details of any website you’re on. Who owns this site? What sources of information are they using? Just because a post is long, detailed, and seemingly packed with evidence doesn’t mean it’s true. This About.Us blog post contains several guidelines for discerning whether a website is trustworthy, including checking for verifiable details on a bio page. There’s no single silver bullet for determining the reliability of a source, but you can combine these tools with critical thinking skills to form well-rounded conclusions.
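
For anyone who wants to build this kind of lookup into their own workflow, Fact Check Explorer is backed by Google’s Fact Check Tools API. Below is a minimal sketch in Python; it assumes you have requested a free API key from the Google Cloud console, and the field names follow the API’s v1alpha1 schema, so treat this as an illustration rather than a definitive client.

```python
# Minimal sketch: query Google's Fact Check Tools API (the service behind
# Fact Check Explorer) for reviews of a claim. The API key is a placeholder
# you must replace, and the v1alpha1 field names may change over time.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: get a free key from Google Cloud
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(query: str, limit: int = 5) -> None:
    """Print fact checks matching a claim, with publisher and rating."""
    params = {"query": query, "pageSize": limit, "key": API_KEY}
    response = requests.get(ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    for claim in response.json().get("claims", []):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            print(f'Claim:  {claim.get("text")}')
            print(f'Rating: {review.get("textualRating")} ({publisher})')
            print(f'Source: {review.get("url")}\n')

search_fact_checks("5G towers spread coronavirus")
```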

The social media business model: a market failure

It’s great that we have access to tools and guidelines that allow us to determine the reliability of news sources, but this doesn’t attack the heart of the problem. Social media sites currently rely on one thing to stay afloat: user attention. A recent Netflix documentary called The Social Dilemma explores this in great detail. These sites make their money from advertising, and the longer they can glue your eyes to the screen, the more ads they can show you. As shown in a 2018 MIT study, people are more likely to click on and share false news than true stories. It’s easier to create highly emotional and attention-grabbing narratives when you’re making things up and/or distorting the truth, and social media, much like traditional media, thrives on this attention. So while many sites are currently making an effort to curb misinformation by displaying fact checks and banning accounts that continuously deceive people, the profitability of this type of content makes it unlikely that they will continue to crack down on this for long. It could very well be a PR stunt that will fade away once this topic is no longer in the spotlight.

But does social media need ads to survive? One potential alternative to the privacy-invasive, emotionally manipulative, fake-news-prone ad-based model that we have today might be a subscription model. Consumers would pay a flat price to access the website, and there would be no ads. This removes the incentive to keep users engaged for long periods of time, and potentially adds competitive incentives for sites to create algorithms that filter out misinformation instead of perpetuating it (maybe they could even let you easily sort by recent posts, like in the good ol’ days). This idea sounds like a no-brainer, especially for those who are privacy conscious or prefer an uncluttered, ad-free experience. However, having to pay money would still be a tough sell for many users. A 2018 online survey conducted by Recode found that only 23% of Facebook users would prefer a subscription-based version of the site to the ad-based version. This is consistent with the current state of the market: all of the dominant social media sites are funded by ads, not subscriptions, despite many users being wary of the way sites track them and sell their data. Apple is attempting to lead the change by forcing apps to disclose all of the ways that they track users (an action that sparked an ongoing feud between them and Facebook); however, the dominant mobile operating system is still owned by what is essentially an advertising company, so Apple’s power here only goes so far.
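
To get a feel for why that 23% figure isn’t surprising, here is a rough back-of-envelope calculation. The inputs are approximations of Facebook’s publicly reported 2018 figures, used purely for illustration; treat them as ballpark assumptions rather than precise accounting.

```python
# Back-of-envelope: what would an ad-free subscription have to cost to
# replace ad revenue? Inputs are rough approximations of Facebook's
# reported 2018 results, included here only for illustration.
annual_ad_revenue = 55e9      # ~$55B in yearly ad revenue (approximate)
monthly_active_users = 2.3e9  # ~2.3B monthly active users (approximate)

per_user_year = annual_ad_revenue / monthly_active_users
print(f"Global average: ${per_user_year:.2f}/user/year "
      f"(about ${per_user_year / 12:.2f}/month)")

# Ad rates are far higher in wealthy markets; a single flat price would
# undercharge the users who currently generate the most revenue.
us_canada_arpu = 112.0  # ~$112/user/year in the US & Canada (approximate)
print(f"US/Canada average: ${us_canada_arpu:.2f}/user/year "
      f"(about ${us_canada_arpu / 12:.2f}/month)")
```

A couple of dollars a month sounds trivial, but a price that actually replaced per-user ad revenue in the US would be closer to ten dollars a month, and most users have shown little appetite for paying anything at all.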

So, since companies don’t have the financial motivation to get rid of ads, the next logical question is whether we could force their hand through legislation. During our initial research, it seemed unlikely that any legislation seeking to ban or limit digital advertising would survive legal scrutiny. However, between recording the podcast audio and writing this article, Maryland has taken steps toward approving the first U.S. law that places a tax on digital advertising. It hasn’t passed both chambers of the state’s General Assembly yet, and it’s expected to face court challenges, but if passed, it could force tech companies to finally re-evaluate their business models. Hopefully they’ll still be able to provide us with valuable services in the process, even if it requires us to chip in a little cash every now and then.

For those who are (understandably) doubtful that Big Tech will bring meaningful change, some tech enthusiasts are advocating for a different idea entirely: open-source, decentralized social media. Projects like Mastodon, PeerTube, Element, and the ActivityPub protocol that powers many of them are attempting to place power back into the hands of users instead of large corporations. Rather than everyone using the same website to interact, communication takes place between several different apps using open protocols. It’s similar to technologies we’re already familiar with — email messages can be sent and received using any email server, SMS messages can be exchanged using any phone, RSS feeds can be viewed in any feed reader — and in the same vein, social media posts can be sent and read using any app that speaks the same open protocol.

Almost all of the apps in this space are open source, meaning that you can inspect the code to make sure there aren’t any manipulative algorithms sorting the content or tracking your data. However, this strength may also be a weakness: open-source applications are notoriously difficult to maintain due to their reliance on donations. Social media sites have millions of daily active users, and handling all of that traffic and data isn’t cheap: Facebook’s operational costs in 2019 amounted to $46.71 billion. Much of this cost would be spread out if social media were decentralized, but the data and labor costs would still be a lot for any app to handle, and it remains to be seen whether subscription models would be effective inside a decentralized ecosystem. Still, this is a very exciting innovation that, if made sustainable, could eliminate the toxic algorithms that are largely responsible for spreading misinformation.
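
To give a concrete taste of that openness, Mastodon servers expose a documented public REST API: anyone can read a server’s public timeline with a few lines of code and no special access. Here is a minimal Python sketch; mastodon.social is just one example instance, and any Mastodon server could be substituted.

```python
# Minimal sketch: read the public timeline of a Mastodon instance through
# its open REST API. No API key or account is required for public data.
import requests

INSTANCE = "https://mastodon.social"  # example instance; any server works

def fetch_public_timeline(limit: int = 5) -> None:
    """Print recent posts from the instance's local public timeline."""
    resp = requests.get(
        f"{INSTANCE}/api/v1/timelines/public",
        params={"local": "true", "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    for status in resp.json():
        author = status["account"]["acct"]
        print(f'@{author} ({status["created_at"]}): {status["url"]}')

fetch_public_timeline()
```

Because the protocol and the API are open, a competing client or server can interoperate instead of locking users in, which is precisely the property that makes these networks resistant to a single company’s algorithmic choices.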

Misinformation’s psychological appeal and how we can counteract it

In addition to analyzing how businesses and websites spread misinformation, it’s important to understand why it is that so many of us fall for it time and time again. Psychologically speaking, there are two primary mechanisms that allow people to be deceived by misinformation: denialism and in-group psychology.

Denialism, in a psychological context, is when one denies certain realities because they create stress. Sara and Jack Gorman, in their Psychology Today article “Climate Change Denialism”, argue that something like climate change represents a terrifying upheaval of our lives, both in terms of the actions that must be taken to curb its effects and in terms of what those effects would mean for humanity as a whole. Because of this, many people are motivated not to acknowledge it despite the overwhelming evidence. The solution the Gormans present is to encourage small lifestyle changes, not just to bring about small material improvements, but to help people acknowledge the stressor being denied in the first place. For example, it might be beneficial for everyone to carpool or walk to work part of the week, rather than attempting to completely cut motor vehicles out of their lives. Similarly, engaging in a “Meatless Monday” or another attempt to curb your meat intake is far easier than becoming a vegan overnight. Not only do these types of actions allow people to act against what they may be in denial over, but they also lay the groundwork for larger grassroots movements. If many people are engaging in small acts like carpooling or meat reduction in order to address climate change, then those people are primed to organize against climate change in a way they weren’t before. While addressing climate change will require large structural changes to society, these small changes provide a modest material benefit as well as a larger psychological one.

In-group psychology is also incredibly important for understanding how misinformation spreads. This is not just a product of our currently partisan political landscape; it has always been the case. A 2015 meta-study found that conservatives who knew more about energy policy and politics, and who scored higher on measures of cognitive ability, were more likely to believe in climate conspiracies and engage in climate change denialism. The same bias applied to liberals when faced with the consensus on nuclear waste management or open-carry laws. Those who believe misinformation are often neither stupid nor uninformed; they are being affected by in-group bias. Ideologies are informed by perceptions of reality, and our ideologies are a form of personal identification. A conservative or a leftist must believe certain things about the world in order to belong to their respective ideological group. When those perceptions are challenged, it can feel like a personal attack on one’s identity, and the response is then not a logical one but an emotional one. To combat this in our daily lives, it is recommended to engage with others in a non-combative way. Another part of dealing with this issue comes from recognizing your own cognitive biases, which we all have. Keep an open mind and engage in discussions with people you disagree with. It is important that these discussions also involve an exchange of sources that may challenge your worldview, which you can then assess with a keen eye.

Section 230: Does it help or harm?

Section 230 is a piece of legislation that is often caught in the middle of discussions of misinformation online, and it consequently has a lot of misinformation surrounding it. Often referred to as the backbone of the internet, it’s a law that protects internet companies from being held legally liable for what users post on their sites. This protection also extends to users who share content that turns out to be defamatory or otherwise illegal; essentially, only the original poster is held liable for their own posts. The law protects both social media platforms and the users who share content on them, and it is what has allowed social media and the internet at large to exist as we understand them. The open nature of the internet is partly what enables misinformation to flourish, along with other problematic and even illegal content, so Section 230 has become a prime target for politicians looking to resolve the issue. However, as we’ve seen from recent proposed and enacted amendments to 230, restricting the free flow of information on the internet can have devastating side effects.

One example of legislation that altered 230 is the FOSTA-SESTA act of 2018. In short, this law makes it illegal for a website to knowingly assist in the facilitation of sex trafficking, and it removes 230 protections from sites that promote or allow this kind of content on their platforms. While the aims of the law are noble, its language is vague enough to adversely affect consensual sex workers, as well as those who provide services and support to victims of sex trafficking. Much of our concern regarding 230 has to do with exactly these kinds of unintended consequences of amendments to the law.

The EARN IT act of 2020, introduced by Lindsey Graham with bipartisan support, would remove 230 protections from any website that fails to comply with a set of guidelines centered around preventing child exploitation, established by a 19-member government committee headed by the Attorney General. The problem is that, in order to fulfill these requirements, websites would have to allow this committee to inspect any communication that occurs on their platforms, fundamentally undermining the use of end-to-end encryption. This should raise concerns for all internet users, as the last thing anyone ought to want is for the government to have more ability to read citizens’ private messages.

One proposed piece of legislation that seems less objectionable is the Protecting Americans from Dangerous Algorithms Act, which would remove 230 protections from sites that “…used [algorithms] to amplify or recommend content directly relevant to a case involving interference with civil rights… neglect to prevent interference with civil rights… and in cases involving acts of international terrorism”. The bill is being introduced by Congress members Tom Malinowski and Anna Eshoo, and it is also supported by the ADL. It would remove liability protections from sites that promote hateful or radicalizing content through sorting algorithms. Given that algorithms are often at the root of the misinformation problem, and that they are not a necessary function of social media, this seems like the least problematic amendment we’ve come across so far. However, we do have some concerns about entirely removing liability protections from sites on these grounds, and it is unclear how the law would be implemented and what its downstream effects would be.

Conclusion: What can you do?

  • Do your part in making sure you aren’t accidentally consuming or spreading fake news. Look at fact checks, analyze sources, and be aware of inflammatory headlines. Most importantly, always be willing to question your current beliefs and seek the most up-to-date information.
  • Follow (and donate to!) fact-checking organizations like Snopes, whose journalistic work is crucial for effectively navigating information on the internet.
  • Try to avoid being combative in dealing with people who are spreading false information. Understand their concerns, offer good sources in response, have a dialogue, and encourage the other person to draw their own conclusions.
  • Pressure the federal government to provide more funding and power to regulatory agencies that can crack down on private firms that spread misinformation to protect their profits.
  • Pressure social media sites to be more vigilant about tracking down troll accounts and bad actors, and to stop creating algorithms that produce echo chambers.
  • Try out alternative, decentralized social media networks such as Mastodon and Element, which don’t rely on ads to stay afloat and don’t use problematic algorithms to sort their content. Alternatively, you may want to consider cutting back on social media in general and getting your news directly from reputable sources. Use news readers like Feedly, Inoreader, and Apple News to consume multiple sources in one combined feed (or roll your own; see the sketch after this list).
  • Be aware of how denialism and in-group preferences play a role in distorting people’s worldviews.
  • Encourage others to make small changes in their lives to combat climate change and other issues, so that we can all feel more empowered, less fearful, and therefore less likely to fall prey to emotionally manipulative and conspiratorial content.
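
For those who would rather build the combined feed themselves than rely on a hosted reader, here is a small do-it-yourself sketch using the widely used feedparser library. The feed URLs are only examples; substitute any RSS or Atom feeds from sources you trust.

```python
# DIY "combined feed" in the spirit of Feedly/Inoreader: pull several
# RSS/Atom feeds and interleave the newest items. URLs are examples only.
import feedparser

FEEDS = [
    "https://feeds.npr.org/1001/rss.xml",
    "https://feeds.bbci.co.uk/news/rss.xml",
]

entries = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    source = parsed.feed.get("title", url)
    for entry in parsed.entries:
        entries.append((entry.get("published_parsed"), source, entry))

# Newest first; entries without a parseable date sort to the end.
entries.sort(key=lambda e: e[0] or (0,), reverse=True)
for published, source, entry in entries[:10]:
    print(f"[{source}] {entry.get('title', '(untitled)')}")
    print(f"  {entry.get('link', '')}")
```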
