How far should social media platforms go in removing disinformation?

Nadav Oren
SI 410: Ethics and Information Technology
Mar 5, 2023 · 7 min read
[Header image: https://insight.kellogg.northwestern.edu/content/uploads/_800x418_fit_center-center_82_none/Full_0921_Covid_Conspiracy.jpg?mtime=1631029936]

In December 2016, a man named Edgar Maddison Welch walked into a pizzeria a short walk from my house with a rifle and fired a shot. Thankfully, no one was hurt and he was quickly arrested, but many people in my neighborhood had feared for a while that something like this would happen. The reason Welch had driven hours from his North Carolina home to the Comet Ping Pong pizza restaurant in Washington, DC was that he had become indoctrinated by an online conspiracy theory called Pizzagate, which alleged that the restaurant's basement housed a child sex trafficking ring controlled by prominent Democratic Party officials, including then-presidential nominee Hillary Clinton. In the months after the theory emerged on fringe social media sites such as 4chan, Comet and the small businesses next to it had been receiving threats from strangers who had found the theory online and become hooked. It was only a matter of time before someone took matters into their own hands.

A few years later, QAnon, a conspiracy theory loosely associated with Pizzagate, inspired a mob of rioters to attack the US Capitol in an attempt to stop the certification of President Biden's victory in the 2020 election. In both cases, many of those radicalized by the theories had encountered them on social media, including mainstream platforms such as Facebook, Twitter, YouTube, and TikTok. A question many raised, particularly after the Capitol riot, was what responsibility social media platforms had to remove false conspiracy theories. Legally, as private companies, social media sites can allow or ban anything posted on them, but many platforms have historically been hesitant to remove politically oriented content, even when it has been proven false and harmful. After all, many of those posting about conspiracies such as QAnon may genuinely believe in them and are simply expressing their opinions, not consciously trying to deceive anyone. However, this is a very narrow view of the issue: ethical social media companies also have a responsibility to ensure that their platforms do not become hubs of misinformation and hate speech.

This is compounded by the fact that many conspiracy theories do not emerge organically on social media but are spread by organized campaigns using disinformation tactics. For example, as investigated by the Media Manipulation Casebook, the “#SaveTheChildren” movement, which attempted to subliminally push QAnon- and Pizzagate-adjacent theories to people who might not otherwise believe them, was led by an organized group of influencers who customized content to appeal to different groups, such as women and Black people. It also deliberately tried to get around content bans through steps like swapping out hashtags. A coordinated disinformation campaign like this is very different from an individual user expressing an opinion online: it has a clear agenda and is dishonest about its goals. One might point out that almost all commercial advertising uses similar tactics to be effective, and advertising has always been a large part of social media. However, unlike disinformation campaigns, even the most subliminal forms of advertising generally carry some identifying information about the company behind them, and ads for products seen as harmful, such as tobacco, are subject to government restrictions on how and where they can appear. There are no such regulations for online media manipulation campaigns, and the only ones who can stop them, if they can even detect them in the first place, are the social media platforms themselves.

Even if social media companies try to act against coordinated manipulation campaigns, the more ethically complicated questions are how exactly misinformation should be defined and what actions, if any, should be taken against users who post misinformation but genuinely believe it. On the first question, there is a fundamental epistemological problem: particularly with current events and political topics, there is no single “truth” that can be logically proven. At the same time, not all claims deserve equal weight, because some claims have far more evidence behind them than others. In my opinion, the best way to classify something as misinformation is to ask whether it has any credible evidence supporting it, as problematic as it can sometimes be to define what that means. Another issue, illustrated vividly during the COVID-19 pandemic, is that accepted information can change as new evidence emerges. At the beginning of the pandemic, US public health officials told the public not to wear masks because they did not see evidence that masks were effective against asymptomatic transmission and wanted to prioritize supplies for healthcare workers, but the guidance quickly changed as new evidence of their effectiveness emerged. That did not stop videos of public health officials giving the earlier guidance from being widely shared out of context, and even though these were real videos of authoritative figures giving what was then standard advice, once the guidance was outdated, sharing the videos without context became misinformation.

A similar dilemma arose over how social media companies treated theories about the origin of the COVID-19 virus. Originally there was no concrete evidence pointing to a lab leak or an artificial origin for the pandemic, but because Wuhan hosted one of China's main laboratories studying coronaviruses, many social media users speculated that the pandemic could have started that way. In 2021, Facebook marked this claim as disinformation and announced that posts promoting it would be removed from the site, but it backtracked several months later as intelligence assessments began treating a lab leak as at least plausible. In fact, by 2023 some US government agencies had concluded that a lab leak was the most likely origin of the pandemic, though with low confidence and without direct proof. The initial suppression of a theory that eventually turned out to be more plausible angered conservatives, who saw it as an overreach by social media companies against free speech and open debate.

The assertion that social media companies censor viewpoints for political reasons is widely believed among Americans: a 2022 Pew survey found that 77% of Americans think it is likely that social media platforms intentionally censor political viewpoints they disagree with, including 92% of Republicans and 66% of Democrats. Attempting to change this was one of the main reasons Elon Musk decided to buy Twitter in 2022. Yet far from implementing true “free speech,” Musk has repeatedly censored or interfered with content that criticizes him and his companies. He has, however, reinstated many prominent accounts that were banned for hate speech or misinformation, such as those of Donald Trump and Andrew Tate. These changes led to a large spike in hate speech posted on Twitter compared to the period before he bought it.

While Musk and other free-speech absolutists may rejoice at the new era of Twitter, it is impossible to ignore that the hate speech and disinformation bans were working before they were lifted. Researchers from Zignal Labs found that after President Trump and other prominent 2020 election disinformation influencers were deplatformed, content promoting election fraud lies fell by 73%, and there was also a significant drop in content celebrating the attack on the US Capitol. One big reason may be that taking out the central actors of a disinformation network has a disproportionate effect on the network's continued functioning, since many ordinary users simply amplify whatever the big influencers are promoting. If that is true, removing online disinformation and hate speech could in many cases be an easier problem than it seems, because removing a relatively small number of central figures can significantly reduce the amount of undesirable content.
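To make that intuition concrete, here is a minimal sketch (not from the article or from the Zignal Labs research) that builds a toy scale-free graph as a rough stand-in for a follower network, where a few influencers hold most of the connections, and compares how many ties survive when a handful of top hubs are removed versus the same number of random accounts. The network model, the specific numbers, and the use of networkx are all assumptions made purely for illustration.

```python
# Toy illustration (assumption, not the article's method): in a scale-free
# "follower" graph, a few hubs hold a disproportionate share of the ties that
# content spreads through, so removing a few hubs cuts far more connections
# than removing the same number of randomly chosen accounts.
import random

import networkx as nx


def surviving_edge_fraction(graph: nx.Graph, removed) -> float:
    """Fraction of the original ties that remain after removing the given nodes."""
    trimmed = graph.copy()
    trimmed.remove_nodes_from(removed)
    return trimmed.number_of_edges() / graph.number_of_edges()


random.seed(0)
# Barabasi-Albert graph: a standard toy model with a heavy-tailed degree
# distribution, loosely mimicking influencer-dominated follower networks.
network = nx.barabasi_albert_graph(n=5000, m=2, seed=0)
k = 25  # number of accounts removed in each scenario (arbitrary choice)

# Top-k hubs by degree vs. k accounts picked uniformly at random.
top_hubs = [node for node, _ in
            sorted(network.degree, key=lambda d: d[1], reverse=True)[:k]]
random_users = random.sample(list(network.nodes), k)

print(f"ties surviving after removing {k} random users: "
      f"{surviving_edge_fraction(network, random_users):.1%}")
print(f"ties surviving after removing {k} top hubs:     "
      f"{surviving_edge_fraction(network, top_hubs):.1%}")
```

Real follower graphs and recommendation systems are far messier than this toy model, but the asymmetry it shows, that a few hubs carry a disproportionate share of the connections through which content spreads, is the basic intuition behind the deplatforming results described above.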

Most importantly, there are strong ethical reasons for social media platforms to act against hate speech and disinformation. The examples above show that these terms are often difficult to define and that many people do not believe social media networks currently do a good job of content moderation. Even so, I believe imperfect moderation is far better than doing nothing and allowing social media to become infested with hateful and false content. As the philosopher Karl Popper explained in his “Paradox of Tolerance,” if we extend unlimited tolerance to intolerant ideas, tolerance itself will eventually be destroyed. Popper specified that suppressing such ideas should not be the first course of action, but if intolerant ideas become a clear threat to society, a tolerant society has not only the right but the obligation to fight them by any means necessary. In this spirit, as long as social media companies are careful and more transparent than they have been so far about why they remove certain content, from an ethical perspective they must remove false and hateful content that has the ability to cause harm. It may help prevent the next mass shooting or act of political violence.
