What the Pandemic Would Have Looked Like Without Facebook
Social media has become an integral part of everyday life for nearly the entire world. The rise of early giants like Myspace, followed by the advent of Twitter and Facebook, changed the world in its own right, giving people instant access to immeasurable amounts of information at the press of a button. But as the saying goes, with great power comes great responsibility, and in the view of many experts and journalists, Facebook has bobbed and weaved to skirt responsibility for what happens on its platform. Look at almost any conflict in the world today and you will likely find Facebook front and center, whether it be the genocide of the Rohingya in Myanmar or hate speech on the platform bringing Ethiopia close to a genocide of its own, among a number of other events.
Sitting in the middle of a global pandemic and realizing that there are segments of society that rely solely on places such as Facebook for their information, I began to question how the two things connected. What would the COVID-19 pandemic have looked like if Facebook didn’t exist? Would there have been fewer deaths? Would people have taken the virus more seriously? Or would the troves of disinformation simply have moved to another platform?
Of course, this is all hypothetical, but if we can measure the impact that Facebook has had on the pandemic in any way, then we might be able to paint a somewhat accurate picture of what the pandemic would have looked like without it.
The most glaring thing about Facebook is the disinformation that is allowed to thrive on the platform. And one of the most glaring things about the pandemic, at least in the United States, is the disinformation that people believe about COVID-19. The Journalism and Pandemic Project from the International Center for Journalists (ICFJ) and the Tow Center for Digital Journalism at Columbia University surveyed more than 1,400 English-speaking journalists from 125 countries. The project found:
“Significantly, the respondents identified politicians, elected officials, government representatives, and State-orchestrated networks as top sources of COVID-19 disinformation. They also pointed to Facebook as the most prolific enabler of false and misleading information within the social media ecosystem. And, they expressed substantial dissatisfaction with the platforms’ responses to the content that they had flagged for investigation.”
66% of the journalists surveyed identified Facebook as the most prolific "disinformation vector." Politicians and elected officials came next at 46%, and Twitter third at 42%.
One of the biggest culprits in spreading disinformation about COVID-19 and vaccines, as well as a petri dish for hate speech and violence, is Facebook Groups. In September, the platform said it had removed 1 million groups for violating its policies, and over the summer the company removed hundreds of groups tied to the far-right "Boogaloo" movement and thousands more connected to QAnon. This matters for our main question because Facebook created Groups with the goal of connecting 1 billion people, and to reach that goal its algorithm actively recommends groups to users. As a result, groups left unchecked with COVID-19 disinformation can spread through the platform like wildfire. After banning those 1 million groups, Facebook said it would stop its algorithm from recommending health groups to users, using machine learning to identify them, as a step toward curbing the spread of virus disinformation. But nearly ten months into the pandemic, the move seems too little, too late, and the effectiveness of Facebook's content moderation algorithm has been a cause for concern.
In November, some of the company's moderators, who constantly comb the platform for hate speech and other disinformation that the algorithm can't pick up, wrote an open letter to CEO Mark Zuckerberg about working conditions during the pandemic. Beyond the health risks the moderators were forced to work under, one of the most striking passages in the letter read, "Without our work, Facebook is unusable. Its empire collapses. Your algorithms cannot spot satire. They cannot sift journalism from disinformation. They cannot respond quickly enough to self-harm or child abuse. We can."
By some counts, Facebook averages almost 1.82 billion users a day, and there are tens of millions of active groups on the platform. As I noted in a previous article on disinformation, a Pew Research Center report found that 1 in 5 US adults gets their political and election news primarily through social media. That group tends to be "less aware" of wide-ranging issues and "more likely than other Americans to have heard about a number of false or unproven claims." We can say with relatively high certainty that pandemic-related information falls under that category as well. And the biggest problem with those statistics is that only 37% of that group is "very concerned" about made-up news.
The nonprofit Avaaz published a report finding that, "Global health misinformation spreading networks spanning at least five countries generated an estimated 3.8 billion views on Facebook in the last year." The report continued, "Only 16% of all health information analysed had a warning label from Facebook. Despite their content being fact-checked, the other 84% of articles and posts sampled in this report remain online without warnings." But even for the posts that do carry warnings, given how few people are concerned that the information they consume might be misinformation, how many readers take those warnings seriously?
All of this said, Facebook has been responsible for the biggest spread of pandemic-related disinformation, but disinformation would still have existed in other forms if the platform didn't exist, coming from politicians, elected officials, and "State-orchestrated" news networks, as the Journalism and Pandemic Project stated. Instead of people getting the wrong information from Facebook, nightly cable news would have been the biggest loudspeaker of disinformation. The biggest change would have been the amount of information being disseminated, which in all likelihood would not have been as far-reaching.
But even 100 years ago, when the 1918 Spanish flu hit the United States, the US government and news outlets orchestrated disinformation downplaying the severity of the virus. On October 15, 1918, the Philadelphia Inquirer published an article with the headline "Scientific Nursing Halting Pandemic." Some reports suggest that over 4,500 people died in Philadelphia alone in the week the article was written. One way or another, disinformation would have found a way to move through society like a contagious virus of its own, just as the actual virus spreads now.
Researchers with more time and more access to resources might be able to quantify, in some way, the actual impact Facebook has had on the pandemic. But would a platform such as Twitter have simply replaced Facebook, with the same amount of disinformation? Would Twitter have been more proactive and curbed disinformation earlier? People still would have protested local mask mandates and government regulations, but the conspiracy theories would have had to flourish on other platforms and almost certainly would not have reached the audience they have today. At the very least, that would have eased the strain on both the economy and the healthcare system, and possibly even led to fewer deaths.