Joe Rogan and the Role of Media Figures in the Spread of Misinformation

Jack Tucker
SI 410: Ethics and Information Technology
Feb 24, 2022

In January, a group of medical professionals penned a letter to Spotify asking it to remove a Joe Rogan podcast episode they said peddled COVID-19 misinformation. Musician Neil Young read the letter and subsequently pulled his catalog from the platform in response. This is just one recent example of media figures using their platforms to spread misinformation, and it raises the question of what role leaders in pop culture should play in sharing information on their platforms. I believe that large media figures, like Rogan, should focus on using their platforms to moderate insightful conversations with experts without interjecting their own preconceptions.

So, how is this information getting out there in the first place? As we learned in our reading, “some of it is being amplified by journalists who are now under more pressure than ever to try and make sense and accurately report information emerging on the social web in real time.” This only underscores how important it is for popular figures like Rogan to make sure they are not pushing out ideas that could be false.

Joe Rogan often points out that he purposely brings on guests with viewpoints that differ from his own. While this all seems great, the issue arises when Rogan argues with the experts on his own show, armed with loosely supported studies from unknown sites. While he should be moderating the conversation and certainly should be asking insightful questions, his attempts to disprove the experts only help spread more false information.

Rogan recently had CNN’s Sanjay Gupta on his podcast to discuss COVID and vaccines. Throughout their three-hour conversation, Rogan pulled up many “studies” from shady websites, none of them peer reviewed. He used these studies to counter Gupta’s argument that the COVID vaccine is safe and effective. Gupta received some backlash for going on Rogan’s podcast, but when asked why he went on, Gupta said, “I guess a small part of me thought I might change Joe Rogan’s mind about vaccines. After this last exchange, I realized it was probably futile. His mind was made up, and there would always be plenty of misinformation out there neatly packaged to support his convictions.” There was Sanjay Gupta, an esteemed doctor and surgeon, trying to explain to Rogan and his listeners the real science behind the vaccine and why it is safe and effective, yet Rogan insisted on arguing against the science with the support of his unknown studies.

Rogan’s podcast is not the only medium being used to spread misinformation during the time of COVID. An analysis found over 500 sites spreading COVID and COVID vaccine misinformation throughout the pandemic. In September, Children’s Health Defense, a website controlled by anti-vaccine advocate Robert F. Kennedy Jr., garnered more engagement in 90 days than the Centers for Disease Control and Prevention and the National Institutes of Health.

Many will argue that trying to silence figures like Joe Rogan limits the right to free speech. However, not all speech is created equal. Speech can be dangerous: just as yelling “fire” in a crowded theater can be catastrophic, so can peddling false information about vaccines or a virus. Further, platform companies have their own content moderation policies. Historically, these have often been very permissive on political issues, in line with the First Amendment tradition of the US, where many of these companies were founded and are headquartered, even as they have been more restrictive on some specific issues (such as nudity) in ways that reflect a mix of commercial and cultural considerations.

How these policies are implemented in practice varies and, like other aspects of how platforms operate, sometimes seems to disadvantage already historically marginalized communities.

At least on paper, the policies are generally meant to apply equally to all users everywhere. It is important to note that, while these content moderation policies are often significantly more restrictive than US laws regulating free speech, they can still be more permissive than local laws in many other countries, which are often more restrictive than those in the US.

An example of this occurred when Facebook employees raised questions about whether Facebook’s misinformation policy was being enforced evenhandedly. According to the policy, publications and individual users receive a “misinformation strike” for a post that a fact checker determines is false or misleading. A publication with multiple misinformation strikes in 90 days is supposed to lose its eligibility to be in Facebook News, a curated section that generates traffic for publications. In August, BuzzFeed News reported that at an all-hands meeting the previous month, Facebook employees asked Zuckerberg how Breitbart News remained a news partner after sharing a video in which doctors called hydroxychloroquine “a cure for Covid” and said “you don’t need a mask.” Through Breitbart’s page, the video racked up more than 20 million views in several hours before Facebook removed it. Zuckerberg said Breitbart didn’t have a second strike within the 90-day period.
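To see why Zuckerberg could make that defense, it helps to model the rule as described: a publication only loses Facebook News eligibility if it accrues two or more strikes inside a rolling 90-day window. The sketch below is a hypothetical illustration of that logic, not Facebook’s actual implementation; all names and thresholds here are assumptions drawn only from the policy as reported.

```python
from datetime import date, timedelta

# Hypothetical model of the reported policy: two or more misinformation
# strikes within a rolling 90-day window cost a publication its
# Facebook News eligibility.
WINDOW = timedelta(days=90)
STRIKE_LIMIT = 2

def is_eligible(strike_dates: list[date], today: date) -> bool:
    """Return True if the publication keeps its news eligibility today."""
    recent = [d for d in strike_dates if today - d <= WINDOW]
    return len(recent) < STRIKE_LIMIT

# The scenario Zuckerberg described: one old strike (outside the window)
# plus one recent strike leaves only a single strike "on the record,"
# so the publication stays eligible.
today = date(2020, 8, 1)
strikes = [today - timedelta(days=100), today]
print(is_eligible(strikes, today))  # True
```

Under this reading, timing matters as much as volume: two strikes 100 days apart never trip the rule, which is exactly the gray area employees were questioning.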

This is a display of the confusing gray area that these media platforms sit in as content curators and moderators: they must balance the line between laissez-faire permissiveness and full-on totalitarianism. One solution is relying on as many people in the public as possible to self-police these platforms and help identify the misinformation themselves. In March 2020, the World Health Organization appealed for help with what it called an “infodemic.” Facebook, YouTube, Twitter and others pledged to elevate “authoritative content” and combat misinformation about the virus around the world.

A chilling report came out in August 2020, when the global activist group Avaaz released findings showing that conspiracies and falsehoods about the coronavirus and other health issues circulated on Facebook through at least May, far more frequently than posts by authoritative sources like the W.H.O. and the Centers for Disease Control and Prevention. Avaaz included web traffic from Britain, France, Germany and Italy, along with the United States, and found that the U.S. accounted for 89 percent of the comments, likes and shares of false and misleading health information. “A lot of U.S.-based entities are actually targeting other countries with misinformation in Italian or Spanish or Portuguese,” said Fadi Quran, the campaign director for Avaaz. “In our sample, the U.S. is by far the worst actor.”

In our reading on “Censorship and Access to Expression,” Mathiesen says that although the motivation for censorship is often disapproval of the content or worry about its effects on “public morality,” this is not always the case. “I may not morally disapprove of some content, but still think it would be bad if people had access to it,” Mathiesen explains. This excerpt captures the hard gray area that we sit in as content consumers. My argument that Joe Rogan should limit his personal notions on his platform is not an attempt to censor the content he is creating. Rather, I feel that the misinformation he is putting out there about a virus and a life-saving vaccine is harmful to people if they have access to it. Sanjay Gupta touched on this in his interview: as scary as it may seem, some people are looking to Rogan for medical advice. Rogan’s anti-vaccine rhetoric could really hurt people if it is consumed by the wrong person. There are many impressionable people out there, and if Rogan is not careful with what he says, it could cause serious damage.

So, what is there to be done about Rogan and other media figures spreading misinformation? The answer is not so cut and dried. While many people have asked platforms simply to ban creators like Rogan, that seems extreme. Many Spotify employees and creators have asked Spotify to remove Rogan, and some artists have even stopped sharing their content on the platform in protest of the company keeping Rogan’s show up. However, Spotify’s CEO has stated clearly that silencing Rogan is not the solution. I agree, and it relates back to our Mathiesen reading: it is not right in this situation to call for an end to Rogan’s content completely. What we should be doing instead is changing the role of these figures to be more like moderators. Instead of Rogan interjecting his own mistaken beliefs into a conversation with a real doctor, it would be more beneficial if he listened and asked insightful questions that are conducive to a productive conversation with his guests. While some might argue that he has gained his following because of his personal opinions, I would say those opinions do not need to be silenced completely; they just need to change how they are expressed. Instead of arguing with Sanjay Gupta and pulling up unreliable, mysterious “research” that supports anti-vaccine rhetoric, Rogan could reframe his challenges as questions. Whereas before he would say something like, “well, we should not vaccinate teens, as I’ve seen research that teens have died from this vaccine and it is not necessary for them,” it should look something more like, “can I see the data that supports the safety and effectiveness of vaccinating teens?”

At the end of the day, Rogan’s spread of misinformation is just the tip of the iceberg for this “infodemic.” As the New York Times reported, “Quackery won’t disappear by deplatforming or censoring people…If we really want to push back against health nonsense, we also need more than one-off celebrity condemnations and targeted content disappearing. Instead, we need to prevent false or misleading health claims from reaching millions of people in the first place.” They acknowledge that removing people like Rogan is not the answer, and that addressing the issue will take cooperation from everyone, including the big content companies. For example, platforms like Spotify could introduce fact-checking for their nonfiction health content. They could provide additional context, including links to credible information sources, or adjust their algorithms to limit the spread of health misinformation. They could also play an educational role, developing programs that improve media and information literacy.

In summary, while the solution may not be clear, the need to address this issue is as clear as day. Large figures like Joe Rogan should not be able to spread beliefs grounded in misinformation. This is especially important for content related to health, as it could be devastating to people who look to figures like Rogan for health advice. Solving this misinformation problem will take cooperation from all sides of the table: the figures themselves, the corporations, and possibly even governmental policy changes. However, I believe it can start with a change in the way that people like Rogan share their opinions and moderate conversations. If they can learn to be better moderators without interjecting opinions that are skewed by misinformation, that will be a great start to combating the “infodemic.” As we move through the pandemic, we will hopefully see a shift in our society back to relying on facts and experts for information.
