Spotify’s Ultimate Battle: Neil Young or Joe Rogan

The War Between Truth and Profit

Image of Joe Rogan (left) and Neil Young (right). Copyright: The Guardian

Neil Young or Joe Rogan? Before January 26th, those were two names I never expected to see in the same sentence. Yet after singer Neil Young announced in an open letter last month that he would pull his catalog off Spotify should the platform continue allowing popular conspiracist Joe Rogan’s harmful content to remain, Spotify had a big decision to make. It’s a decision that raises the question of how a large, influential company like Spotify should censor certain types of content. With disinformation spreading rapidly online, it’s irresponsible for Spotify to ignore the role they play in allowing harmful rhetoric to proliferate, and they should work to curate their content more carefully.

In recent years, social media has been a hotspot for misinformation. From election fraud claims to coronavirus hoaxes, it seems our social media platforms can’t break free of this constant influx of incorrect, and increasingly harmful, content. Since vaccines rolled out in early 2021, I’ve watched the online world become more dangerous than ever with its seemingly silly coronavirus conspiracies. While I laugh when I watch my great-uncle post yet another Facebook hoax about refusing the vaccine because it’s Microsoft and Bill Gates’s secret plan to inject you with a controlling microchip, I can’t help but worry for those who truly believe these falsehoods. Or worse, those who are at risk of believing.

Today, a lack of content curation on social media platforms has left many users surrounded by ill-informed, damaging rhetoric. As Claire Wardle explained in her article “Fake News. It’s Complicated,” these “social networks allow ‘atoms’ of propaganda to be directly targeted at users who are more likely to accept and share a particular message” (Wardle, 2017). When our social media consists of a network of friends, family, and trusted connections, we are far more likely to accept the posts that people we know share into our feeds. And without much verification on the part of these platforms, this is where danger arises.

Our online information ecosystem has metastasized since its creation, making it increasingly difficult to decipher whom to listen to or what to believe, especially when the information that thrives in this ecosystem includes mis- and disinformation. Describing some types of this dangerous content, Wardle highlights fabricated content, or “content that is 100% false, designed to do harm and deceive,” as well as misleading content, or content with a “misleading use of information to frame an issue or individual” (Wardle, 2017). In my head, I equate misleading content with Facebook posts I used to see about Hillary Clinton. That crazy great-uncle I mentioned earlier was the biggest perpetrator, sharing articles about Hillary simply because of a shocking headline. Reading the full article, however, made it obvious that the title had been taken out of context, and even more obvious that he never bothered to read past it.

As dangerous as spreading this type of misleading information is, I believe fabricated content is where platforms need to expend more energy on curation and censorship. Fabricated content is put out into our information ecosystem with the express intent to harm. With the pandemic and its surrounding conversations about quarantine and vaccination, this is where we can begin to see the actual damage disinformation does to our society. Since the start of the pandemic, there have been voices arguing that the coronavirus isn’t harmful, that there’s no need to quarantine, or, even worse, that the government actually created and planned this virus. These beliefs are dangerous, but the worst of it came when vaccines finally rolled out. Despite the arrival of a solution that could effectively lessen the spread of this deadly virus and keep our society safe, mistrust in our government and disinformation online stopped many from taking the vaccine.

While it might be easy to pinpoint that much of this blame falls on platforms that allow disinformation to run rampant without consequence or fact-checking, it’s far harder to ask these platforms to make it stop. Why should they? In her testimony to the U.S. Senate, researcher Joan Donovan highlighted exactly how these social media platforms profit off of their users, and how closely linked disinformation is to that profit. Looking at companies like Facebook or Twitter, one key performance indicator of their business models emerges: growing user engagement metrics (Donovan, 2021). If companies wish to continuously increase user engagement no matter the societal cost, they must deliver “more novel and outrageous content,” which is why fabricated stories and news are so successful on these platforms (Donovan, 2021). Ultimately, making a profit is mutually exclusive with putting the public’s best interest first (Donovan, 2021).

Now, going back to Spotify and the big debate over Young or Rogan, we have to look at the platform through a more critical lens. If removing Rogan’s content would serve truth and transparency, why has Spotify been so hesitant to take it down? While they might not be a social media company like Instagram or Facebook, at the end of the day, they are a profit-driven platform that also benefits from growing engagement metrics.

In the past few years, Spotify has attempted to build up its repertoire beyond just music and into the land of podcasts. Becoming a podcast giant, though, requires investing money and time. It was recently revealed that Spotify paid the conspiracist $200 million to acquire the exclusive rights to “The Joe Rogan Experience” podcast (Towey, 2022). Spotify has also reportedly invested $1 billion into its podcasting sector, which explains its steadfast determination not to censor the world’s top podcaster, Joe Rogan, so long as he continues to be a successful investment (Towey, 2022).

Just as Joan Donovan explained to the U.S. Senate, as long as these platforms value profit over public interest, harmful information will continue to spread. In January, along with singers Neil Young and Joni Mitchell, over 270 health experts demanded that Spotify “immediately establish a clear and public policy to moderate misinformation on its platform,” specifically highlighting a recent episode of Rogan’s that encapsulated his podcast’s “concerning history of broadcasting misinformation” (Timsit, 2022).

In the podcast episode, Joe Rogan spoke with Robert Malone, a physician who opposes the coronavirus vaccines. Throughout the episode, Malone compared the United States’ pandemic regulations to Nazi Germany’s treatment of Jewish people, a comparison that many in the Jewish community have openly repudiated (Higgins & Risch, n.d.). Furthermore, Malone asserted that esteemed immunologist and Chief Medical Advisor Anthony Fauci is “hypnotized,” citing an unscientific and unverified theory called “mass-formation psychosis” (Timsit, 2022). Faced with this dangerous disinformation, many platforms moved immediately to scrub such content from their sites. Spotify, however, has yet to remove the episode.

After receiving this open letter about Rogan’s episode with Malone, Spotify responded that they acknowledge their role in “balancing both safety for [their] listeners and freedom for creators” and that they have removed “over 20,000 podcast episodes related to Covid since the start of the pandemic” (Sisario, 2022). But keeping Rogan’s episode up is antithetical to this. Honestly, most of his episodes are antithetical to this. I’m all for free speech, but when does this freedom for creators cross a line?

Beyond just allowing Joe Rogan’s debunked COVID-19 content to remain, Spotify has also fed this spread of disinformation in its efforts to increase engagement. While their cute, personalized playlists might make it seem like they care about you as a listener, it’s your time on the platform they care about. The deeper into their catalog you go, the more money they make.

Recently, news publication The Guardian reported that Spotify was actively promoting anti-vaccine content (Das, 2022). When one user played a song containing anti-vaccine lyrics, the platform generated a personalized playlist in which more than 19 songs explicitly referenced Covid-19 misinformation and anti-vaccine rhetoric, with one song specifically about vaccines being used to microchip people (Das, 2022). While Spotify isn’t actively engaging with these hoaxes by publicly endorsing or denying them, Joan Donovan explains that “the cost of doing nothing is even worse” (Donovan, 2021). Just one song lying about microchips in the vaccine means health-care professionals, journalists, and researchers must give up their own time to continuously correct this misinformation and stop it from spreading rapidly (Donovan, 2021).

The publication also reported that some of the other songs in these personalized Spotify playlists referenced a plethora of other conspiracy theories, such as satanic pedophiles running the world and the Sandy Hook school shooting, in which 26 people were killed, being a hoax (Das, 2022). So, listening to just one anti-vaccine song sends users down a rabbit hole of horrifying disinformation and conspiracies. This is where Spotify needs to begin acknowledging the part it plays in either hurting or helping the public interest.

Allowing songs that question the legitimacy of Sandy Hook, or a podcast that compares the genocide of the Jewish people to coronavirus protocols, isn’t simply free speech. It’s destructive disinformation. Allowing hoaxes to snowball into deep conspiracies because they keep users engaged on your platform isn’t balancing the safety of your listeners.

This pandemic continues to kill predominantly those who are unvaccinated, not only those who are immunocompromised or elderly (Sullivan, 2021). In fact, the Centers for Disease Control and Prevention has dubbed this new stage of the coronavirus “a pandemic of the unvaccinated,” considering “more than 99% of recent deaths were among the unvaccinated” this past summer (Sullivan, 2021). While I understand the need for free speech and for allowing content creators to make content as they please, vaccinations and the pandemic shouldn’t be fodder for content or debate. Allowing podcasters like Joe Rogan to release episodes questioning the vaccines’ legitimacy is a seriously dangerous decision. We should be encouraging people to take vaccines that are scientifically proven to lessen the spread and effects of a dangerous respiratory disease, rather than giving platforms to podcasters with no more medical background than a high schooler in a 10th-grade biology class.

So, how exactly can Spotify step up and begin curating content more carefully? While I think many platforms still have more work to do in censoring disinformation about the pandemic, Spotify could benefit from looking around. Other platforms have started curating content through warning banners that explicitly state that the following content might contain incorrect information. TikTok, a platform on which millions of user-made videos can go instantly viral, has taken an active stance against disinformation with warning banners that read “Caution: Video flagged for unverified content” (Kastrenakes, 2021). Twitter, another large platform where just about anything can spread out into the world, has begun suspending accounts that promote COVID-19 misinformation, and that doesn’t exclude accounts with large followings. In fact, just recently, Georgia Representative Marjorie Taylor Greene, a frequent spreader of false information about the pandemic, was permanently suspended by Twitter and suspended by Facebook for 24 hours (Hernandez, 2022). Both platforms cited Greene’s violations of their COVID-19 misinformation policies. Ultimately, even if Spotify doesn’t want to hurt the profits it reaps from podcasters like Joe Rogan by taking them off the platform, they can still take a stance against disinformation through warning banners. Employing fact-checkers to review and flag content would at least acknowledge the presence of disinformation and put it in the hands of users to decide whether they want to listen to, or believe, what they are playing.

But then again, maybe I’m asking for too much. While I would love for Spotify to address the role their platform plays in allowing dangerous disinformation to thrive, and subsequently to begin curating its catalog better, I’m not sure that will happen. As long as Spotify yearns to succeed in the podcasting sphere, shutting down disinformation from top earners like Joe Rogan might not be possible. So, ultimately, the answer to our question is clear: in the battle between truth and profit, Rogan will always come before Young for Spotify.

Citations

Das, Shanti. (Feb 13, 2022). How Spotify playlists push dangerous anti-vaccine tunes. The Guardian. Retrieved from https://www.theguardian.com/technology/2022/feb/13/dont-take-the-damn-thing-how-spotify-playlists-push-dangerous-anti-vaccine-tunes

Donovan, Joan. (April 27, 2021). Testimony to U.S. Senate. Judiciary Subcommittee on Privacy, Technology, and the Law. Retrieved from https://www.judiciary.senate.gov/imo/media/doc/Donovan%20Testimony%20(updated).pdf

Hernandez, Joe. (Jan 3, 2022). Facebook suspends Marjorie Taylor Greene’s account over COVID misinformation. NPR. Retrieved from https://www.npr.org/2022/01/02/1069753102/twitter-bans-marjorie-taylor-greenes-personal-account-over-covid-misinformation

Higgins, Mary Pat & Risch, Frank. (n.d.) Statement on Comparisons of COVID-19 Regulations to Hitler and Nazis. Dallas Holocaust and Human Rights Museum. Retrieved from https://www.dhhrm.org/public-statements/statement-on-comparisons-of-covid-19-regulations-to-hitler-and-nazis/

Kastrenakes, Jacob. (Feb 3, 2021). TikTok will now warn you about videos with questionable information. The Verge. Retrieved from https://www.theverge.com/2021/2/3/22263100/tiktok-fact-check-warning-labels-unverified-content

Sisario, Ben. (Jan 26, 2022). Spotify Is Removing Neil Young Songs After He Complains of ‘Misinformation’. The New York Times. Retrieved from https://www.nytimes.com/2022/01/26/arts/music/spotify-neil-young-joe-rogan.html

Sullivan, Becky. (July 16, 2021). U.S. COVID Deaths Are Rising Again. Experts Call It A ‘Pandemic Of The Unvaccinated’. NPR. Retrieved from https://www.npr.org/2021/07/16/1017002907/u-s-covid-deaths-are-rising-again-experts-call-it-a-pandemic-of-the-unvaccinated

Timsit, Annabelle. (Feb 6, 2022). 70 Joe Rogan podcast episodes removed from Spotify amid racial slur controversy. Seattle Times. Retrieved from https://www.seattletimes.com/entertainment/70-joe-rogan-podcast-episodes-removed-from-spotify-amid-racial-slur-controversy/

Towey, Hannah. (Feb 17, 2022). Spotify paid $200 million for Joe Rogan’s podcast — double previously known price. Business Insider. Retrieved from https://www.businessinsider.com/spotify-paid-200-million-for-joe-rogan-experience-podcast-report-2022-2

Wardle, Claire. (Feb 16, 2017). Fake News. It’s Complicated. Medium. Retrieved from https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79
