YouTube, Content Moderation, and Authorship in the Era of Deepfakes

Maggie Monahan
Published in b8125-fall2023
Nov 16, 2023

Since its inception in 2005, YouTube has grappled with the ever-evolving challenges of content moderation and copyright, striving to strike a delicate balance among the diverse needs of its stakeholders. The platform’s journey has, at turns, required it 1) to enforce copyright compliance while respecting legal and authorial ownership, and 2) to mitigate hate speech and violent content without infringing on freedom of speech. This dual commitment has forced YouTube to continually adapt its content moderation strategies to emerging challenges in the dynamic landscape of online content creation. As YouTube enters the era of cinema-quality deepfakes created by generative AI, it confronts a new layer of complexity in its content moderation endeavors.

The platform gradually implemented structured programs such as the Content Verification Program in 2007, which required significant human labor and eventually drew negative press. The introduction of the “three-strikes” rule in 2008, which applies to community-guideline violations rather than copyright issues, demonstrated YouTube’s goal of establishing a more structured approach. The platform also engaged with questions of newsworthiness during the Green Revolution in 2009, taking a stand as a believer in free speech. Still, the manual labor required by this approach led to what a 2014 Wired story called modern-day sweatshops; the labor implications of such a content moderation system are especially fraught. Notably, Meta, a rival of YouTube’s parent company Google, is facing protracted and significant legal and public backlash over its treatment of sub-contracted workers doing this psychologically scarring work across sub-Saharan Africa.

By 2018, YouTube was experiencing truly massive upload volumes while also navigating requests from governments worldwide to remove content. The diplomatic and regulatory implications of those requests have yet to be fully explored. YouTube’s journey reflects an ongoing struggle to adapt to the ever-changing landscape of online content, with each evolution in its content moderation strategy attempting to address the complex interplay between copyright concerns, freedom of expression, and the imperative to ensure a safe and inclusive digital environment.

In this era, these two competing needs converge: deepfakes both threaten the copyrights of established musical artists and raise the prospect of artificially generated hate speech, political misinformation, and violent imagery. (As noted in a recent piece in the New Yorker, the excellent historian Daniel Immerwahr cautioned the public against deepfake hysteria, but highlighted that potential downstream effects and worst-case scenarios remain to be seen.) Seeking to get ahead of the curve, YouTube laid out a two-tier system yesterday.

This system explicitly builds upon the existing content guidelines. It will require creators to label content as AI-generated, with steep penalties for failing to do so. However, until YouTube can sufficiently invest in and develop the proprietary “tools to help us detect and accurately determine if creators have fulfilled their disclosure requirements when it comes to synthetic or altered content,” it will rely on individuals and accounts filling out “the existing privacy request form.” That mechanism is simply not prepared for the task at hand. Moreover, I worry that other power dynamics are being baked into this initial, two-tier AI-enforcement system.

This system clearly prioritizes the platform’s existing, highly structured deals with major music labels, understandably so: the first artists to be convincingly deepfaked will be those at levels of fame approaching the Beatles’. If I were an active, highly prolific, and lucrative contract holder like Taylor Swift or Beyonce, I would hope that my label pressured YouTube to protect my voice, which is a significant component of my artistry. Those labels will surely deploy their own resources to ensure appropriate labeling of content and will lean on YouTube to coordinate those efforts.

However, I have concerns about the utter lack of enforcement of protections for artists not signed to partner labels; this omission should trouble everyone who cares about a fertile, diversified music industry. Why should those not already enriched and protected by label partnerships be left without the same level of enforcement? This bifurcation may leave artists who have yet to be “picked” by the music industry’s powers that be even more vulnerable to exploitation by those using generative AI to rip off their creative work. The artists who can least afford to be exploited by generative AI may be the losers in this system.

Simultaneously, Google is “scraping the entire internet to power its own AI ambitions.” Will potential partners, whether musicians, writers, or other content creators, hold YouTube’s feet to the fire as its parent company uses underprotected content to develop its own AI products?
