Devalue or Defend: Generative AI, Artists, and the Law

Krista L.R. Cezair
Berkman Klein Center Collection
Mar 16, 2023

An interview with Jessica Fjeld, Lecturer on Law & Assistant Director of the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society

OpenAI, creator of the text generator ChatGPT, started out as a nonprofit in 2015 but became a “capped profit” entity just a few years later. This week, the release of its GPT-4 model swept the news cycle, with much of the debate centered on its ability to analyze and describe visual content.

But the companies responsible for generative AI are just that: companies. Companies must turn a profit eventually, which makes the outcome of the concurrent intellectual property disputes around generative AI incredibly important. This is why I wanted to interview Jessica Fjeld, whose legal practice focuses on supporting the work of creatives, archivists, and advocates, especially where it intersects with emerging technology and debates over intellectual property, equity, and inclusion. Across the world, people who study the ownership, governance, and power of emerging technology companies are concerned about generative AI’s potential to displace artists and creative workers. Now is the time to discuss how machine learning’s appropriation and reproduction of creative work could devalue that work, and the unique skills and vision of artists, in the labor market.

Fjeld is a Lecturer on Law and the Assistant Director of the Cyberlaw Clinic at the Berkman Klein Center for Internet & Society at Harvard University. She is an accomplished poet whose legal practice focuses on supporting creatives as their work intersects with emerging technology. We sat down to discuss intellectual property in the context of the rise of generative AI.

Krista Cezair: Generative AI, like ChatGPT, has been predicted to displace poets, writers, and artists. One novelist asks, “Can ChatGPT write a better novel than I can?” Are these fears valid?

Fjeld: I’m not worried about generative AI replacing artists and writers. I think we’ll find ways to manage it.

Using photography as an example, my artist friend explained that when the camera was invented, most art schools devoted a lot of time to teaching people to paint in photorealistic styles. With the advent of the camera, there were worries about whether this meant the end of painting. But when art schools stopped teaching with the emphasis on photorealistic painting, it gave us abstract painting, a meaningful cultural advance. But for the upheaval of the camera, we wouldn’t have it.

Similarly, the skill of producing a first draft will become less important, and the skills of editing and topic selection will become more important. Fundamentally, these algorithms are recombinatory. They’re not novel. They’re not innovating. They’re just producing raw material.

Cezair: If your poems were used as training or seed data, and then something was produced in the style of your work, would you expect attribution?

Fjeld: Yes, I think so, and that aligns with contemporary practice. When people are integrating elements of other works or inspired by a particular poem, you see captions at the top of poems that say “after so-and-so” or acknowledgment sections in the backs of books that cite other texts.

I fundamentally trust people and think that they’ll do the right thing if they’re given appropriate guidance. Ethically, attribution is the practice in most domains, and to the extent that the Blueprint for an AI Bill of Rights has a future in legislation or otherwise becomes enforceable through law, we’ll see it solidify at least as a kind of social best practice. It’s reasonable to expect.

Cezair: That makes sense, especially given your view that fears may be overblown at this early stage of the technology’s use. Would it be fair to say that you think regulation of generative AI should happen at a more local level, like within a contract, rather than at the more systemic state or federal level?

Fjeld: I think it’s very tempting, especially when we have revolutionary technologies — “disruptive technologies,” to use the Silicon Valley term — to start to think about regulating them technology by technology because we’re worried about the change brought about by that technology. But in general, I think the most impactful regulation takes a harms-based approach.

In the copyright infringement context, we already have Getty Images’ lawsuit against Stability AI and a separate suit brought by three artists against the companies behind Stable Diffusion. So, I think we’ll see the existing framework begin to deal with these new technologies. Copyright also had a hard time with the camera. Photographs came to be recognized as copyrightable after the courts realized they required originality and creativity to produce. The challenge here isn’t an exact parallel, but we’ll similarly see courts grapple with how these technologies implicate copyright.

Cezair: Again, it shows that you have faith in humans and our existing structures to manage these very human-made creations. My final question concerns copyright: suppose it’s found that working artists don’t own the intellectual property in works produced by models trained on their work. Do you think that ownership scheme reifies current power structures that devalue the labor of working artists, who are disproportionately of a lower socioeconomic status than other professional workers, in favor of corporations and capitalist hegemony?

Fjeld: Yeah, it’s a great question, and it’s worth recognizing the work of scholars like Anjali Vats and others who have routinely and persuasively pointed out that existing copyright law itself has often been used to reify racialized and economic power structures. You see copyright law used in music to devalue the work of Black musicians and to allow the sampling of their works by white musicians.

Something that’s kind of interesting that I haven’t seen people thinking about yet is the Blurred Lines case in music copyright. In that case, Marvin Gaye’s estate won a judgment against Robin Thicke and Pharrell Williams for a song that copied the feel or groove of Gaye’s original. If an algorithm is prompted by typing in, “I want something in the style of Krista’s work,” there could be an expansion of that case law to find liability for a derivative work because it captures the overall feel of her work.

It’s also important to remember that the creativity that goes into making these types of software programs is valuable and that those people should have rights, too. And so, we want to reward the artists who are contributing to social progress through all of the important work they’re doing, and we also want to reward the technologists.

There are good guys on the tech side, too.

Cezair: Thank you. That’s a fantastic note to end on.

So, Fjeld fundamentally trusts us. She trusts us to apply existing structures and frameworks to new problems that arise from disruptive technologies. She trusts us to be honest about our use of generative AI. And she trusts us to protect those of us who would otherwise be exploited by this technology. But I wonder how much that optimism should be tempered with caution. Technological advances that implicate the skills of laborers have not been kind to people who are already subordinated in the American workplace, like people of color, women, and people with disabilities.

Recalling the Anjali Vats example, we see that the work of these marginalized groups is often diminished even without disruptive technologies available to perpetuate that discrimination. As Fjeld’s discussion makes clear, technology outpaces law. I argue that proactive regulatory attention must be paid to artists and creative workers, themselves an underpaid class, and especially to those who exist at the intersection of other subordinated classes, to guard against their further subjugation as generative AI proliferates.

Krista L. R. Cezair is a research assistant at the Berkman Klein Center for Internet and Society at Harvard University, a 3L at Harvard Law School, and a Master of Public Health candidate at the Harvard T.H. Chan School of Public Health.
