Does Artificial Intelligence Threaten Copyright and Jobs? A Discussion for Solutions to Protect the Future of Creativity.

Liam Masias
12 min read · Apr 12, 2024

--

Illustration 1: Photo on Unsplash by Nick Fewings

Artificial intelligence (AI) is the latest trend for companies to integrate into their technologies. It has many uses in education, idea generation, and rapid production. However, research has begun on the possible intellectual property (IP) infringements AI may commit, especially since AI-generated content is not entirely original. A report by Zirpoli, a legislative attorney with the Congressional Research Service at the Library of Congress, provides an overview of AI training practices:

AI systems are ‘trained’ to create literary, visual, and other artistic works by exposing the program to large amounts of data, which may include text, images, and other works downloaded from the internet. This training process involves making digital copies of existing works.

This raises the question of who is held responsible for AI's violations of copyrighted content, especially since the biggest worry is theft of IP, ideas, music, voices, and other markers of individuality. Currently, no laws dictate AI's use and collection of this data, such as scraping text from websites and books to create new outputs or gathering works of art and mixing pieces together. Without regulations on AI usage and training, the imminent increase of AI use across the economy will bring many risks: content theft, lengthy and inconsistent copyright lawsuits, and more. However, there are potential lawful and practical solutions that could create fair use of AI benefiting all parties. The technology is new and requires further research for optimal implementation. Nevertheless, there are significant recommendations: gaining new perspectives on AI usage, filling legal gaps to minimize the risk of IP compromise, creating AI software that follows ethical policies, examining how AI redefines human roles in production, and reflecting on how "bad AI" has already affected society. There are more questions than answers, but we need to spearhead a path into the uncharted realm of AI.

AI and the Creation of Intellectual Property

Illustration 2: Photo on Ashutosh Kumar’s Article on GrowthJockey; Non-Exhaustive List of Current AI Uses

AI is implemented everywhere, from the early stages of self-driving vehicles and text prediction to security, automated production, social media algorithms, and countless other practices. Each day brings new applications that expand the possibilities for human productivity. Even inventions are generated by AI, raising concerns about intellectual property rights (IPR) and copyright in AI-generated products. Picht and Thouvenin, of the law department at the University of Zurich, discuss patent concerns arising from the many uses of AI and current examples of the intersection of AI and IPR, especially the DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) case.

Comer, a J.D. candidate at the University of North Carolina, explains that DABUS is a generative AI system created by Stephen Thaler that produced two avant-garde products: a beverage container with enhanced gripping abilities and a flashing emergency beacon that is more noticeable to humans. This raised concerns about who owned the patents and whether the inventions should be protected. Picht and Thouvenin suggest that copyright protection should be extended only if there is a human contributor, meaning that AI cannot be completely autonomous in creating the product. If a human prompts the ideas or gives the AI a human-made base product, then it should be considered for copyright protection. Picht and Thouvenin offer another suggestion: human representation, also known as a human proxy, for the AI, so that humans are not entirely displaced from the creative process. They contend that "the intellectual property law is traditionally based on the idea of one (or several) human creator(s). That is especially true for copyright law…" But if protection for AI works is being investigated, what does that mean for AI's role in infringing on other works, especially during training?

Is AI Held to Copyright Infringement?

Illustration 3: Shutterstock Photo on Job Gold’s ComputerWorld Article: AI and IPR Laws

The intersections of AI and IPR law are hazy, as disputes are decided case by case. This can be seen in research by Shumakova and colleagues at the Law Institute of South Ural State University on legal regulation of AI in the creative industry, aimed at filling the regulatory gaps. They present two contrasting approaches to AI use. In Beijing Film Law Firm v Beijing Baidu Netcom Science and Technology, the court found that the content in question was entirely AI-generated, which barred it from copyright protection. In Shenzhen Tencent Computer System Co Ltd v Shanghai Yingxun, by contrast, the court found that a human took action to have the AI generate the content and ruled to extend copyright protection. The two dissimilar outcomes deepen the uncertainty over whether protections and liabilities apply to AI. Shumakova et al. call for more research, as AI risks creating opportunities for plagiarism, unfair competition, and other illegal content. First, though, it is best to break down how these concerns about content theft and IPR compromise originated. Zirpoli's Congressional report (September 2023) addresses the burning question of AI copyright infringement. His rundown of AI training states:

Generative AI programs are trained to generate such outputs partly by exposing them to large quantities of existing works such as writings, photos, paintings, and other artworks… [Which then is used for] generat[ing] new images, texts, and other content (or “outputs”) in response to a user’s textual prompts (or “inputs”).

Even though these outputs are human-prompted, they follow Picht and Thouvenin's concept of human-led creativity. Courts have declined to extend IPR and copyright protections to "nonhuman authors," as in Zirpoli's examples of a monkey and a garden, which were denied protection due to a lack of human authorship. The unclear morality of AI use raises concerns across the industry.

In the New York Times, Koblin and Barnes cover the writers' strike of 2023, which in large part aimed to put guardrails up against artificial intelligence. The studios' deal included "increases in royalty payments for streaming content and guarantees that artificial intelligence will not encroach on writers' credits and compensation." That AI's role in content production is already a big enough concern to cause strikes leads to the question: what will this mean as AI advances?

The U.S. Court of Appeals case The Authors Guild, Inc. v. Google, Inc. (2015) previews how copyright concerns grow as technology evolves. The court ruled that Google was not liable because its replication of books for databases was fair use: the data was used "to make the digital copies available for library collections and for the public to search electronically using a search engine." In contrast, current debates about AI training data are foundationally different, because generative AI produces new content from the data rather than merely giving the public access to it. Moreover, current doctrine is ambiguous on whether the AI user or the AI company is liable for infringement in generated content, given the transfer of ownership of that content. These discrepancies and unclear statutes call for further discussion and research toward a definitive set of laws for AI development. A more immediate step, meanwhile, is to apply general ethics to AI development just as other scientific fields do.

Ethics for AI and its Intertwining Disciplines

Illustration 4: Photo for AI Ethics and Opinions by Kate Rattray

Ethics is a universal tool for guidance on doing "the right thing," which has many definitions depending on culture, background, and field of practice. This is where Yadav, an independent researcher with a doctorate in AI, brings modern ways to solve ethical issues in AI. Yadav points out that many scientific fields, such as chemistry, physics, and computer science, have a code of ethics to follow. Computer scientists follow the ten commandments of computer ethics, explained in an article by Capstick, a writer for Parker Shaw; these commandments are an essential part of computer science syllabuses for the General Certificate of Secondary Education. Note the most pertinent commandment, number 8: "Thou shalt not appropriate other people's intellectual output," which refers to plagiarism and taking people's intellectual property. However, there is no standards board to enforce this, as laws are still catching up to the issue. Yadav argues that guardrails must be implemented in AI and robotics, given the current havoc of deployment without such guidelines. Yadav offers several solutions:

Inscribe Ethics in AI Software

The base of Yadav's recommendations is encoding ethics into the core of an AI's software, which can create confidence in responsible robotics that avoids devastating outcomes. Yadav distinguishes two kinds of AI: good and bad. Good AI is a system with predictable outputs that requires constrained inputs; bad AI allows more free-form inputs and outputs but can have side effects of bias, privacy compromises, and more. Most issues stem from bad AI, but Yadav recommends implementing ethical codes in all kinds of AI, from simple systems to general intelligence, to be safe.
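The constrained-input idea can be sketched as a toy guardrail. This is only an illustration, not anything from Yadav's paper: the task whitelist and function names here are hypothetical, and the model call is a placeholder.

```python
# Toy sketch (hypothetical names, not from Yadav): a "good AI" wrapper that
# accepts only requests from a constrained input set, keeping its outputs
# predictable, in contrast to a free-form system that accepts anything.
ALLOWED_TASKS = {"summarize", "translate", "classify"}

def constrained_generate(task: str, text: str) -> str:
    """Reject any request outside the approved task list before generating."""
    if task not in ALLOWED_TASKS:
        raise ValueError(f"task '{task}' is outside the constrained input set")
    # Placeholder for a real, ethically constrained model call.
    return f"[{task}] {text}"
```

The point of the sketch is only that constraining inputs up front makes the output space, and therefore the ethical review of that space, tractable.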

Creative Work and Credits Due

With AI trained on many human-made works of art, music, writing, and more to create novel artwork, it becomes difficult to identify similarities to the original training data, or even to tell generated work from the real thing. An example of this can be found in Thompson's article about the confusion between photographs of human faces and AI-generated pictures. Therefore, a system that detects and reports a similarity metric, like a plagiarism checker, could help make Yadav's IPR solution effective.

Yadav suggests that AI-generated artwork with more than eighty percent similarity to the original data gives grounds for negotiation between the AI user who wishes to use the content and the original artist. The original artist can negotiate a share of the profit or prohibit the sale entirely; either way, any sale proceeds on the original artist's terms. With less than eighty percent similarity, the original artist can claim a share of the profits but cannot prohibit the sale. While these numbers may seem arbitrary to some, having standards that lawmakers can discuss or adjust may help foster a fairer system.
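Yadav's threshold rule can be sketched in a few lines. This is a minimal illustration, with a crude text-similarity stand-in (Python's `difflib`); a real detector would use perceptual hashes for images or embedding distance, and all function names here are hypothetical.

```python
from difflib import SequenceMatcher

# Yadav's proposed cutoff: at or above 80% similarity, the original artist
# may prohibit the sale; below it, they may only claim a profit share.
BLOCKING_THRESHOLD = 0.80

def similarity(original: str, generated: str) -> float:
    """Crude 0.0-1.0 similarity ratio; a stand-in for a real detector."""
    return SequenceMatcher(None, original, generated).ratio()

def rights_outcome(original: str, generated: str) -> str:
    """Map a similarity score onto Yadav's proposed negotiation rule."""
    if similarity(original, generated) >= BLOCKING_THRESHOLD:
        return "artist may prohibit sale or set terms"
    return "artist may claim a profit share only"
```

Encoding the rule this plainly also makes the policy lever explicit: lawmakers debating the fairness of the scheme are really debating the value of one constant.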

Creating an AI Ethics Standards Team

AI-based autonomous cars create new opportunities for human transportation, but faults can endanger people's lives. Yadav provides scenarios for AI ethics teams to consider when setting standards for whether AI-based automobiles should be deployed on the roads. While Yadav's solution targets autonomous cars, it can also be applied to IPR and copyright, ensuring that AI follows standards for giving credit where due and requesting permissions, to avoid patent, ownership, or intellectual property abuses. A designated team should encode the system and follow up to ensure it adheres to the code of ethics. By enforcing ethical standards in AI systems and testing thoroughly to ensure that privacy, due credit, and copyrights are not compromised, we can minimize further problems caused by bad AI and similar systems. Ethics will be critical to creating a safe environment for all parties involved.

The New Outlook on AI

With new technology comes an evolution in production and a shift in human roles. Such shifts occurred with motor vehicles and typewriters, and now with self-checkout systems replacing cashiers in retail stores. Two perspectives on AI changing the human role come from Bender, of the Department of Film, Television, and Screen Arts at Curtin University, and Burk, a chancellor's professor of law at the University of California, Irvine: how AI will enhance human productivity, and how it will change our view of the value of content.

Illustration 5: ChatGPT 4 Generated Picture, inspired by the IMEC article “Technology will help us tackle the problems of the 21st century”

Bender points out that within only two weeks of ChatGPT's release in late 2022, there was an explosion of concern about the technology taking people's jobs. It is critical to note that people go to school for the creative arts to play a role in the creative field, not merely to extend their passions onto a medium. Bender focuses on how AI will aid content creators instead of stealing their roles: "Therefore, we should not expect the existence of Gen-AI to have a direct negative influence on student enrollments in the creative arts… [there is] the potential benefits that Gen-AI could offer [help] as assistants." Bender believes that AI used in this context gives creators greater capacity to "articulate their creative intentions" rather than focusing on the nitty-gritty details of the art.

Burk takes a more historical approach, drawing on the Industrial Revolution and mechanized mass production in Victorian Britain. Burk believes traditional, "genuine," and "authentic" production methods will gain emphasized value. Even when machine output equals the quality of human labor, human labor is perceived as superior: its imperfections mark the origin of artisan products. Historical patterns already show how humans place value on human production, as with craft beers, local produce, refurbished historic housing, and tourist attractions.

Today, however, there is a shift away from the materialistic drive to obtain as much "stuff" as possible. Generally, humans prefer imperfect human creations; flawless mechanized art would be boring and undesirable. Visible mistakes add a sense of peculiarity to the product, creating a value that machines are not designed to produce.

Key Takeaways for AI and our Future

AI raises many questions, as it is a new technology. The guidance of Bender, Burk, Yadav, and Shumakova offers a solid list of starting points and confidence that solutions exist, though further action is required.

Suggested Solutions, Ideas, and Conversations

  • Proxy system for a human to represent AI
  • Defining the role and liability of the AI company and AI user
  • Instilling ethics into AI software
  • Giving due credit and royalties for training data to create AI-generated content
  • Understanding AI’s use as a tool and the benefits it may provide
  • Understanding the redefined roles of human labor and work as AI evolves

Much work remains to ensure fair use of AI and to create fail-safes against theft by those who would use the tool for malicious purposes. Nevertheless, we should avoid fearing AI, because it creates new opportunities. Like many tools, as Shumakova expressed, it has a broad spectrum of use and abuse. We must be aware of these abuses and ensure that appropriate protections are in place, and we need to find them soon, because AI will keep evolving with or without us.

References

Authors Guild, Inc. v. Google Inc., No. 13-4829-cv (U.S. Copyright Office Fair Use Index, October 16, 2015). https://www.copyright.gov/fair-use/summaries/authorsguild-google-2dcir2015.pdf

Bender, S. M. (2023). Coexistence and creativity: screen media education in the age of artificial intelligence content generators. Media Practice and Education, 24(4), 351–366. https://doi.org/10.1080/25741136.2023.2204203

Burk, D. L. (2023). Cheap creativity and what it will do. Georgia Law Review. https://doi.org/10.2139/ssrn.4397423

Capstick, F. (n.d.). What are the ten commandments of computer ethics? Parker Shaw. https://parkershaw.co.uk/blog/what-are-computer-ethics#:~:text=The%20idea%20was%20originally%20put,the%20Ten%20Commandments%20of%20behaviour

Comer, A. C. (2021). AI: Artificial inventor or the real deal? North Carolina Journal of Law and Technology, 22(3), 447–486. https://scholarship.law.unc.edu/ncjolt/vol22/iss3/4

Koblin, J., & Barnes, B. (2023, March 21). What’s the latest on the writers’ strike? The New York Times. https://www.nytimes.com/article/wga-writers-strike-hollywood.html

Picht, P. G., & Thouvenin, F. (2023). AI and IP: Theory to policy and back again — policy and research recommendations at the intersection of artificial intelligence and intellectual property. IIC — International Review of Intellectual Property and Competition Law, 54(6), 916–940. https://doi.org/10.1007/s40319-023-01344-5

Shumakova, N. I., Lloyd, J. J., & Titova, E. V. (2023). Towards legal regulations of generative AI in the creative industry. Journal of Digital Technologies and Law, 1(4), 880–908. https://doi.org/10.21202/jdtl.2023.38

Thompson, S. A. (2024, January 19). Test yourself: Which faces were made by A.I.?. The New York Times. https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

Yadav, N. (2023). Ethics of artificial intelligence and robotics: key issues and modern ways to solve them. Journal of Digital Technologies and Law, 1(4), 955–972. https://doi.org/10.21202/jdtl.2023.41

Zirpoli, C. T. (2023, September 29). Generative artificial intelligence and copyright law. CRS Reports for Congress. https://crsreports.congress.gov/product/pdf/LSB/LSB10922
