AUTHORS VS ARTIFICIAL INTELLIGENCE: WHO WILL WIN?

Digital & Analogue Partners
Coinmonks

--

OpenAI has been sued by The New York Times. This article delves into the depths of this legal confrontation, exploring the legitimacy of The New York Times’s claim, the potential defences for OpenAI, and the broader implications for AI development and copyright law.

Source: © The New York Times © 2024 Digital & Analogue Partners

In 2023, significant advancements in Artificial Intelligence (AI) were evident, particularly in the field of chatbots, which marked a pivotal moment in the evolution of generative AI technology. Google introduced its Bard AI, an innovative chatbot utilising the company’s extensive suite of tools. Bard AI, powered by LaMDA (Language Model for Dialogue Applications), was first made available to a select group of testers before its wider release, signifying Google’s commitment to advancing conversational AI.

Similarly, OpenAI made remarkable progress with the debut of GPT-4, a multimodal large language model that set new benchmarks in processing and generating text, images, and voice data. This advancement underlined the versatility and robustness of GPT-4 in handling various forms of data input and analysis. Complementing GPT-4, OpenAI also released DALL-E 3, an advanced text-to-image generator seamlessly integrated with ChatGPT. This integration established new standards in image generation technology, demonstrating the capability of AI to create detailed and contextually accurate visual content from textual descriptions.

Meta, stepping into the chatbot domain, introduced its AI chatbot assistant, designed for integration with platforms like WhatsApp, Instagram, and Messenger. This AI chatbot stands out for its capability to perform real-time internet searches using Microsoft’s Bing, showcasing a new level of interactivity and information accessibility within chat platforms. This integration highlights the evolving landscape of AI chatbots, where real-time data retrieval and user interaction converge to provide enhanced user experiences.

These developments in AI and chatbot technology in 2023 represent technological milestones and signify a broader shift in how AI is integrated into everyday tools and platforms, shaping the future of human-computer interaction and information accessibility.

The New York Times (NYT) lawsuit against OpenAI and Microsoft centres on allegations of copyright infringement, underscoring the complex legal landscape surrounding AI and intellectual property. The NYT claims OpenAI used its articles to train ChatGPT without authorisation, potentially impacting its web traffic and revenue. OpenAI counters that their use of publicly available data, including the NYT articles, falls under fair use, essential for innovation and competitiveness.

This case reflects a broader legal trend where AI companies face challenges over using internet content for AI system development. The outcome could significantly influence how AI entities utilise copyrighted materials and balance innovation with intellectual property rights.

IS OPENAI ECHOING SPOTIFY’S PATH IN PUBLISHER DYNAMICS?

OpenAI’s emergence in the digital information sphere is akin to the transformation Spotify instigated in the music industry. Before Spotify, music studios and record labels maintained stringent control over music access, similar to the current grip of publishing houses on text and novel copyrights. Spotify disrupted this paradigm by offering a legal and transparent solution for music accessibility, challenging the longstanding dominion of traditional gatekeepers.

Similarly, OpenAI, mainly through ChatGPT and other Large Language Models (LLMs), heralds a new era in information accessibility. This shift echoes the impact of the internet, which marked the end of the record labels’ exclusive hold over music distribution. Today’s users are inclined towards more straightforward and more convenient access to knowledge, a need that ChatGPT and LLMs cater to effectively. The scenario posits a significant decision for publishing houses: either to negotiate with OpenAI, which proposes a legal and transparent method for accessing archives to train LLMs, or to risk losing their influence to alternative platforms that might not engage in dialogue.

This movement towards decentralised solutions underscores a pivotal ‘adapt or perish’ landscape. Like the music industry before it, the publishing sector is compelled to evolve and re-evaluate its position as the sole custodian of intellectual property. Despite facing legal challenges, as exemplified by the lawsuit with the NYT, OpenAI represents a stride towards more accessible, user-focused models of information distribution, prompting traditional copyright holders to reconsider their approaches amidst the rapidly evolving digital environment.

A crucial aspect of OpenAI’s strategy mirrors Spotify’s content licensing. OpenAI has actively engaged with several publishers to license content, ensuring their AI models are fed with updated and accurate data. This point is particularly significant in the face of growing scrutiny over data sourcing and ethical AI development. A notable achievement is OpenAI’s multiyear licensing deal with Axel Springer SE, the parent company of Politico, “valued at tens of millions of dollars”. Another significant agreement was reached with The Associated Press, though the details remain undisclosed. These partnerships are critical for OpenAI’s commitment to providing well-informed, reliable AI interactions. However, as we already know, not all negotiations have succeeded, and the parley with the NYT led to legal action.


NEW YORK TIMES IS NOT ALONE AGAINST AI DEVELOPERS

The legal battle between The New York Times and OpenAI is not the first to illustrate the complex interplay between AI technology and copyright law. Three cases explore the legal boundaries of using copyrighted material to train AI models: Andersen v. Stability AI Ltd., 23-cv-00201-WHO (N.D. Cal. Oct. 30, 2023); Kadrey v. Meta Platforms, Inc., No. 23-cv-03417-VC (N.D. Cal. Nov. 20, 2023), now consolidated with Chabon v. Meta Platforms Inc., 3:23-cv-04663 (N.D. Cal.); and Thomson Reuters Enterprise Centre GmbH et al v. ROSS Intelligence Inc., No. 1:20-cv-00613 (D. Del. 2022). It is essential to note, however, that no final decisions have yet been issued in these cases.

Andersen et al. v. Stability AI Ltd. began in January 2023, with artists suing Stability AI, Midjourney, and DeviantArt over their AI-powered image generation tools. The artists claimed their copyrighted images were used to train these AI models without consent. Most plaintiffs’ claims were dismissed except for Sarah Andersen’s direct copyright infringement claim against Stability AI. The case underlined the necessity for specific allegations about how AI outputs incorporate copyrighted materials.

Filed in July 2023, a class action lawsuit in the Kadrey v. Meta Platforms case involves Meta’s LLaMA large language models, alleging that they were trained on copyrighted books. The court dismissed most claims, except for the potential direct copyright infringement due to the unauthorised use of books for training. The plaintiffs’ argument that the AI models and their outputs were derivative works was rejected as “nonsensical”. An amended complaint focuses on direct copyright infringement.

Dating back to May 2020, the Thomson Reuters case focuses on ROSS Intelligence’s alleged use of copyrighted headnotes from Thomson Reuters’ Westlaw database for training an AI legal research tool. The court’s September 2023 denial of summary judgment emphasised the unresolved nature of crucial facts, particularly regarding the transformative use of copied material in AI training. The ruling brings to the forefront the debate on whether AI’s use of copyrighted material for training falls under transformative use, a critical aspect of fair use. The jury trial is expected in 2024.

The initial rulings in all three cases, being in the early stages, have left many crucial aspects of AI-related copyright law unresolved. In the Andersen and Kadrey cases, while the court dismissed most claims except for direct copyright infringement, these rulings hint at the challenges plaintiffs may face in proving that AI models or their outputs are infringing works. Success in these claims appears contingent on demonstrating that the AI-generated content contains portions of the copyrighted material or is substantially similar. In contrast, the Thomson Reuters case highlights the difficulties in resolving fair use issues in AI-related cases at the summary judgment stage, especially when there’s contention over the functioning of the AI tools in question.


In its public response to the NYT, OpenAI stated:

“Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents.”

But what is fair use?

“For nearly three hundred years, since shortly after the birth of copyright in England in 1710, courts have recognised that, in certain circumstances, giving authors absolute control over all copying from their works would tend in some circumstances to limit, rather than expand, public knowledge. … Courts thus developed the doctrine, eventually named fair use, which permits unauthorised copying in some circumstances, to further ‘copyright’s very purpose, “to promote the Progress of Science and useful Arts.”’”

The fair use doctrine in US copyright law, pivotal for balancing copyright owners’ rights with public access to copyrighted materials, is applied flexibly. It evaluates various factors like the user’s purpose, the nature of the copyrighted work, the extent of the used portion, and the market impact. This doctrine isn’t rigidly applied but is assessed case-by-case, reflecting each unique situation. The Act’s fair use provision, in turn, “set[s] forth general principles, the application of which requires judicial balancing, depending upon relevant circumstances.”

Section 107 of the Copyright Act sets out four main criteria for determining whether the use of copyrighted material is fair:

  • the purpose and character of the use, including whether such use is commercial or is for nonprofit educational purposes;
  • the nature of the copyrighted work;
  • the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  • the effect of the use upon the potential market for or value of the copyrighted work.

Below, we break down each criterion of the fair use doctrine and analyse how OpenAI’s stance may or may not align with its principles. Throughout this analysis, it is crucial to remember that copyright law protects how an author expresses their ideas rather than the ideas themselves: “[C]opyright does not protect ideas, but only ‘the original or unique way that an author expresses those ideas, concepts, principles, or processes.’”

(1) the purpose and character of the use

This first factor has two subparts: commerciality and transformativeness. “Commercial use weighs against finding fair use, while transformative use weighs in favour. The more transformative the new work, the less will be the significance of commercialism.”

OpenAI’s use of materials in its AI models raises questions in this context. Originally a non-profit, OpenAI has evolved into a significant for-profit entity. The New York Times, in its lawsuit, highlights this transformation, noting that OpenAI is now valued at up to $90 billion, with projected revenues exceeding $1 billion in 2024. This shift raises questions about the commercial nature of OpenAI’s activities.

Reflecting on the evolution from ChatGPT-3.5, offered at no cost, to ChatGPT-4, which comes with additional features for a monthly subscription, one observes a clear shift towards a revenue-generating model. This transition implies a commercial strategy. OpenAI leverages the experiences and data acquired from the freely available version to refine and improve the subscription-based version, which is further extended to business customers. This technological shift raises another critical question: Can OpenAI still claim its primary use of materials is for non-profit educational purposes?

The Supreme Court, in the Campbell case, recognised that non-profit educational use could favour a finding of fair use. This legal approach is particularly relevant in education, where the fair use doctrine is vital. In the Cambridge Univ. Press v. Patton case, the Eleventh Circuit considered the fair use of academic textbook excerpts by Georgia State University (GSU). Although not transformative, the court found that GSU’s use was for non-profit, educational purposes, aligning with section 107’s fair use exception.

However, when educational use intersects with commercial activity, as in the Princeton University Press and Basic Books cases, the balance may tip against fair use. These cases involved for-profit copy shops making non-transformative educational copies, highlighting the role of commercial intent in fair use determinations.

The role of copyright in promoting learning is fundamental, allowing some leeway for educational fair use. This principle raises a compelling question: If humans can use copyrighted material for educational purposes, why can’t AI? The legal landscape is still evolving in this regard. Courts are just beginning to address such cases, and legislative action, such as the proposed ‘AI Foundation Model Transparency Act of 2023’, is underway to provide clarity. This act, if passed, would require AI model creators to disclose training data sources, aiding copyright holders in understanding how their information is used.

Therefore, while OpenAI’s origins and specific applications may align with non-profit educational purposes, the commercial aspects could potentially weigh against a finding of fair use.

Central to OpenAI’s defence is the transformative nature of its use, where existing content is repurposed to create new, AI-generated material, marking a significant shift from the original purpose of the materials. Yet, while evaluating transformativeness, it is necessary to consider several aspects.

Firstly, the degree of transformativeness. In the Andy Warhol case, the Supreme Court stated, “The first factor asks “whether and to what extent” the use at issue has a purpose or character different from the original. The larger the difference, the more likely the first factor weighs in favour of fair use. A use with a further purpose or different character is said to be “transformative,” but that is also a matter of degree. To preserve the copyright owner’s right to prepare derivative works, defined in §101 of the Copyright Act to include “any other form in which a work may be recast, transformed, or adapted,” the degree of transformation required to make “transformative” use of an original work must go beyond that required to qualify as a derivative”.

Based on the degree of transformativeness, OpenAI’s use of materials may weigh in favour of fair use, as its chatbot generates an entirely new text. However, the analysis of transformativeness does not end here.

Secondly, it is crucial to analyse how OpenAI used copyrighted material and whether it would qualify as “intermediate copying”.

Intermediate copying was considered in Sony Computer Entertainment Inc. v. Connectix Corp., 203 F.3d 596 (9th Cir. 2000), where “the defendant used a copy of Sony’s software to reverse engineer it and create a new gaming platform on which users could play games designed for Sony’s gaming system. The court concluded that this was fair use for two reasons: the defendant created “a wholly new product, notwithstanding the similarity of uses and functions” between it and Sony’s system, and the “final product [did] not itself contain infringing material.” The Supreme Court has cited these intermediate copying cases favourably, particularly in the context of “adapt[ing] the doctrine of fair use … in light of rapid technological change.”

Further, Judge Bibas, in the Thomson Reuters case, pointed out that intermediate copying depends on the precise nature of someone’s actions. “So, whether the intermediate copying case law tells us that Ross’s use was transformative depends on the precise nature of Ross’s actions. It was transformative intermediate copying if Ross’s AI only studied the language patterns in the headnotes to learn how to produce judicial opinion quotes. But if Thomson Reuters is right that Ross used the untransformed text of headnotes to get its AI to replicate and reproduce the creative drafting done by Westlaw’s attorney-editors, then Ross’s comparisons to cases like Sony are not apt”.

Unfortunately, we do not know how OpenAI used the NYT’s materials to teach ChatGPT.

As ChatGPT stated, “OpenAI utilises a wide range of materials to train ChatGPT through a comprehensive process. This process begins with the collection of extensive datasets from publicly available texts, which may include news articles, books, websites, and other publications. Following data collection, there’s a phase of filtering and preprocessing to ensure the suitability of the content, which involves removing inappropriate material, duplicates, and irrelevant information. The model training phase then uses these refined datasets, employing advanced machine learning algorithms to analyse the text. This analysis helps the model learn language patterns, context, and structure, enabling it to predict subsequent words in a sentence and generate coherent, contextually relevant responses.”
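The collect-filter-deduplicate-train pipeline described in that answer can be sketched in miniature. The Python below is a purely illustrative toy that assumes nothing about OpenAI’s actual tooling; production pipelines rely on fuzzy deduplication (e.g. MinHash) and learned quality classifiers rather than the exact-hash and keyword checks shown here.

```python
import hashlib


def preprocess_corpus(documents, banned_terms=("spam",)):
    """Filter and deduplicate raw text documents before model training.

    A toy sketch of the 'filtering and preprocessing' stage: drop empty
    documents, exact duplicates, and documents containing banned terms.
    """
    seen_hashes = set()
    cleaned = []
    for doc in documents:
        text = doc.strip()
        if not text:
            continue  # drop empty documents
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen_hashes:
            continue  # drop exact duplicates
        if any(term in text.lower() for term in banned_terms):
            continue  # drop documents flagged as unsuitable
        seen_hashes.add(digest)
        cleaned.append(text)
    return cleaned
```

The legal significance of this stage is that the copyrighted text itself, not merely facts extracted from it, passes through the pipeline into training, which is precisely what the transformativeness inquiry probes.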

If this description is accurate and ChatGPT only studied language patterns to learn how to formulate answers, OpenAI would have a chance to turn the transformativeness criterion in its favour; if, however, it is established that ChatGPT replicated and reproduced the NYT’s materials, the NYT would have the better standing.

With regard to transformativeness, the court may rule in favour of OpenAI if it is proved that the degree of transformation goes beyond that required to qualify as a derivative work and that OpenAI’s use of the copyrighted material did not end in replicating and reproducing the NYT’s materials.


(2) the nature of the copyrighted work

The second criterion makes courts consider the nature of the copyrighted work, “including (1) whether it is ‘expressive or creative . . . or more factual’, with a greater leeway being allowed to a claim of fair use where the work is factual or informational, and (2) whether the work is published or unpublished, with the scope of fair use involving unpublished works being considerably narrower.”

In the Harper & Row case, the Supreme Court stated that “[t]he law generally recognises a greater need to disseminate factual works than works of fiction or fantasy.” However, it does not mean the infringer gets a free pass if factual works are copied.

“While the copyright does not protect facts or ideas set forth in a work, it does protect that author’s manner of expressing those facts and ideas. At least unless a persuasive fair use justification is involved, authors of factual works, like authors of fiction, should be entitled to copyright protection of their protected expression. The mere fact that the original is a factual work, therefore, should not imply that others may freely copy it.”

Therefore, the nature of the NYT materials plays a crucial role. It is difficult to claim that the NYT articles are mere informational materials (like headnotes in the Thomson Reuters case). In journalism, the role of creativity cannot be overstated. When journalists work on news stories about widely known information, they are not merely transmitting facts. Instead, they are tasked with adding layers of context, such as historical background, connections to other events, and expert opinions. This enrichment process transforms basic information into a nuanced and informative piece that offers value beyond merely presenting facts. The journalist’s perspective is also a critical element. Their background, the publication’s focus, and their interpretation can significantly influence how the information is presented. This aspect underscores the individuality of each article, even when different journalists cover the same event. The resulting narratives are distinct, moulded by the journalist’s unique viewpoint and approach.

Therefore, it is highly likely that the court will find that the NYT materials are creative, and this fact may weigh in the NYT’s favour, as courts tend to give greater protection to creative works. Conversely, the NYT materials are published; hence, OpenAI may have a stronger standing than it would against unpublished works. “The scope of fair use is narrower for unpublished works because an author has the right to control the first public appearance of his or her expression”.

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole

The third factor considers “the amount and substantiality of the portion used in relation to the copyrighted work as a whole.” While analysing “amount” and “substantiality”, the court will assess the quantity and quality of copyrighted material used. “In assessing this factor, [the court] consider[s] not only ‘the quantity of the materials used’ but also ‘their quality and importance’” in relation to the original work. The ultimate question under this factor is whether “the quantity and value of the materials used are reasonable in relation to the purpose of the copying.”

It stands to reason that utilising a smaller amount of copyrighted material increases the likelihood of the fair use doctrine being applicable. “A finding of fair use is more likely when small amounts, or less important passages, are copied than when the copying is extensive or encompasses the most important parts of the original.”

However, as stated in the Authors Guild v. Google case, “notwithstanding the reasonable implication that fair use is more likely to be favoured by the copying of smaller, rather than larger, portions of the original, courts have rejected any categorical rule that a copying of the entirety cannot be a fair use”. Thus, even if the whole work is copied, the use can still meet the third criterion of the fair use doctrine.

Moreover, if the usage of copyrighted materials positively influences the transformative nature of the outcome, this too can contribute favourably towards a determination of fair use. In Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994), the Supreme Court concluded that “the extent of permissible copying varies with the purpose and character of the use” and characterised the relevant questions as whether “the amount and substantiality of the portion used . . . are reasonable in relation to the purpose of the copying”. “Complete unchanged copying has repeatedly been found justified as fair use when the copying was reasonably appropriate to achieve the copier’s transformative purpose and was done in such a manner that it did not offer a competing substitute for the original.”

In Authors Guild, Inc. v. HathiTrust, 755 F.3d 87 (2d Cir. 2014), the court concluded that “because it was reasonably necessary for the [HathiTrust Digital Library] to make use of the entirety of the works in order to enable the full-text search function, we do not believe the copying was excessive”.

A similar position is seen in the Authors Guild v. Google case: “As with HathiTrust, not only is the copying of the totality of the original reasonably appropriate to Google’s transformative purpose, it is literally necessary to achieve that purpose. If Google copied less than the totality of the originals, its search function could not advise searchers reliably whether their searched term appears in a book (or how many times).”

In the Thomson Reuters case, “the parties [fought] over whether the use was “tethered to a valid … purpose.” Westlaw sa[id] Ross copied far more than it needed. Ross said [that] it needed a vast, diverse set of materials to train its AI effectively. Though Ross need not prove that each headnote was strictly necessary, it must show that the scale of copying (if any) was practically necessary and furthered its transformative goals.”

Concerning substantiality, transformativeness again plays a significant role. “The ‘substantiality’ factor will generally weigh in favour of fair use where … the amount of copying was tethered to a valid, and transformative, purpose.” “It cannot be said that a revelation is ‘substantial’ in the sense intended by the statute’s third factor if the revelation is in a form that communicates little of the sense of the original.”

Therefore, even if OpenAI utilised all of the NYT’s copyrighted materials, its actions might still meet this criterion if it can demonstrate that the usage was transformative, that is, that using a significant portion, if not all, of the NYT articles was essential to its goal of effectively training its chatbot.

(4) the effect of the use upon the potential market for or value of the copyrighted work

The fourth factor in assessing fair use concerns the “meaningful or significant effect” on the value of the original or its potential market. This assessment hinges on whether the secondary work usurps the market of the original by being a competing substitute rather than on whether it directly damages the market for the original. The Thomson Reuters case illustrates this: despite Ross and Thomson Reuters competing in the legal research market, it was crucial to determine whether Ross’s AI product was a substitute for Westlaw or a transformative, new research platform serving a different purpose. It highlighted that a transformative use, which creates a new product serving a different purpose, is less likely to be a market substitute.

Applying this to OpenAI’s use of materials in ChatGPT, it’s worth considering if ChatGPT acts as a substitute for the NYT. The extent of copying plays a significant role here; extensive copying increases the likelihood that the secondary work could compete with the original, potentially diminishing the rights holder’s sales and profits. The critical aspect is the purpose of the copying and its potential to serve as a market substitute. If the copying, despite being extensive, is for a purpose differing from the original, the likelihood of it being a satisfactory substitute diminishes.

Through its Bing integration, ChatGPT curates recent news, placing it in competition with traditional publishers like the NYT in the news distribution market. Previously, news aggregators challenged publishers by compiling headlines, diverting essential traffic and impacting revenue. The NYT alleges that OpenAI’s use of its articles has decreased its subscriptions and revenues. If OpenAI’s usage reduces the NYT’s income, it could count against a fair use finding, since this factor focuses on whether the secondary work competes in the market and thereby lowers the original creator’s earnings.

Globally, responses to similar challenges in digital news dissemination vary. In the EU, for example, the Directive on Copyright in the Digital Single Market created a new right for press publishers to demand fees from aggregators. In contrast, the Australian government chose to address the balance of interests through its Digital Platforms Inquiry and the News Media and Digital Platforms Mandatory Bargaining Code rather than through copyright law. Additionally, the Online News Act was enacted in Canada at the end of 2023. This legislation ensures that major digital platforms compensate news publishers fairly for their content.

ChatGPT accesses news from various internet sources via Bing, adhering to the permissions set by publishers, such as The New York Times, which has opted out, thus limiting ChatGPT’s access to its content. As a result, ChatGPT’s functionality through Bing does not inherently involve accessing or replicating content from publishers with restricted access. ChatGPT typically offers transformed text when displaying news, serving purposes different from the original articles, primarily for responding to specific queries rather than aggregating news. Considering these points, ChatGPT, like Ross’s AI in the Thomson Reuters case, can be seen as a transformative and distinct platform, not a direct market substitute for the NYT. This is reinforced by the fact that the NYT has chosen to opt out of Bing’s access to its content. Thus, it can be concluded that OpenAI’s ChatGPT is unlikely to be considered a market substitute for the NYT.

Finally, the public benefit is also considered in analysing this criterion. “[The court] must take into account the public benefits the copying will likely produce.” “The public benefit need not be direct or tangible but may arise because the challenged use serves a public interest”. Additionally, “[the court is] free to consider the public benefit resulting from a particular use notwithstanding the fact that the alleged infringer may gain commercially.” Nevertheless, the “analysis of this factor requires us to balance the benefit the public will derive if the use is permitted and the personal gain the copyright owner will receive if the use is denied.”

The widespread use of AI models, as evidenced by the 180.5 million users of ChatGPT, suggests a strong public interest in this technology. While the broader implications of AI on humanity pose certain risks, deploying AI in chatbots is generally viewed as less worrisome, primarily because chatbots are designed for specific, beneficial tasks such as answering questions and gathering information. Therefore, as the NYT, in its lawsuit, characterises itself as “a trusted source of quality, independent journalism whose mission is to seek the truth and help people understand the world”, it is possible to conclude that it is beneficial to use “a trusted source … the quality of whose coverage has been widely recognized, including 135 Pulitzer Prizes (nearly twice as many as any other organisation)” to train AI chatbots. Also, the NYT’s argument that OpenAI benefits financially by using its material while training its chatbot may not hold if the public benefit is established.

(5) conclusion on the fair use doctrine

In conclusion, examining OpenAI’s use of the NYT content under fair use is complex. OpenAI’s commercial operations may not favour fair use, while its AI’s transformative use may challenge and support fair use claims. The creative nature of the NYT’s content could favour them, yet the fact that these works are published might benefit OpenAI. The usage scale and its transformative intent are fundamental; extensive use could be permissible if transformative. The impact on the NYT’s market remains uncertain; however, ChatGPT’s innovative nature and public advantages could lean towards a fair use argument.


WILL ALL CHATBOTS BE DELETED?

Another noteworthy point in the lawsuit between The New York Times and OpenAI is the Times’ demand to destroy all chatbots trained with their content and seek billions in damages. This request is an example of ‘specific performance,’ a form of equitable relief found in common law. Equity law, deeply rooted in the English legal system, was developed to remedy the common law’s inflexibility and occasionally harsh outcomes. It introduced a more adaptable and equitable approach, concentrating on fairness and justice, particularly in cases where financial reparations are inadequate or inapplicable. Although the equity and common law systems were merged by the Judicature Acts of the 1870s, the principles of equity remain vital in crafting legal solutions and guiding judicial rulings, including those that address new technologies. The significant advantage of English common law is that it allows applying time-tested legal concepts to contemporary issues in the digital space.

Specific performance as a legal concept allows courts to compel a party to perform a specific action or to refrain from doing so when monetary compensation doesn’t sufficiently redress the harm. In English law, this remedy is used sparingly, often in unique cases where the obligations are inherently linked to the debtor’s personality, and no other individual can fulfil them. Nevertheless, replacing monetary damages with specific performance requires strong evidence that a fair and just resolution is unattainable through financial means alone. Given the stringent nature of these requirements, it seems improbable that a court would grant the Times’ request to dismantle the chatbots.

CONCLUSION

The lawsuit between The New York Times and OpenAI highlights the complex interplay of copyright law with AI technology, underscoring the significant advancements AI brings to information accessibility and interaction. Yet, this progress sparks off legal debates over using copyrighted content in AI training, a dilemma epitomised by the NYT’s legal action against OpenAI.

This legal landscape reflects disputes similar to those seen in the music industry with Spotify’s challenge to traditional content access. OpenAI has sought licensing agreements with publishers to inform its AI ethically, yet not all talks, including those with the NYT, have succeeded. The NYT’s lawsuit, alongside others like Andersen v. Stability AI and Kadrey v. Meta Platforms, focuses on using copyrighted content for AI training. These early-stage cases underscore the unresolved complexities of AI and copyright law, emphasising the difficulty in establishing fair use and copyright infringement by AI-generated content.

At the heart of OpenAI’s defence, the fair use doctrine embodies the delicate dance between innovation and copyright protection. As we edge into 2024–2025, the unfolding legal narratives promise a pivotal shift in AI governance, spotlighting the urgent call for guidelines that champion both creative ingenuity and copyright holders’ interests. This era beckons a significant transformation in the digital realm, redefining AI’s role and addressing the intricate copyright questions it conjures.

Liza Lobuteva
Yuriy Brisov
Alexandra Zviagintseva

This article was written by Liza Lobuteva, Yuriy Brisov & Alexandra Zviagintseva of Digital & Analogue Partners. Visit dna.partners to learn more about our team and services.

Be digital, be analogue, be with us!


D&A provides legal, economic, and strategic consulting services.