A group of well-known American authors, including Pulitzer Prize winner Michael Chabon, has filed suit against OpenAI in federal court in San Francisco, alleging copyright infringement. Their claim centres on the accusation that OpenAI used their literary works to train its widely used AI chatbot, ChatGPT, without securing their consent. This is the third class-action lawsuit authors have brought against OpenAI over alleged copyright infringement.

The plaintiffs, among them playwright David Henry Hwang and authors Matthew Klam, Rachel Louise Snyder, and Ayelet Waldman, assert that OpenAI used their works to train ChatGPT to respond to human text prompts. They argue that their books are especially valuable as training data, serving as prime examples of high-quality, long-form prose. They also contend that ChatGPT can accurately distil their works and produce text that mimics their distinct writing styles.

Why does this matter?

The case highlights the ethical considerations surrounding AI development. Should AI models like ChatGPT be trained on copyrighted content without authorisation, and what are the implications for original creators?

The lawsuit seeks unspecified damages and a court injunction barring OpenAI from engaging in what the authors describe as ‘unlawful and unjust business practices.’ OpenAI has faced similar copyright claims over AI training before, with the central dispute being whether using publicly accessible internet content constitutes fair use.

The outcome of this lawsuit could set a legal precedent for future cases involving AI and copyright, potentially impacting how AI models are trained and the responsibilities of AI developers.
