The Legal Battle Intensifies: Growing Artist Support and Mounting Evidence Against AI Art Generators



As the first year of the generative AI era concludes, the question of whether training generative AI models on vast amounts of human-created content and data, often scraped from the internet without creators’ consent, constitutes copyright infringement remains unresolved.

However, there’s been a significant development in a leading lawsuit involving human artists against companies responsible for AI image and video generators like Midjourney, DeviantArt, Runway, and Stability AI, the latter of which developed the Stable Diffusion model used in many AI art generation apps.

Initially, the artists’ case faced a setback. In October, U.S. District Court Judge William H. Orrick, from the Northern District of California, dismissed much of the initial class-action lawsuit filed by artists Sarah Anderson, Kelly McKernan, and Karla Ortiz against these AI companies. Orrick’s rationale was that many cited artworks hadn’t been registered for copyright by the artists. Nonetheless, he allowed the plaintiffs to file an amended complaint.

In this new complaint, seven additional artists have joined the original plaintiffs: Hawke Southworth, Grzegorz Rutkowski, Gregory Manchess, Gerald Brom, Jingna Zhang, Julia Kaye, and Adam Ellis. Rutkowski, a Polish artist known for his work in video games and other media, had previously raised concerns about AI apps replicating his distinctive style without his permission or compensation.

New evidence and arguments have bolstered the amended complaint. For example, even non-copyrighted works may be protected if they feature the artists’ unique identifiers, such as signatures. The complaint also highlights that AI companies using datasets like LAION-400M and LAION-5B, which include copyrighted works, would have had to download actual images to train their models, thereby making “unauthorized copies.”

The technology behind diffusion models is also under scrutiny. These models are trained by adding visual “noise” to training images and then learning to reverse that process, reconstructing the original images as closely as possible. The complaint argues that this training objective shows the models’ primary aim is to recreate the training images with high accuracy.
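To make the training objective concrete, here is a minimal sketch of the forward “noising” step in a DDPM-style diffusion model (the noise schedule values are illustrative, and the neural network that predicts the noise is omitted). The key point, as the complaint frames it, is that a model scored on denoising is scored on how faithfully the training image can be recovered:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": a flat array of pixel values in [0, 1].
image = rng.random(16)

# Linear noise schedule (illustrative values, not tuned).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

def add_noise(x0, t, noise):
    """Forward diffusion: blend the original image with Gaussian noise.
    At large t the result is dominated by noise."""
    a = np.sqrt(alphas_cumprod[t])
    b = np.sqrt(1.0 - alphas_cumprod[t])
    return a * x0 + b * noise

# Training pairs a noisy image with the exact noise that produced it;
# the model is scored on predicting that noise.
noise = rng.standard_normal(16)
noisy = add_noise(image, t=99, noise=noise)

def training_loss(predicted_noise, true_noise):
    """Mean-squared error on the noise prediction."""
    return float(np.mean((predicted_noise - true_noise) ** 2))

# A model that predicts the noise perfectly recovers the training
# image exactly, since the forward step is invertible given the noise:
a = np.sqrt(alphas_cumprod[99])
b = np.sqrt(1.0 - alphas_cumprod[99])
reconstructed = (noisy - b * noise) / a
```

Running this, `reconstructed` matches `image` to floating-point precision, which illustrates the plaintiffs’ framing: the loss is minimized exactly when the training data can be reproduced. Whether a trained model generalizes rather than memorizes is a separate, empirical question, which is what the research cited below examines.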

Nicholas Carlini, a research scientist at Google DeepMind, along with researchers from MIT, Harvard, and Brown, has questioned whether large-scale models create genuinely new output or merely copy their training examples. The answer to that question could prove crucial to the case’s outcome.

It is evident that AI art generators can mimic existing artwork, though the results depend heavily on user input. While these tools can produce new images, they were trained on human-made artworks, including some that are likely copyrighted. Whether that practice qualifies as fair use or constitutes copyright infringement will ultimately be decided in court.