Use of Generative AI output carries a high risk of intellectual property infringement liability, particularly with respect to copyright-protected works.

In our previous Generative Artificial Intelligence 101 blog post, we highlighted the various ways in which Generative AI is unreliable and still incapable of truly replicating human work. In today’s post, we discuss another unreliable aspect of Generative AI—its rampant intellectual property infringement.

Generative AI is remarkable because of its ability to create original output without human involvement. These models, however, are incapable of performing any kind of moral or ethical analysis of their output. Because no human is involved in the content generation process, no one assesses the legal risk of the output or identifies the potential liability it may create. Generative AI output is often riddled with intellectual property infringement, hate speech, defamatory statements, right to privacy violations, and grounds for a number of other potential legal claims. Because the technology is still in its infancy, governments are only beginning to enact policies regulating the use of Generative AI, and lawsuits asserting Generative AI-based claims are still making their way through the court system in the United States.

Potentially the greatest legal risk comes from the sheer amount of intellectual property infringement associated with both the input data and the generator’s output. Without regulation of how Generative AI models may be trained, or prohibitions against inputting data protected under copyright and trademark law, the majority of Generative AI programs have been trained on infringing input data, thereby creating potentially infringing output content.

Copyright-protected material is particularly susceptible to infringement. The nonconsensual use of a copyright-protected work is problematic even in a single instance, but inputting infringing materials also has a long-term impact: once copyrighted material is input into a Generative AI program, that material remains indefinitely available to the program to create future output content.

Under United States federal law, a new work based upon an existing copyright-protected work constitutes a derivative work, and the right to create derivative works is reserved exclusively to the copyright owner. (17 U.S.C. § 106.) The Copyright Act of 1976, however, codified the fair use doctrine, outlining instances in which copyrighted material can be used without permission or a license from the rights holder. Courts have subsequently found that the use of copyright-protected works to create “transformative works,” or innovations that are substantially different from the original work, constitutes fair use.[1]

Whether the use of copyrighted material to train Generative AI programs constitutes infringement through the creation of derivative works, or instead fair use through the creation of transformative works, remains unresolved. Even experts in the field of copyright law are divided on this question.[2]

The enactment of the Digital Millennium Copyright Act (DMCA)[3] in 1998 set forth a procedure by which copyright owners can enforce their rights against bad actors who infringe upon their works online. The DMCA, however, also created statutory protection from liability for third-party platforms whose users engage in infringing activity on those platforms. The purpose of the DMCA is to protect against the unauthorized dissemination of copyright-protected materials. The issue at hand, however, is that Generative AI does not merely disseminate copyrighted material through a website post. It “learns” the material and then, depending on the prompt, can unlawfully disseminate the original work, create a derivative work, or both. Yet the output does not create a website or posting that the rights holder can demand be taken down under the DMCA. (17 U.S.C. § 512.) How the DMCA’s takedown procedures and its safe-harbor protections apply to the use of copyrighted materials in Generative AI programs also remains unaddressed by U.S. federal courts, though lawsuits raising these issues are pending.

Up next, Generative AI is getting into other kinds of legal trouble, too—right to privacy violations, fraudulent explicit content, defamation, and even election interference. What are lawmakers in the U.S. doing to catch up and address Generative AI’s legal liabilities beyond intellectual property infringement?


[1] See 17 U.S.C. § 107; Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994).

[2] Alexandra Alter & Elizabeth A. Harris, “Franzen, Grisham and Other Prominent Authors Sue OpenAI,” New York Times (Sept. 20, 2023), https://www.nytimes.com/2023/09/20/books/authors-openai-lawsuit-chatgpt-copyright.html (last visited Jul. 9, 2024).

[3] The DMCA amended 17 U.S.C. §§ 101, 104, 104A, 108, 112, 114, 117, 701 and created 17 U.S.C. §§ 512, 1201–1205, 1301–1332; 28 U.S.C. § 4001.


