On July 31, 2024, the U.S. Copyright Office issued the first of several planned reports on the intersection of copyright and generative artificial intelligence (“AI”), following a lengthy public comment period—discussed previously on this blog—during which the Office received more than 10,300 comments. This first report, titled “Part 1: Digital Replicas” (the “Report”), focuses on digital replicas.
The Copyright Office defines a “digital replica”—sometimes termed a “deep fake”—as “a video, image, or audio recording that has been digitally created or manipulated to realistically but falsely depict an individual,” whether authorized or unauthorized and AI-generated or not.[1]
The Report flags potential harms from digital replicas. In the creative realm, digital replicas pose an obvious threat of replacement to human creatives—for example, the use of AI to generate sounds or images instead of employing singers or voice actors on sound recordings, or background extras in movies and other video presentations.[2] But the Report also flags three areas of concern for the general public: (1) creation of explicit images using digital replicas; (2) use of digital replicas to perpetrate fraud (for example, replicas of a loved one or an attorney, or replicas of celebrities used to endorse products); and (3) use of digital replicas to disseminate misinformation and undermine the political system.[3]
The Report describes existing legal frameworks at the state and federal level that address threats posed by digital replicas. At the state level, those include the longstanding common law rights to privacy and publicity,[4] and some states have also enacted or modified statutes to codify publicity rights.[5] Louisiana and New York have both passed laws directly addressing the use of digital replicas.[6] Federal regulations also govern different aspects of digital replicas,[7] and there are private agreements—such as the recent SAG-AFTRA agreement—that provide bargained-for protections to some groups.[8]
Ultimately, however, the Report concludes that existing legal frameworks are inadequate and that “new federal legislation is urgently needed” to address shortcomings in current state and federal law in responding to the new threats posed by digital replicas.[9] Noting two existing congressional proposals—the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (“No AI FRAUD”) Act, H.R. 6943, 118th Cong. (2024), and the discussion draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe (“NO FAKES”) Act of 2023[10]—the Report calls for a new, more comprehensive statute, and outlines the following critical elements, informed by congressional hearings and the comments the Copyright Office received in response to its August 2023 request for public comment on generative AI:
- Definition of a “Digital Replica”—target replicas that “are difficult to distinguish from reality,” not those that “merely evoke an individual.”[11]
- Persons Protected—protect all individuals, “consistent with the common law right of privacy, which typically requires neither fame nor commercial value.”[12]
- Term of Protection—“prioritize the protection of living persons”; any postmortem protection should be limited to “an initial term shorter than twenty years, perhaps with the option of extending it if the persona continues to be commercially exploited,” because a long-term or perpetual right could burden free expression and raise practical challenges.[13]
- Infringing Acts—“proscribe activities that involve dissemination to the public”; to the extent personal rights are covered, there should be a “defense for legitimate and reasonable private uses”[14]; liability should not be limited to commercial uses;[15] there should be an actual knowledge standard;[16] and secondary liability could be modeled on existing copyright law.[17]
- Licensing and Assignment—“individuals [should] be able to license their images and voices for use in digital replicas but not to fully assign all rights”; to avoid abuse, there should be a “ban on outright assignments” and guardrails for licensing, such as “limitations in durations and protection for minors.”[18]
- First Amendment Concerns—provide a “balancing framework” to allow courts to “assess the full range of factors relevant to the First Amendment analysis,” including: “the purpose of the use, including whether it is commercial; its expressive or political nature; the relevance of the digital replica to the purpose of the use; whether the use is intentionally deceptive; whether the replica was labeled; the extent of the harm caused; and the good faith of the user.”[19]
- Remedies—offer “effective remedies,” including “special damages enabling recovery by those who may not be able to show economic harm or afford the cost of an attorney,” such as, where appropriate, statutory damages or an award of the claimant’s attorneys’ fees.[20]
- Preemption—federal legislation should not preempt state law, instead filling gaps where appropriate while leaving states free to maintain their own protections.[21]
The Report considers the relationship between Section 114(b) of the Copyright Act, which “clarif[ies] that ‘mere imitation’ of a copyrighted sound recording does not constitute infringement,” and “state law protections against unauthorized digital replicas of voices in sound recordings.”[22] The Report observes that they serve different purposes, and “nothing indicates that Congress intended for [Section 114(b)] to deprive individuals of rights in their unique voices.”[23]
Finally, the Report notes that, in response to the Copyright Office’s request for comments, many artists expressed concerns about AI’s replication of individual artistic styles.[24] On this point, the Report responds by noting the “several sources of protection under existing laws that may be effective against unfair or deceptive copying of artistic style,” including the Copyright Act, the Lanham Act and state right of publicity statutes or common law. It concludes that new legislation protecting artistic style as such is not warranted at this time.[25]
The Copyright Office intends to issue further guidance on other topics related to generative AI. The Report previews some of these areas: for example, potentially of concern to technology companies, it warns that “as future Parts of this Report will discuss, there may be situations where the use of an artist’s own works to train AI systems to produce material imitating their style can support an infringement claim.”[26] We will continue to monitor developments.
[1] Report at 2.
[2] Report at 3.
[3] Report at 4-6.
[4] See Report at 8-15.
[5] Report at 15.
[6] Report at 15-16.
[7] Report at 16-21.
[8] Report at 21-22.
[9] Report at 22.
[10] Report at 26-28. The discussion draft is available at https://www.coons.senate.gov/imo/media/doc/no_fakes_act_draft_text.pdf.
[11] Report at 29.
[12] Report at 30.
[13] Report at 32-33.
[14] Report at 33.
[15] Report at 34.
[16] Report at 35-36.
[17] Report at 36-39.
[18] Report at 41.
[19] Report at 46-47.
[20] Report at 47-48. The Copyright Office did not take a position on whether criminal penalties should be included in a new federal law on digital replicas or addressed separately in standalone criminal legislation. Report at 48.
[21] Report at 50.
[22] Report at 50-51.
[23] Report at 52.
[24] Report at 53.
[25] Report at 55-56.
[26] Report at 55.