United States federal law still does not address use of Generative AI, though lawmakers have proposed legislation addressing various issues.

In our previous Generative Artificial Intelligence 101 blog post, we discussed intellectual property infringement as a legal risk of using Generative AI. Today, we jump into a discussion of the various other legal liabilities associated with Generative AI.

Not only is there no case law yet addressing the allegedly unlawful activity associated with Generative AI, but also the United States federal government has yet to implement any regulations on the development, training, or use of this technology. There is some proposed legislation, however, that would potentially create baseline federal causes of action on a number of Generative AI related issues.[1] The majority of these issues—violations of rights to privacy, creation of false depictions of sexually explicit content, defamation claims, and election interference—would not pose a risk to most corporations seeking to use Generative AI for business practices.

In response to concerns that certain inputs and uses of Generative AI violate individuals’ right to privacy, a bipartisan group of lawmakers introduced the “No A.I. FRAUD Act” (No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act), H.R. 6943, in January 2024.[2] The right to privacy is a difficult issue to address in the U.S., as the extent of one’s right, and the ability to protect it, is controlled by state statute or state common law. Currently there is no federal law that grants a right to privacy or publicity nationwide. The right to privacy can include an individual’s right to publicity and the right to both protect and monetize one’s name/image/likeness (“NIL”). State laws protecting the right to publicity can create a private right of action against others for the unauthorized commercial use of one’s NIL. The right to privacy, however, does not have to include rights to publicity or NIL. It can be much more limited and based only upon invasions of privacy.[3]

Related to the right to privacy is a concern regarding the use of “deepfakes” in many different contexts. Deepfakes are deceptive AI-generated or AI-altered content, usually featuring individuals’ NIL without their consent.[4] The use of deepfakes can result in many different causes of action, depending upon the use itself. If used commercially, state right to privacy laws can provide a remedy. In instances where deepfakes are used to commercially promote products or services, they can constitute false endorsement under Section 43(a) of the Lanham Act (15 U.S.C. § 1125).

Generative AI has also been used to create deepfakes containing sexually explicit content, again, almost always without the individual’s consent. A few states have enacted laws prohibiting sexually explicit deepfakes—California and New York state laws grant victims a civil claim; Georgia and Virginia state laws impose criminal liability upon the creators and distributors of sexually explicit deepfakes. Deepfakes may also give rise to liability under state defamation laws if an individual uses a deepfake to disseminate reputation-damaging falsehoods about a person with the requisite degree of fault.

The federal government has some protections against deepfakes, and additional legislation has been introduced. Section 1309 of the federal Violence Against Women Act Reauthorization Act of 2022 (“VAWA 2022”) creates a civil claim for nonconsensual disclosure of “intimate visual depictions.” Also, legislators introduced the “Preventing Deepfakes of Intimate Images Act,” H.R. 3106, in May 2023 to further amend VAWA 2022 and create a separate civil claim for disclosing certain “intimate digital depictions” without the written consent of the depicted individual, as well as providing criminal liability for certain actual or threatened disclosures.

A bipartisan coalition of senators also recently introduced the “Protect Elections from Deceptive AI Act” to address rising concerns that A.I. will be used to generate materially deceptive content to influence federal elections. These pieces of proposed legislation are still in their infancy, however, and it will be some time before Congress passes any type of comprehensive federal protection against A.I. misuse, if Congress can pass it at all. Concerned individuals can reach out to their lawmakers at both the state and federal levels to inform them of the urgency for this type of protective legislation.

In contrast, the European Union has been quick in its efforts to regulate use of artificial intelligence. In February 2024, the European Council approved the “AI Act,” legislation designed to regulate the use of all artificial intelligence in Europe. The AI Act takes a risk-based approach to regulating AI, imposing restrictions and prohibitions on use that scale with the level of risk, and even outright banning certain artificial intelligence systems deemed to pose unacceptable risk.

Up next, is litigation the solution to Generative AI woes? An overview of who is suing ChatGPT and why.


[1] “No A.I. FRAUD Act” (No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act), H.R. 6943.

[2] Congressional Research Service “Artificial Intelligence Prompts Renewed Consideration of a Federal Right of Publicity” CRS Legal Sidebar (Updated 2024) https://crsreports.congress.gov/product/pdf/LSB/LSB11052 (last visited Jul 9, 2024).

[3] Id. at 1-2.

[4] Id. at 4.


Stafford Rosenbaum LLP is a full-service law firm with two convenient office locations in Madison and Milwaukee, Wisconsin. 145 years of dedication to businesses, governments, nonprofits, and individuals has proven that effective client communication continues to be the heart of our practice.
