On September 25, 2024, the Federal Trade Commission (FTC) announced that it had brought five actions against companies it accused of using “artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers.” These actions, which the FTC indicated are part of a new enforcement sweep called “Operation AI Comply,” reflect the agency’s repeatedly stated intention to exercise its authority under the FTC Act and other rules in connection with AI-related products and marketing claims.
The five actions rely on a range of FTC authorities and target several different forms of conduct.
- DoNotPay: The FTC brought an action against DoNotPay, which purports to offer automated legal services, on the theory that it violated the FTC Act by falsely claiming that its product could substitute for the expertise of a human lawyer. A proposed settlement would require DoNotPay to pay $193,000, provide notices to past subscribers, and refrain from claiming that its AI product can substitute for professional expertise unless it has evidence to support those claims.
- Ascend Ecom: The FTC brought an action against Ascend and other defendants under the FTC Act and the Business Opportunity Rule based on allegedly false claims that the company’s “AI-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts,” and on the company’s alleged failure to honor a money-back guarantee that it had extended to some customers. The FTC also alleged that the defendants had violated the Consumer Review Fairness Act by using non-disparagement clauses to prevent customers from leaving accurate reviews of their services.
- Ecommerce Empire Builders: The FTC brought an action against Ecommerce Empire Builders and other defendants under the FTC Act and the Business Opportunity Rule based on alleged misrepresentations regarding the likely profitability of the online stores “powered by artificial intelligence” that the defendants sold. The FTC also alleged that the defendants had violated the Consumer Review Fairness Act by using non-disparagement clauses to prevent customers from leaving accurate reviews of their services.
- FBA Machine: The FTC brought an action against FBA Machine and other defendants under the FTC Act and the Business Opportunity Rule based on allegations that the defendants “use deceptive earnings claims to convince consumers to shell out tens of thousands of dollars each to invest in what Defendants claim is a surefire business opportunity powered by artificial intelligence.” The FTC also alleged that the defendants had violated the Consumer Review Fairness Act by using non-disparagement clauses to prevent customers from leaving accurate reviews of their services.
- Rytr: The FTC brought an action under Section 5 of the FTC Act against Rytr, a service that it alleged enabled subscribers to create “AI-generated reviews featur[ing] information that would deceive potential consumers who were using the reviews to make purchasing decisions.” The FTC alleged that Rytr’s review-generation tool “furnish[ed] others with the means and instrumentalities to” “generate written content for consumer reviews that is false and deceptive,” which the agency argued was an unfair or deceptive trade practice. A proposed settlement would restrict Rytr and its partners from advertising or selling any service for generating consumer reviews.
The FTC’s press release announcing the cases stressed that businesses must be transparent about their use of AI, and that “using an AI tool when you’re developing your product is not the same as offering your customers a product with AI inside.” The FTC also stated that businesses making AI-related claims about the profitability of a business opportunity must “[h]ave concrete data demonstrating what you’re promising is typical for your customers.”
Summarizing its ongoing investigative work to “combat AI-related issues in the marketplace,” the FTC explained that “[w]e’re checking to see whether products or services actually use AI as advertised, [and] if so, whether they work as marketers say they will. We’re examining whether AI and other automated tools are being used for fraud, deception, unfair manipulation, or other harmful purposes. On the back end, we’re looking at whether automated tools have biased or discriminatory impacts.”