On this Cyber Monday, we would like to address one of the hottest technology topics of the past year: artificial intelligence. Some have speculated that AI may help boost low response rates to settlement notices. But there are also concerns that AI could make it easier to file fake class-action suits.
Settlement Response Rates
A 2019 FTC study that examined 149 class-action settlements found that the overall claims rate was less than 10%. According to Law360, U.S. District Judge James Donato recently told a symposium that he expected AI “will soon ‘revolutionize’ the process of administering settlements.” (“AI Will ‘Revolutionize’ Class-Action Payouts, Judge Says,” Law360 11/3/2023.) According to the article, Judge Donato hopes AI will help increase response rates to class-action settlements by helping to find settlement class members.
But reaching settlement class members is often not the obstacle to higher response rates. Using conventional methods, settlement administrators can typically deliver notice to a large portion of the settlement class. And there are many reasons class members do not submit claims. For example, we have seen requests to be excluded from a class-action settlement based on the class member’s philosophical or political opposition to class actions. In addition, a study published in tandem with the 2019 FTC study referred to above suggests that some class members are skeptical of settlement notices, a skepticism that may only deepen with the broader use of AI.
Risks of Fraud
While AI may increase settlement response rates, it also carries a greater risk of fake suits and other fraud. The same Law360 article reported that Judge Donato acknowledged AI raises potential fraud concerns. Earlier this year, a New York attorney was sanctioned for filing an AI-generated brief that included citations to, and quotations from, nonexistent court cases.
While most plaintiffs’ attorneys would not stoop to such levels, an attorney was recently sentenced to four years in prison for filing hundreds of fake suits alleging violations of the Americans with Disabilities Act. That fraudulent scheme was carried out before the advent of ChatGPT, Bing Chat, and similar tools, which could enable a fraudster to multiply the number of fake suits filed. Indeed, Amazon announced in September that it was limiting authors to self-publishing three books per day in an attempt to curb the flood of AI-generated content.
The AI genie is out of the bottle, and developments in the law will shape how it is used. For example, the tester-standing case the Supreme Court heard in October, and the responses once the decision issues, could have ramifications for the risk of fake AI-generated suits.