You’ve probably read by now about ChatGPT’s legal acumen. When University of Minnesota law professors used it to answer 95 multiple-choice and 12 essay questions on exams in four courses, ChatGPT earned a C+ average. Graded blindly alongside actual students’ exams, ChatGPT’s C+ fell below the humans’ B+ average. Still, according to the researchers, if sustained across the typical law school curriculum, those scores would be enough to earn a law degree.
But lawyers who think they have found a shortcut to legal research should beware. On Thursday, June 22, a U.S. judge imposed sanctions on a New York lawyer and his firm after he submitted a legal brief that included six fictitious case citations generated by ChatGPT, citations the lawyer said he had not known were fabricated.
U.S. District Judge P. Kevin Castel ordered lawyer Steven Schwartz and his law firm to pay a $5,000 fine and dismissed the underlying case. The judge found that the firm had acted in bad faith, committing “acts of conscious avoidance and false and misleading statements to the court,” and that it “continued to stand by the fake opinions” even after the court and the opposing party questioned them. He also ordered the firm to notify each real judge who had been identified as the author of one of the fake cases.
The judge noted that there is nothing “inherently improper” in lawyers using AI “for assistance,” but that ethics rules require attorneys to “ensure the accuracy of their filings.”
The firm argued that it “made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”
Most practitioners see ChatGPT as a potential efficiency gain: it can quickly produce a rough first draft. But that draft must be carefully checked for accuracy, citation by citation. Do not make the “good faith mistake” that got this firm into trouble.