For years, decades really, I’ve explained and argued why most “new big things” in legal tech are either not going to work or fill a need that doesn’t exist. Legal tech guys really hated me for calling bullshit on their baby. Some who paid me to “consult” thought their money would buy my endorsement of their mutt. They learned. I was regularly accused of being a tech hater, but as Keith Lee succinctly explained, “It’s not that lawyers are anti-technology, it’s that they are anti-bullshit.”
AI is all the rage at the moment. Remember blockchain? Remember NFTs? Remember self-driving cars? Remember Google Glass? The newest billion-dollar baby is ChatGPT. How’s that working out for lawyers?
The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. When the circumstance was called to the Court’s attention by opposing counsel, the Court issued Orders requiring plaintiff’s counsel to provide an affidavit annexing copies of certain judicial opinions of courts of record cited in his submission, and he has complied. Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations. Set forth below is an Order to show cause why plaintiff’s counsel ought not be sanctioned.
That was from Judge Kevin Castel, SDNY. He was not amused by this tech faux pas. The lawyer on the case explained that he relied on the work of another lawyer with 30 years’ experience. The other lawyer explained that he relied on ChatGPT.
[7.] It was in consultation with the generative artificial intelligence website Chat GPT, that your affiant did locate and cite the following cases in the affirmation in opposition submitted, which this Court has found to be nonexistent: …
[8.] That the citations and opinions in question were provided by Chat GPT which also provided its legal source and assured the reliability of its content. Excerpts from the queries presented and responses provided are attached hereto.
[9.] That your affiant relied on the legal opinions provided to him by a source that has revealed itself to be unreliable.
[10.] That your affiant has never utilized Chat GPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false.
[11.] That is the fault of the affiant, in not confirming the sources provided by Chat GPT of the legal opinions it provided.
This was insufficient to assuage Judge Castel’s irritation at being fed papers with phony case names and cites, and there is an Order to Show Cause pending as to why the lawyer shouldn’t be sanctioned. The lawyer may not have intended to deceive the court, but as he admits, he failed to demonstrate any diligence as a lawyer before submitting papers, and that’s his fault entirely. That he was unaware that this new, shiny magic tech “solution” was a massive failure and just made shit up doesn’t diminish his duty, both to the court and to his client.
To the legal tech aficionados who can muster an excuse for any failure, the problem is that AI suffers delusions delightfully called “hallucinations,” because calling them massive fuck-ups would be bad for business. Someday, they assure us, the hallucinations will be cured. Until then, knowing that AI is untrustworthy and just makes things up is good enough, shifting the burden onto users to verify that anything ChatGPT does is accurate. The contention is that it’s less effort to check its cites than to find the cites in the first place, justifying using ChatGPT despite its failings.
Bullshit.
Initially, most people, and this is particularly true for lawyers, won’t do it. They won’t bother. Few enough lawyers care about anything more than getting words on paper and submitted to the court on time. Whether it’s good work, bad work, or phony work just isn’t the critical factor for a lawyer who has a deadline coming and nothing to show for it. They won’t check. They won’t be bothered to check. So what if ChatGPT’s work is, at its very best, pedestrian, unimaginative and almost certainly ineffective? It’s words on paper submitted on time, and that’s all they care about.
Sure, some lawyers are lazy, and can bill 20 hours for an opposition to a summary judgment motion that took them less than an hour with ChatGPT. The incentives are obvious. The pressures are clear. The outcome is, well, hopefully not sanction-worthy, or at least not unforgivable. Stercus accidit, right?
Secondarily, even if the cites were legit and checked out, and the lawyer put in the time to run them through Lexis to make sure, at the very least, that they existed, the work product will still be crap. Sure, it may be better crap than the lawyer can produce, not because ChatGPT is any good at lawyering, but because the lawyer who relies on it is even worse than ChatGPT.
There’s a reason why we’re made to jump through hoops on the way to being entrusted as lawyers with other people’s lives and fortunes. We are supposed to be competent lawyers. We are supposed to think long and hard about how to zealously represent our clients. We are supposed to put in the time, the effort, the thought and the imagination it takes to win cases, or at least give our clients every possible chance to prevail.
ChatGPT is not. It’s just a program that generates words, strung together to create the appearance of legitimate legal thought without the spark that distinguishes a lawyer from, well, a computer program. Would you turn to a computer engineer to represent you? Then why turn to a computer engineer’s product to do your lawyering for you?
Maybe there will be a place for AI in law, filling in the boilerplate that no one reads and even fewer care about, but it will not be a substitute for a good lawyer. It will never be a substitute for a great lawyer. It might be a substitute for a crap lawyer, but that’s not really an endorsement of AI as much as a condemnation of incompetent lawyers.