The legal implications of artificial intelligence, or AI, are vast. Many, no doubt, have read stories about lawyers being embarrassed by briefs drafted with AI. See, e.g., Lawyer Used ChatGPT In Court—And Cited Fake Cases. A Judge Is Considering Sanctions. What if AI is used to draft disclosure documents that are alleged to be false and misleading? It is well established that, to prevail under § 10(b) and Rule 10b–5, a private plaintiff must prove that the defendant acted with scienter, “a mental state embracing intent to deceive, manipulate, or defraud.” Ernst & Ernst v. Hochfelder, 425 U.S. 185, 193–194, and n. 12 (1976). Thus, the question arises whether AI can act with the requisite scienter.
In the (in)famous footnote 12, the Supreme Court invited lower courts to consider whether recklessness suffices to constitute scienter. Since then, many courts have defined recklessness as a “highly unreasonable omission, involving not merely simple, or even inexcusable negligence, but an extreme departure from the standards of ordinary care, and which presents a danger of misleading buyers or sellers that is either known to the defendant or is so obvious that the actor must have been aware of it.” Franke v. Midwestern Okla. Dev. Auth., 428 F. Supp. 719, 725 (W.D. Okla. 1976), vacated on other grounds sub nom. Cronin v. Midwestern Okla. Dev. Auth., 619 F.2d 856 (10th Cir. 1980). Thus, the scienter question may reduce to whether it is reckless to use AI to draft disclosure documents in securities transactions.