For nearly as long as computers have existed, litigators have used software-generated machine output to buttress their cases, and courts have had to manage a host of machine-related evidentiary issues, including deciding whether a machine’s output, or testimony based on the output, can fairly be admitted as evidence and to what extent.
Today, as litigants begin contesting cases involving aspects of so-called intelligent machines (hardware/software systems endowed with machine learning algorithms and other artificial intelligence-based models), their lawyers and the judges overseeing their cases may need to rely on highly nuanced discovery strategies aimed at gaining insight into the nature of those algorithms, the underlying source code’s parameters and limitations, and the various alternative outputs the AI model could have produced given a set of training data and inputs. A well-implemented strategy will lead to understanding *how* a disputed AI system worked and how it may have contributed to a plaintiff’s alleged harm, which is necessary if either party wishes to present an accurate and compelling story to a judge or jury.
Parties in civil litigation may obtain discovery regarding any non-privileged matter that is relevant to any party’s claim or defense and that is proportional to the needs of the case, unless limited by a court, taking into consideration the following factors expressed in Federal Rules of Civil Procedure (FRCP) Rule 26(b):
- The importance of the issues at stake in the action
- The amount in controversy
- The parties’ relative access to relevant information
- The parties’ resources
- The importance of the discovery in resolving the issues, and
- Whether the burden or expense of the proposed discovery outweighs its likely benefit.
Evidence is relevant to a party’s claim or defense if it tends “to make the existence of any fact that is of consequence to the determination of the action more or less probable than it would be without the evidence.” See Fed. R. Evid. 401. Even if the information sought in discovery is relevant and proportional, discovery is not permitted where no need is shown. American Standard Inc. v. Pfizer Inc., 828 F.2d 734, 743 (Fed. Cir. 1987).
Early in a lawsuit, the federal rules require parties to make initial disclosures involving the “exchange of core information about [their] case.” ADC Ltd. NM, Inc. v. Jamis Software Corp., slip op. No. 18-cv-862 (D.N.M. Nov. 5, 2018). This generally amounts to preliminary identifications of individuals likely to have discoverable information, types and locations of documents, and other information that a party in good faith believes may be relevant to a case, based on each party’s claims, counterclaims, facts, and various demands for relief set forth in the pleadings. See FRCP 26(a)(1). A party failing to comply with initial disclosure rules “is not allowed to use” the information or person that was not disclosed “on a motion, at a hearing, or at a trial, unless the failure was substantially justified or is harmless.” Baker Hughes Inc. v. S&S Chemical, LLC, 836 F.3d 554 (6th Cir. 2016) (citing FRCP 37(c)(1)). In a lawsuit involving an AI technology, individuals likely to have discoverable information about the AI system may include:
- Data scientists
- Software engineers
- Stack engineers/systems architects
- Hired consultants (even if they were employed by a third party)
A company’s data scientists may need to be identified if they were involved in selecting and processing data sets; in training, validating, and testing the algorithms at issue in the lawsuit; or in developing the final deployed AI model. Software engineers may also need to be disclosed, depending on their involvement, if they wrote the machine learning algorithm code, especially if they can explain how parameters and hyperparameters were selected and which measures of accuracy were used. Stack engineers and systems architects may need to be identified if they have discoverable information about how the hardware and software features of the contested AI system were put together. Of course, task and project managers and other higher-level scientists and engineers may also need to be identified.
Some local court rules require initial or early disclosures beyond what is required under Rule 26. See Drone Technologies, Inc. v. Parrot SA, 838 F.3d 1283, 1295 (Fed. Cir. 2016) (citing US District Court for the Western District of Pennsylvania local rule LPR 3.1, requiring, in patent cases, initial disclosure of “source code and other documentation … *sufficient to show* the operation of any aspects or elements of each accused apparatus, product, device, process, method or other instrumentality *identified in the claims pled* of the party asserting patent infringement …”) (emphasis added). Thus, depending on the nature of the AI technology at issue in a lawsuit, and the jurisdiction in which the lawsuit is pending, a party’s initial disclosure burden may involve identifying the location of relevant source code (and who controls it), or the party could be required to make source code and other technical documents available for inspection early in a case (more on source code reviews below). Where the system is cloud-based and operable on a machine learning as a service (MLaaS) platform, a party may need to identify the platform service to which its API requests are piped.
Written discovery requests
Aside from the question of damages, in lawsuits involving an AI technology, knowing how the AI system made a decision or took an action may be highly relevant to a party’s case. The party seeking to learn more may therefore want to identify information about at least the following topics, which may be obtained through targeted discovery requests, assuming, as required by FRCP 26(b), the requesting party can justify a need for the information:
- Data sets considered and used (raw and processed)
- Software, including versions earlier and later than the contested version
- Software development process
- Sensors for collecting real-time observational data for use by the AI model
- Source code
- Flow charts
- Other documentation
A party may seek that information by serving requests for production of documents and interrogatories. In the case of document requests, if permitted by the court, a party may wish to request source code to understand the underlying algorithms used in an AI system, as well as the data sets used to train the algorithms (if the dispute turns on a characteristic of the data: did the data reflect biases? Is it out of date? Is it of poor quality due to labeling errors?). A party may wish to review software versions and software development documents to understand whether best practices were followed. AI model development often involves trial and error, and thus documentation regarding the various inputs used, the algorithm architectures selected and de-selected, and the hyperparameters chosen for the various algorithms (anything related to product development) may be relevant and should be requested. In a lawsuit involving an AI system that uses sensor data (e.g., cameras providing image data to a facial recognition system), a party may want to obtain documentation about the chosen sensor to understand its performance capabilities and limitations.
With regard to interrogatories, a party may use them to ask an opposing party to explain the basis for assertions made in its pleadings or contentions regarding a challenged AI system, such as:
- The basis underlying a contention about the foreseeability by a person (either the system’s developer or its end user) of an AI system’s errors
- The basis for the facts regarding the transparency of the system from the developer’s and/or a user’s perspective
- The reasonableness of an assertion that a person could have foreseen the errors made by the AI system
- The basis underlying a contention that a particular relevant technical standard is applicable to the AI system
- The nature and extent of the contested AI system’s testing conducted prior to deployment
- The basis for alleged disparate impacts from an automated decision system
- The identities of those involved in making final algorithmic decisions leading to a disparate impact, and how (and how much) they relied on machine-based algorithmic outputs
- The modeled feature space used in developing an AI model and its relationship to the primary decision variables at issue (e.g., job promotion qualifications, eligibility for housing assistance)
- Who makes up the relevant scientific community for the technology at issue, and which industry standards apply to the disputed AI system and its output
Source code reviews
Judges and/or juries are often asked to measure a party’s actions against a standard, which may be defined by one or more objective factors. In the case of an AI system, judging whether a standard has been met may involve assessing the nature of the underlying algorithm. Without that knowledge, a party with the burden of proof may only be able to offer evidence of the system’s inputs and results, but would have no information about what happened inside the AI system’s black box. That may be sufficient when a party’s case rests on a comparison of the system’s result or impact with a relevant standard; but in some cases, understanding the inner workings of the system’s algorithms, and how well they model the real world, could help buttress (or undermine) a party’s case in chief and support (or mitigate) a party’s potential damages. Thus a source code review may be a necessary component of discovery in some lawsuits.
For example, assume a technical standard for a machine learning-based algorithmic decision system requires a minimum accuracy (e.g., recall, precision, and/or F1 score), and the developer’s documentation demonstrates that its model met that standard. An inspection of the source code, however, might reveal that the “test size” parameter was set too low (compared to what is customary), meaning most of the available data in the data set was used to train the model, leaving too small a test set to reliably detect overfitting (and perhaps the developer forgot to cross-validate). A source code review might also reveal which features were used to create the model and how many features were used compared to the number of data observations, both of which might reveal that the developer overlooked an important feature or used a feature that caused the model to reflect an implicit bias in the data.
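The arithmetic behind the “test size” concern can be sketched in a few lines of plain Python (the row counts and holdout fractions below are hypothetical, chosen only for illustration):

```python
# Hypothetical illustration: how the "test size" parameter determines
# how many observations remain to evaluate (rather than train) a model.

def split_counts(n_rows: int, test_size: float) -> tuple[int, int]:
    """Return (train_rows, test_rows) for a given holdout fraction."""
    n_test = round(n_rows * test_size)
    return n_rows - n_test, n_test

# With 1,000 observations and a customary 20% holdout:
assert split_counts(1000, 0.20) == (800, 200)

# With test_size set to 2%, only 20 rows remain to estimate accuracy --
# too few to reliably detect overfitting:
assert split_counts(1000, 0.02) == (980, 20)
```

A reviewer comparing the parameter in the produced code against customary values (and checking whether cross-validation was performed) is doing, in effect, this arithmetic.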
Because of source code’s proprietary and trade secret nature, parties requested to produce their code may resist inspection over concerns about the code getting out into the wild. The burden falls to the requestor to establish a need and that procedures will safeguard the source code. Cochran Consulting, Inc. v. Uwatec USA, Inc., 102 F.3d 1224, 1231 (Fed. Cir. 1996) (vacating discovery order pursuant to FRCP 26(b) requiring the production of computer-programming code because the party seeking discovery had not shown that the code was necessary to the case); People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018) (concluding that the “black box” nature of software is not itself sufficient to warrant its production); FRCP 26(c)(1)(G) (a court may impose a protective order for trade secrets specifying how they are revealed).
Assuming a need for source code has been demonstrated, the parties will need to negotiate terms of a protective order defining what constitutes source code and how source code reviews are to be conducted. See Vidillion, Inc. v. Pixalate, Inc., slip. op. No. 2:18-cv-07270 (C.D. Cal. Mar. 22, 2019) (describing terms and conditions for disclosure and review of source code, including production at a secure facility, use of non-networked standalone computers, exclusion of recording media/recording devices by inspectors during review, and handling of source code as exhibits during depositions).
In terms of definitions, it is not unusual to define source code broadly, relying on principles of trade secret law, to include things that the producing party believes in good faith are not generally known to others and have significant competitive value such that unrestricted disclosure to others would harm the producing party, and which the producing party would not normally reveal to third parties except in confidence or has undertaken with others to maintain in confidence. Such things may include:
- Computer instructions (reflected in, e.g., .ipynb or .py files)
- Data structures
- Data schema
- Data definitions (that can be sharable or expressed in a form suitable for input to an assembler, compiler, translator, or other data processing module)
- Graphical and design elements (e.g., SQL, HTML, XML, XSL, and SGML files)
In terms of procedure, source code inspections are typically conducted at the producing party’s law firm or possibly at the developer’s facility, where the inspection can be monitored to ensure compliance with the court’s protective order. The inspectors will typically comprise a lawyer for the requesting party along with a testifying expert who should be familiar with multiple programming languages and developer’s tools. Keeping in mind that the inspection machine will not have access to any network, and that no recordable media or recording devices will be allowed in the space where the inspection machine is located, the individuals performing the review will need to ensure that all resources needed to facilitate inspection testing are installed locally, including applications to create virtual servers to simulate remote API calls, if that is an element of the lawsuit. Thus, the reviewers might request in advance that the inspection machine be loaded with:
- The above-listed files
- Relevant data sets
- A development environment, such as a Jupyter notebook or similar application, to facilitate opening Python or other source code files and data sets.
In some cases, it may be reasonable to request a GPU-based machine to create a run-time environment for instances of the AI model to explore how the code operates and how the model handles inputs and makes decisions/takes actions.
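What such run-time exploration might look like can be sketched with a stand-in decision function. The feature names, weights, and threshold below are invented for illustration; a real review would probe the produced model itself:

```python
# Hypothetical stand-in for a deployed model's decision function.
def model_decision(income: float, debt_ratio: float) -> str:
    score = 0.6 * (income / 100_000) - 0.8 * debt_ratio
    return "approve" if score > 0.1 else "deny"

# Hold income fixed and sweep one input to locate the decision boundary,
# recording how the model's output changes:
probes = [(80_000, step / 10) for step in range(10)]
results = {(income, ratio): model_decision(income, ratio)
           for income, ratio in probes}
# Low debt ratios are approved; above the boundary, the decision flips.
```

Systematically varying one input at a time in this way helps a reviewer map how the model behaves at the margins, which is often where liability questions arise.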
Depending on the nature of the disputed AI system, the relevant source code may be embedded on hardware devices (e.g., sensors) that the parties do not have access to. For example, in a case involving the cameras and/or lidar sensors installed on an autonomous vehicle or used as part of a facial recognition system, the party seeking to review source code may need to obtain third-party discovery via a subpoena duces tecum, as discussed below.
Subpoenas (third-party discovery)
If source code is relevant to a lawsuit, and neither party has access to it, one or both of them may turn to a third party software developer/authorized seller for production of the code, and seek discovery from that entity through a subpoena duces tecum.
It is not unusual for third parties to resist production on the basis that doing so would be unduly burdensome, but just as often they will resist production on the basis that their software is protected by trade secrets and/or is proprietary, and that disclosing it to others would put their business interests at risk. Thus, the party seeking access to the source code in a contested AI lawsuit should be prepared for discovery motions in the district where the third-party software developer/authorized seller is being asked to comply with a subpoena.
A court “may find that a subpoena presents an undue burden when the subpoena is facially overbroad.” Wiwa v. Royal Dutch Petroleum Co., 392 F.3d 812, 818 (5th Cir. 2004). Courts have found that a subpoena for documents from a non-party is facially overbroad where the subpoena’s document requests “seek all documents concerning the parties to [the underlying] action, regardless of whether those documents relate to that action and regardless of date”; “[t]he requests are not particularized”; and “[t]he period covered by the requests is unlimited.” In re O’Hare, Misc. A. No. H-11-0539, 2012 WL 1377891 at *2 (S.D. Tex. Apr. 19, 2012). Additionally, FRCP 45(d)(3)(B) provides that, “[t]o protect a person subject to or affected by a subpoena, the court for the district where compliance is required may, on motion, quash or modify the subpoena if it requires: (i) disclosing a trade secret or other confidential research, development, or commercial information.” But, “the court may, instead of quashing or modifying a subpoena, order appearance or production under specified conditions if the serving party: (i) shows a substantial need for the testimony or material that cannot be otherwise met without undue hardship; and (ii) ensures that the subpoenaed person will be reasonably compensated.” FRCP 45(d)(3)(C).
Thus, in the case of a lawsuit involving an AI system in which one or more of the parties can demonstrate it/they have a substantial need to understand how the system made a decision or took a particular action, a narrowly-tailored subpoena duces tecum may be used to gain access to the third party’s source code. To assuage the producing party’s proprietary/trade secret concerns, the third party may seek a court-issued protective order outlining terms covering the source code inspection.
Depositions
Armed with the AI-specific written discovery responses, document production, and an understanding of an AI system’s source code, counsel should be prepared to ask questions of an opponent’s witnesses, which in turn can help fill gaps in a party’s understanding of the facts relevant to its case. FRCP 30 governs depositions by oral examination of a party, party witness, or third party to a matter. In a technical deposition of a fact or party witness, such as a data scientist, machine learning engineer, software engineer, or stack developer, investigating the algorithm behind an AI model will help answer questions about how and why a particular system caused a particular result that is material to the litigation. Thus, the deposition taker would want to inquire about some of the following issues:
- Which algorithms were considered?
- Were they separately tested?
- How were they tested?
- Why was the final algorithm chosen?
- Did an independent third party review the algorithm and model output?
With regard to the data set used to create the AI model, the deposition taker will want to explore the following:
- What data sets were used for training, validation, and testing of the algorithm?
- How was testing and validation conducted, and were alternatives considered?
- What sort of exploratory data analysis was performed on the data set (or sets) to assess usability, quality, and implicit bias?
- Was the data adequate for the domain that the developer was trying to model, and could other data have been used?
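One concrete check behind those questions can be sketched in plain Python (the labels below are hypothetical): whether the training data’s classes were badly imbalanced.

```python
from collections import Counter

# Hypothetical training labels for a binary decision model.
labels = ["approve"] * 950 + ["deny"] * 50

counts = Counter(labels)
minority_share = min(counts.values()) / sum(counts.values())
# Here the minority class is only 5% of the data -- an imbalance a
# deposition taker might probe: was it detected, and how was it handled
# during training?
```

An expert who ran exploratory data analysis should be able to answer whether checks like this were performed and what was done in response.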
With regard to the final model, the deposition taker may want to explore the following issues:
- How old is the model?
- If it models a time-series (e.g., a model based on historical data that tends to increase over time), has the underlying distribution shifted enough such that the model is now outdated?
- If newer data were not considered, why?
- How accurate is the model, and how is accuracy measured?
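The accuracy measures referenced above can be computed from a confusion matrix; a minimal sketch in plain Python, with hypothetical counts:

```python
# Precision, recall, and F1 computed from hypothetical test results.
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp)  # of everything flagged, how much was correct
    recall = tp / (tp + fn)     # of everything that should be flagged, how much was found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# 80 true positives, 20 false positives, 20 false negatives:
p, r, f = precision_recall_f1(80, 20, 20)
# Each measure works out to 0.8 here; a deposition taker can ask which
# measure the developer optimized for and why.
```

Because the three measures trade off differently against false positives and false negatives, which one the developer chose (and reported) can itself be a fruitful line of questioning.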
Finally, if written discovery revealed an independent third party reviewed the model before it was deployed, the deposition taker may want to explore the details about the scope of the testing and its results. If sensors are used as the source for new observational data fed to an AI model, the deposition taker may want to learn why those sensors were chosen, how they operate, their limitations, and what alternative sensors could have been used instead.
In an expert deposition, the goal of the deposition shifts to exploring the expert’s assumptions, inputs, applications, outputs, and conclusions for weaknesses. If an expert prepared an adversarial or counterfactual model to dispute the contested AI system or an opposing expert’s model, a litigator should keep in mind the factors in Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) and FRE 702, when deposing the expert. For example, the following issues may need to be explored during the deposition:
- How was the adversarial or counterfactual model developed?
- Can the expert’s model itself be challenged objectively for reliability?
- Were the model and the technique used subjected to peer review and/or publication?
- What was the model’s known or potential rate of error when applied to facts relevant to the lawsuit?
- What technical standards apply to the model?
- Is the model based on techniques or theories that have been generally accepted in the scientific community?
This post has explored a few approaches to fact and expert discovery that litigants may want to consider in a lawsuit where an AI technology is contested, though the approaches here are by no means exhaustive of the scope of discovery that one might need in a particular case.