We are often asked if we incorporate artificial intelligence (“AI”) into our legal workflows and electronic discovery processes. This question is not surprising given the efficiencies and cost savings associated with AI.

 

Typically, these questions are followed by inquiries into how the AI tools work and their defensibility. That is, how the use of AI can be defended if challenged by a judge or opposing party. Essentially, the defensibility of AI tools and their corresponding results boils down to the ability to explain those results in plain language.

 

As a starting point, we first need to establish what is meant when we say AI. Oftentimes, it is described generically, without any meaningful distinction between available tools. In reality, AI is an umbrella term for several tools that do not necessarily relate to or depend on one another; therefore, we must understand the selected tools before we even get to defensibility.

 

Different Types of Artificial Intelligence

To better understand these distinctions, let’s break down some commonly used AI tools in legal applications:

 

  • Models (Supervised Learning)
    • Models identify specific information or patterns in a dataset using machine learning algorithms, much like those on TikTok, Netflix, and Spotify, which learn and push content based on the user’s selections.
    • Models can be trained by a human reviewer who codes files to improve the model’s accuracy.
    • In e-discovery, models can be tailored to a dataset through techniques such as Continuous Active Learning (CAL). CAL is typically used to identify relevancy. Its most popular use is to identify documents that are likely to be responsive for further coding by a human reviewer. CAL is continuously trained on those coding decisions, and the process repeats until CAL has reasonably identified all relevant documents in the dataset (see the sketch following this list).
    • Models are also used to find information or patterns that are common across multiple datasets, for example, identifying documents and files that contain social security numbers or credit card numbers.
  • Clustering (Unsupervised Learning)
    • Clustering is the process by which AI groups documents together based on how each document’s characteristics relate to those of the others in a dataset.
    • Typically, clustering starts at a high level and is refined to identify patterns or other useful information in the documents. For example, one common type of clustering is by email participants. The high-level cluster shows the email traffic within a dataset. Let’s say we are interested in the emails of John Doe; we can explore his email cluster to see whom he emailed frequently or rarely.
    • Another common type of clustering is by keywords. Clustering groups words or phrases that are logically connected. 
  • Anomaly Detection (Unsupervised Learning)
    • Anomaly detection is the analysis of sentiment, emotion, and related patterns using natural language processing to determine where abnormalities exist within a dataset. We encounter natural language processing in several technologies we use every day, such as Gmail, which sorts promotions and other common emails into separate tabs.
    • These abnormalities can be examined to potentially locate relevant information. For example, suppose we are examining documents for an internal investigation regarding a workplace incident. Anomaly detection indicates that one of the employees involved sent a large number of after-hours emails around the date of the incident, which is atypical behavior for this employee.
  • Other Analytics
    • AI-powered detection and translation of foreign languages; many e-discovery platforms now incorporate translation features.
    • Image recognition uses machine vision to identify objects in images. The images are labeled with the identified content and become searchable.
    • AI-powered transcription of audio/video files, which creates searchable text for the associated file.
    • AI analysis of email threads to identify emails that belong to the same thread. Thread analytics can be leveraged so that only inclusive emails need to be reviewed. An inclusive email is one whose unique content, both the message and its attachments, is not contained in another email from the same thread.
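
To make the supervised-learning category more concrete, here is a minimal sketch of a CAL-style loop in Python. It is illustrative only: the toy corpus, TF-IDF features, and logistic regression model are our assumptions, standing in for the proprietary algorithms that e-discovery platforms actually use.

```python
# Minimal sketch of a CAL-style review loop (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a human reviewer has already coded (True = responsive).
coded = {
    "doc1": ("q3 merger terms and valuation", True),
    "doc2": ("lunch order for the team", False),
    "doc3": ("revised merger agreement draft", True),
    "doc4": ("parking garage closed friday", False),
}
unreviewed = {
    "doc5": "merger diligence checklist",
    "doc6": "holiday party rsvp",
}

# Build features from the full corpus, then train on the coded subset.
vectorizer = TfidfVectorizer().fit(
    [text for text, _ in coded.values()] + list(unreviewed.values())
)
X = vectorizer.transform([text for text, _ in coded.values()])
y = [label for _, label in coded.values()]
model = LogisticRegression().fit(X, y)

# Score the unreviewed documents; the highest-scoring ones go to human
# reviewers next, and their decisions feed back into training, the
# "continuous" part of Continuous Active Learning.
scores = model.predict_proba(vectorizer.transform(list(unreviewed.values())))[:, 1]
for doc_id, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{doc_id}: probability responsive = {score:.2f}")
```

In a real workflow, this loop repeats until sampling shows the remaining unreviewed population is reasonably free of responsive documents.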

While not exhaustive, the above list illustrates how varied the uses of AI can be. Given this variety, there is no one-size-fits-all approach to the defensibility of AI. However, the workflow outlined below sheds light on how we can make AI explainable and, in turn, defensible.

 

Record the Process

First, all decisions, processes, and procedures undertaken to use AI need to be documented. It is difficult to explain how AI tools work if the steps taken to achieve results are not readily apparent. Oftentimes, AI tools are proprietary, so accounting for all decisions within your control is especially important. For example, we are not able to modify the algorithms that create models; however, we can determine the reasonable number of documents needed to train a new model.
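
As a sketch of what that documentation can look like in practice, the snippet below appends each decision to a simple JSON Lines log. The field names and the example decision are hypothetical; the point is that every choice within our control is captured in a durable, reviewable record.

```python
import json
from datetime import datetime, timezone

def log_decision(path, **details):
    # Append one timestamped record per decision (JSON Lines format).
    record = {"timestamp": datetime.now(timezone.utc).isoformat(), **details}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical example: recording the training-set size chosen for a model.
log_decision(
    "decision_log.jsonl",
    step="model_training",
    decision="seed training set size set to 500 documents",
    made_by="review manager",
    rationale="consistent with prior validated workflows for datasets this size",
)
```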

 

Documentation also leads to repeatability, which can help validate results achieved with AI tools. For instance, let’s say that our use of CAL is questioned by opposing counsel, who is concerned that our process may have missed a substantial number of responsive documents. Of course, we could point to the statistical significance of the sampling conducted on the unreviewed universe to show that it contains only a reasonable percentage of missed responsive documents. However, we could also conduct a new sample using the same parameters to bolster our conclusion that this percentage will not vary significantly if different documents are reviewed.
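
The sketch below shows why documented parameters make a sample repeatable: with the same recorded seed and sample size, the identical sample can be re-drawn later, while a fresh seed draws different documents to test the original conclusion. The document IDs, seed, and sample size are hypothetical.

```python
import random

# Hypothetical unreviewed universe of 10,000 document IDs.
unreviewed_ids = [f"DOC-{i:05d}" for i in range(1, 10_001)]

def draw_sample(ids, size, seed):
    # A recorded seed lets anyone re-draw the identical sample later.
    return random.Random(seed).sample(ids, size)

first = draw_sample(unreviewed_ids, size=400, seed=42)
second = draw_sample(unreviewed_ids, size=400, seed=42)
assert first == second  # same documented parameters, same sample

# A new seed draws different documents to test whether the estimated
# percentage of missed responsive documents holds up.
fresh = draw_sample(unreviewed_ids, size=400, seed=2024)
```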

 

Considering the Sources

Next, consider the sources of data that the AI tools are analyzing. Are we missing a key source of data that could throw off the analytics? To illustrate, let’s return to our anomaly detection example. If the employee used a secondary email address that was not analyzed, the results could have flagged a behavior that only appeared anomalous because we did not review the entirety of the employee’s email data. Have we properly cleansed the data before the AI tools are used? Data cleaning means that issues in the dataset are fixed or removed to prevent negative outcomes with the AI tools; it may include removing duplicates or fixing formatting errors in metadata.
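
As a minimal sketch of data cleaning, the snippet below drops exact duplicates by content hash and normalizes an inconsistently formatted date field. The records and formats are hypothetical; production cleansing is considerably more involved.

```python
import hashlib
from datetime import datetime

# Hypothetical records with a duplicate and inconsistent date formats.
docs = [
    {"id": "A1", "text": "Q3 forecast attached.", "sent": "2023-01-05"},
    {"id": "A2", "text": "Q3 forecast attached.", "sent": "01/05/2023"},
    {"id": "A3", "text": "See you at noon.", "sent": "2023-01-06"},
]

def normalize_date(value):
    # Accept either ISO or US-style dates and emit ISO consistently.
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {value}")

seen, cleaned = set(), []
for doc in docs:
    digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
    if digest in seen:
        continue  # drop exact duplicate content
    seen.add(digest)
    cleaned.append({**doc, "sent": normalize_date(doc["sent"])})

print(cleaned)  # A2 removed as a duplicate; dates in one format
```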

 

Determining the Bias

Related to data considerations, we should look for any potential bias in the artificial intelligence used. One source of bias derives from our data selection: if the selection is incomplete, the analysis may not be representative of the entire dataset to which we are applying our AI tool. Bias related to the algorithms behind the AI tools may also exist. Consider how models can be influenced by the decisions of human reviewers; bias could be introduced when a model is trained on incorrect coding decisions. To prevent this, subject matter experts should be used to train models.
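
One way to guard against bias from incorrect coding decisions is to measure how well reviewers agree before their decisions train a model. The sketch below uses Cohen’s kappa, a common agreement statistic, on hypothetical coding decisions; it is one plausible check, not necessarily what any particular platform does.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical coding decisions from two reviewers on the same sample
# (1 = responsive, 0 = not responsive).
reviewer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reviewer_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa: {kappa:.2f}")
# A low score signals inconsistent coding that could bias the model,
# prompting retraining or escalation to a subject matter expert.
```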

 

Keeping the Robots in Check

After using one of the above AI tools, we must validate that it generated the desired or expected result. When we say desired or expected result, we mean that the results are reasonable and reliable given the AI tool and its configuration. Validation varies greatly depending on the AI tool selected.

 

For example, validation of CAL is driven by metrics. Validating that CAL has reasonably identified all responsive documents requires elusion testing. An elusion test is a random sample of the portion of the document universe that will not be reviewed by a human or produced. Elusion testing is used to show that the percentage of responsive documents remaining in the unreviewed universe, or the rate of eluded documents, is reasonable and that further efforts are not necessary to satisfy a party’s discovery obligations.
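
As a rough sketch of the arithmetic behind an elusion test, suppose reviewers code a random sample drawn from the unreviewed universe and count the responsive documents they find. The counts below are hypothetical, and the confidence interval uses a simple normal approximation; real validation protocols are often negotiated and more rigorous.

```python
import math

# Hypothetical elusion test results.
sample_size = 1_500          # randomly drawn from the unreviewed universe
responsive_in_sample = 9     # responsive documents the reviewers found

elusion_rate = responsive_in_sample / sample_size

# Approximate 95% confidence interval (normal approximation).
margin = 1.96 * math.sqrt(elusion_rate * (1 - elusion_rate) / sample_size)
print(f"elusion rate: {elusion_rate:.2%} +/- {margin:.2%}")
```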

 

We can compare the rate of eluded documents to other metrics, such as richness (the estimated percentage of responsive documents in the overall collection), to show that using CAL was a reliable method for finding responsive documents. Depending on our analysis of these metrics, we may need to go back into the documents.
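
One common way to frame that comparison is an estimated recall: the responsive documents actually found, divided by the total responsive documents the collection is estimated to contain based on its richness. All of the figures below are hypothetical.

```python
# Hypothetical figures for a recall estimate.
collection_size = 100_000
responsive_found = 11_000   # coded responsive during the CAL review
richness = 0.12             # est. share of responsive docs, from a random sample

total_responsive_est = richness * collection_size  # ~12,000 responsive docs
recall_est = responsive_found / total_responsive_est
print(f"estimated recall: {recall_est:.1%}")  # ~91.7%

# If recall (and the elusion rate above) looks unreasonably low, we may
# need to go back into the documents and continue the review.
```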

 

On the other hand, some AI tools may be validated using human expertise, such as AI translation. In that case, someone fluent in the language may need to validate the AI translation to confirm that the translated text accurately reflects the meaning of the document. Otherwise, it may be neither reliable nor defensible for human reviewers to code documents based on the translated text.

 

Finally, we need to succinctly describe the above processes if the issue of defensibility ever arises. Remember, the inability to show defensible practices for AI tools could result in added expense if the court requires you to go back into the documents.

 

Looking for assistance with AI in a legal matter?

Talk to Us

Other Articles You May Be Interested In

Must Lawyers Supervise the Robots? (The Legal Ethics of AI)

Statistical Sampling in Legal Document and Data Reviews

 
