As an annual tradition, we compile a list of the most widely read Catalyst blog posts of the previous year to see what topics most interest our readers. Here are our top five most popular blog posts of 2018.
What’s more fun than running 57 simulations for a client investigation? Seeing the results.
We structured a simulated review on Insight Predict, our TAR 2.0 platform, to be as realistic as possible, looking at the client’s investigation from every conceivable angle. The results were outstanding, so we ran it again with a different starting seed. In all, we ran 57 different simulations: one starting from each relevant document as a single seed, plus one starting from a non-relevant seed and one from a synthetic seed. Regardless of the starting point, Predict located 100% of the relevant documents after reviewing only a fraction of the collection.
In the aftermath of studies showing that continuous active learning (CAL) is more effective than the first-generation TAR 1.0 protocols, it seems like every e-discovery vendor claims to use CAL or somehow incorporate it into their own protocols.
Despite these claims, there remains a wide chasm between the TAR protocols on the market. As a consumer, how can you determine whether a vendor that claims to use CAL actually does? Here are some basic questions to ask.
How do you know what you don’t know? This is a classic problem when searching a large volume of documents in litigation or an investigation.
In TAR, a key concern for some is whether the algorithm has missed important relevant documents, especially those that you may know nothing about at the outset of the review. This is because most modern TAR systems focus exclusively on relevance feedback, which means that the system feeds you the unreviewed documents that are likely to be the most relevant because they are most like what you have already coded as relevant.
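The relevance-feedback loop described above can be sketched in a few lines. This is a minimal, illustrative example only, not Catalyst's actual algorithm: it ranks unreviewed documents by TF-IDF cosine similarity to documents already coded relevant, so the review queue is always topped by documents most like what has already been found. All document data and function names here are hypothetical.

```python
# Illustrative relevance-feedback ranking (hypothetical, simplified).
from collections import Counter
import math

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    df = Counter(term for doc in docs for term in set(doc))
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_unreviewed(docs, relevant_ids, unreviewed_ids):
    """Queue unreviewed docs by best similarity to any relevant doc."""
    vecs = tfidf_vectors(docs)
    scores = {
        i: max(cosine(vecs[i], vecs[r]) for r in relevant_ids)
        for i in unreviewed_ids
    }
    return sorted(unreviewed_ids, key=scores.get, reverse=True)

docs = [
    "merger agreement draft terms".split(),  # doc 0: coded relevant
    "merger terms final agreement".split(),  # doc 1: unreviewed
    "lunch menu friday".split(),             # doc 2: unreviewed
]
queue = rank_unreviewed(docs, relevant_ids=[0], unreviewed_ids=[1, 2])
# The merger-related doc 1 surfaces ahead of the unrelated doc 2.
```

Pure relevance feedback of this kind is exactly why the "pockets you never find" concern arises: a document unlike anything coded relevant so far scores near zero and sinks to the bottom of the queue.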
But what about other relevant documents you didn’t find? Contextual diversity, which is part of Catalyst’s CAL process, is a powerful tool to find missing pockets of potentially relevant documents. Read about how it works and its important use cases and benefits in many types of reviews.
Much of the discussion around TAR focuses on recall, which is the percentage of relevant documents found in the review process. Recall is important because lawyers have a duty to take reasonable (and proportionate) steps to produce responsive documents. But recall is only half of the story.
Achieving any level of recall comes at a price. That price can be expressed in terms of precision: the proportion of the documents you review that turn out to be relevant. The cost of review is a function of the precision of your TAR process, just as it is driven by the level of recall you must attain; the lower the precision, the more non-relevant documents you have to read to reach a given recall.
How many documents must be reviewed to find one relevant document in a TAR process? Here’s a look at three simulations and a dozen cases where our clients used Predict for their review. See what we learned.
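The arithmetic behind these metrics is simple, and the numbers below are made up purely for illustration. Suppose a 100,000-document collection contains 5,000 relevant documents, and reaching 80% recall required reviewing 10,000 documents:

```python
# Hypothetical figures, chosen only to show how the metrics relate.
collection_size = 100_000
total_relevant = 5_000
reviewed = 10_000
relevant_found = 4_000  # 80% of the relevant documents

recall = relevant_found / total_relevant       # 0.80
precision = relevant_found / reviewed          # 0.40
docs_per_relevant = reviewed / relevant_found  # 2.5 docs reviewed per relevant doc found

print(f"recall={recall:.0%}, precision={precision:.0%}, "
      f"cost={docs_per_relevant:.1f} docs reviewed per relevant doc")
```

Note that "documents reviewed per relevant document found" is simply the reciprocal of precision, which is why a more precise process translates directly into lower review cost.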
What’s in your legal data warehouse? Don’t know? Or don’t have one? If you work at a law firm, there may be little imperative to create one; outside counsel’s chief concern is winning cases. For corporate legal departments, however, it’s a problem: in-house counsel, legal operations professionals and even the C-suite are increasingly asking for more effective oversight and control over e-discovery spend.
How, in practical terms, can corporate legal departments measure and manage that spend? Learn about five steps corporate legal professionals can take to get the data and analytics they need.
We welcome your suggestions for future topics.