On a special “on location” episode of The Geek in Review, Greg Lambert sits down with vLex’s Damien Riehl for a hands-on demonstration of the new generative AI tool called Vincent AI. While at the Ark KM Conference, Riehl explains that vLex has amassed a huge legal dataset over its 35-year history, which allows the company to now run its own large language models (LLMs). The recent merger between vLex and Fastcase has combined their datasets to create an even more robust training corpus.
Riehl demonstrates how Vincent AI works by having it research a question on trade secret law and employee theft of customer lists. It retrieves relevant cases, statutes, regulations, and secondary sources, highlighting the most relevant passages. It summarizes each source and provides a confidence rating on how well each excerpt answers the initial question. Vincent AI then generates a legal memorandum summarizing the relevant law. Riehl explains how this is more trustworthy than a general chatbot like ChatGPT because it is grounded in real legal sources.
Riehl shows how Vincent AI can compare legal jurisdictions by generating memorandums on the same question for California, New York, the UK, and Spain. It can even handle foreign language sources, translating them into English. This allows for efficient multi-jurisdictional analysis. Riehl emphasizes Vincent AI’s focus on asking straightforward questions in natural language rather than requiring complex prompts.
Looking ahead, Riehl sees potential for Vincent AI to leverage external LLMs like Anthropic’s Claude model as well as their massive dataset of briefs and motions to generate tailored legal arguments statistically likely to persuade specific judges on particular issues. He explains this requires highly accurate tagging of documents which they can achieve through symbolic AI. Riehl aims to continue expanding features without requiring lawyers to become AI prompt engineers.
On access to justice, Riehl believes AI can help legal aid and pro bono attorneys handle more matters more efficiently. He also sees potential for AI assistance to pro se litigants to promote fairer outcomes. For judges, AI could help manage pro se cases and expedite decision-making. Overall, Riehl is optimistic about AI augmenting legal work over the next two years through ongoing improvements.
Riehl discusses vLex’s new Vincent AI system and its ability to efficiently research legal issues across jurisdictions and across languages. He provides insight into the technology’s development and potential while emphasizing understandable user interaction. The conversation highlights AI’s emerging role in legal services to increase productivity, insight, and access to justice.
vLex Vincent AI
Twitter: @gebauerm, or @glambert
Threads: @glambertpod or @gebauerm66
Music: Jerry David DeCicca
Greg Lambert 0:33
So welcome to The Geek in Review, the podcast focused on innovative and creative ideas in the legal industry. We’re in Chicago at the Ark KM Conference, and I grabbed Damien Riehl from vLex Fastcase to give us a demonstration of — or to talk about — the Vincent AI tool that was released this week. So Damien, welcome back to the show.
Damien Riehl 0:57
Thank you so much for having me back. Really thrilled to be here.
Greg Lambert 0:58
All right. So we’ve all seen some of the news out this week that you are releasing Vincent AI. Vincent has been around for a little bit of time now. So can you just give us an overview of the history — how this got started and where we are right now?
Damien Riehl 0:59
Absolutely. So vLex has been around, with its component companies, for 35 years. And of course Fastcase has been around for 20 — 23 — years. So the companies have been amassing a massive dataset. And you know, we’ve known for years that data is the new oil, and we arguably have the largest, broadest oil field in the world, across 100-plus countries worldwide. It’s really hard to amass an oil field of 100-plus countries, but once you’ve amassed it, like we have, then you can run large language models. So really, it starts 35 years ago with that massive oil field. Then fast forward to this past March, when vLex and Fastcase merged. If you know Fastcase, Ed and Phil have spent the last 20 years collecting and democratizing law in the United States. And Lluís and Angel in Spain have been doing that, with vLex’s component companies, for over 35 years — not just in the United States, but worldwide. So now we’ve taken Ed and Phil’s US oil and combined it with Lluís and Angel’s worldwide oil, and now we have over a billion legal documents across which we are able to run large language models.
Greg Lambert 2:26
So you guys are the OPEC of legal information.
Damien Riehl 2:30
That’s right — without any of the antitrust connotations that OPEC might have. Yeah, people are finding that without the oil, you’re just a wrapper around GPT, right? The oil is necessary — you refine that oil into the type of products that you’re looking for.
Greg Lambert 2:44
Okay. So I know that you’ve had a good integration between vLex and Fastcase just with regular legal information services — search. So how are you leveraging the large language models to improve upon that?
Damien Riehl 3:04
Sure. So, you know, stage one is to run a search — to use a vector database to do what is known in the industry as retrieval-augmented generation, also known as RAG. You take our billion documents and, based on the query, refine them down to maybe 50 or 100 documents that are most relevant to the query, and then run the large language models across that smaller dataset to get the answer that you’re looking for. So that’s really where you’re taking our oil, shrinking it down to a manageable size, and then running large language models on that.
Greg Lambert 3:44
Okay. And in doing that, you’re getting rid of the issues with hallucinations that you hear a lot about. As you start to introduce these into law firms, that’s the first thing we get hit with. So that kind of grounds the results back to the relevant documents themselves, and cites to those documents.
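The retrieve-then-generate flow Riehl describes can be sketched roughly as follows. Everything here — the keyword scoring, the corpus, the prompt wording — is an illustrative toy, not vLex’s actual implementation; a real system would use learned embeddings and a real model call where this sketch stubs them out.

```python
# Toy RAG pipeline: narrow a large corpus to the most relevant documents,
# then hand only that small, grounded set to the language model.

def score(query: str, document: str) -> int:
    """Toy relevance score: count of query terms appearing in the document."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in document.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by the toy relevance score."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved passages to limit hallucination."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below, citing them by number.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

corpus = [
    "Trade secret law protects customer lists kept confidential by the employer.",
    "Zoning ordinances govern permissible land use within a municipality.",
    "A former employee may not misappropriate a confidential customer list.",
]
top = retrieve("trade secret customer list employee", corpus, k=2)
prompt = build_grounded_prompt("Can a former employee take a customer list?", top)
```

The point of the final step is the grounding: the model is told to answer only from the retrieved sources, which is what makes the output verifiable against real documents.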
Damien Riehl 4:10
That’s exactly right. So option one is to ask, say, ChatGPT out of the box: give me a motion to dismiss for breach of contract in the Southern District of New York, and give me cases that are relevant. Because ChatGPT focuses on the internet writ large, it may well hallucinate things. That’s option number one. Option number two is to find the actual motions to dismiss for breach of contract in the Southern District of New York and say: here are 20 of those — give me the arguments and cases statistically most likely to win. The odds of hallucination for that now-constrained, retrieval-augmented dataset are very small, so in practice that largely solves the hallucination problem.
Greg Lambert 4:51
And let me ask you about this: how do you ensure that the initial query or prompt — with the retrieval-augmented generation — picks the 50 cases that are relevant? Because a lot of us are asking in just common English, and I’m not even sure if there are multiple languages at play with vLex. How is it that you tune the RAG model so it can interpret the query, the prompt, and get you the relevant cases first?
Damien Riehl 5:27
There are two methods for that, and really the best way is an ensemble of the two. Method number one is what most people are familiar with: regular symbolic AI. The symbolic AI knows that this is a motion to dismiss for breach of contract in the Southern District of New York — you can tag those things up using symbolic AI. That’s what Itai Gurari, somebody we hired through the Judicata acquisition, has built: symbolic AI that can say with 99.6% certainty that this is a motion to dismiss for breach of contract in the Southern District of New York that has been granted, for example. So you could do the retrieval augmentation that way. Another way is through the vector database I mentioned earlier, where you could say: give me all the motions to dismiss. The precision of the vector database is probably going to be less than the precision of the symbolic AI, because there are error rates in the vector database that maybe the symbolic approach doesn’t have.
Greg Lambert 6:26
But you said there’s kind of a blending of the two you use.
Damien Riehl 6:29
Right. If you do both, then you’re able to say: give me the motions to dismiss for breach of contract in the Southern District of New York that relate to this particular factual scenario — say, one that involves a car accident. That’s where the vector database comes in. The vector database is able to say: well, “car” is in the same vector space as “automobile,” as “van,” as “truck,” as whatever. Because it knows syntactically and semantically that they live in the same vector space, it can work with the symbolic AI: of these motions, give me the ones with this factual scenario. So even if I search for “car,” it’ll pull in the “van” cases too.
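The ensemble Riehl describes — a precise symbolic filter followed by a looser semantic match — can be sketched like this. The tags, synonym table, and records are invented for illustration; in a real system the synonym step would be a learned embedding lookup rather than a hand-written table.

```python
# Toy ensemble: stage 1 is an exact, high-precision metadata filter (the
# "symbolic AI"); stage 2 stands in for vector similarity, matching any term
# in a concept's semantic neighborhood, so "car" also surfaces "van" cases.

SYNONYMS = {"car": {"car", "automobile", "van", "truck", "vehicle"}}

motions = [
    {"type": "MTD-breach", "court": "SDNY", "facts": "dispute after a van collision"},
    {"type": "MTD-breach", "court": "SDNY", "facts": "software licensing dispute"},
    {"type": "MSJ",        "court": "SDNY", "facts": "truck accident and contract"},
]

def symbolic_filter(docs, motion_type, court):
    """Stage 1: keep only documents whose symbolic tags match exactly."""
    return [d for d in docs if d["type"] == motion_type and d["court"] == court]

def semantic_match(docs, concept):
    """Stage 2: match on the concept's neighborhood, not the literal word."""
    neighborhood = SYNONYMS.get(concept, {concept})
    return [d for d in docs if any(w in d["facts"] for w in neighborhood)]

candidates = symbolic_filter(motions, "MTD-breach", "SDNY")
hits = semantic_match(candidates, "car")  # finds the van case via the neighborhood
```

Note the ordering: the symbolic filter runs first, so the fuzzier semantic step only ever sees documents whose type and court are already certain.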
Greg Lambert 7:06
All right. Well, it sounds very exciting. So you’ve launched Vincent AI this week — that’s right, this past week. How’s the reaction been so far? I know you’ve had people testing it, but what’s the reaction been?
Damien Riehl 7:23
So last week I spoke at legal apps.com — I gave the keynote there — and I gave kind of a sneak preview last Monday. Then we launched it last Tuesday, and between last Tuesday and today, I have literally had back-to-back 30-minute demos, 12 hours a day, with everyone who’s excited about what we’re building. Almost 100% of those have turned into: yes, I want to test this, I want to bring this into my firm. So, so far, I have 100% of people interested.
Greg Lambert 7:51
Okay. And if I’m at a firm, is this something for my litigators? Is it something for my transactional folks? Where do you see this having the quickest impact?
Damien Riehl 8:03
I would say that anyone who needs research can use our tool. Obviously, the litigators will want to know: what are the arguments that are most likely to win? And that’s something we do — I want to get a lay of the land: is California law better, or is New York law better, doing that sort of analysis? That’s obviously good for litigators. But we also have that worldwide dataset — we currently have UK data, and we also have Spanish data. To the extent that our UK friends are saying, hey, we have lawyers in the UK that feel left out of the generative AI boom, because there’s been no UK oil thus far in the tools that have existed — so I would say anyone who wants to know, what are my privacy obligations in these 50 states, plus the UK, plus Spain, they might be interested too, on the transactional side.
Greg Lambert 8:48
And you and I had a conversation on this last night — it’s great that you have such a worldwide database of information, but I’ve heard people like Bob Ambrogi say: I’m just a poor lawyer from Massachusetts, and I only work in this state. How do you address people saying, well, that’s just more than I really need?
Damien Riehl 9:13
Sure. So I will say that there are a lot of firms that do multi-jurisdiction, cross-border work — you may have clients that do work in Mexico, or clients that work in multiple countries. So obviously, for the largest firms and those cross-border practices, we can help them. And then for those like Bob, who say, I’m just a Massachusetts lawyer that does Massachusetts law — I would wager that Bob actually has clients that have not only offices in the US, but maybe offices in Canada, or the UK, or the EU. If Bob can say, hey, I now have access to these datasets, the clients might say, hey, I want to do a 50-state survey, or maybe a multi-country survey — work they may not have known Bob had access to in the past. So I would say even those limited to one jurisdiction may have clients that expand beyond that jurisdiction.
Greg Lambert 10:00
And our reach is probably further than we think it is. So, well, do you want to jump in and give us a little demo?
Damien Riehl 10:06
That sounds great. So the problem that we’re solving is the same problem that our poor friend Steven Schwartz from New York ran into — he’s in our archives; we’ve talked about him a few times. Poor Steve — I feel like he’s gotten the rough end of the deal. If you ask ChatGPT, out of the box, a legal question post-Avianca — and now it’s kind of neutered; OpenAI has kind of neutered those results — it’s going to give you some things that are kind of directionally okay. But you’re not going to see very good cases or statutes; you’re not really going to see things that you can enter into a case. So if you’re going to have to do the research anyway, why even go to GPT in the first place, right? Instead of that, what you can do is ask a new question. You could say: what is trade secret law regarding former employees allegedly stealing customer lists? And what you see here is it’s asking this question across real, non-hallucinated cases, real non-hallucinated statutes, real non-hallucinated regulations, and real non-hallucinated secondary sources — high-quality secondary sources, like the ABA’s. What we’re doing with these secondary sources is favoring quality and recency: if it’s recent and high quality, that’s what’s included. So here you can see the Connecticut Bar Journal has come up. What it’s doing right now — you can see it’s already pulled up six cases; it found 42 of them, and it’s reviewing 15 of those. And what you see here — the blue lines on the right — are actual, non-hallucinated block quotations, in this case from one of the retrieved opinions. You find those block quotations in every single one of the sources. This is the thing that distinguishes us from our competitors: with large language models, the name of the game is trust but verify. You have to trust the output,
but you have to verify that it’s actually correct. And we make it the easiest to trust and verify, because right here, on the same page, it gives you the verification — you can see how well this text maps to the question, right here. So it’s dead simple to trust and verify. Stage one is the quotation. Stage two is we ask the large language models: how well does this quoted text answer this question? What you see here is the model explaining how well or how poorly the quote answers the question, and then it provides a confidence score — 100%, 90%, 80%, 70%. If it’s below 70%, we don’t even show it to you — because if it’s below 70, you don’t care. So that’s thing number two: it’s now gone through maybe 100 cases, found the 14 that matter, summarized each of those 14, and now you can see a memorandum. Whenever I had associates, I would say: hey, don’t make me wait until the bottom to get the answer — give me the answer right up top. So here’s a one-paragraph answer, right up top. And once you’ve given me the answer, then go into the case law discussion — one paragraph per case. So here you’re going to see one paragraph per case, discussing these various cases. And then, what you’re going to see in a moment — you know that large language models are sycophants; they tell you what you want to hear, right? We counteract that by prompting: don’t tell us just what we want to hear, tell us what we need to hear. And I’m going to pull the old baking-show trick and go to an already-baked example, since OpenAI is going to take a bit to make this memorandum. But much like “don’t tell me just what I want to hear, tell me what I need to hear” —
here you’ll see the exceptions and legal caveats. Give me the exceptions to the rules — the flies in my ointment. Here you can see various exceptions: for example, it doesn’t prevent you from announcing your change of employment, and if the list contains information easily obtained from public sources, maybe you can’t claim it. So these are the exceptions to the rule, to counter that sycophancy of large language models. And if we go back to our original one: think about how long it would take an associate to read through hundreds of cases, land on 14, plus six secondary sources, and then turn those 20 sources into a memorandum. It’s probably going to be longer than the two to three minutes it takes to do this right here.
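The scoring-and-filtering step of the demo — rate each excerpt, hide anything under 70%, then assemble an answer-first memo — can be sketched as below. The scores, case names, and memo layout are invented; in the real product the confidence numbers come from a model call, not a hard-coded list.

```python
# Toy version of the confidence gate and answer-first memo from the demo.

THRESHOLD = 70  # percent; excerpts scoring below this are never shown

excerpts = [
    {"case": "Case A", "quote": "Customer lists may be trade secrets.", "confidence": 95},
    {"case": "Case B", "quote": "Zoning variance denied.",              "confidence": 40},
    {"case": "Case C", "quote": "Misappropriation requires secrecy.",   "confidence": 80},
]

def visible_excerpts(items, threshold=THRESHOLD):
    """Keep only excerpts confident enough to show, strongest first."""
    kept = [e for e in items if e["confidence"] >= threshold]
    return sorted(kept, key=lambda e: e["confidence"], reverse=True)

def draft_memo(answer: str, items) -> str:
    """Answer up top, then one line per surviving case."""
    body = "\n".join(
        f"{e['case']} ({e['confidence']}%): {e['quote']}"
        for e in visible_excerpts(items)
    )
    return f"ANSWER: {answer}\n\nCASE DISCUSSION:\n{body}"

memo = draft_memo(
    "A confidential customer list can qualify as a trade secret.", excerpts
)
```

The threshold is doing the “if it’s below 70, you don’t care” work: the low-confidence zoning excerpt never reaches the reader.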
Greg Lambert 14:17
Yeah. So one of the things that early models had issues with — the cool thing about using ChatGPT, the chat, is that you can ask it follow-up questions. You can narrow in on or expand upon the previous question, and it retains the information from before. Is that something you’re able to do here?
Damien Riehl 14:44
Sure. So here you can see you can edit the question — if I want to edit the question, I can say to include more things like this. As for the chat interface: we’ve thought a lot about the user experience, also known as UX, and whether it should be a chat experience. Here I’m going to show on my screen — OpenAI’s ChatGPT is a blank box that I can do anything with, right? The problem with that is it gives you the paradox of choice: I can do everything, so I’m paralyzed, I don’t know what to do. Especially if I’m a lawyer that doesn’t want to be a prompt engineer — I just want an answer. That paradox of choice is something we’ve avoided, or at least tried to avoid, by saying: hey, if you want to answer a question, you don’t need to be a prompt engineer. All you have to do is ask a question, much like you would ask a first-year associate. And then we give you our curated, well-structured memo as the output. The chat interface we may be adding later on. But we think that maybe the chat — that blank box that gives you the paradox of choice — is maybe not the right paradigm. Instead, you just want to ask a question, get an answer, and then move on to the next question you have. And you can do it this way.
Greg Lambert 15:48
Have you found that there are better ways to prompt Vincent for the information? Or do you suggest just asking a regular question?
Damien Riehl 16:00
We are leaning toward asking a question much like you would ask a first-year associate — much like you would write an email: hey, first-year associate, could you answer this legal question? For example: what is the trade secret law for employees stealing customer lists? That is a straightforward question that gets a straightforward answer. If you provide a paragraph with lots of caveats and that kind of thing, you can imagine that with the vector-space embedding search, the number of cases matching that entire paragraph will be very slim. So I would say: lawyers are good at speaking in declarative sentences and giving a clear instruction to a first-year associate that doesn’t know any better, right? Do the same thing for the language model.
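Riehl’s intuition — that a concise question matches sources better than a caveat-stuffed paragraph — can be illustrated with a toy similarity measure. Real systems use learned embeddings, not bag-of-words; this sketch only shows, under that simplification, how extra filler words dilute the query’s overlap with a relevant passage.

```python
# Toy bag-of-words cosine similarity: a short, on-point query scores higher
# against a relevant passage than the same ask buried in caveats.

import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

passage = "trade secret law protects customer lists from former employees"

short_q = "trade secret law customer lists former employees"
long_q = ("bearing in mind various caveats exceptions and jurisdictional "
          "nuances possibly including equitable considerations what if "
          "anything does trade secret law say about customer lists")

# The concise query concentrates its weight on the terms that matter.
assert cosine(short_q, passage) > cosine(long_q, passage)
```

The effect is smaller with modern embedding models, which handle long queries better than raw word overlap does — but the advice to lead with a clear, declarative question still holds.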
Greg Lambert 16:41
And can you restrict it to certain districts, or courts, or states? And can you do multiples of those, or is it just one at a time?
Damien Riehl 16:53
It’s almost like you’re reading the script for me. So, for answering a question — very soon you’re going to see an “add federal” option, to say California plus Ninth Circuit, etc. But let’s go on to the next skill, where I can actually compare jurisdictions. I can take this question that I’ve asked — maybe I’m a litigator, and the other side has said that California law applies, but I think that maybe New York is better. And my client has a United Kingdom presence; they also have a Spanish presence. Now, this is unique in the industry, we think: running four memoranda simultaneously — one memorandum for California, one for New York, one for the United Kingdom, and one for Spain. We provide a memorandum for each of those. And very soon, what you’re going to see is a fifth memorandum that takes those four memoranda and says: how is each of these jurisdictions similar, and how are they different? And for those that differ, what are the distinguishing aspects of their jurisdictions? I’m particularly proud of the Spanish question we’re answering here, because we’re asking an English question, and we’ve translated it into Spanish. Right now it’s found 25 Spanish-language authorities that it’s reviewing, and very soon you’re going to see Spanish cases, Spanish statutes, and Spanish regulations. Then, in that stage two we talked about, it’s going to round-trip it into English and give you an English answer on the Spanish law. On the left-hand side, you’ll see a memorandum in English about Spanish-language law. We think that is unique in the industry — the same goes if you want to ask questions in German, and French, and Portuguese, etc.
The system knows that my language settings right now are English, but I could just as well be Italian.
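The round-trip Riehl demos — English question in, research done in Spanish, English memo out — can be sketched as a three-step pipeline. The `translate` table below is a stand-in for a real machine-translation service, and the one-entry “authorities” corpus is obviously fake; only the shape of the flow is the point.

```python
# Toy round-trip research pipeline: translate the query into the target
# language, search that language's corpus, then translate the answer back.

FAKE_TRANSLATIONS = {
    ("en", "es", "what is trade secret law?"):
        "¿qué es la ley de secretos comerciales?",
    ("es", "en", "los secretos comerciales están protegidos."):
        "trade secrets are protected.",
}

SPANISH_AUTHORITIES = {
    "¿qué es la ley de secretos comerciales?":
        "los secretos comerciales están protegidos.",
}

def translate(text: str, src: str, dst: str) -> str:
    """Stand-in for a machine-translation call; a real system would call an
    MT service here."""
    return FAKE_TRANSLATIONS[(src, dst, text)]

def research_round_trip(question_en: str) -> str:
    """English in, English out — with the research done in Spanish."""
    question_es = translate(question_en, "en", "es")  # 1. translate the query
    answer_es = SPANISH_AUTHORITIES[question_es]      # 2. search Spanish sources
    return translate(answer_es, "es", "en")           # 3. round-trip the answer

memo = research_round_trip("what is trade secret law?")
```

The same shape generalizes to any language pair — which is what lets a Portuguese-speaking user ask about Spanish law and get a Portuguese answer.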
Greg Lambert 18:43
So you can ask it — prompt it — in Spanish or Portuguese or French?
Damien Riehl 18:49
That’s right. So you’re able to ask a Portuguese question on Spanish law, and then get a Portuguese answer to that.
Greg Lambert 18:57
Very interesting. Thinking back — not to me taking Spanish in high school, because they didn’t offer it, but to my kids taking Spanish — you know, there are differences in the language between Spain and Mexico, or Venezuela, or Chile. There are different ways that things are said. It’s kind of like in English: we don’t use as many “u”s in words like “labour” and “neighbour.” Does it adjust for the jurisdiction as well?
Damien Riehl 19:36
That’s a really good question. There are two parts — two answers to your question. One is on the translation side: going from English, does it translate to Spanish Spanish or Mexican Spanish? They may well be different, and I actually don’t know the answer to that. Then stage two: as it pulls in the Spanish-language answer, does it distinguish Mexican Spanish from Spanish Spanish in doing that vector search? I also don’t know the answer to that question — but that’s a really good one for Angel. I’ll take that back to him. Especially because Angel speaks Catalan at home, which is — yeah — completely different from Spanish. But they’ve been around for 20-plus years, and they’ve been doing this kind of translation from language one to language two for many, many years now. So this is a relatively solved problem.
Greg Lambert 20:29
Okay. So you talked about phase one, phase two. Are there long-range plans or ideas you have on where you think this could be in, you know, two or three years, that we may not be thinking about?
Damien Riehl 20:45
Yeah, I would shorten your timeframe from two or three years to the next six months, right? I think there’s lots of low-hanging fruit that the team and I have been thinking through, and one of them is taking our Docket Alarm data — 775 million judicial opinions, briefs, pleadings, and motions filed at the district court level, because that’s actually where most of the work is done. Almost none of the work is at the appellate level; almost all of it is at the district court level. So imagine: right now, today, in Docket Alarm, I can say I’m in front of Judge Smith in the Southern District of New York, and I’m doing a motion for summary judgment. I can, right now, find all of the motions for summary judgment that she’s granted, in all of her cases. Through retrieval-augmented generation, maybe that’s 45 motions for summary judgment that she’s granted in contract cases. Right now, today, I can copy and paste from those 45 successful motions and say: Your Honor, this is just like the case you decided yesterday; you should decide the same way today. Helpful today. But how much more helpful would it be to take the text of those 45 motions, put them into a large language model, and say: now give me a new motion for summary judgment with arguments and cases that are statistically likely to win for this judge, for this cause of action. And then take your facts — here are the new facts — implement those into this motion for summary judgment, and tell me how those facts allow me, as the plaintiff, to win. That, I think, is where the industry is going. But to do the magic I just described, you need to have all the motions for summary judgment that have been successful, and tagging that up with high accuracy is something that’s very, very hard. I think we maybe do it better than anyone.
Greg Lambert 22:25
And that wouldn’t just be relying on large language models to do this — you’d have to have analytics, a huge corpus of information that has been analyzed. I guess you could use the generative AI to help with the analytics. But you’re kind of merging multiple different types of tech into one output.
Damien Riehl 22:51
That’s exactly right — and to put a finer point on it: today, right now, I can say this is a trademark case in the District of Minnesota, in front of my friend Judge Susan Richard Nelson. We’ve already extracted all of the complaints and answers, all the motions, and all the orders on the motions. We’ve done that not for five or ten different motion types, like maybe some of our competitors, but for 225 different motion types. And once you’ve extracted whether they’ve been granted, denied, or partially granted, I can say: hey, I want Judge Susan Richard Nelson to grant my motion for summary judgment. So now, here are 128 cases where Susan Richard Nelson has granted summary judgment. And I can say: I don’t care about all of those, I just care about contract cases. So — cool, here are the 45 contract cases, where literally you could say “this is just like the case you decided yesterday,” copy and paste, copy and paste. Now you can take those 45 documents, take all the text from them, throw them into a large language model, and say: give me statistically-likely-to-win arguments and citations for this judge, for this motion type, for this cause of action. To do that, you need this highly precise tagging. When we acquired Judicata, Itai Gurari, who was the CEO of Judicata, came over, and his precision rate on what you see right here is 99.6% — humans are about 96%. This is 99.6. If it were, say, 90% with a machine learning model, you wouldn’t trust this number; you wouldn’t trust that this number is correct, because there’s a 10% swing. But because it’s almost 100% precise, you can trust it — and you can trust the “granted” number, that these are all successful motions for summary judgment.
None of this is generative AI. All of this is symbolic AI — a really smart person like Itai Gurari spending years of his life, after working at Google Scholar, bringing all that experience to reach that 99.6% precision. That is all symbolic AI, and for others to try to replicate that with generative AI — good luck with that.
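The narrowing funnel Riehl walks through — from symbolically tagged filings, to motions of one type a given judge granted, to one cause of action — is a pair of deterministic filters over trusted tags. The records and tag names below are invented for illustration; the point is that this stage is plain filtering, not generative AI.

```python
# Toy version of the tagged-motion funnel: judge + motion type + outcome
# first, then narrow by cause of action (the "128 -> 45" step in the demo).

filings = [
    {"judge": "Nelson", "motion": "MSJ", "outcome": "granted", "cause": "contract"},
    {"judge": "Nelson", "motion": "MSJ", "outcome": "granted", "cause": "trademark"},
    {"judge": "Nelson", "motion": "MSJ", "outcome": "denied",  "cause": "contract"},
    {"judge": "Smith",  "motion": "MSJ", "outcome": "granted", "cause": "contract"},
]

def granted_motions(docs, judge, motion):
    """All motions of this type that this judge granted. Only useful if the
    symbolic tags are near-perfectly precise, as Riehl emphasizes."""
    return [d for d in docs
            if d["judge"] == judge
            and d["motion"] == motion
            and d["outcome"] == "granted"]

def by_cause(docs, cause):
    """Narrow further to a single cause of action."""
    return [d for d in docs if d["cause"] == cause]

nelson_msj = granted_motions(filings, "Nelson", "MSJ")  # the "128 cases" step
contract_wins = by_cause(nelson_msj, "contract")        # the "45 cases" step
```

Only after this funnel would the texts of `contract_wins` be handed to a language model — which is why the tag precision, not the model, carries the trust.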
Greg Lambert 24:55
Is there anything that’s still stumping you as you work on the next phases of this — any issues you’re having making the generative AI work the way you want, that you think will improve over time?
Damien Riehl 25:17
I really haven’t seen any. You know, it’s all about the prompting, as you and everyone else who has experimented with this knows. With a few iterations of the prompting, you can get really high-quality results, not only from GPT-4 but also Claude — and we’re looking at all the other models. Good prompting is good prompting. And once you have that good prompting, I can do it on the back end of my software, so the user doesn’t have to do it on the front end. So lawyers do not have to be prompt engineers — we do the hard work for you. Sam Altman, the CEO of OpenAI, which makes ChatGPT, when asked how important prompt engineering is going to be in the future, said he hopes prompt engineering is not a thing in three years — or even one year. He on the OpenAI side, Anthropic on their side, Google on their side, and Meta on the Llama side — all those companies will do it on the back end, and I’ll do it on my back end, so the end user doesn’t have to prompt-engineer.
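“Prompting on the back end” typically looks something like the sketch below: the lawyer types a plain question, and the software wraps it in an engineered template before it ever reaches the model. The template wording here is an invented example, not vLex’s actual prompt.

```python
# Toy server-side prompt wrapper: the user writes a plain question; the
# engineered scaffolding is applied before the model call, invisibly.

SYSTEM_TEMPLATE = (
    "You are a careful legal research assistant. Answer only from the "
    "provided sources, cite every claim, state exceptions and caveats, and "
    "say 'not found' rather than guessing.\n\n"
    "Question: {question}"
)

def wrap_user_question(question: str) -> str:
    """The user never sees or writes this scaffolding; it is applied
    server-side before the (omitted) model call."""
    return SYSTEM_TEMPLATE.format(question=question.strip())

prompt = wrap_user_question("  What is trade secret law on customer lists?  ")
```

This is the sense in which the user “doesn’t have to be a prompt engineer”: iterating on the template is the vendor’s job, done once, behind the question box.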
Greg Lambert 26:12
Are you getting questions about how secure the information is that users are putting into Vincent?
Damien Riehl 26:18
Yes. And, you know, before I worked at Fastcase and vLex, I worked in cybersecurity — you might remember that Facebook hired me to investigate Cambridge Analytica. So cybersecurity runs deep in my bones. Whenever I would advise clients on the cybersecurity side, I would say: what are the crown jewels that you’re looking to protect? Is it the crown jewels, or is it what you had for lunch yesterday? The crown jewels are very important; what you had for lunch is less important, so you protect the crown jewels more. When you think about the queries that are put into these systems, you’re not saying: hey, we represent Company X in their upcoming merger with Company Y — what’s going to happen with that? You’re not putting that sensitive information into a research query. What you’re saying is: what are the antitrust implications of two companies, one in the UK and one in the US, merging? That is more like what you’re having for lunch — less sensitive information. That’s thing number one. Thing number two: even though it’s less sensitive, of course we keep it very secure. And of course we have instances of the large language models where the providers have told us that any input will not make its way back into the model itself. So yes, security is deeply top of mind for us, and things are very secure.
Greg Lambert 27:31
Okay. Where do you kind of see this heading over the next... this is a variation of our crystal ball question, I guess. Where do you see this heading over the next two years?
Damien Riehl 27:45
I would say that there’s lots of low-hanging fruit. When I left the practice of law in 2015, I thought computers were going to be able to do language really well. And now, computers are able to beat 90% of humans on the bar exam, right? So they do language really well. All of the product ideas I’ve had since 2014 are now within reach, and almost trivial to do. So I would say that the next two years will be filled with all of that low-hanging fruit, all these products that I’ve had in my brain and the team at vLex has had in their brains for years, that we’re going to be able to roll out seriatim, very quickly. And I think everyone in the industry will hopefully be able to follow suit.
Greg Lambert 28:30
Okay. Well, let me ask you an access to justice question, then. Will this improve access to justice? And if so, how? What’s the lowest-hanging fruit with A2J?
Damien Riehl 28:42
Yeah, I’ll answer that in two parts. Access to justice is closely related to how AI affects lawyers, because we have a business model that is largely hourly today. So: how will AI affect the hourly business model, and then how will it affect the A2J model? I’ll talk about the hourly model first. A lot of people are saying, "My associate spends 10 hours answering a question that the tool we just showed a minute ago will now answer in a minute and a half. Where does my billable hour go?" My response to that is: they spent 10 hours yesterday on question number one, but if you can get the answer in a minute and a half, they can move on to question number two, and three, and 20, and 30. So in those 10 hours, you actually get 20 questions answered rather than just the one. Maybe you spend the same amount of time, but think about how much better your product is for your clients, because you can do the follow-up. In data science, when I worked with Facebook’s data scientists, the answer almost never comes with question number one. You get the answer to question one, you say, "Oh, isn’t that interesting?" and now you ask question number two, number three. So the hourly business model may be in jeopardy because a task only takes a minute and a half, but maybe we’ll just fill that time with more work. Then, to the access to justice question: you can imagine that if you work for a legal aid organization, or if you’re doing pro bono work, you start with question one, answer it in a minute and a half, and get it out the door more quickly. It becomes cheaper for you to provide those services to the 80% of people who can’t afford lawyers today. Maybe we’ll shrink the 10-hour task to a one-minute task and make us, as a profession, more able to serve the populace.
Greg Lambert 30:13
Okay. Let me ask you one last question then, and this just popped into my head. If I were a judge, how would I approach this change, now that people are going to be able to analyze my decisions? How would you suggest that a judge use AI tools like this?
Damien Riehl 30:37
Two answers to that. I clerked for a state appellate judge and a federal district court judge, and I know that, especially at the district court level, judges have to deal all the time with pro se litigants, that is, unrepresented litigants. My job as a judge is very difficult with those pro se litigants, because I have to essentially serve as their lawyer and try to be fair to the other side at the same time. That is a very difficult proposition. Pro se complaints that go anywhere are generally rare today. But you can imagine that, emboldened with ChatGPT, a pro se litigant might make a pretty good complaint with GPT, right? Where in the past the judge might have told that pro se person, "We have these things called procedural rules," and dismissed the case early, maybe GPT is able to provide a really good answer to a motion to dismiss, so I can’t pat them on the head anymore and make them go away. So between more people filing pro se filings and fewer of them going away on a motion to dismiss, if we think we have a backlog today, we ain’t seen nothing yet. So I talked to a bunch of judges, including a bunch of Article III judges who had me speak at their retreat over three days, and I said a way to help with this is to use large language models as a system to assist the pro se litigant. Then I don’t have to serve as their lawyer; they essentially have a system to assist them, so I can be fairer and really provide access to justice in a way that is truly fair.
And the second answer: I clerked for a judge, and mostly my job as a clerk was to say, "Your Honor, plaintiff said this, defendant said that, and I think you should say the following. Here are the 12 causes of action, and here’s how I would do that analysis for each one." You can imagine large language models making that process much faster, where the model draws that connection, plaintiff says, defendant says, and also says, "Hey, there are three elements to this claim, defendant missed number three, therefore defendant loses." That makes the process of deciding faster. So to your question of whether judges are going to be worried that people can flyspeck them and say, "Oh, this judge is fairer than that judge": at least from my experience with judges, they are way more excited about the thing I just described than they are worried about the thing you’re describing.
Greg Lambert 32:59
Well, Damien Riehl, thank you very much for cutting out of lunch and meeting with me today. If someone wants to learn more about this, where do they need to go?
Damien Riehl 33:11
vLex.com. That’s v-l-e-x dot com. And the tool is Vincent AI. And if you want to reach out to me, it’s either on X (formerly Twitter) or LinkedIn. DamienRiehl is my X handle, and I’m happy to talk to anyone.
Greg Lambert 33:26
Alright, thank you very much. And thanks, everyone, for tuning in to The Geek in Review. I can be reached online on X at glambert, and at glambertpod on Threads. Or better yet, LinkedIn; I’m spending most of my time on LinkedIn. So thanks again, Damien. And our music is from Jerry David DeCicca. Thanks, Jerry. See you later.