This week, I share an article by private equity lawyers about how their private equity clients are using AI, along with an online course about RAG, a potential solution to AI hallucinations.

AI will change the world, but how will it change M&A? That question is the focus of this newsletter. I am not an expert on either M&A or AI, but I want to learn about both topics and how they intersect. I thought there might be others in my situation (or people who are experts in one field or the other) who would find information on M&A and AI helpful in their careers, so I created this newsletter to track and share what I learn.


Private Equity Lawyers’ Perspective on AI’s Impact on Private Equity

I recently stumbled across an article in M&A The Magazine by Weil, Gotshal & Manges partners Arnie Fridhandler and Olivia Greer titled, “Private Equity and AI: Just the Beginning.” In the article, Fridhandler and Greer share how their PE clients have started to use AI and their views on how AI will change the private equity business.1

The authors first focus on how AI is currently used in PE firms. Now that AI is integrated into tools like Microsoft Office and Zoom, PE firms have easy access to AI to work more efficiently on day-to-day tasks. Beyond the day-to-day, the authors report that more PE firms are using AI to track deal flow and portfolio company reporting.

Next, the authors point out some shortcomings of AI that PE firms are navigating, such as hallucinations, privacy concerns, and skepticism about AI’s ability to do nuanced work. We have talked a few times about hallucinations (see below) and when to use AI, but we have not talked much about privacy. Privacy is a huge concern for almost all professions, and especially for M&A. The authors report that PE professionals are hesitant to put proprietary information into chatbots. That hesitancy is warranted, given the uncertainty surrounding the privacy policies of several of the leading AI models and, as the authors mention, the lack of uniform regulation of AI privacy. I believe there is a solution to this problem, and it is something I plan to write about soon.2

Looking toward the future, the authors predict that PE firms will increasingly use AI in their business as more PE-specific AI tools are created. The authors think that PE firms will lean into innovation during 2024 and actively seek “high-impact AI use cases.”

I think this article gives a great industry-wide perspective on AI in PE firms. It’s awesome to see PE lawyers take an interest in how AI affects their PE clients’ work. This is important for many reasons, but especially because PE lawyers who understand AI can point out potential biases in AI models, double-check accuracy, and advise clients on appropriate uses for AI.3

RAG: How Retrieval Augmented Generation Can Prevent AI Hallucinations

Hallucination is a problem for LLMs and AI generally. Everyone knows the horror story of the “AI Lawyers” in New York who were sanctioned by a federal court after ChatGPT made up fake cases that they cited in a brief.

The question that many are asking is whether it is possible to stop AI from hallucinating and make its responses reliable enough for high-stakes settings like M&A. I set out to find the answer by watching an online class about retrieval-augmented generation made available by Nvidia. Below is my summary of the class.

You can sign up for several different AI courses through Nvidia by following this link.

Fine-Tuning vs. RAG

The most obvious and basic solution to hallucinations in generative AI is to fine-tune the model over and over until it stops making things up. Unfortunately, this is extremely costly and time-consuming. It also does not really solve the problem: retraining only “fixes” a hallucination after the fact, once someone has already caught it.

RAG may be able to solve this problem. RAG optimizes LLM output without costly retraining, and it can prevent hallucinations from occurring in the first place.

How RAG Works

Here is how I understand RAG: the documents the model is supposed to rely on are split into “chunks” and indexed ahead of time; when the user submits a prompt, the system searches that index for the chunks most relevant to the prompt, and the model bases its output on the retrieved text. The video describes RAG as guardrails for the AI model that ensure all outputs are appropriate. For example, guardrails can ensure that the model only looks at accurate sources, like a firm’s knowledge base.
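
To make that concrete, here is a minimal sketch of that flow in Python. It is my own illustration, not code from the Nvidia course, and the embed() function is a toy stand-in (a letter-frequency vector) for the neural embedding model a real system would use:

```python
from typing import List

def embed(text: str) -> List[float]:
    # Toy stand-in for a real embedding model: a 26-dimensional
    # letter-frequency vector. A real RAG system would call a neural
    # embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def chunk(document: str, size: int = 500) -> List[str]:
    # Split the knowledge base into fixed-size "chunks" for indexing.
    return [document[i:i + size] for i in range(0, len(document), size)]

def retrieve(question: str, chunks: List[str], top_k: int = 3) -> List[str]:
    # Score every chunk against the question and keep the best matches.
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine_similarity(q_vec, embed(c)), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, context: List[str]) -> str:
    # The model is instructed to answer only from the retrieved text.
    return (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(context) + "\n\n"
        "Question: " + question
    )
```

A production system would swap the toy embedding for a real embedding model and a vector database, but the flow is the same: chunk and index the knowledge base, retrieve the best-matching chunks, and answer only from them.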

Without RAG, models simply predict the most likely output, which, as the “AI Lawyer” in New York found out, can mean inaccurate responses.

Example of RAG

Here is an example from the training: the demonstrator created an AI chatbot equipped with a RAG system that answered questions about a car for sale. The model used a detailed spec sheet about the car as the relevant text to draw on when prompted. This meant that instead of making up an answer when the presenter asked about the car’s lights, the model could “see” the accurate information in the spec sheet and then answer the question.
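
Reusing the helper functions from the sketch above, that demo would look roughly like this. The spec-sheet text and the question are invented for illustration; they are not from the course:

```python
# Invented spec-sheet text, just for illustration.
SPEC_SHEET = (
    "Headlights: LED with automatic high-beam assist. "
    "Taillights: LED with dynamic turn signals. "
    "Engine: 2.0L turbocharged inline-4, 255 hp."
)

question = "What kind of headlights does the car have?"
chunks = chunk(SPEC_SHEET, size=60)
context = retrieve(question, chunks, top_k=2)
prompt = build_prompt(question, context)

# `prompt` now contains the relevant spec-sheet text, so the LLM can
# "see" the accurate answer instead of guessing at one.
print(prompt)
```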

I wish the presenter had asked the model a question about the car that was not covered by the spec sheet. If the model answered “I don’t know,” that would give me more confidence in a RAG system. AI is a prediction machine; all it does is predict the most likely output based on the prompt. But with RAG, the AI model should be able to (1) give an accurate answer based on accurate information and (2) refuse to answer a prompt when the information is not present.
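
One common way to get that “I don’t know” behavior is to check how well the best-matching chunk actually fits the question before answering, and to instruct the model to refuse when the context lacks the answer. This is my own sketch of the idea, building on the functions above, not something shown in the course:

```python
REFUSAL = "I don't know; that is not in my knowledge base."

def answer_or_refuse(question: str, chunks: List[str], min_score: float = 0.5) -> str:
    # Refuse up front when retrieval finds nothing relevant, rather than
    # letting the model guess. The 0.5 threshold is arbitrary and would
    # be tuned on real data with real embeddings.
    q_vec = embed(question)
    best = max((cosine_similarity(q_vec, embed(c)) for c in chunks), default=0.0)
    if best < min_score:
        return REFUSAL
    # Belt and suspenders: the prompt itself also tells the model to
    # refuse when the retrieved context does not contain the answer.
    instruction = f'If the context does not answer the question, reply exactly: "{REFUSAL}"'
    return build_prompt(question + "\n" + instruction, retrieve(question, chunks))
```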

Conclusion

The course did not touch on RAG’s shortcomings—which I am sure are out there—but either way, RAG seems like a good solution to the hallucination problem. It would certainly be a good thing to consider when selecting an AI system for high-stakes work.

About me

My name is Parker Lawter, and I am a law student pursuing a career as an M&A lawyer. I am in my last semester of law school, and with some extra time on my hands, I decided to create this newsletter. I hope it is informative and helpful to anyone who reads it! I am not an expert at either M&A or AI, but I am actively pursuing knowledge in both areas, and this newsletter is a part of that pursuit. I hope you’ll join me!

Connect with me on LinkedIn: www.linkedin.com/in/parker-w-lawter-58a6a41b

All views expressed are my own!

1. Here is a PDF version of the article, if that is easier to read!

2. Basically, if the AI model is run on an internal cloud system, there is very little privacy risk. However, I am unsure whether the data entered into the model is kept private or used to train the model. Click here for more information from a previous post.
