A loophole in Microsoft’s Azure OpenAI Service terms of use could expose privileged information to third-party review. Lawyers need to undertake reasonably diligent vetting of vendors and their terms. Reliance on vendor assurances alone is not enough. But what is?
Last week, I ran across a good piece of reporting by Cassandre Coyer and Isha Marathe on Law.com. The report highlighted an important issue.
Legal tech vendors have aggressively marketed Gen AI products over the last 18 months. To a vendor, they all assure potential customers that the inquiries and responses are protected, that they will not be used to train the system, and that third parties will not have access to confidential materials. In short, trust us. But can lawyers rely on these assurances, and to what extent? Do they need to do more?
Some Red Flags
The law.com article raises some red flags. According to the article, “More than a year after law firms and legal tech companies signed onto Microsoft’s Azure OpenAI Service, which gives users access to OpenAI’s generative artificial intelligence models via the Azure Cloud, many found out that a terms-of-use loophole could make privileged information susceptible to third-party review.”
Under its terms and conditions, Microsoft can retain and then have humans manually review certain materials if its “abuse monitoring” policy is triggered. According to the article, this policy “was tucked in a nexus of terms and conditions,” and many vendors and law firms just missed it. The potential for manual review, of course, could jeopardize the confidentiality of client information.
Is It Just a Matter of Reading the Terms and Conditions?
Well, you say it should be a simple matter: just read the terms and conditions more carefully. But who does that? I agree to app terms and conditions all the time without reading them. Why bother? There’s nothing I could change anyway.
And the fact that it was a Microsoft platform at issue is important. Microsoft is so ingrained and ubiquitous in the legal industry that its products may receive less scrutiny. Indeed, Microsoft offered these programs with at least its customers’ implicit understanding that a) it understands legal needs for confidentiality and b) it would make sure that confidential materials were protected.
Microsoft does offer an exemption from that review, at least to some customers at the right price. But the exemption’s limited availability and cost may mean some vendors are not reading the fine print when using Microsoft tools for their legal customers. Or they are ignoring it. Either way, the lawyer could end up paying the price. I daresay that some vendors whose products integrate with Microsoft may be making confidentiality assurances that aren’t necessarily correct.
A Lawyer’s Duty of Due Diligence
This is a particular problem in legal now. So many vendors are offering Gen AI products that are closely integrated with other vendors like Microsoft. This makes the due diligence required of lawyers who use these products more complicated. While the duty of due diligence is not absolute, a lawyer must make reasonable efforts to ensure client information will be protected. But what is reasonable?
Make no mistake, those vetting requirements, at least under existing ethical opinions, are pretty significant. Most of the requirements grew out of questions concerning the use of the cloud and security concerns for email communications. There, as with Gen AI, lawyers must trust their data and their clients’ data to someone else. So, both the extent of confidentiality and the supervisory duties of lawyers come into play.
The ABA Ethical Opinion
Formal Opinion 477R of the American Bar Association addressed these issues concerning the security of email communications and the use of vendors. According to the 2017 Opinion, lawyers should examine several factors when selecting a vendor:
- The education, experience, and reputation of the vendor;
- The nature of the services involved;
- The terms of the arrangement concerning the protection of the client’s information; and
- The legal and ethical environments of the jurisdictions where the services are to be performed.
The ABA cited a previous Opinion for what lawyers should look at and do to satisfy their ethical obligations when selecting a vendor:
- Undertake reference checks
- Review the vendor’s security policies and protocols
- Review the vendor’s hiring practices
- Use confidentiality agreements
- Assess the availability and accessibility of a legal forum for relief should the vendor agreement be breached.
The Opinion further provides that these assessments need to be revisited periodically.
A State View
A number of states have also weighed in on a lawyer’s vetting responsibilities in other contexts. My state, Kentucky, for example, issued Opinion E-437 in 2014. Like the ABA Opinion, E-437 states the lawyer should investigate the vendor’s credentials, reputation, and longevity. It also sets out some questions a lawyer should ask:
- What protections does the provider have to prevent disclosure of confidential client information?
- Is the provider contractually obligated to protect the security and confidentiality of information stored with it?
- Does the service agreement state that the provider “owns” the data stored by the provider?
- What procedures, including notice procedures to the lawyer, does the provider use when responding to governmental or judicial attempts to obtain confidential client information?
- At the conclusion of the relationship between the lawyer or law firm and the provider, will the provider return all information to the lawyer or law firm?
- Where, geographically, is the server used by the provider for long-term or short-term storage or other services located?
Gen AI and Due Diligence: What’s Reasonable?
With Gen AI, the vetting obligations get more complicated. The vendor supplying the product often depends on upstream providers, like Microsoft. These upstream providers supply the underlying LLMs and the sophisticated tools the systems need to function and perform.
While a lawyer need not guarantee that the client will be protected throughout the ecosystem, the lawyer does have to take reasonable precautions to ensure that client information will be protected. That diligence would seem to extend not only to the vendor but also to the third parties the vendor uses that may have access to confidential data.
But the limits of what is reasonable in the new Gen AI world have not yet been established. Are the due diligence duties for Gen AI the same as those promulgated for the cloud several years ago? Is the duty greater with Gen AI providers? Or less?
Understandably, Bar Associations and legal ethicists have not yet caught up with the technology. They are only beginning to offer guidance about how and to what extent lawyers need to vet vendors providing Gen AI tools, and to what extent lawyers need to vet the third-party providers whose products integrate with those tools. Right now, we don’t have much guidance on our reasonableness duties other than the opinions governing cloud and email use.
One Clear Thing
One thing that’s clear, though, is that lawyers can’t just take the word of their vendors that client information will be protected. And you have to read the terms and conditions.
Carefully.
Photo by krakenimages on Unsplash