
Oregon’s AI Ethics Opinion: A Wake-Up Call for Lawyers

By NBlack on April 28, 2025

Here is my recent Daily Record column. My past Daily Record articles can be accessed here.



In February, the Oregon Board of Governors approved Formal Opinion 2024-205, which addresses how Oregon lawyers can ethically use artificial intelligence (AI) and generative AI in their practices. 

The opening line of the opinion is notable: “Artificial intelligence tools have become widely available for use by lawyers. AI has been incorporated into a multitude of products frequently used by lawyers, such as word processing applications, communication tools, and research databases.” While that statement may be true today, it describes a relatively recent development: ChatGPT, built on GPT-3.5, was only publicly released at the end of November 2022, and AI was rarely found in legal software until approximately 2015, when it began appearing more often in legal research, contract analysis, and litigation analytics tools.

This recent surge in AI adoption by legal professionals has prompted an extraordinarily rapid response from ethics committees. Since 2023, more than 15 jurisdictions and bar organizations, including the American Bar Association, Florida, New York, Texas, Pennsylvania, and North Carolina, have issued ethics opinions addressing AI use by lawyers. Oregon now adds to this growing body of guidance.

The Oregon opinion’s guidance aligns closely with the conclusions reached in ABA Formal Opinion 512 (2024) and addresses key ethical issues, including competence, confidentiality, supervision, billing, and candor to the court.  

Tackling competence, the Oregon Legal Ethics Committee explained: “(AI) competence requires understanding the benefits and risks associated with the specific use and type of AI being used,” and the obligation is ongoing.

Next, the Committee considered client disclosure, explaining that Oregon lawyers may be required to disclose AI use to clients. The decision to do so must be made on a case-by-case basis, and factors to consider include “the type of case, similarities to and deviations from technology typically used, novelty of the technology, risks to client data, risks that incorrect information will be included in the lawyer’s work product, sophistication of the client, deviation from explicit client instructions or reasonable expectations, the scope of representation, the extent of the lawyer’s reliance on the technology, the existence of safeguards present in the technology and independently implemented by the lawyer, and whether the use of AI or other new technology would have a significant impact on attorney fees or is a cost passed on to the client.”

Turning to fees, the Committee joined many other jurisdictions in determining that lawyers may only charge clients for reasonable time spent using AI for “case-specific research and drafting” and cannot bill for time that would have been spent on the case but for the implementation of AI tools. Billing for time spent learning how to use AI may only occur with the client’s consent. If a firm intends to invoice clients for the cost of AI tools, clients must be informed, preferably in writing. And if a lawyer is unable to determine the actual cost of a specialized AI tool used in a client matter, prorated cost billing is impermissible in Oregon; the charges should instead be treated as overhead.

To protect client confidentiality, lawyers seeking to input confidential information into an “open” model, which allows the input to train the AI system, must obtain consent from their clients. The Committee cautioned that even when using a “closed” AI tool that does not use input to train the model, lawyers must carefully vet providers to ensure that vendor contracts address how data is protected, including how it will be handled, encrypted, stored, and eventually destroyed. According to the Committee, even when using a closed AI model, it may be appropriate “to anonymize or redact certain information that (clients deem) sensitive or that could create a risk of harm…”

Next, the Committee opined that managerial and supervisory obligations require firms to have policies in place that provide clear guidelines on permissible AI use by all lawyers and staff. Additionally, the Committee confirmed that lawyers must carefully review the accuracy of both their own AI-assisted work product and that prepared by subordinate lawyers and nonlawyers.

Finally, the Committee confirmed that Oregon lawyers must be aware of and comply with all court orders regarding AI disclosure. Additionally, they are required to carefully review and verify the accuracy of AI output, including case citations. Should an attorney discover that a court filing includes a false statement of fact or law, they must notify the court and correct the error, taking care to avoid disclosing client confidences.

For Oregon attorneys, this opinion is a “must read,” just as it is for lawyers in jurisdictions that have not weighed in on these issues. Regardless of your licensure, the release of this opinion, along with more than 15 others in such a short period of time, should be a wake-up call. The pace of change isn’t slowing. If you haven’t started learning about AI, now is the time. The technology is advancing quickly; failing to learn about it now will only make it harder to catch up. 

These opinions aren’t just academic—they’re a warning. To make informed, responsible decisions about how and when to use AI, lawyers need to start paying attention today.

Nicole Black is a Rochester, New York attorney, author, journalist, and Principal Legal Insight Strategist at MyCase, CASEpeer, Docketwise, and LawPay, practice management and payment processing tools for lawyers (AffiniPay companies). She is the nationally recognized author of “Cloud Computing for Lawyers” (2012) and co-author of “Social Media for Lawyers: The Next Frontier” (2010), both published by the American Bar Association. She also co-authors “Criminal Law in New York,” a Thomson Reuters treatise. She writes regular columns for Above the Law, ABA Journal, and The Daily Record, has authored hundreds of articles for other publications, and regularly speaks at conferences regarding the intersection of law and emerging technologies. She is an ABA Legal Rebel, and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech. She can be contacted at niki.black@mycase.com.

Tags: AI
