Seven Major U.S. Tech Organizations Voluntarily Commit to A.I. Safeguards

By Christopher Escobedo Hart on July 28, 2023

Ed Note: Thank you to Summer Associate Nicole Onderdonk for her significant contributions to this post.

On July 21, 2023, the White House announced that seven leading artificial intelligence (A.I.) organizations (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) agreed to immediately implement voluntary safeguards for the development of A.I. technology. Although not legally binding, these “voluntary commitments” mark one of the first steps in what could develop into a U.S. regulatory regime for A.I.

With this announcement, the U.S. joins other governments around the world trying to keep pace with A.I.’s rapid development, spurred on by recent advances in generative A.I. technology that can create text and imagery based on human prompts. These advances have raised concerns regarding A.I.’s potential to reinforce existing systems of oppression, spread disinformation, and serve as a tool to commit cybercrimes. The safeguards aim to address these concerns by bolstering safety, security, and trust in the A.I. space. Specifically, the seven organizations committed to the following safeguards:

  • Conducting internal and external (i.e., independent) security testing of A.I. products before their release to the public, focusing on the most significant risks posed, including biosecurity, cybersecurity, and other potential societal harms;
  • Sharing learnings and best practices with government, society, and academia on managing A.I. risks;
  • Investing in cybersecurity and insider threat mechanisms;
  • Facilitating third-party discovery and reporting of vulnerabilities in A.I. products;
  • Publicly reporting their A.I. product’s capabilities, limitations, and appropriate/inappropriate uses, highlighting security (e.g., technical vulnerabilities) and societal (e.g., bias) risks;
  • Developing technical mechanisms (e.g., watermarking) to ensure users can easily identify A.I.-generated content;
  • Conducting research on the societal risks of A.I. products, including bias, discrimination, and invasion of privacy, and how to avoid such risks; and
  • Developing and deploying A.I. technology to address broad societal challenges, such as climate change and cancer.

Even before agreeing to these voluntary safeguards, the tech industry had explored self-regulatory governance mechanisms. For example, Google implemented teams to test its systems and publicized some information on certain A.I. models, and Meta and OpenAI invited external teams to probe their models for vulnerabilities. Both measures appear on the list of voluntary safeguards, which calls into question how much these commitments will change how these organizations operate in the near term.

Potential Future Efforts to Regulate A.I.

While all seven organizations committed to these measures publicly, and the Biden-Harris administration stated that it “will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe,” there is no formal accountability mechanism accompanying these commitments to ensure effective implementation. In addition, several of the commitments call for external, independent oversight, but the party responsible for performing such oversight is not specified.

Despite these gaps, these voluntary commitments appear to be only an initial step in a long-term regulatory push by the White House, which is envisioned to include “some new laws, regulations and oversight,” as outlined by President Biden in his live remarks on the matter. Organizations can expect future measures that go further than these commitments, including a forthcoming executive order and a bipartisan legislative effort currently underway in Congress. Additionally, the FTC’s investigation of OpenAI for alleged unfair and deceptive practices may indicate that government agencies could play a larger role in developing enforcement mechanisms in this space, even without immediate executive or legislative action.

Officials from the U.S. government and the A.I. organizations believe these safeguards can serve as a model for an international framework that enables responsible innovation while protecting the rights and safety of individuals. The Biden-Harris administration stated that it consulted several international governments on the voluntary safeguards, and President Biden expressed hope in his remarks that future regulatory efforts will “help America lead the way toward responsible innovation” in the A.I. space.

Steps Organizations Can Take to Prepare for Future A.I. Regulations

While the regulatory future in this space remains uncertain, organizations can take steps in the near term to prepare for what might come next. As signaled by the White House, the voluntary commitments will likely serve as a basis for Congress and the agencies tasked with regulating this space. Organizations using A.I. tools should conduct board-level conversations regarding the role of A.I. in their current and future business plans and develop generative A.I. acceptable use policies for their organizations. In addition, clients who operate global businesses should be aware of other governments’ regulatory developments. For example, the voluntary commitments contrast with the strict requirements contemplated under the EU A.I. Act, which is in the process of being finalized and would impose mandatory compliance requirements based on the risk categorization of A.I. tools. However, both frameworks call for external audits (also mentioned in the White House’s Blueprint for an A.I. Bill of Rights, published last year), which appear to be a preferred method of governmental oversight in the A.I. space. For this reason, organizations subject to multiple international jurisdictions that are looking to proactively implement A.I. controls may find it prudent to start by establishing independent oversight of their A.I. development activities.

In sum, while regulatory efforts in the U.S. and around the globe are nascent, organizations would be well-advised to start preparing for future A.I. regulations.
