NIST Publishes an Initial Draft AI Risk Management Framework and Guidance to Address Bias in AI

By Kate M. Growley, CIPP/G, CIPP/US, Adelicia R. Cliffe, Michael Atkinson, Jonathan M. Baker, Laura J. Mitchell Baker & Michelle Coleman on March 29, 2022

On March 17, 2022, the National Institute of Standards and Technology (“NIST”) published an initial draft of its Artificial Intelligence (AI) Risk Management Framework (“AI RMF”) to promote the development and use of responsible AI technologies and systems.  When final, the three-part AI RMF is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.  NIST has only developed the first two parts in this initial draft:

  • In Part I, Motivation, the AI RMF establishes the context for the AI risk management process.  It identifies three overarching categories of characteristics that should be identified and managed in relation to AI systems: technical, socio-technical, and guiding principles.
  • In Part II, Core and Profiles, the AI RMF provides guidance on outcomes and activities to carry out the risk management process to maximize the benefits and minimize the risks of AI.  It states that the core comprises three elements: functions, categories, and subcategories.  The initial draft examines how “functions organize AI risk management activities at their highest level to map, measure, manage, and govern AI risks.”

The forthcoming Part III will provide guidance on how to use the AI RMF—like a practice guide—and will be developed from feedback to this initial draft.

Overall, the AI RMF is intended for use with any AI system across a wide spectrum of types, applications, and maturity levels, and by individuals and organizations regardless of sector, size, or familiarity with a specific type of technology.  That said, NIST cautions that the AI RMF will not be a checklist and should not be used in any way to certify an AI system.  Nor may it substitute for due diligence and judgment by organizations or individuals deciding whether to design, develop, and deploy AI technologies.

Along with the AI RMF, NIST also released Special Publication 1270, titled “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” (“Guidance”), outlining standards to address bias in AI.  NIST’s stated intent in releasing the Guidance is “to surface the salient issues in the challenging area of AI bias, and to provide a first step on the roadmap for developing detailed socio-technical guidance for identifying and managing AI bias.”  Specifically, the Guidance:

  • describes the stakes and challenges of bias in AI and provides examples of how and why it can chip away at public trust;
  • identifies three categories of bias in AI—systemic, statistical, and human—and describes how and where they contribute to harms; and
  • describes three broad challenges for mitigating bias—datasets, testing and evaluation, and human factors—and introduces preliminary guidance for addressing them.

The Guidance provides a number of helpful recommendations that AI developers and risk management professionals may consider to help identify, mitigate, and remediate bias throughout the AI lifecycle.

At the direction of Congress, NIST is seeking collaboration with both the public and private sectors to develop the AI RMF.  NIST seeks public comments by April 29, 2022, which will be incorporated into a second draft of the AI RMF to be published this summer or fall.  In addition, from March 29–31, 2022, NIST is holding a two-part workshop on the AI RMF and bias in AI.

Adelicia Cliffe is a partner in the Washington, D.C. office, a member of the Steering Committee for the firm’s Government Contracts Group, and a member of the International Trade Group. Addie is also co-chair of the firm’s National Security practice. Addie has been named as a nationally recognized practitioner in the government contracts field by Chambers USA.

  • Posted in:
    Administrative, Corporate Compliance
  • Blog:
    Government Contracts Legal Forum
  • Organization:
    Crowell & Moring LLP
