On January 26, 2023, the U.S. National Institute of Standards and Technology (NIST) released the Artificial Intelligence (AI) Risk Management Framework (AI RMF 1.0), a voluntary guidance document for managing and mitigating the risks of designing, developing, deploying, and using AI products and services. NIST also released a companion playbook for navigating the framework, a roadmap for future work, and a crosswalk mapping the framework to other standards and principles, both at home and abroad. This guidance, developed through a consensus-based approach across a broad cross section of stakeholders, offers an essential foundation and an important building block toward responsible AI governance.

The AI Framework

We stand at a crossroads as case law and regulatory law struggle to keep pace with technology. As regulators consider policy solutions and levers to address AI risks and trustworthiness, many technology companies have adopted self-governing ethical principles and standards for the development and use of artificial and augmented intelligence technologies. In the absence of clear legal rules, these internal expectations guide organizational actions and serve to reduce the risk of legal liability and negative reputational impact.

Over the past 18 months, NIST developed the AI Risk Management Framework with input from and in collaboration with the private and public sectors. The framework takes a major step toward public-private collaboration and consensus through a structured yet flexible approach that allows organizations to anticipate risks and introduce accountability structures. The first half of the AI Risk Management Framework outlines principles for trustworthy AI; the remainder describes how organizations can address these principles in practice by applying the framework's four core functions: creating a culture of risk management (govern), identifying risks and context (map), assessing and tracking risks (measure), and prioritizing risks based on impact (manage). NIST plans to work with the AI community to update the framework periodically.

Specifically, the framework offers noteworthy contributions on the pathway toward governable and accountable AI systems: 

  • Moves beyond technical standards to consider social and professional responsibilities in making AI risk determinations
  • Establishes trust principles, namely that responsible AI systems are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed
  • Emphasizes context (e.g., industry, sector, business purposes, technology assessments) in critically analyzing the risks and potential impacts of particular use cases
  • Provides steps for managing risks via governance functions; mapping broad perspectives and interdependencies to testing, evaluation, verification, and validation within a defined business case; measuring AI risks and impacts; and managing resources to mitigate negative impacts
  • Rationalizes the field so that organizations of all sizes can adopt recognized practices and scale as AI technology and regulations develop

The Playbook

This companion tool provides actionable strategies for the activities in the core framework. As with NIST's Cybersecurity and Privacy Frameworks, the AI Risk Management Framework is expected to evolve with stakeholder input. NIST expects the AI community to build out these strategies, keeping the playbook dynamic, and plans to update the playbook in spring 2023 to incorporate comments received by the end of February.

The Roadmap

The roadmap for the NIST AI Risk Management Framework identifies the priorities and key activities that NIST and other organizations could undertake to advance the state of AI trustworthiness. Importantly, NIST intends to grapple with one of the more complex issues in implementing AI frameworks: balancing trade-offs among the trust principles in light of the use cases and values at play. NIST plans to showcase profiles and case studies that highlight particular use cases and organizational challenges. NIST also will work across the federal government and on the international stage to identify and align standards development.

Mapping to Other Standards

The AI Risk Management Framework includes a map that crosswalks its AI principles to other standards and frameworks, such as the proposed European Union Artificial Intelligence Act, the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, Executive Order 13960 on trustworthy AI in the federal government, and the Biden administration's Blueprint for an AI Bill of Rights. The crosswalk enables organizations to readily leverage existing frameworks and principles.

Conclusion

AI is a rapidly developing field that offers many potential benefits but poses novel challenges and risks. With the launch of the framework, NIST also published supportive stakeholder perspectives from business and professional associations, technology companies, and think tanks, including the U.S. Chamber of Commerce, the Bipartisan Policy Center, and the Federation of American Scientists. Because the framework's foundational approach is designed to evolve as our understanding of the technology and its impact evolves, it provides flexibility and a starting point to help regulators improve policy options while avoiding a more prescriptive approach that could stifle innovation. The AI Risk Management Framework and its accompanying resources articulate expectations and will help AI stakeholders implement best practices for managing the opportunities, responsibilities, and challenges of artificial intelligence technologies.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Amy Leopard

Amy Leopard is a partner and leader in Bradley’s Health Information Technology, Privacy & Security practice. Amy advises clients on complex health IT matters at the intersection of healthcare, technology, and law. She is a Fellow in HIMSS and serves on the Board of the American Health Law Association, where she chaired the AHLA Health IT Practice Group. Amy is nationally ranked in Chambers USA for Healthcare Privacy and Data Security. She is a regular thought leader and is a blog editor for Bradley’s Online and On Point blog.

Elizabeth M. Boone

Elizabeth advises clients on business transactions and compliance matters domestically and internationally, including contract negotiation, establishment and maintenance of legal entities, establishment of terms and conditions for the sale of goods, privacy compliance matters, employment matters and real estate transactions. She regularly assists clients with ensuring compliance with GDPR, EU ePrivacy Directive (cookie law), CCPA, and other state-specific privacy laws.