RESPONSIBLE AI

Exploring Responsible AI Standards and Frameworks: NIST AI 100-1 (AI RMF)

Sam Mokhtari
3 min read · Jun 25, 2024

In the previous post, we reviewed key frameworks and standards, such as BS ISO/IEC 42001 and the Montreal Declaration for Responsible AI, that can be used to assess the trustworthiness of AI products. This post focuses on NIST AI 100-1, the AI Risk Management Framework (AI RMF), which is designed to help AI actors (organizations and individuals) increase the trustworthiness of AI systems. The intended audience includes developers, deployers, users, and those involved in the governance and oversight of AI systems.

The AI RMF was released in January 2023 by the National Institute of Standards and Technology (NIST). It is use-case- and industry-agnostic, giving businesses of all sizes and sectors the flexibility to manage the risks associated with AI products and systems. The framework is divided into two main parts: “Foundational Information” and “Core and Profiles”. So let’s dive in….

Foundational Information

The first part of the NIST AI RMF provides the foundational information needed to understand and manage AI risks. It defines risk in the context of AI and discusses challenges such as risk measurement, risk tolerance, and risk prioritization. This section also outlines the characteristics of trustworthy AI systems, which include:

  • AI systems should be valid and reliable in different contexts.
  • AI systems should be safe for human life, health, property, and the environment.
  • AI systems should be resilient to adversarial attacks and secure against unauthorized access.
  • AI systems should be transparent and enable accountability to build trust.
  • AI systems should be explainable in their decisions and operations.
  • AI systems should protect user privacy.
  • AI systems should be fair.
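As an illustration (not part of the NIST text), the seven characteristics above could be tracked as a simple assessment checklist. The characteristic names and the 0–5 maturity scale here are assumptions for the sketch, not values defined by the framework:

```python
# Hypothetical checklist based on the seven trustworthiness characteristics;
# the 0-5 maturity scale and threshold are illustrative assumptions.
CHARACTERISTICS = [
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair",
]

def assess(scores: dict) -> list:
    """Return the characteristics scoring below a maturity threshold of 3."""
    return [c for c in CHARACTERISTICS if scores.get(c, 0) < 3]

# Usage: flag the weak areas of a hypothetical AI system.
scores = {c: 4 for c in CHARACTERISTICS}
scores["explainable_and_interpretable"] = 2
print(assess(scores))  # ['explainable_and_interpretable']
```

A checklist like this makes the gap between the characteristics visible before any formal risk treatment begins.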

AI RMF Core

The AI RMF Core comprises four primary functions to help AI actors manage AI risks effectively:

Govern

This function aims to cultivate a risk management culture within organizations. It covers the processes, documentation, and organizational schemes used to identify and manage risks.

Map

This function aims to establish the context for an AI system: understanding its intended purposes, potential impacts, and underlying assumptions. It also involves engaging internal and external stakeholders to improve contextual understanding and risk identification.

Measure

This function provides the controls and practices required to analyze and monitor AI risks using quantitative and qualitative methods, including tools to assess, benchmark, and track those risks.

Manage

This function covers the strategies required to treat and mitigate identified AI risks.
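The four functions above can be pictured as stages acting on one shared risk register. The sketch below is an illustrative data model only; the field names, scoring heuristic, and treatment labels are assumptions, not part of the NIST standard:

```python
from dataclasses import dataclass

# Illustrative risk register organized around the four AI RMF core
# functions; all field names and values are assumptions for the sketch.
@dataclass
class AIRisk:
    description: str          # Map: context and intended purpose
    likelihood: int           # Measure: 1 (rare) .. 5 (frequent)
    impact: int               # Measure: 1 (minor) .. 5 (severe)
    owner: str                # Govern: the accountable role
    treatment: str = "open"   # Manage: e.g. mitigate / accept / avoid

    def score(self) -> int:
        # A common qualitative heuristic: likelihood x impact.
        return self.likelihood * self.impact

def prioritize(register: list) -> list:
    """Manage: order risks so the highest-scoring ones are treated first."""
    return sorted(register, key=AIRisk.score, reverse=True)

# Usage: the severe hiring-bias risk surfaces ahead of minor model drift.
register = [
    AIRisk("model drift in production", 2, 2, "ops-lead"),
    AIRisk("bias in resume screening", 4, 5, "ml-lead"),
]
print([r.description for r in prioritize(register)])
```

The point of the sketch is the mapping: Govern assigns ownership, Map captures context, Measure quantifies, and Manage prioritizes and treats.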

AI RMF Profiles

AI RMF Profiles are tailored implementations of the AI RMF core functions for specific contexts, use cases, or sectors. For example, an AI RMF hiring profile offers insight into how risk can be managed at various stages of an AI system for hiring. There are three types of AI RMF profiles:

Use-Case Profiles are custom implementations based on particular use cases, such as hiring or fair housing.

Temporal Profiles describe the current and target states of AI risk management within a given sector or application context.

Cross-Sectoral Profiles address risks common across various sectors or use cases, such as large language models or cloud-based services.
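A temporal profile, in particular, is just a comparison between a current state and a target state. As a rough illustration (the practice names and maturity labels below are invented for the sketch, not drawn from the framework), the gap between the two states is what drives the risk-management roadmap:

```python
# Hypothetical "temporal profile": current vs. target state of risk
# management practices; all names and labels are illustrative assumptions.
current_profile = {
    "bias_testing": "ad hoc",
    "model_documentation": "none",
    "incident_response": "defined",
}
target_profile = {
    "bias_testing": "automated",
    "model_documentation": "model cards",
    "incident_response": "defined",
}

# Practices where current and target states differ form the roadmap.
gaps = {k: (current_profile[k], target_profile[k])
        for k in target_profile
        if current_profile.get(k) != target_profile[k]}
print(gaps)
```

Practices already at their target state (here, incident response) drop out of the gap analysis, leaving only the work remaining.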

Conclusion

NIST AI 100-1 offers a flexible framework for understanding and managing the risks associated with AI systems. It has two parts: foundational information, and the core functions (Govern, Map, Measure, and Manage) together with profiles. Incorporating this framework into organizational practices improves the accountability and transparency of AI system development.


Sam Mokhtari

Technology thought leader with 15+ years in cloud, data analytics, and AI @ AWS | PhD | Author & Speaker | Life Mentor & Coach