AI risk management framework

Content type: Research and analysis item

Abstract

NIST is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST Artificial Intelligence Risk Management Framework (AI RMF or Framework) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The Framework is being developed through a consensus-driven, open, transparent, and collaborative process that will include workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others. An initial draft of the AI RMF is available for comment through April 29, 2022.

NIST's work on the Framework is consistent with its broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Congress has directed NIST to collaborate with the private and public sectors to develop the AI RMF.

The Framework aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses. The Framework should consider and encompass principles such as transparency, accountability, and fairness during the pre-design, design and development, deployment, use, and test and evaluation of AI technologies and systems. These characteristics and principles are generally considered to contribute to the trustworthiness of AI technologies and systems, products, and services.

Key Information

Type of organisation: Government

Date published: 6 Oct 2024

Categorisation

Domain: Horizontal
Type: Report
