Recommended Practice for Defining and Evaluating Artificial Intelligence (AI) Risk, Safety, Trustworthiness, and Responsibility
Last updated: 11 Jun 2025
Abstract
This recommended practice provides a comprehensive framework for understanding, defining, and evaluating AI risk, AI safety, AI trustworthiness, and AI responsibility so that these concerns can be addressed and managed while preserving the benefits of innovation. It takes the global context into account, advocating for responsible AI adoption, governance, and collaboration. The recommended practice offers a principles-based framework that examines the role of AI in information generation, decision-making, and human agency, as well as the responsibilities associated with AI usage, across the full life cycle of AI application development, deployment, and operation.

© Copyright 2024 IEEE – All rights reserved.