Information technology — Artificial intelligence — Objectives and approaches for explainability of ML models and AI systems

Last updated: 11 Oct 2022


Development Stage: 11 Feb 2021

This document describes approaches and methods that can be used to achieve the explainability objectives of stakeholders with regard to the behaviours, outputs, and results of ML models and AI systems. Stakeholders include, but are not limited to, academia, industry, policy makers, and end users. It provides guidance concerning the applicability of the described approaches and methods to the identified objectives throughout the AI system's life cycle, as defined in ISO/IEC 22989. ©ISO/IEC 2022. All rights reserved.


When AI is used to help make decisions that impact people’s lives, it is important that people understand how those decisions are made. Achieving useful explanations of the behaviour of AI systems and their components is a complex task. Industry and academia are actively exploring emerging methods for enabling explainability, as well as scenarios and reasons why explainability might be required.

While the overarching goal of explainability is to improve the trustworthiness of AI systems, different stakeholders at different stages of the AI life cycle will have more specific objectives in support of that goal. Several examples illustrate this point. For developers, explainability improves the safety, reliability, and robustness of an AI system by making it easier to identify and fix bugs. For users, explainability helps them decide how much to trust an AI system by uncovering potential sources of bias or unfairness. For service providers, explainability is essential for demonstrating compliance with laws and regulations. For policy makers, understanding the capabilities and limitations of different explainability methods helps in developing effective policy frameworks that address societal needs while promoting innovation.

The proposed work item will describe the applicability and the properties of existing approaches and methods for improving the explainability of ML models and AI systems. As more methods for enabling human understanding of AI systems are developed and refined, the proposed work item will help guide stakeholders through the important considerations in the selection and application of such methods. While methods for the explainability of ML models play a central role in achieving the explainability of AI systems, other methods (such as data analytics tools and fairness frameworks) can also contribute to the understanding of AI systems' behaviour and outputs. The description and classification of such complementary methods are out of scope for the proposed work item. Where necessary, the proposed work item will refer to other publications (potentially including those by ISO/IEC) on the topic.

Key Information

Domain: Horizontal
Organisation: ISO/IEC
Committee: ISO/IEC JTC 1/SC 42


Hello. For me, this ISO/IEC AWI TS 6254 standard is one of the most relevant. But, like everything, a standard or protocol is of little use if it cannot reach the widest possible adoption by the millions of organizations around the world. Better mobilization is essential to generate discussion, and above all an adequate channel is needed to establish a pre-strategy on the use of AI. Efforts should be directed towards ML first and AI later, since the two are highly complementary fields, but it is AI that will ultimately rule.

