BS EN ISO/IEC 12792 – A standard for transparency taxonomy of AI systems
Blog post by:
Ivan Serwano, AI Consulting Manager, BSI

Transparency is a central facet of trustworthy AI. AI transparency provides relevant stakeholders with information to better understand how an AI system is developed and used, helping them manage the risks associated with its deployment. In this post, we discuss a new standard for AI transparency – BS EN ISO/IEC 12792, which is currently open for public comment. BS EN ISO/IEC 12792 was one of two draft standards at the centre of an AI Standards Hub workshop held last year, which brought together a diverse coalition of stakeholders to discuss interpretations and challenges around transparency with a view to shaping the development of these standards.

What is BS EN ISO/IEC 12792?

BS EN ISO/IEC 12792 is a technical standard that provides a taxonomy of information elements to assist AI stakeholders in identifying and addressing the needs for transparency of AI systems. It is applicable to any kind of organization and application involving an AI system. The standard describes the semantics of the information elements and their relevance to the various objectives of different stakeholders.

What are the objectives of BS EN ISO/IEC 12792?

The key objectives of BS EN ISO/IEC 12792 are:

  1. Improving trustworthiness, accountability, and communication among different AI stakeholders by establishing a consistent terminology around transparency of AI systems.
  2. Providing AI stakeholders with information about the different elements of transparency, including their relevance and possible limitations for different use cases and target audiences.
  3. Serving as a basis for developing technology-specific, industry-specific, or region-specific standards for transparency of AI systems.

Why is a standardized transparency taxonomy important?

A standardized transparency taxonomy of AI systems helps people with different backgrounds to better understand each other by using the same terminology. This supports an improved understanding of the AI systems and provides a foundation for developing interoperable and coherent transparency-related standards.

How is the draft standard structured?

The draft standard lays out an overview of the document and defines the concept of transparency for AI (clause 5). It discusses how transparency needs can vary depending on the AI system, the context of its application and use, and the stakeholders involved (clause 6). It introduces the transparency items that describe the context of the system (clause 7) and discusses how to document the way the AI system interacts with its environment (clause 8). The standard also offers guidance on how to document the internal functioning of the AI system (clause 9), and guidance on the documentation of datasets as standalone items (clause 10).
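To make the idea of a taxonomy of information elements more concrete, the sketch below groups hypothetical documentation fields along the same lines as the clause structure described above: system context, interaction with the environment, internal functioning, and datasets. The field names are illustrative assumptions for this post, not elements taken from the standard itself.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyRecord:
    """Illustrative sketch of transparency information elements.

    Field names are hypothetical; they mirror the broad themes of the
    draft standard's clause structure, not its actual taxonomy.
    """
    # Context of the system (cf. clause 7)
    system_name: str
    intended_use: str
    stakeholders: list[str] = field(default_factory=list)
    # Interaction with the environment (cf. clause 8)
    inputs: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    # Internal functioning (cf. clause 9)
    model_type: str = "unspecified"
    # Datasets as standalone items (cf. clause 10)
    datasets: list[str] = field(default_factory=list)

    def missing_elements(self) -> list[str]:
        """Return the names of elements still left at their defaults."""
        gaps = []
        if not self.stakeholders:
            gaps.append("stakeholders")
        if not self.inputs:
            gaps.append("inputs")
        if not self.outputs:
            gaps.append("outputs")
        if self.model_type == "unspecified":
            gaps.append("model_type")
        if not self.datasets:
            gaps.append("datasets")
        return gaps

# Example: a partially documented system reveals its transparency gaps.
record = TransparencyRecord(
    system_name="loan-screening-assistant",
    intended_use="triage of loan applications",
)
print(record.missing_elements())
```

A shared structure like this is the practical payoff of a standardized taxonomy: different organizations can check the same named elements for completeness, whatever the underlying system.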

The BS EN ISO/IEC 12792 standard is intended as a starting point for further discussions and developments in AI transparency. It is not intended to provide a detailed analysis of AI transparency. For example, the standard does not cover the effects that society and the environment have on the performance of an AI system. Contextual factors can affect AI systems in several ways, such as the introduction or reinforcement of bias, poor governance and use leading to poor AI outputs or outcomes, and the formation of unwanted feedback loops. While these are important items for consideration, they are not covered in the standard.

The draft BS EN ISO/IEC 12792 standard is now open for comment. Because it applies horizontally to any kind of organization and application that involves an AI system, input from a broad range of stakeholders will help shape its development.
