Content Type

ISO/IEC TS 8200:2024
ISO/IEC NP TS 8200

Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems

Last updated: 18 Jul 2024

Development Stage

Pre-draft: 11 Aug 2021
Draft: 29 Oct 2023
Published: 10 Apr 2024

Current status: published

Scope

This document defines a basic framework with principles, characteristics and approaches for the realization and enhancement of automated artificial intelligence (AI) systems' controllability. The following areas are covered:

― State observability and state transition
― Control transfer process and cost
― Reaction to uncertainty during control transfer
― Verification and validation approaches

This document is applicable to all types of organizations (e.g. commercial enterprises, government agencies, not-for-profit organizations) developing and using AI systems during their whole life cycle.

Purpose

1. Artificial intelligence (AI) techniques have been applied across domains and markets such as healthcare, education, clean energy and sustainable living. Although AI is used to enable systems to perform automated predictions, recommendations or decisions, its use has raised a wide range of concerns. Some characteristics of AI techniques such as deep neural network based learning and inference (e.g. the lack of full explainability) can introduce uncertainty into a system's behaviour, which can expose end-users to unpredictable hazards. For this reason, the controllability of AI systems becomes very important. This document is primarily intended as guidance for AI system design and use, in terms of controllability realization and enhancement.

2. To embrace the gains from AI in a sustainable and responsible way, controllability characteristics and principles of AI systems can be identified. By investigating the need for controllability in domain-specific contexts, the understanding of AI systems' controllability can be strengthened. Controllability is a fundamental characteristic supporting the safety of AI systems for end-users.

3. Automated systems (ref. ISO/IEC DIS 22989, Table 1 in 5.12) can be realized by using AI. The degree of external control, or controllability, is an important characteristic of automated systems: heteronomous systems range over a spectrum from no external control to direct control. External control can be used to guide or manipulate systems at various levels of automation so that they behave as intended and within functional safety limits. Such requirements can be satisfied by considering controllability features, or by taking specific preventive actions, within each step of the AI system life cycle as defined in ISO/IEC DIS 22989, Clause 6.

4. Adverse consequences are possible if an AI system is permitted to take wrong decisions or actions without any external intervention, control or oversight. To realize controllability, the key points of system state observation and state transition should be identified. Intervention calls for a “transfer of control” between an AI system and a human or another external agent. The exact points at which transfer of control has to be enabled should be considered during the design and implementation of an AI system.
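As an illustration of these concepts, the following minimal Python sketch shows an automated system with an observable state and an explicit control-transfer point. The class, the confidence field and the 0.5 threshold are illustrative assumptions chosen for this sketch, not requirements of the specification.

```python
from enum import Enum, auto


class ControlMode(Enum):
    """Who currently holds control of the system (illustrative)."""
    AI = auto()
    HUMAN = auto()


class AutomatedSystem:
    """Illustrative automated system with an observable state and an
    explicit control-transfer point (assumed design, not prescribed
    by the specification)."""

    def __init__(self):
        self.mode = ControlMode.AI
        self.state = {"confidence": 1.0}

    def observe(self) -> dict:
        # State observability: expose the current internal state.
        return {"mode": self.mode.name, **self.state}

    def step(self, confidence: float) -> None:
        # State transition driven by the system's own inference.
        self.state["confidence"] = confidence
        # Control-transfer point: hand over when behaviour becomes uncertain.
        if confidence < 0.5 and self.mode is ControlMode.AI:
            self.transfer_control(ControlMode.HUMAN)

    def transfer_control(self, target: ControlMode) -> None:
        self.mode = target


system = AutomatedSystem()
system.step(confidence=0.3)
print(system.observe())  # {'mode': 'HUMAN', 'confidence': 0.3}
```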

5. The transfer of control for an intervention should be easily executable, within reasonable limits of time, space, energy and complexity, while minimizing the interruption to both sides (i.e. the AI system and the external agent). Stakeholders should consider the cost of control transfer in automated AI systems, as it determines the effectiveness of controllability realization. Moreover, uncertainty during control transfer can exist on both sides. It is therefore important to carefully design control transfer processes so that uncertainty and other undesired consequences are minimized or mitigated.
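The following sketch shows one possible way to bound the cost of a control transfer and to react to uncertainty (an external agent that may not respond). The function names, the timeout and the fallback policy are assumptions chosen for illustration; the specification does not prescribe a particular mechanism.

```python
import time


def transfer_control(request_ack, engage_fallback, timeout_s: float = 2.0) -> str:
    """Attempt to hand control to an external agent within a time budget.

    request_ack: callable returning True once the external agent has
    acknowledged and taken control.
    engage_fallback: callable that places the system in a safe state
    if the handover cannot be confirmed.
    (Hypothetical interface for illustration only.)
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if request_ack():        # uncertainty: the agent may not respond
            return "transferred"
        time.sleep(0.1)          # bounded polling keeps transfer cost low
    engage_fallback()            # mitigate undesired consequences
    return "fallback"
```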

6. Given a design and implementation of control transfer, its effectiveness needs to be tested. This calls for principles and approaches for the validation and verification of AI systems' controllability.
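Building on the hypothetical transfer_control sketch above, the following unit-test outline illustrates how the effectiveness of a control-transfer design might be verified. The test cases and acceptance criteria are illustrative assumptions only, not verification requirements taken from the specification.

```python
import unittest


class ControlTransferTests(unittest.TestCase):
    """Illustrative verification of the transfer_control sketch above
    (hypothetical helpers, not part of the specification)."""

    def test_transfer_succeeds_when_agent_acknowledges(self):
        result = transfer_control(request_ack=lambda: True,
                                  engage_fallback=lambda: None,
                                  timeout_s=0.5)
        self.assertEqual(result, "transferred")

    def test_fallback_engaged_when_no_acknowledgement(self):
        engaged = []
        result = transfer_control(request_ack=lambda: False,
                                  engage_fallback=lambda: engaged.append(True),
                                  timeout_s=0.3)
        self.assertEqual(result, "fallback")
        self.assertTrue(engaged)


if __name__ == "__main__":
    unittest.main()
```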

©ISO/IEC 2022. All rights reserved.

Categorisation

Domain: Horizontal

Key Information

Organisation: ISO/IEC, BSI
Committee: ISO/IEC JTC 1/SC 42
Relevant UK committee: ART/1
