Content Type: Research and analysis item

A Life Cycle for Trustworthy and Safe Artificial Intelligence Systems

Abstract

This work presents a life cycle for trustworthy and safe artificial intelligence systems (TAIS). We consider the application of a risk management framework (RMF) in the TAIS life cycle and how this affects the choices made during the various phases of AI system development. The emerging requirements for AI systems go beyond the traditional software engineering (SE) development cycle of design, develop and deploy (DDD), in which test, evaluation, verification and validation (TEVV) plays a crucial role. In particular, the additional challenge for SE is that AI systems are, in general: (i) data-driven, (ii) able to learn and (iii) adaptive. Moreover, consideration of the impact of AI systems on users and wider society is essential to their trustworthy and safe deployment. To achieve a TAIS we also consider the measurement aspect, which manifests itself in the specification of quantifiable metrics capturing both the technical and socio-technical principles of TAI, allowing us to assess the trustworthiness of an AI system. The TAIS life cycle is an iterative and continuous process, so that the AI system under development evolves and corrective action can be taken to deal with issues arising at any stage of the life cycle.

Key Information

Name of organisation: National Physical Laboratory
Type of organisation: Research institution

Standards Hub Partner publication: Yes

Language: English

Date published: 5 Sep 2024

Categorisation

Domain: Horizontal
Type: Report
