A Life Cycle for Trustworthy and Safe Artificial Intelligence Systems
Abstract
This work presents a life cycle for trustworthy and safe artificial intelligence systems (TAIS). We consider the application of a risk management framework (RMF) in the trustworthy AI (TAI) life cycle, and how this affects the choices made during the various phases of AI system development. The emerging requirements for AI systems go beyond the traditional software engineering (SE) development cycle of design, develop and deploy (DDD), and here test, evaluation, verification and validation (TEVV) plays a crucial role. In particular, the additional challenge for SE is that AI systems are, in general, (i) data-driven, (ii) able to learn and (iii) adaptive. Moreover, consideration of the impact of AI systems on users and wider society is essential to their trustworthy and safe deployment. To achieve a TAIS we also consider measurement, which manifests itself in the specification of quantifiable metrics, capturing both technical and socio-technical principles of TAI, that allow us to assess the trustworthiness of an AI system. The TAI life cycle is an iterative and continuous process: the AI system under development evolves, and corrective action can be taken to address issues arising at any stage of the life cycle.
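To make the measurement aspect concrete, the following is a minimal sketch of how quantifiable TAI metrics might be aggregated into a single trustworthiness score that gates corrective action at an iteration of the life cycle. All metric names, scores, weights and the threshold are hypothetical illustrations and are not specified by the publication; the weighted average is one simple aggregation choice among many.

```python
from dataclasses import dataclass


@dataclass
class Metric:
    """A quantifiable TAI metric (names here are illustrative only)."""
    name: str      # a technical or socio-technical TAI principle
    score: float   # normalised to [0, 1], where 1 = fully satisfied
    weight: float  # relative importance, e.g. assigned by the RMF


def trustworthiness(metrics: list[Metric]) -> float:
    """Aggregate metric scores as a weighted average (one simple choice)."""
    total_weight = sum(m.weight for m in metrics)
    return sum(m.score * m.weight for m in metrics) / total_weight


# Hypothetical assessment at one iteration of the life cycle.
assessment = [
    Metric("robustness", 0.82, 3.0),      # technical principle
    Metric("explainability", 0.67, 2.0),  # technical principle
    Metric("fairness", 0.74, 3.0),        # socio-technical principle
    Metric("privacy", 0.90, 2.0),         # socio-technical principle
]

score = trustworthiness(assessment)
print(f"Trustworthiness score: {score:.2f}")
if score < 0.75:  # hypothetical threshold set during risk assessment
    print("Below threshold: take corrective action and re-iterate.")
```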
Key Information
Standards Hub Partner publication: Yes
Language: English
Date published: 5 Sep 2024