Blog post by
Tim McGarr
Sector Lead (Digital), BSI
In an earlier post on Trustworthy AI, we highlighted some of the key areas of risk posed by the deployment of AI systems and the importance of managing risk for maintaining trust in such systems. In this post we discuss a new standard for AI risk management – ISO/IEC 23894, to be published in February 2023.
What does ISO/IEC 23894 cover?
ISO/IEC 23894 offers strategic guidance to organisations across all sectors for managing risks connected to the development and use of AI. It also provides guidance on how organisations can integrate risk management into their AI-driven activities and business functions.
Notably, ISO/IEC 23894 offers concrete examples of effective risk management implementation and integration throughout the AI development lifecycle and provides detailed information on AI-specific risk sources. A key benefit of this standard is that application of the guidance can be customised to any organisation and its business context.
What’s the purpose of risk management and how does ISO/IEC 23894 align with existing standards?
Risk management enables the creation and protection of value. Existing international risk management standards such as ISO 31000:2018 provide established guidance for those carrying out risk management, setting out strategies that have proved successful across all sectors.
ISO/IEC 23894 uses ISO 31000:2018 as its reference point in terms of principles, frameworks, and processes. Standards makers considered it appropriate to rely on generic risk management principles for approaching AI risks and deemed it unnecessary to develop new, bespoke versions for AI. The standard’s guidance relies on these principles to provide an international perspective on how to manage risks and on associated best practices in the context of AI.
In addition, ISO/IEC 23894 references ISO Guide 73:2009 (Risk Management – Vocabulary) and ISO/IEC 22989 (Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology).
How can we manage risk in the complex environment of the AI lifecycle?
Whilst generic principles can be relied upon in full, key considerations specific to risk in the AI lifecycle still need to be flagged. AI systems operate at a far greater level of complexity than other technologies, resulting in a greater number of sources of risk. They introduce new or emerging risks for organisations, with positive or negative implications for strategic objectives, as well as changes to existing risk profiles.
All of this requires specific consideration. ISO/IEC 23894 addresses it in Annex C, a comprehensive functional mapping of risk management processes across the AI system lifecycle (see ISO/IEC 22989). This is undoubtedly the primary tool in the standard's box of risk management guidance: it sets out the vertical and horizontal pathways for implementing the principles, processes, and frameworks that can be adapted to any organisation.
You can read more about ISO/IEC 23894 and follow the standard for updates in the Observatory section of the AI Standards Hub. We would also be interested in hearing how this standard plays a role in your work with AI, so why not share your thoughts and reflections in the comments below.
Thank you for sharing this, Tim. I will be interested in reading ISO/IEC 23894 from an AI as a medical device (AIaMD) perspective to see how the principles align with ISO 14971. Was this sector-specific risk management standard considered during the development of 23894?
Thanks for the introduction!
Note that in the context of EU AI regulation, there is a standardisation request on “Risk Management system for AI systems” from the European Commission. Has ISO/IEC 23894 done the job? Are there any gaps to be filled in the context of the EU AI Act?
Would be good to see some discussions here : )
Lee, it is hard to be sure given that the text of the EU AI Act isn’t finalised yet, but I think it is likely that 23894 will meet that requirement.