Lessons from businesses on using ISO/IEC 42001 to identify and manage AI risk
Blog post by:
Julian Adams, British Standards Institution

This is the second part of our blog series on Lessons from Business in Adopting ISO/IEC 42001:2023 – Information technology – Artificial intelligence – Management system. For part 1, see Five key benefits of implementing ISO/IEC 42001.

Early adopters argue that risks from AI systems and technologies should be treated like any other business risk, even though AI-related risks tend to be more subjective. Encouragingly, ISO/IEC 42001 gives organizations a formal structure for managing and mitigating risk.

In this blog we discuss organizations’ experiences of each stage of the risk assessment process. We start by looking at risk identification.

Organizations are mindful that identifying the possible risks to the organization’s assets and AI systems can be a challenge. This is largely attributed to the speed at which AI systems are evolving, concerns about a lack of transparency in how AI systems function, and the exponential rise in use cases. Organizations are keen to understand risk both from within the business and from suppliers using AI systems as part of their offering. They argue that it is the suppliers’ responsibility to demonstrate that the AI systems they use are safe, rather than this being assumed.

Notwithstanding these challenges, organizations cite the following risks as ones for which they are looking to put controls in place.

  1. Bias

Organizations cite risks associated with training data that might amplify bias and result in discrimination, suggesting that this could be a product of using unsuitable or unrepresentative training data. Organizations can refer to ISO/IEC TR 24027:2021 Information technology – Artificial intelligence – Bias in AI systems and AI aided decision making, to help manage this type of risk.

  2. Explainability

Organizations are worried that it is difficult to explain what is happening inside the ‘black box’, that is, how the AI system arrives at its outputs.

  3. Blind trust

Organizations are keen to avoid the risk of assuming that outputs are valid and reliable without proper checks and balances.

  4. Hallucination

Organizations are concerned about generative AI outputs that contain nonsensical content presented as fact.

  5. Intellectual property

Organizations are keen to understand who owns what data and how it can be processed legitimately. Organizations can refer to BS ISO 56005:2021 Innovation management – Tools and methods for intellectual property management – Guidance.

  6. Commercial risk

Organizations are concerned about staff entering data into AI systems in ways that might expose their business, noting that this could result in data leaks. For example, there are concerns about staff entering commercially sensitive data into ChatGPT.

 

Organizations are mindful that, after identifying risks, it can be a challenge to quantify them effectively. For instance, the consequences of specific use cases for the business, customers or wider society may not be immediately clear. Some organizations we engaged had set up risk committees and working groups to help assess the likely risks of AI systems. Those from a technical background suggested that examining training data and assessing model outcomes would help in quantifying risk. Many organizations were piloting AI systems prior to releasing their AI products and services to the public, to better understand real-world risks. ISO/IEC 42001 can support many of these activities, as it discusses risk assessment in relation to planning, support and the operation of the AI management system. BS ISO 31000:2018 Risk management – Guidelines, which is cited in ISO/IEC 42001, provides further guidance on the selection and application of techniques to assess risk.
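To make the quantification step above concrete, the sketch below shows one common way to score risks in a register: an ordinal likelihood × impact score compared against a risk-appetite threshold. This is an illustrative example only, not part of ISO/IEC 42001 or ISO 31000; the class and function names, the 1–5 scales and the threshold of 9 are all hypothetical, and real scales should come from the organization’s own risk criteria.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) - assumed ordinal scale
    impact: int      # 1 (negligible) .. 5 (severe)   - assumed ordinal scale

    def score(self) -> int:
        # Simple likelihood x impact scoring, one of several common approaches.
        return self.likelihood * self.impact

def triage(risks, appetite=9):
    """Split risks into those above and within an assumed risk-appetite threshold."""
    needs_controls = [r for r in risks if r.score() > appetite]
    acceptable = [r for r in risks if r.score() <= appetite]
    return needs_controls, acceptable

# Example register drawn from the risk types discussed in this post.
register = [
    AIRisk("Training-data bias", likelihood=4, impact=4),
    AIRisk("Hallucinated outputs", likelihood=3, impact=3),
    AIRisk("Sensitive data entered into a public chatbot", likelihood=5, impact=4),
]

high, low = triage(register)
for r in sorted(high, key=AIRisk.score, reverse=True):
    print(f"{r.name}: score {r.score()} -> define a control")
```

In this sketch, the hallucination risk (score 9) falls within the assumed appetite while the other two exceed it; in practice a risk committee would calibrate both the scales and the threshold to the organization’s own risk environment.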

All organizations acknowledge the importance of implementing controls to mitigate the risks highlighted above. For many, it is still early in their implementation of the standard and, as such, they are planning to introduce controls in the future. That said, organizations already compliant with BS ISO/IEC 27001:2022 Information security, cybersecurity and privacy protection and BS ISO 9001:2015 Quality management systems have guardrails in place, albeit not specifically designed to address AI risk. In such instances, organizations suggested that updating existing controls would suffice rather than starting from scratch. It is argued that extending existing controls would require only limited investment, whereas establishing new controls would require significant investment.

Organizations note that controls need to reflect the risk appetite of the organization and the risk environment of both the organization and the wider industry. However, controls should be reasoned and proportionate so as not to stifle innovation. For instance, some AI systems will present a low enough level of risk that no additional controls are needed. As such, it is important that organizations can identify what constitutes an acceptable risk. Moreover, it is felt that controls need to be dynamic so they can respond to evolving AI risks.

Organizations commented that, while the main body of the standard provides a useful overview of how to implement it, Annex A provides more technical detail, including guidance on AI control objectives and how to implement controls. This covers areas such as impact assessment of AI systems, system life cycle management, data for AI systems, provision of information to third parties, use of AI systems, and third-party relationships and accountability.

With many organizations in the early stages of implementation, few have established monitoring structures beyond existing protocols. Consequently, some organizations are monitoring AI systems on an ad hoc basis. Understandably, there are concerns about how best to ensure that monitoring keeps pace with the evolving nature of AI systems. In Clause 9, Performance evaluation, the standard details actions relating to monitoring, measurement, analysis and evaluation of the AI management system.

 

What’s next in this blog series?

The next blog in the series will look at how organizations are implementing ISO/IEC 42001 in practice.
