This is the fourth part of our blog series on Lessons from Business in Adopting ISO/IEC 42001:2023 (Information technology – Artificial intelligence – Management system). For the preceding parts, see:
- Lessons from businesses adopting the world's first AI Management System standard: A blog series on ISO/IEC 42001
- Lessons from businesses on using ISO/IEC 42001 to identify and manage AI risk
- Lessons from businesses on implementing ISO/IEC 42001 in practice
Early adopters were asked whether implementing the standard would help their organization use AI systems responsibly. Overall, organizations argue that it will help drive responsible AI.
Let's explore some of the reasons why this is the case:
1. Align with Corporate Social Responsibility
Many of the organizations we spoke to have formal Corporate Social Responsibility (CSR) and ethics policies, and they see implementing the standard as key to supporting these policies. Core notions of responsible AI, such as transparency, fairness and the mitigation of bias, align with CSR principles of positive social impact.
2. Identify use cases
Organizations acknowledge that AI use cases can be difficult to identify. The standard is seen as key to providing a structured approach to identifying and understanding use cases across the business, and organizations note that it helped them consider all types of AI systems and technologies. Organizations can also refer to BS ISO/IEC TR 24030:2024 (Information technology – Artificial intelligence (AI) – Use cases), which offers a broad perspective on AI applications in different domains.
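As a loose illustration only (the standard does not prescribe any format, and every name below is hypothetical), a structured use-case inventory could be kept as a simple machine-readable register:

```python
from dataclasses import dataclass, field


@dataclass
class AIUseCase:
    """One row in an internal AI use-case inventory (all field names are hypothetical)."""
    name: str
    business_unit: str
    description: str
    ai_technique: str              # e.g. "classification", "generative text"
    data_sources: list[str] = field(default_factory=list)
    status: str = "proposed"       # proposed -> assessed -> approved


inventory = [
    AIUseCase(
        name="Invoice triage",
        business_unit="Finance",
        description="Route incoming invoices to the correct approver",
        ai_technique="classification",
        data_sources=["ERP invoice history"],
    ),
    AIUseCase(
        name="Support-ticket summaries",
        business_unit="Customer service",
        description="Summarize long ticket threads for agents",
        ai_technique="generative text",
    ),
]

# Grouping by business unit gives a quick view of where AI is used.
by_unit: dict[str, list[str]] = {}
for uc in inventory:
    by_unit.setdefault(uc.business_unit, []).append(uc.name)
print(by_unit)
```

Even a register this simple makes it easier to spot AI systems that would otherwise escape governance, such as tools adopted informally by individual teams.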
3. Provide assurance
Organizations feel that implementing the standard will help demonstrate, both internally and to partners, suppliers, customers and third parties, that the business is developing and using AI responsibly.
4. Establish accountability
Organizations emphasize the importance of ensuring that staff within the business are aware of their responsibilities and are held accountable. Moreover, organizations that use AI systems from suppliers recognize the importance of ensuring that those suppliers have procedures in place for the responsible use of AI. The standard details controls for responsibility and accountability covering organizations, partners, suppliers, customers and third parties.
5. Set risk controls
A key aspect of managing responsible AI is setting appropriate risk controls to mitigate any negative impact of AI systems and technologies. Organizations note that while the main body of the standard provides a useful overview of implementation, Annex A provides more technical detail, including guidance on AI control objectives and how to implement controls. This covers impact assessment of AI systems, system life cycle management, supplier management and continuous improvement of the effectiveness of the AI management system.
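To make this concrete, here is a minimal sketch of what an internal risk-control register touching on those themes might look like. This is not a format defined by the standard; the control IDs, owners and field names are all hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum


class ControlStatus(Enum):
    PLANNED = "planned"
    IMPLEMENTED = "implemented"
    VERIFIED = "verified"


@dataclass
class RiskControl:
    """One entry in a simple AI risk-control register (illustrative only)."""
    control_id: str        # internal reference, loosely mapped to an Annex A theme
    description: str
    owner: str             # accountable person or team
    status: ControlStatus = ControlStatus.PLANNED
    evidence: list[str] = field(default_factory=list)  # links to audit artefacts


# Hypothetical entries covering the themes mentioned above.
register = [
    RiskControl("IMPACT-01", "Complete an impact assessment for each AI system", "Risk team"),
    RiskControl("LIFECYCLE-01", "Define criteria for each AI system life cycle stage", "Engineering"),
    RiskControl("SUPPLIER-01", "Verify supplier AI governance procedures", "Procurement"),
]

# A periodic review can flag controls that have not yet been verified.
unverified = [c.control_id for c in register if c.status is not ControlStatus.VERIFIED]
print(f"Controls awaiting verification: {unverified}")
```

Keeping evidence links alongside each control also supports the audit trail that certification assessments typically expect.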
Responsible AI systems with human oversight
Organizations are cognisant of the need for human oversight throughout the AI system life cycle, from training data to outputs. However, they acknowledge that this is a challenge, given that AI models continuously learn and evolve. This underlines the importance of having human checks and balances that feed into continuous improvement of the AI model. Organizations suggest that human oversight is important to ensure accountability for product and system owners alike.
Although organizations are looking for efficiencies from the adoption of AI systems and technologies, some noted that oversight of those systems could itself be resource intensive.
That said, organizations suggest that having a robust AI management system and effective governance will address the need for AI system oversight. Indeed, one organization made specific reference to the controls in Annex A: A.6.1 "Management guidance for AI system development" and A.6.2 "To define the criteria and requirements for each stage of the AI system life cycle."
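As a minimal sketch of one way such a human checkpoint could be wired into a system (the threshold, names and review mechanism below are all hypothetical, and ISO/IEC 42001 does not mandate any particular implementation):

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical confidence threshold below which an output must be approved by a person.
REVIEW_THRESHOLD = 0.85


@dataclass
class Prediction:
    value: str
    confidence: float


def human_review(pred: Prediction) -> bool:
    """Stand-in for a real review queue; here it simply prompts on stdin."""
    answer = input(f"Approve '{pred.value}' (confidence {pred.confidence:.2f})? [y/n] ")
    return answer.strip().lower() == "y"


def gated_output(pred: Prediction,
                 review: Callable[[Prediction], bool] = human_review) -> Optional[str]:
    """Release an output only if it clears the confidence gate or a human signs it off."""
    if pred.confidence >= REVIEW_THRESHOLD:
        return pred.value
    # Low-confidence outputs are held for human sign-off; the reviewer's
    # decision can also be logged to feed continuous improvement of the model.
    return pred.value if review(pred) else None


print(gated_output(Prediction("Refund approved", 0.91)))  # released automatically
```

In practice the review step would route to a ticketing or queueing system rather than stdin, but the shape of the gate, and the logged reviewer decisions that feed back into model improvement, stay the same.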