Blog post by
Sector Lead (Digital), BSI
To reach the full potential of artificial intelligence, end users need to trust every layer of AI technology. To be trustworthy, these technologies must accurately and consistently demonstrate key characteristics. In this blog post we will focus on three of them: safety, security, and resilience.
The AI Standards Hub’s online platform includes an observatory where stakeholders can access standards that help address the ethical, legal, and societal implications of AI, including the safety, security, and resilience of systems and solutions. The Hub has also begun engaging in the development of AI standards, organizing a workshop through which members of the AI Standards Hub community can contribute to shaping relevant ISO/IEC standards.
Standards development, however, is only one of the tools needed to advance trustworthy AI. A multifaceted approach is required, including R&D programmes that address key technical challenges, the development of metrics, and risk assessments to measure and evaluate AI trustworthiness.
Current AI standards
AI standards in development so far (for example, ISO/IEC 42001 Information technology — Artificial intelligence — Management system) are only starting to address questions in the wide-ranging area of safety, security, and resilience. Additional standards development efforts will be needed to mitigate the significant cyber security risks society faces each day. Much is already covered by established IT standards (e.g., ISO/IEC 27001 for information security management), but we will likely need a bespoke version of 27001 for the AI domain. To be trustworthy, an AI system or solution needs to operate successfully at the confluence of safety, security, and resilience standards.
Global collaboration on trustworthy AI
Given the significant impact that AI is having across all sectors of society and the economy, AI has been a recurring theme in G7 and G20 discussions. These forums have established international collaborations and partnerships that recognize the benefits and opportunities AI brings, and its importance for future economic growth and societal benefit in areas ranging from health to national security. The Joint Declaration by the USA and UK on Cooperation in Artificial Intelligence Research and Development is an example of how we can collaborate on global risks. Going forward, the US and UK intend to discuss measurement and evaluation tools, as well as activities to assess the technical requirements for trustworthy AI.
Your experiences of safe and trustworthy AI
Activities for the trustworthy AI topic of ‘safety, security and resilience’ will be facilitated by the BSI team, one of the strategic partners of the AI Standards Hub, and will include a mix of research, online discussion, and events.
Together with you, the members of this community, we will be looking at some of the key challenges in this space and how they could be addressed by existing or potential new standards. We would welcome your thoughts and experiences on procuring and/or developing AI systems that are safe, secure, and resilient, as well as ideas, issues to address, and standards relevant to your work with AI. We invite you to share your experiences in the Safety, Security and Resilience forum.