This year marked the 18th annual meeting of the Internet Governance Forum (IGF), a global UN forum dedicated to advancing multi-stakeholder dialogues on public policy issues related to the internet. Its overarching theme for 2023, “The Internet We Want – Empowering All People”, was structured around eight critical areas and delved into topics such as internet fragmentation, data governance, and cybersecurity.
AI emerged as a prominent theme throughout this year’s IGF programme. The increasing role of AI in society was seen both as a tool for empowerment, and a policy challenge to be navigated carefully, especially in the context of powerful foundational models.
Key messages from the “AI and Emerging Technologies” sub-theme highlighted the necessity for global cooperation in realising AI’s potential, emphasising the need to incorporate diverse views from across the world and especially the Global South. Multi-stakeholder panels advocated for transparent and inclusive processes and stressed that AI development must align with human rights, democratic values, and the rule of law. Notably, several AI-focused sessions recognised the urgency of moving beyond discussions of ethical guidelines towards implementing practical governance tools and regulatory frameworks that can effectively manage AI’s growing risks.
Within this context, the AI Standards Hub led a workshop on the role that multi-stakeholder participation and international cooperation must play to unlock the potential of standards as effective AI governance tools and innovation enablers. Drawing on a diverse panel of experts from the UK, Canada, Singapore, and Australia, the session presented regional perspectives on the multifaceted challenges and opportunities around AI standardisation.
The UK government’s approach to AI standardisation
Nikita Bhangu, the Head of Digital Standards policy in the UK government’s Department for Science, Innovation and Technology (DSIT), started off the panel discussion by providing an overview of the UK government’s policy approach to standards in the context of AI.
Referring to the recent AI white paper, Ms Bhangu highlighted the important role that standards, and other non-regulatory governance mechanisms such as assurance techniques, can play in creating a robust set of tools for advancing responsible AI. Elaborating on the complexity of the standardisation ecosystem, she noted the barriers that stakeholders face in meaningfully engaging with AI standards and stressed that it is vital for governments to support diverse stakeholder participation in standards development processes.
Reflecting on DSIT’s policy thinking that led to the creation of the AI Standards Hub, Ms Bhangu noted that key aims guiding the initiative were to increase adoption and awareness of standards, create synergies between AI governance and standards, and provide practical tools for stakeholders to engage meaningfully with the AI standards ecosystem.
Regional perspectives from Canada, Australia, and Singapore
Ashley Casovan, the Executive Director of the Responsible AI Institute, provided insights on Canada’s AI and Data Governance Standardisation Collaborative from the perspective of civil society. She explained that the initiative aims to bring together multiple stakeholders to reflect on AI standardisation needs across different contexts and use cases.
Wan Sie Lee, the Director for Data-Driven Tech at Singapore’s Infocomm Media Development Authority (IMDA), stressed that there is widespread recognition in Singapore of the importance of international cooperation around AI standards. This is exemplified by Singapore’s active engagement in ISO processes and close collaborations with other countries. Elaborating on the city-state’s efforts to achieve international alignment on AI standards, Ms Lee pointed to Singapore’s AI Verify initiative, which closely aligns with NIST’s recently published AI Risk Management Framework.
Aurelie Jacquet, Principal Research Consultant on Responsible AI for CSIRO’s Data61, the data and digital specialist arm of Australia’s national science agency, highlighted several Australian initiatives centred on advancing responsible AI. These included Australia’s AI Standards Roadmap, the work of the National AI Centre’s Responsible AI Network, and the development of the NSW AI assurance framework. These initiatives are dedicated to developing education programmes around AI standards, strengthening the role of standards in AI governance, and leveraging existing standards to provide assurance of AI systems in the public sector and beyond.
Identifying stakeholder challenges and needs
The panellists collectively acknowledged the challenges certain stakeholders face in meaningfully engaging with AI standards. Nikita Bhangu pointed to the lack of resources and dedicated standards expertise within SMEs, civil society, and governments, which often leads to these groups being underrepresented in AI standards development processes. The panellists from Canada and Singapore echoed these concerns, noting challenges in resource allocation and advocating for the inclusion of diverse perspectives. Aurelie Jacquet also emphasised the complexity of the standardisation ecosystem, highlighting Australia’s efforts to demystify these processes through various white papers and guidance documents.
Priorities for international cooperation
The discussion concluded by emphasising the importance of international cooperation for unlocking the potential of AI standards. Panellists agreed that understanding different countries’ approaches to AI standardisation is crucial in order to avoid fragmentation of efforts and build international synergies. Multilateral forums like the Organisation for Economic Co-operation and Development (OECD) and IGF were recognised as vital platforms for facilitating these discussions.
Additionally, initiatives like the AI Standards Hub were highlighted as important avenues for building networks internationally, identifying shared goals and challenges across different stakeholder groups, and jointly devising strategies to build an inclusive environment around AI standards.
If you are interested in watching the full workshop, the recording is now available on the Hub website. To stay up to date with all our future workshops and webinars, sign up to the Hub’s newsletter and follow us on Twitter/X and LinkedIn.