Event review: Towards transparent and explainable AI

Transparency and explainability are key characteristics for the realisation of trustworthy AI. Two recent events within our work programme on trustworthy AI brought together leading experts in the field to discuss the most prominent international standardisation initiatives on AI transparency and explainability. The webinar gave a broad audience an overview of the current standardisation landscape, while the workshop gave an engaged and diverse group of stakeholders a unique opportunity to shape early-draft standards currently being developed within ISO/IEC, namely ISO/IEC AWI 12792 and ISO/IEC AWI TS 6254.

Towards transparent and explainable AI: The current standardisation landscape

The first event, a webinar held on 8 December, featured speakers directly involved in the development of relevant IEEE and ISO/IEC standards. The aim of the event was to provide an overview of the current standardisation landscape for AI transparency and explainability to a broad audience and encourage active engagement within this space.

Professor Alan Winfield, Chair of IEEE SA working group P7001, presented an overview of the recently published standard IEEE 7001-2021, which sets out measurable and testable levels of transparency for autonomous systems. In his presentation, Professor Winfield outlined the scope and structure of the standard and highlighted the importance of transparency in autonomous systems for ensuring accountability. This segment also included a brief introduction to the five stakeholder groups and the different levels of transparency identified in the standard.

The second part of the webinar was dedicated to ongoing standards development work within ISO/IEC's Joint Technical Committee 1, Subcommittee 42 (JTC 1/SC 42) and offered an insight into two early-draft standards dedicated to transparency (ISO/IEC 12792) and explainability (ISO/IEC TS 6254), respectively. Setting the scene, Dr David Filip, the convenor of Working Group 3 (Trustworthy AI) within Subcommittee 42, discussed the role of AI standardisation in the context of international regulatory demand and gave an overview of current projects within the subcommittee. This was followed by detailed presentations on the two early-draft standards from Dr Rania Wazir, the project editor for ISO/IEC 12792, and Dr Lauriane Aufrant, a contributing expert for ISO/IEC TS 6254.

Dr Rania Wazir highlighted the importance of a common language around transparency, both as a means of enabling communication and collaboration between different stakeholders and as a foundation for future transparency-related standards that build on and go beyond the questions of taxonomy that form the focus of ISO/IEC 12792. Dr Wazir also explained the scope and structure of ISO/IEC 12792 and discussed transparency requirements throughout the AI system life cycle and across different stakeholders.

Moving on to the topic of explainable AI (XAI), Dr Lauriane Aufrant outlined the purpose and scope of the ISO/IEC TS 6254 working draft, which provides a survey of existing XAI methods and a taxonomy of their properties. Providing wider context on XAI, Dr Aufrant also gave an overview of the different objectives of XAI and shared examples of explanations aimed at different stakeholder groups, such as AI specialists, domain experts and laypersons.

The webinar also featured a presentation by Elena Hess-Rheingans from the UK Government's Central Digital and Data Office (CDDO) that gave an overview of the Algorithmic Transparency Standard, developed to help public sector bodies in the UK share information on their use of algorithmic tools with the general public. The presentation introduced the context behind the initiative and gave an overview of the standard's content, including the two tiers of transparency and the relevant requirements for each. The most recent version of the standard is freely available on GitHub alongside guidance to support organisations using it.

The final segment of the webinar gave attendees an opportunity to ask questions and hear additional input from the speakers. The recording of the webinar is available on the AI Standards Hub website and can be viewed here.

Towards transparent and explainable AI: Workshop to inform ISO/IEC standards development

Our second event was an in-person workshop hosted at The Alan Turing Institute on 11 January and was dedicated to gathering input from AI Standards Hub community members to inform the further development of ISO/IEC AWI 12792 and ISO/IEC AWI TS 6254. Guided by Adam Leon Smith and Lauriane Aufrant, two experts directly involved in the development of these standards, the session brought together a diverse group of 21 participants from across the private sector, government and regulatory authorities, civil society, and academia.

Most of the discussions during the workshop focused on defining transparency and explainability. In particular, participants considered current draft definitions and discussed possible amendments. This resulted in a set of revised definitions, reflecting participants' collective views, which has now been posted on the AI Standards Hub online forum. The purpose of this open discussion thread is to share the workshop outcomes with the wider AI Standards Hub community and to invite additional comments and feedback to shape the final text. The discussion thread will remain open until Wednesday 1 February. At that point we will incorporate any additional comments received and produce a final version of the text, which will be shared with ISO/IEC JTC 1/SC 42/Working Group 3 to feed into the development of ISO/IEC AWI 12792 and ISO/IEC AWI TS 6254.

To view the write-up and submit your comments, please click here.
