
Output from workshop on ISO/IEC standards for AI transparency and explainability


    On 11 January, the AI Standards Hub hosted an in-person workshop dedicated to shaping two early-draft standards on AI transparency and explainability – ISO/IEC AWI 12792 and ISO/IEC AWI TS 6254. Guided by Adam Leon Smith and Lauriane Aufrant, two experts directly involved in the development of these standards in Working Group 3 within ISO/IEC’s Joint Technical Committee 1, Subcommittee 42, the session gathered input from a diverse group of stakeholders from across the private sector, government and regulatory authorities, civil society, and academia.

    Most of the discussions during the workshop focused on defining transparency and explainability. In particular, participants considered current draft definitions and discussed possible amendments. This resulted in a new set of revised definitions, reflecting participants’ collective views. The summary below captures these proposed definitions.

    This is an open discussion thread to share the workshop outcomes with the wider AI Standards Hub community and to continue the discussion. We welcome any additional input from workshop participants as well as comments and feedback from the wider community. The discussion thread will be open until Wednesday, 1 February, at which point we will produce a final version of the text below that takes into account any comments received by then; this final version will be shared with ISO/IEC JTC 1/SC 42/Working Group 3 to feed into the development of ISO/IEC AWI 12792 and ISO/IEC AWI TS 6254.

    If you are interested in learning more about our recent events in the trustworthy AI programme, check out our latest blog – Event review: Towards transparent and explainable AI


    Collective output from the workshop “Towards transparent and explainable AI: Workshop to inform ISO/IEC standards development”

    Proposed definition of ‘transparency’

    availability in relation to stakeholders of meaningful, faithful, comprehensive, accessible and understandable information about a relevant aspects of an AI system

    Note: Relevant aspects may include the system’s life cycle, functionality, operation and impact on AI subjects.

    It was noted that ‘faithful’ might be replaced with ‘truthful’, but the former was deemed more appropriate.

    It was also discussed that there may be definitions of transparency that relate to disclosure, where the information involved may not be considered meaningful, accessible, or understandable due to its “full” nature.

    Proposed definition of ‘interpretability’

    <algorithms> ease with which a stakeholder can comprehend in a timely manner the objective of an AI system, the reasons for the system’s behavior, and whether it is working given its purpose and in line with stakeholder expectations, and how different inputs could lead to different outcomes

    Note 1 to entry: Technical approaches to achieve interpretability include explainability methods as well as other analysis or visualization methods.

    Note 2 to entry: Training may be required for a stakeholder to comprehend the information. Consideration of the stakeholder is important when designing interpretability.

    It was also discussed that impact on the end-user is relevant to interpretability.

    Proposed definition of ‘explainability’

    <policy> ability to provide stakeholders of an AI system with concise, accessible, sufficient and useful explanatory information beyond the AI system’s results

    Note 1 to entry: Information can include model behaviour rationales, but also documentation on the development process, statement of system’s limitations, or offering basic algorithmic training to nonspecialist users.

    It was discussed that ‘accessible’ is meant in the general sense, not related to e.g. disability.

    Proposed definition of ‘explainability’

    <algorithms> capability of an AI system to correctly produce the reasons for its own behavior in a timely manner, allowing scrutiny of whether it is working given its purpose and in line with stakeholder expectations, and how different inputs could lead to different outcomes


    A standard for AI transparency and explainability should also include or refer to accuracy, resiliency, reliability, safety, and accountability.

    I prefer the term ‘trustworthy’ instead of ‘faithful’ or ‘truthful’.


    There should be a clear distinction between the two proposed definitions of interpretability and explainability, particularly with respect to understanding a model’s behavior. What is important to note is that models are interpretable if their inner working mechanisms, and hence their outcomes, are understandable by subject matter experts; that is, an interpretable/transparent model should possess the property that, for any given input, the corresponding output of the model is relatively easily predictable by (human) subject matter experts.
    Explainability, on the other hand, focuses on developing secondary interpretable algorithms to analyze the outputs of a black-box model, with the aim of extracting information and insight about what the system has learned from historical data and its outputs. This is also known as post-hoc analysis and, as mentioned in one of the notes, it can involve various tools and methods, such as visualization, documentation, interactive plots or plain language, to discover and communicate findings from complex systems to subject matter experts.
    Since human domain experts, as the end users of the outcomes, are an integral part of the context when it comes to interpretable (or transparent) modeling and post-hoc explainability analysis, the choice between the two requires contextualization, that is, the exercise of mapping model properties, grounded in a domain of expertise, to system properties (the context).
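
    To make the post-hoc (surrogate) approach above concrete, here is a minimal, hypothetical sketch in Python using scikit-learn, assuming a simple classification task: an interpretable decision tree is fitted to the predictions of a black-box model, and its fidelity to that model is measured. The dataset, model choices and fidelity measure are illustrative assumptions only, not part of the workshop output or the draft standards.

        # Illustrative only: a global surrogate model as one example of post-hoc explainability.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text
        from sklearn.metrics import accuracy_score

        X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

        # Stand-in for an opaque ("black box") AI system.
        black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

        # Secondary interpretable model trained on the black box's outputs,
        # not on the ground-truth labels.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        # Fidelity: how often the surrogate agrees with the black box on the same inputs.
        fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
        print(f"Surrogate fidelity to the black box: {fidelity:.2f}")

        # Human-readable rules that a subject matter expert can scrutinize.
        print(export_text(surrogate))

    The point of the sketch is that it is the surrogate’s rules, rather than the black box’s internals, that a subject matter expert inspects; the other post-hoc tools mentioned above (visualizations, documentation, plain-language summaries) play the same communicative role.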

    References:
    https://www.pnas.org/doi/abs/10.1073/pnas.1900654116
    https://arxiv.org/abs/1811.10154


    Hi Nawal,

    In terms of accuracy, resiliency and reliability of any explanation – these topics were discussed and will be fed back informally to the working group. In terms of accuracy, resilience, reliability and (functional) safety of the overall AI system – these are being addressed in ISO/IEC 25059 – Quality Model for AI Systems. 25059 is being published quite soon.

    Thanks!
    Adam L Smith


    Hi Saeid,

    That’s an interesting way of looking at it – that an SME should be able to predict the outcome for a model to be interpretable. I’ll represent that view in the working group; however, I suspect that experts will suggest this doesn’t hold true in all cases, and that there is *something* about an explanation being understandable that is different to whether a prediction can be anticipated by an SME.

    Thanks,
    Adam


    1. I think it would be important to document why the word faithful vs truthful was proposed, and what exactly is intended to be communicated by each. At the time a case was made, but on further reflection I think that truthful carries a greater burden to make apparent the truth/facts than simply to represent faithfully or honestly. Perhaps both can be used – faithful and truthful. There is, after all, a note (which also needs qualifying) that on some occasions this might be challenging. I do not believe the word trustworthy should be used, because fidelity/honesty/truthfulness are just some of the values required to inspire trust, and as such should not be viewed as comprehensive. In any case, I agree with Prof Winfield’s comments that confidence is a more appropriate term than trust. We should have assurance in a system, rather than formulating beliefs about it.

    2. Are <algorithm> and <policy> the right words for the distinction between the two definitions? I don’t remember this having had much discussion during the working group, and they feel quite narrow. Is what is meant more loosely something like <technical system> and <system strategy>, <policy and strategy> or just <strategy>? Either way, the categories need to be defined, and we need to explain why there is no <policy> definition for transparency and interpretability.

    3. Very minor – “availability in relation to stakeholders of meaningful, faithful, comprehensive, accessible and understandable information about a relevant aspects of an AI system” has an extra “a”. But on the whole I think this was a good reflection of where the group got to. Perhaps “convenient to gain access to”, or “on-hand and available” could help disambiguate this particular use of the word “accessible”.


    Hi Tania,

    I’ve changed to truthful for now with a note about this debate. Note that the term trustworthy isn’t actually used in the proposed text – discussions on the term trustworthy in ISO/IEC have led to work trying to create ontologies, but actually defining it is very hard.

    Changing <algorithm> to <AI system> seems like a good change, and is consistent with other documents where AI system is defined. I will discuss the phrase <policy> with ISO/IEC.

    Thanks, Adam
