What trustworthy AI characteristics would you like to see the hub focus on next?
    Since the AI Standards Hub officially launched in October 2022, our initial work programme on Trustworthy AI has focused on three areas, in line with the stakeholder priorities identified in our pre-launch engagement: explainability and transparency; safety, security, and resilience; and uncertainty quantification, with activities on the latter coming up. If you would like to catch up on events from any of these areas, you can find recordings on our events page.

    At this stage we would like to reach out to you – our community – again, to understand what other characteristics of trustworthy AI you would like the AI Standards Hub to prioritise for future activities, where the biggest challenges are for you and your organisation, and how the Hub can help you to address these challenges.

    For the purposes of the Hub’s work, relevant considerations in addressing trustworthy AI include both technical (e.g., engineering choices) and non-technical (e.g., effective communication with users and affected communities) aspects of the ways in which AI systems are designed, developed, and used.

    With this in mind, please share with us:

    • What does trustworthy AI mean for you?
    • What trustworthy AI characteristics should the Hub focus on next?
    • How can the Hub’s activities support you in the challenges you face regarding trustworthy AI?


    I would personally be keen to see more on responsible AI use cases and the inclusion of all affected stakeholders that this entails.


    I would like to see best practice shared on making code more trustworthy.
