Since the AI Standards Hub officially launched in October 2022, our initial work programme on Trustworthy AI has focused on three specific areas, in line with stakeholder priorities identified during our pre-launch engagement: explainability and transparency; safety, security, and resilience; and uncertainty quantification, on which activities are upcoming. If you would like to catch up on events from any of these areas, you can find recordings on our events page.
At this stage we would like to reach out to you – our community – again, to understand what other characteristics of trustworthy AI you would like the AI Standards Hub to prioritise for future activities, where the biggest challenges are for you and your organisation, and how the Hub can help you to address these challenges.
For the purposes of the Hub's work, relevant considerations in addressing trustworthy AI include both technical (e.g., engineering choices) and non-technical (e.g., effective communication with users and affected communities) aspects of the ways in which AI systems are designed, developed, and used.
With this in mind, please share with us:
What does trustworthy AI mean for you, what trustworthy AI characteristics should the Hub focus on next, and how can the Hub's activities support you in the challenges you face regarding trustworthy AI?
I would personally be keen to see more on responsible AI use cases and the inclusion of all affected stakeholders that comes with that.
I would like to see sharing of best practice in the area of making code more trustworthy.
AI TRISM. Put simply, the increase in AI use requires an increase in controls, and with this a framework such as AI TRISM (AI Trust, Risk and Security Management). I'm all for standards, but I'm looking for practical ways in which to enforce them. https://www.gartner.com/en/articles/what-it-takes-to-make-ai-safe-and-effective
If you look at Gartner's Hype Cycle for 2023, AI TRISM is right up there. As the person who has developed the responsible and ethical AI principles for my company, I welcome the AI Standards Hub and will be making full use of its resources to support our AI journey. It would be great to know how to get more involved.
If AI Standards are guidelines for safe, ethical and effective use of AI technologies then AI TRISM could be seen as a method to help enforce these standards.
If you ask some AI tools about standards, they mention BSI, NPL, and the Alan Turing Institute as the three main bodies (so it's great that these are the three bodies you are aligned to!).
One of the main challenges we face regarding trustworthy AI is that some of the mega-vendors can't actually explain how the new AI features they embed in their tooling work. We have developed a set of vendor questions to help understand how these AI and machine learning features work (which align to our responsible AI principles). I'd like to see vendors be more open and transparent about how they plan to use AI as more than just a marketing buzzword, and state what standards they have developed to. Some vendors, like Microsoft, are very open about their Responsible AI practices, but other mega-vendors are not. It would be great to have vendors "subscribe" or align their products to this AI Standards Hub, so that when we select a new vendor we know they have designed a product aligned to a central set of standards.