
Standards as a foundation for success with AI

Blog post by:

Tim McGarr, AI Market Development Lead, BSI

[Image: cubes being stacked on a solid foundation]

In previous posts we have explored how a standards-based approach to AI can help an organization deploy systems that are safe, secure, and resilient. Through the workshops and interviews in which we have explored these topics, we have also learned how standards are helping to create wider foundational successes for organizations beyond their AI deployments.

In this post we share some of these insights, including how organizations have been able to improve how they communicate about AI and machine learning, and how standards serve both as a foundation for approaching AI and as a scaffolding on which to refine and further develop that approach. We also explore some of the barriers to engaging with standards, and potential solutions for improving awareness and access.

Please note that, as the interviews were conducted on the basis of anonymity, the names of interviewees and their companies have been removed.

Standardizing language and terminology

One of the most frequently reported successes shared by stakeholders in our January workshop was how standards created the “common language” needed to talk about rapidly evolving subjects with both consistency and confidence.

In our research, a stakeholder from a large telecommunications provider shared how internal teams responsible for building and purchasing AI systems benefitted from a standardized terminology, while another stakeholder shared how their supply chain communication became much more efficient with a particular set of AI-related terms and definitions.

This benefit could also extend to AI developers and buyers. A stakeholder from a software development company told us that one of their key challenges lies in finding “a better-informed buyer”, one who is well aware of the standards and evidence frameworks that support trustworthy AI. These buyers, in turn, create demand for systems that are demonstrably more safe, secure, and resilient, and therefore more trustworthy. This challenge is also reflected in the CDEI’s Roadmap to an effective AI assurance ecosystem, where a common language can help overcome the “information problem” of reliably evaluating evidence, as well as the “communication problem” of assuring users of this trustworthiness.

Several stakeholders also shared with us how consistent language could, in turn, make standards more discoverable and accessible. Several interviewees noted that one of their difficulties in locating the standards they needed was knowing which terms and concepts to search for. Standards that define the terminology and frameworks in use, such as ISO/IEC 22989 (Artificial Intelligence Concepts and Terminology) and ISO/IEC 23053 (Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)), are a useful starting point for those looking to better understand the landscape of AI standards.

Creating internal alignment and building confidence

A trend amongst the larger organizations we’ve spoken to has been the creation of internal best practices for developing and/or acquiring AI systems. This ensures a consistent internal approach and better adherence to internal requirements. According to one regulator we interviewed, adopting this practice while the technology is in its relative infancy “might be why we aren’t seeing any real-world issues” in relation to AI.

When formal standards are adopted, organizations can use these to scaffold their own internal practice and build further on these foundations. One business interviewee shared how standards are their “check step [to see] if everything’s done” in their own approach. Similarly, another interviewee from a larger corporation shared how his company was looking forward to ISO/IEC 42001 (the forthcoming Management System standard for AI) to “link the pieces in the puzzle” of their existing best practice and help guide the organization toward more strategic management of its AI implementation. This scaffolded approach, as we’ve seen with other organizations such as ACCA’s implementation of ISO/IEC 27001 for cybersecurity, not only builds resiliency into the process but also promotes confidence with internal and external stakeholders.

Standardizing this internal alignment also helps the organization take a more consistent approach to procuring AI, as those looking to buy an AI system understand what questions they should be asking of those selling AI products or services. For sellers, these informed buyers create a clearer set of requirements, and clearer demand, for products with higher standards of safety, security, and resilience built in.

Ensuring wider success through standards

Our conversations with stakeholders gave us the opportunity to explore many of the challenges around standards usage and potential solutions to them. For many organizations, particularly in a field developing as rapidly as AI, it is hard to know which standards are relevant and useful to the challenge they face. To help overcome this, we aim to test several potential solutions, including creating “solution packs” for common AI use cases, finding and sharing stories and case studies of standards usage, and adding greater context to standards searches, such as here on the AI Standards Hub.

As more organizations prepare to adopt and develop AI, we are interested in continuing to explore both the challenges they face and the successes they achieve through taking an approach grounded in standards and best practice. We’re especially interested in hearing about your experiences, as this will help us incorporate more useful resources into the AI Standards Hub. Feel free to share your thoughts and experiences in the comments below.
