Standards as a foundation for success with AI

Blog post by:

Tim McGarr, AI Market Development Lead, BSI

[Image: cubes being stacked on a solid foundation]

In previous posts we have explored how a standards-based approach to AI can help an organization deploy systems that are safe, secure, and resilient. As we have explored these topics in workshops and interviews, we have also learned how standards are helping create wider foundational successes for organizations beyond their AI deployment.

In this post we share some of these insights, including how organizations have been able to improve how they communicate about AI and machine learning, and how standards serve both as a foundation for approaching AI and as a scaffolding on which to refine and further develop that approach. We also explore some of the barriers to adopting standards, and potential solutions for improving awareness and access.

Please note that, as the interviews were conducted on the basis of anonymity, all names of interviewees and names of companies have been removed.

Standardizing language and terminology

One of the successes most frequently reported by stakeholders in our January workshop was how standards created the “common language” needed to talk about rapidly evolving subjects with both consistency and confidence.

In our research, a stakeholder from a large telecommunications provider shared how internal teams responsible for building and purchasing AI systems benefitted from a standardized terminology, while another stakeholder shared how their supply chain communication became much more efficient with a particular set of AI-related terms and definitions.

This benefit could also extend to AI developers and buyers. A stakeholder from a software development company told us that one of their key challenges lies in finding “a better-informed buyer”, well aware of the standards and evidence frameworks which support it. These buyers, in turn, create the demand for systems that are demonstrably safer, more secure, and more resilient, and therefore more trustworthy. This challenge is also reflected in CDEI’s Roadmap to an effective AI assurance ecosystem, where common language can help overcome the “information problem” of reliably evaluating evidence, as well as the “communication problem” of assuring users of this trustworthiness.

Several stakeholders also shared with us how consistent language, in turn, could make standards more discoverable and accessible. Several interviewees noted that one of their difficulties in locating the standards they needed was knowing which terms and concepts to search for. Standards that define the terminology and frameworks in use, such as ISO/IEC 22989 (Artificial intelligence concepts and terminology) and ISO/IEC 23053 (Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)), are a useful starting point for those looking to better understand the landscape of AI standards.

Creating internal alignment and building confidence

A trend amongst the larger organizations we’ve spoken to has been the creation of internal best practices for developing and/or acquiring AI systems. This ensures a consistent internal approach and better adherence to internal requirements. According to one regulator we interviewed, this practice, adopted while the technology is in its relative infancy, “might be why we aren’t seeing any real-world issues” in relation to AI.

When formal standards are adopted, organizations can use them to scaffold their own internal practice and build further on these foundations. One business interviewee shared how standards are their “check step [to see] if everything’s done” in their own approach. Similarly, another interviewee from a larger corporation shared how his company was looking forward to ISO/IEC 42001 (the forthcoming Management System standard for AI) to “link the pieces in the puzzle” of their existing best practice and help guide his organization toward a more strategic management of their AI implementation. This scaffolded approach, as we’ve seen with other organizations such as ACCA in its implementation of ISO/IEC 27001 for cybersecurity, not only builds resilience into the process, but also promotes confidence among internal and external stakeholders.

This internal alignment also supports a more consistent approach to procuring AI, as those looking to buy an AI system understand what questions they should be asking of those selling AI products or services. For sellers, these informed buyers create a clearer set of requirements—and demand—for products with higher standards of safety, security, and resilience built in.

Ensuring wider success through standards

Our conversations with stakeholders gave us the opportunity to explore many of the challenges around standards usage and their potential solutions. For many—and particularly in a field developing as rapidly as AI—it is hard to know which standards are relevant and useful to the challenges they face. To overcome these challenges, we aim to test several potential solutions, including creating “solution packs” for common AI use cases, finding and sharing stories and case studies of standards usage, and adding greater context to standards searches, such as here on the AI Standards Hub.

As more organizations prepare to adopt and develop AI deployments, we are interested in continuing to explore both the challenges being faced and the successes organizations are having by taking an approach grounded in standards and best practices. We’re especially interested in hearing about your experiences, as this will help us incorporate more useful resources into the AI Standards Hub. Feel free to share your thoughts and experiences in the comments below.
