Introduction
In this ongoing project, the Turing’s AI Standards Hub research team are assessing the role and relevance of standards in helping UK regulators interpret and apply the UK’s cross-sector AI principles in regulatory guidance. This blog explains the motivation for the project and provides an overview of the research.
Why are we doing this work?
Under the context-specific regulatory approach set out in the UK’s AI white paper, regulators will be tasked with setting out guidance for organisations developing or deploying AI within their respective regulatory remits. To enable the development of consistent guidance across UK regulators, the AI white paper sets out five cross-sector principles (henceforth “the principles”), representing what “well governed AI should look like on a cross cutting basis”. These principles are: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The principles are designed to provide a shared understanding of the overarching goals of the regulatory approach, to ensure coherence and consistency in guidance across the regulatory system. The challenge for regulators, however, is understanding how these abstract principles can and should be operationalised in specific use cases and sectoral contexts; this is where standards can help.
The UK government has promoted the use of consensus-based standards, developed by standards development organisations (SDOs) such as ISO/IEC, IEEE, and CEN/CENELEC, to support the development and delivery of regulatory guidance based on the principles. SDO standards offer an existing repository of guidance for developing and deploying ethical and responsible AI systems, which regulators can draw on to interpret and operationalise the principles.
However, there is currently a gap in knowledge about which standards are relevant to this task. Regulators’ awareness of the standards landscape remains low, and their ability to use standards is held back by a lack of work interpreting the relationship between standards and the principles. This research aims to fill that gap by assessing the relevance of AI standards to the principles, helping regulators understand where standards can play a role in developing and delivering effective regulatory guidance.
Mapping AI standards to the UK’s cross-sector principles
To lay the foundation for the standards mapping, the research team are carrying out an initial analysis of the principles: first exploring how they are defined in the white paper and, based on these definitions, how the principles relate to one another, i.e. where one principle might enable or conflict with another. For example, transparency requirements are an important enabler of accountability and governance but may conflict with security requirements. These considerations are important for the development of consistent and coherent guidance.
The team are also exploring how the white paper’s definitions of the principles interact with existing definitions and interpretations under current regulatory guidance and legislation. For example, the principle of fairness is subject to several distinct legislative definitions in the UK, two prominent examples being the Equality Act 2010 and the Data Protection Act 2018. Similarly, safety is defined in myriad contrasting ways by regulators operating in contexts with distinct risk profiles.
Building on this foundation, the team have devised a methodology to assess the relevance of standards for operationalising each of the principles. The team are categorising the relationships of relevant standards – selected through an initial keyword screening – to the principles, across their descriptive and prescriptive content. The categories capture the type of relationship between a standard and a principle (whether the standard is directly or indirectly relevant, and whether it impacts the principle positively or negatively); the extent to which a standard covers the requirements of the principle (fully or partially); and the nature of the directives a standard provides (recommendations or requirements).
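To make the categorisation scheme concrete, the taxonomy described above can be sketched as a simple data model. This is purely illustrative: the class and field names, and the example entry mapping ISO/IEC 42001 to a principle, are assumptions for the sketch, not part of the project's actual methodology or findings.

```python
from dataclasses import dataclass
from enum import Enum

class RelationType(Enum):
    DIRECT = "direct"        # standard addresses the principle explicitly
    INDIRECT = "indirect"    # standard bears on the principle incidentally

class Impact(Enum):
    POSITIVE = "positive"    # standard supports the principle
    NEGATIVE = "negative"    # standard may conflict with the principle

class Coverage(Enum):
    FULL = "full"            # covers the principle's requirements fully
    PARTIAL = "partial"      # covers them only in part

class DirectiveNature(Enum):
    RECOMMENDATION = "recommendation"  # "should"-style guidance
    REQUIREMENT = "requirement"        # "shall"-style obligations

@dataclass
class StandardPrincipleMapping:
    """One categorised relationship between a standard and a principle."""
    standard_id: str
    principle: str
    relation: RelationType
    impact: Impact
    coverage: Coverage
    directive: DirectiveNature

# Hypothetical example entry (the category values are invented for illustration):
entry = StandardPrincipleMapping(
    standard_id="ISO/IEC 42001",
    principle="accountability and governance",
    relation=RelationType.DIRECT,
    impact=Impact.POSITIVE,
    coverage=Coverage.PARTIAL,
    directive=DirectiveNature.REQUIREMENT,
)
```

Structuring each standard–principle pair this way is one plausible design for the mapping exercise: every assessed pair becomes a record that can be filtered or aggregated to reveal trends and gaps across the landscape.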
The standards mapping and categorisation are supplemented by case studies, which offer an in-depth analysis of how certain significant standards relate to the principles and how they could be used in practice. Drawing on insights from the mapping and the case studies, the report discusses trends and gaps in the standardisation landscape relevant to the development of regulatory guidance. Finally, to help stakeholders use AI standards in the development and delivery of AI regulation in the UK, the report will conclude with a set of recommendations.
Next steps
The team are currently hard at work on the research and will present interim findings at a webinar for Bridge AI on 29 February. In the meantime, sign up for our newsletter to receive project updates and to access the report when it’s published.