
The UK's National AI Strategy

The following information presents the UK government's position on AI as set out in the UK's National AI Strategy, and highlights how the AI Standards Hub will contribute to the delivery of the UK's vision and outcomes for AI.

1. Overview of the National AI Strategy

1.1 What is the National AI Strategy and why is it relevant?

Last year the government published its National AI Strategy, setting the direction of travel for how the UK will retain its position as a leading AI nation over the next ten years. The National AI Strategy builds on the UK's strengths but also represents the start of a step-change for AI in the UK, recognising the power of AI to increase resilience, productivity, growth, and innovation across the private and public sectors.

The UK is a global superpower in AI and is well placed to lead the world over the next decade as a genuine research and innovation powerhouse, a hive of global talent and a progressive regulatory and business environment. Given the rapid growth of AI and its potential to rewrite the rules of many industries and areas of life, this strategy is necessary and timely for three reasons:

    1. To remain a top tier AI nation, the UK needs to invest and secure access to the skilled people, data, computing resources and private capital necessary to drive research, development and commercialisation of AI.
    2. To secure our strategic economic advantage in AI, we must ensure that, as AI comes to underwrite ever more economic activity, the UK has a strong domestic AI supply chain and that businesses and individuals are supported to adapt to this new opportunity and competitive pressure.
    3. To remain a leader as the move towards governance and regulation grows internationally, the UK must put in place a pro-innovation governance regime, guided by the principles in the Plan for Digital Regulation, that protects our fundamental values at home and projects them around the world.

1.2 Three Pillars of the National AI Strategy

The AI Strategy sets out the UK's strategic intent to guide action over the next ten years across three pillars:

    1. Investing in and planning for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower;
    2. Supporting the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring AI benefits all sectors and regions;
    3. Ensuring the UK gets the national and international governance of AI technologies right to encourage innovation, investment, and protect the public and our fundamental values.

The main policy actions under each pillar are summarised in this table.

1.3 Delivering the National AI Strategy

Whilst the National AI Strategy offers a ten-year vision, how we deliver it will adapt to changing circumstances to ensure the most effective delivery in the wider context of the fast-moving development of AI technologies.

As we implement the National AI Strategy, we work closely with other government departments to ensure the strategy brings together the government's ambitions for AI into a single, coherent narrative and action plan.

The National AI Strategy supports and amplifies other interconnected work of government. To find out more about other relevant policy initiatives, please visit the policy database.

The AI Action Plan

In July 2022, the UK government published the first AI Action Plan to show how it is delivering against the National AI Strategy vision. The Action Plan provides an overview of progress since we published the National AI Strategy, in the context of a rapidly evolving AI ecosystem within a similarly evolving global context. This Action Plan will be updated each year to show how government is:


    • delivering against its vision and strategic goals to strengthen the UK's position as an AI leader;
    • building the evidence base to better monitor and assess progress;
    • making sure the UK's approach is future-proofed and that we are responding effectively to the latest and most impactful AI developments.

2. The AI Standards Hub in the National AI Strategy

2.1 Why do we need an AI Standards Hub in the UK?

The National AI Strategy sets out the government's vision for the role of global digital technical standards in supporting AI technologies – actively promoting innovation, while ensuring AI products and services perform safely, efficiently, and consistently.

Global technical standards set out best practices such as guidelines, business requirements and specifications which can be consistently applied to ensure products, processes and services perform as intended.

The UK is taking a global approach to shaping technical standards for AI trustworthiness, seeking to embed accuracy, reliability, security, and other facets of trust in AI technologies from the outset. Thanks to the UK's strength in AI research, innovation, and governance, we are in a position to make a unique and impactful contribution to the development of global AI standards, and grow UK thought leadership. Accordingly, the AI Standards Hub will aim to increase the UK's contribution to the development of global AI technical standards, and grow international collaboration with like-minded partners to ensure that global AI standards are shaped by a wide range of experts, aligned with shared values.

Given the impact that standards will have in shaping technology markets around the world, this work is critical to the pursuit of the UK's objectives of ensuring that AI is ethical and trustworthy, promoting innovation, and expanding the industrial strength of the UK's AI sector.

2.2 Delivering the AI Toolkit through the Hub

The National AI Strategy also highlighted the government's intention to explore the development of an AI technical standards engagement toolkit to support the AI ecosystem to engage in the global AI standardisation landscape. The toolkit is being delivered through the AI Standards Hub.

The emerging standards landscape is rapidly evolving and multi-faceted. Such a landscape can be difficult to navigate and, while challenges in traversing this terrain apply to all stakeholders, they are particularly pronounced for certain groups such as SMEs and civil society organisations.

The AI Standards Hub aims to grow the UK AI ecosystem's contribution to global AI standardisation by providing a platform for multiple stakeholders to connect, to inform themselves about using and shaping AI standards, and to contribute to leading-edge developments in the field. In particular, it will create practical tools for businesses, bring the UK's AI community together and develop educational materials to help organisations develop and benefit from global standards.

The AI Standards Hub aims to benefit a wide range of people and organisations, including standards users, regulators, consumers, SMEs and think tanks, who are interested in understanding and/or shaping global AI standards, and to expand the UK's contribution and thought leadership in this field.

To find out more about the resources available to stakeholders to grow their technical standards engagement, visit the training section of the website.

3. The role of technical standards in the effective governance of AI

3.1 The UK’s emerging pro-innovation approach to AI governance

In the National AI Strategy, the government committed to develop a pro-innovation national position on governing and regulating AI. In July 2022, the UK government published the AI regulation policy paper, which sets out the government's emerging approach to regulating AI in the UK.

Establishing clear, innovation-friendly and flexible approaches to regulating AI will be core to achieving our ambition to unleash growth and innovation while safeguarding our fundamental values and keeping people safe and secure. Accordingly, the AI regulation policy paper proposes to establish a pro-innovation framework for regulating AI which is:

    • Context-specific. We propose to regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context, and to delegate responsibility for designing and implementing proportionate regulatory responses to regulators. This will ensure that our approach is targeted and supports innovation.
    • Pro-innovation and risk-based. We propose to focus on addressing issues where there is clear evidence of real risk or missed opportunities. We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI. We want to encourage innovation and avoid placing unnecessary barriers in its way.
    • Coherent. We propose to establish a set of cross-sectoral principles tailored to the distinct characteristics of AI, with regulators asked to interpret, prioritise and implement these principles within their sectors and domains. In order to achieve coherence and support innovation by making the framework as easy as possible to navigate, we will look for ways to support and encourage regulatory coordination – for example, by working closely with the Digital Regulation Cooperation Forum (DRCF) and other regulators and stakeholders.
    • Proportionate and adaptable. We propose to set out the cross-sectoral principles on a non-statutory basis in the first instance so our approach remains adaptable – although we will keep this under review. We will ask that regulators consider lighter touch options, such as guidance or voluntary measures, in the first instance. As far as possible, we will also seek to work with existing processes rather than create new ones.

Over the coming months, we will be considering how best to implement and refine our approach to drive innovation, boost consumer and investor confidence and support the development and adoption of new AI systems. This will include considering the role of technical standards and assurance mechanisms as potential tools for implementing principles in practice, supporting industry, and enabling international trade.

The UK government welcomes stakeholders’ views on the proposed approach to regulating AI. Whilst the formal consultation period has closed, the Office for AI is keen to receive further reflections on this approach. If you would like to share your view, please email [email protected].

3.2 Technical standards in the National AI Strategy: tools supporting AI governance

The UK's National AI Strategy sets out our ambition to use standards to support the government's aim to build the most trusted and pro-innovation system for AI governance in the world. The huge potential of AI technologies demonstrates the need for tools to govern their development, ethics, and use, including through globally developed technical standards.

The integration of standards in our model for AI governance and regulation is crucial for unlocking the benefits of AI for the economy and society and will play a key role in ensuring that the principles of trustworthy AI are translated into robust technical specifications and processes that are globally recognised and interoperable.

As set out in the Plan for Digital Regulation, non-regulatory tools can complement or provide alternatives to 'traditional' regulation. This includes technical standards developed at formal Standards Development Organisations (SDOs), which benefit from global technical expertise and best practice.

AI standards and assurance

Standards play a key role in AI governance by enabling a range of AI assurance tools and services, from impact assessment to audit and certification. AI standards and assurance are critical to building justified trust in the development and use of AI systems. Standards act as an assurance enabler by making subject matter more objectively measurable, which helps to deliver mutually understood and scalable AI assurance: they allow clear communication of a system's trustworthiness, i.e. "this standard has been met" or "this standard has not been met". Without agreed technical standards or a common understanding of acceptable performance or levels of risk, a disconnect between the values and opinions of different actors can prevent assurance from building justified trust in these systems.

Maintaining governable, trustworthy AI systems across their lifecycle and avoiding unsafe operating conditions will require the use of technical standards that set out ongoing requirements for testing and monitoring. The CDEI's AI Assurance Roadmap makes a significant, early contribution to shaping the AI assurance ecosystem by setting out the roles that different groups will need to play, and the steps they will need to take, to move to a more mature ecosystem of assurance products and services for trusted and trustworthy AI.

3.3 The role of standards in the UK's pro-innovation approach to AI regulation

The UK wants an effective AI governance regime that supports scientists, researchers and entrepreneurs to innovate while ensuring consumer and citizen confidence in AI technologies. In line with the government's recently published Plan for Digital Regulation, we want a proportionate and agile approach, removing unnecessary burdens wherever we find them and offering clarity and confidence to businesses and consumers.

By adopting a pro-innovation approach to AI regulation, we want to ensure that:

    • The UK has a clear, proportionate and effective framework for regulating AI that supports innovation while addressing actual risks and harms;
    • UK regulators have the flexibility and capability to respond effectively to the challenges of AI; and
    • Organisations can confidently innovate and adopt AI technologies with the right tools and infrastructure to address AI risks and harms.

Technical standards can complement 'traditional regulation' by embedding accuracy, reliability, and security in AI technologies while benefiting our citizens, businesses, and the economy by:


    • Supporting R&D and innovation. Technical standards should provide clear definitions and processes for innovators and businesses, lowering costs and project complexity and improving product consistency and interoperability, supporting market uptake.
    • Supporting trade. Technical standards should facilitate digital trade by minimising regulatory requirements and technical barriers to trade.
    • Giving UK businesses more opportunities. Standardisation is a co-creation process that spans different roles and sectors, providing businesses with access to market knowledge, new customers, and commercial and research partnerships.
    • Delivering on safety, security and trust. The Integrated Review set out the role of technical standards in embedding transparency and accountability in the design and deployment of technologies. AI technical standards (e.g., for accuracy, explainability, and reliability) should ensure that safety, trust and security are at the heart of AI products and services.
    • Supporting conformity assessments and regulatory compliance. Technical standards should support testing and certification to ensure the quality, performance, and reliability of products before they enter the market.

4. International engagement

4.1 International engagement objectives in the National AI Strategy

In order to realise our ambition to strengthen the UK's position as a leading AI nation, we must get the national and international governance of AI technologies right. We have the opportunity to reinvigorate international cooperation amongst our research and innovation institutions, working closely with our partners to use the power of AI to tackle global challenges head on.

Internationally, the government is:

    • Increasing bilateral engagement with partners, including strengthening coordination and information sharing.
    • Bringing together conversations at standards developing organisations and multilateral fora. The British Standards Institution (BSI) and the government are members of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS), which unites global Standards Development Organisations, businesses, and research institutes.
    • Promoting the 2021 Carbis Bay G7 Leaders' Communiqué on supporting inclusive, multi-stakeholder approaches to standards development, by ensuring our UK approach to AI standards is multidisciplinary and encourages a wide set of stakeholders in standards developing organisations.

The UK is also playing a leading role in international discussions on AI ethics and potential regulation, such as work at the Council of Europe, the OECD, UNESCO, and the Global Partnership on AI. The government will continue to work with our partners around the world to shape international norms and standards relating to AI, including those developed by multilateral and multi-stakeholder bodies at a global and regional level.

4.2 UK engagement in global SDOs

The UK government, working closely with UK stakeholders, is engaging with global Standards Development Organisations (SDOs) and international partners to shape the development of global technical standards in the priority areas that matter most for upholding our democratic values, ensuring our cyber security, and advancing UK strategic interests through science and technology. Failure to grow UK engagement in global AI standards development risks standards being developed and adopted in ways which conflict with UK values. It is important that, as a world leader in AI, we are helping to shape the future of this technology. Accordingly, in the National AI Strategy, we also set out some of the work on AI standards that the UK is taking forward and contributing to in SDOs.