What does the government’s white paper consultation response mean for UK AI regulation?

Blog post by: Christopher Thomas & Alex Krook

With the capabilities of AI advancing at pace, how we regulate these technologies – ensuring they are safe, ethical and societally beneficial – is increasingly important.

This week, the government set out its latest thinking in response to the consultation on its AI regulation white paper, published last year. The response reflects significant developments in the AI space since then, including the AI Safety Summit, the establishment of the UK’s AI Safety Institute and the rapid commercialisation of foundation models.

In this piece, we consider some important developments emerging from the consultation response, and highlight the work being done by The Alan Turing Institute that is shaping the white paper approach and supporting the delivery of effective AI regulation.


What has changed for the UK’s approach to AI regulation?


New funding announced

One of the headline announcements is the commitment of over £100 million “to help realise new AI innovations and support regulators’ technical capabilities”. Several initiatives have been announced, including £80 million committed to launch nine new AI research hubs across the UK to propel innovation across sectors including healthcare and chemistry, and a £9 million commitment to a bilateral partnership with the US focusing on safe, responsible and trustworthy AI.

Of particular interest for regulators will be a £10 million package to help recipients develop research and practical tools to build their expertise and address AI risks. This funding commitment acknowledges a gap in support for the regulators being asked to deliver the context-specific approach, although further funding would be needed to address the significant resourcing needs across the regulatory system.

Previous Turing research on ‘Common regulatory capacity for AI’ (2022) identified the common resources and capacities needed to advance AI readiness across the regulatory landscape. This research informed the government’s approach to regulatory capacity for AI and was cited in its white paper on AI regulation. The work is currently being updated to create a new readiness self-assessment tool for regulators, reflecting the specific expectations of the white paper approach, as well as offering a deep dive into regulatory readiness requirements for large language models (LLMs).


Initial guidance for regulators

Alongside the consultation response, the government has published initial guidance for regulators on ‘Implementing the UK’s AI regulatory principles’, aiming to support a consistent approach to the interpretation of cross-sector principles across regulatory remits.

For now, the guidance remains high-level, highlighting key considerations and questions rather than giving prescriptive instruction to regulators looking for clearer direction. Additional support from the proposed central function will therefore be crucial for regulators in this task.

The consultation response further drives forward the regulatory agenda, explaining that the government has asked “key regulators” to publish strategic updates on their approaches to AI by 30 April 2024, including an assessment of risks and capability gaps within their remits, the actions they are taking to address these gaps, and the steps they are taking to align with white paper expectations over the next 12 months. The consultation response doesn’t set out a timeline for regulators falling outside the ‘key’ category, which may affect prioritisation in packed regulatory agendas.

Turing research can provide further support to regulators who are required to interpret the white paper’s cross-sector principles and develop regulatory strategies and guidance. We have published foundational work on the operationalisation of ethical principles and effective governance tools in the UK’s national public sector guidance, ‘Understanding AI ethics and safety’ (2019).

More recently, the publication of the ‘AI ethics and governance in practice’ workbooks (2023) is already helping stakeholders to operationalise several of the cross-sectoral principles found in the white paper.

Turing researchers have also collaborated directly with regulatory bodies to support work on AI, producing a range of high-profile outputs such as a report on ‘AI in financial services’ (2021) commissioned by the UK financial services regulator (the FCA), and ‘Explaining decisions made with AI’ (2020), the most cited guide on AI explanation to date, co-developed with the UK’s data protection regulator (the ICO).


A longer-term view towards binding rules

Significantly, in the consultation response the government notes that “AI technologies will ultimately require legislative action in every country once understanding of risk has matured”, signalling, in the long term, a likely departure from the white paper’s non-statutory and primarily voluntary approach.

For the time being, while noting that many respondents in the consultation highlighted the benefits of a statutory duty on regulators, the government response maintains that a non-statutory approach offers “critical adaptability” at this early stage. It is crucial that this pro-innovation approach is balanced with a robust and precautionary approach to mitigating present risks, beyond monitoring and understanding risks to inform future legislation.

The consultation response does, for the first time, set out some initial thinking from the government on future binding requirements for developers building the most advanced AI systems, although no commitment is made to these in the short term. This choice maintains a key distinction from the European Union’s legislative, horizontal approach to regulating AI, on which political agreement was reached in December 2023.

While the government proceeds with a non-statutory approach, standards can be useful tools for supporting the AI regulatory framework both domestically and internationally. Standards offer guidance for the development and deployment of AI systems, and can therefore help both regulators and businesses achieve the objectives defined by the UK’s AI white paper principles.

On an international level, standards will be crucial for enabling interoperability between diverging approaches to regulation. Highlighting the importance of standards, the government’s consultation response notes that “Many respondents praised the UK AI Standards Hub” and its role in supporting stakeholders to understand and engage with the complex international AI standards landscape.

The AI Standards Hub, delivered in partnership between the Turing, the British Standards Institution and the National Physical Laboratory and funded by DSIT, is dedicated to the role of international standards as AI governance tools and innovation enablers. Its ongoing research explores how the UK’s AI regulation principles can be operationalised through standards.


What’s next for AI regulation?


For organisations developing or deploying AI in the UK, the consultation response offers further detail on the regulatory approach set out in the AI regulation white paper, rather than a step change on any key issue.

New funding and new guidance for regulators are both welcome developments from an innovation perspective and, albeit only first steps, offer resources to improve coherence and certainty across the regulatory system.

UK regulators have vast experience and sector-level expertise, so building on their contribution is a positive move. Effective coordination via the UK’s proposed central function will be key to leveraging the expertise of individual regulators for success across the regulatory system.

The consultation response signals that the regulatory approach is likely to change in the years to come, as regulators and policymakers learn more about AI risks. It is vital that, while maximising the benefits of these technologies through an adaptable regulatory approach, risks are taken seriously, and mitigated and minimised in a timely manner.

The Turing will continue to enable a thriving UK AI ecosystem, supporting the government and regulators to help make this approach a success, as well as taking forward a wide range of Turing-led work on AI standards, governance and regulatory innovation.

Find out more about how the Turing is advancing governance and regulatory practices for AI.

