What role do standards play in the EU AI Act? Looking at the implications of the European Parliament’s proposed amendments

Blog post by

Arcangelo Leone de Castris and Chris Thomas

Researchers, The Alan Turing Institute

On 14 June 2023, the European Parliament adopted its negotiating position on the EU AI Act (AIA), proposing several potentially significant amendments to the draft text, which now enters the final stage of negotiations – also known as “trilogue” – taking place between the European Commission, the Council of the EU, and the European Parliament. One significant development is the increased prominence of standards and discussion of the standards development process throughout the draft text. 

In this blog – the first in a short series considering the role of standards in the EU AIA – we briefly discuss the role of standards in the AIA and the work of the European Standards Organisations (ESOs). We then provide an overview of the significance of the Parliament’s amendments for the role that standards will play in supporting the EU AIA.

The role of standards in the EU AIA and their development to date 

In the European Union, standards are referred to as ‘harmonised standards’ when they are adopted by one of the recognised ESOs (CEN, CENELEC, or ETSI) following a formal request from the European Commission. Harmonised standards must be accepted by the Commission and published in the Official Journal of the European Union. Harmonised standards typically support the implementation of EU regulations and are used by organisations to demonstrate that products, services, or processes comply with relevant EU law.

On 5 December 2022, the European Commission issued a draft standardisation request in support of the AIA. The draft request sets out the preliminary views of the Commission on the standards required to support the implementation of the regulation (the next blog in this series will offer a deep dive into the draft standardisation request). Based on the draft, CEN-CENELEC is to develop a list of ten proposed European standardisation deliverables by 31 January 2025. These include standards covering risk management, transparency, and conformity assessment. These standards will play a key role in providing operational guidance for the implementation of the AIA and facilitating compliance with the regulation’s requirements for high-risk systems.

What has changed, and why?  

The consolidated text adopted by the European Parliament introduces several substantive amendments to the draft AIA. These include a new definition of artificial intelligence aligned with the definition provided by the OECD; the addition of new AI applications to the list of prohibited practices; and the introduction of new, more prescriptive requirements to conduct human rights impact assessments of high-risk systems. 

Interestingly, the new text also broadens the scope of several provisions related to the role that standards will play in supporting the implementation of the AIA. To begin with, references to standards are more prominent in the new text. A simple keyword search shows that the European Parliament’s version contains 27 more mentions of the term ‘standards’ than the original text – including recitals, articles and annexes. Largely overlooked by most commentators, this development is mainly due to a combination of the rapid pace at which AI technologies have evolved since the European Commission’s initial proposal in 2021 and the European Parliament’s vision of how to improve the legitimacy of standards development processes.

More specifically, the Parliament has introduced new language on standards primarily around two broad issues: (1) standards for foundation models and (2) multistakeholder participation in standards development.  

Standards for foundation models  

The new AIA draft defines foundation models as AI models “trained on broad data at scale, […] designed for generality of output, and can be adapted to a wide range of distinctive tasks.”  As such, foundation models typically constitute the basis for a large variety of downstream applications, raising challenging questions about the accountability and liability of providers throughout the value chain. 

Neither the original AIA provisions nor the more recent draft standardisation request were designed to account for the complexity and uncertainty of outcomes that are characteristic of foundation models. As stated by the new Recital 60(g), due to their unique characteristics “foundation models should be subject to more specific requirements and obligations, and that such requirements and obligations should be accompanied by ad hoc standards.” Accordingly, specific requirements for foundation models are introduced by the new Article 28(b) and include, among other things, risk mitigation measures, data governance measures, as well as various requirements for the design and development stages tackling issues of safety, security, accuracy, accountability, energy efficiency, waste reduction, etc.  

Read in combination with the new Art. 40(1a), which requires the European Commission’s standardisation request to cover all requirements of the Regulation, the introduction of the new requirements for foundation models broadens the range and scope of the harmonised standards the ESOs will have to develop to support the implementation of the AIA.  

Multistakeholder participation  

The European Parliament has also introduced new provisions on the participation of multiple stakeholder groups in standards development processes and on the adequate degree of transparency that shall inform such processes. The main objectives of these measures are to “ensure the effectiveness of standards as policy tools for the Union” and to achieve “a balanced representation of interests by involving all relevant stakeholders in the development of standards” (Recital 61).  

Following this reasoning, Article 40 specifies that, while drafting the standardisation request, the European Commission shall consult with the AI Office and in particular with the Office’s Advisory Forum – a body responsible for providing the AI Office with inputs from different stakeholder groups. The Article provides guiding principles for the actors involved in standardisation, to ensure balanced representation of interests, effective participation, and global cooperation. The Article further specifies that these actors should consider the six ethical principles introduced by the European Parliament in Article 4(a), applicable to all AI systems. The proposed principles would be incorporated into the European Commission’s standardisation request and used as outcome-based objectives by the ESOs in developing standards for AI.


The European Parliament has introduced amendments to the draft AIA that respond to the challenges posed to standardisation by the emergence of AI foundation models and to concerns around stakeholder participation in European standards development processes. The proposed amendments broaden both the substantive scope of proposed European standards development initiatives and the procedural requirements for standards development. They represent a step forward in developing effective standards to support the implementation of the AIA. However, the ultimate outcome of these proposed amendments will be decided in the coming months, through the trilogue negotiations.
