Updates from our AI in recruitment and employment workshop series: Stakeholders’ views on the role of standards

Blog post by: Nala

In previous blogs, we introduced the AI Standards Hub’s work on the role of standards in governing the use of AI in recruitment and employment, focusing on a workshop series organised to gather input from stakeholders about the kind of work we could conduct in future in this area.

One of our key workshop aims was to learn about stakeholders’ views on:

• Two prominent normative functions that standards can play within a broader governance ecosystem – filling substantive gaps in law and regulation, or providing technical and process guidance to support compliance with existing legal and regulatory requirements.

• The comparative merits of horizontal (cross-sectoral) and vertical (sector-specific) standards.

In particular, we wanted to gauge whether stakeholders saw greater potential in standards that play one of these roles, or that take one of these forms, over the other, and to capture some of the reasons behind their views.

This blog contains our analysis of the potential role that standards can play within a wider AI governance ecosystem for recruitment and employment, taking into account the regulatory environments in the UK and the EU as well as stakeholders’ views gathered during our exploratory workshop series.

The role of AI standards in the UK and the EU

In the context of the UK’s AI regulation white paper, AI standards are currently positioned to serve a primarily interpretive role, supporting stakeholders to interpret and apply the white paper’s five cross-cutting principles. In the EU, harmonised standards are set to take on a stronger policy function, in that they will often present the easiest way for companies to comply with requirements in the AI Act. AI standards will remain formally voluntary in both jurisdictions, but in the EU there will be a particularly strong incentive for the use of certain standards to demonstrate regulatory compliance.

Workshop participants expressed diverse opinions about the role of standards in AI policy and regulation in both jurisdictions. Many had favourable views about the potential for standards to operationalise high-level governance objectives and encourage responsible practices, particularly where precise guidance is absent from existing regulation. In anonymous surveys conducted live during Workshop 2, participants indicated that standards might be useful in “clarifying broad or unclear terms” in regulation and that, while not equivalent to statutory measures, standards might help to “build the muscle for better practice”, perhaps “as part of the maturation process” toward stronger laws and regulations.

Others, however, highlighted important challenges for the standards approach. Participants at both sessions were in broad agreement about the importance of multistakeholder involvement in AI standardisation, but some reported facing difficulties in getting involved. A trade union representative in Workshop 2, for instance, expressed concerns about the limited input that trade unions have had into AI standardisation processes in the UK.

In the EU context, some participants cautioned that the AI Act’s approach to identifying systems that are considered high-risk (and therefore subject to the Act’s Essential Requirements) could limit its enforceability, thus potentially undermining sincere and effective adoption of AI standards. A trade union researcher suggested that the GDPR can represent an important regulatory resource for AI governance in both recruitment and employment.

Identifying the appropriate function for standards in different legal and regulatory contexts

When it comes to standards that will be used to govern corporate practices – particularly in line with certain ethical or social objectives – the normative function of standards is likely to depend on the nature of the laws and regulations that govern these practices. In a robustly regulated domain, standards are more likely to be sought to provide detailed technical and process guidance in relation to the relevant regulatory requirements. In an area where fewer laws and regulations apply, the function of providing guidance in areas that are not addressed by legal and regulatory requirements (filling normative gaps) may be more prominent.

During Workshop 1, participants noted that the relationship between recruiters and job candidates is subject to fewer UK laws and regulations than the relationship between employers and employees. Presentations and discussions during Workshop 1 highlighted several areas of UK law applicable to the employment domain, including common law, equalities law, employment law, privacy law, and data protection law. During Workshop 2, a trade union stakeholder observed that health and safety law may also have implications for the use of AI in employment. Generally, employees also have better access to institutional resources like trade union representation, collective agreements, and employment tribunals. This dynamic appears to characterise relevant laws and regulations in the EU as well, although the AI Act designates AI systems used in both hiring and workplace management as high-risk.

In the light of these legal and regulatory differences, stakeholders suggested that different normative functions may be appropriate for standards used to govern the use of AI in recruitment on the one hand, and in employment on the other.

1. Filling normative gaps in governing the use of AI in recruitment

Because recruiters in both jurisdictions have fewer obligations under the law, standards applied to the recruitment context may be well placed to take on more of a gap-filling role. Stakeholders were divided as to whether they saw this as an appropriate function for standards. Some viewed this relative openness as presenting an opportunity to codify best practice for recruitment at a faster pace than might be achievable within regulation, especially since many organisations have already begun to develop best practice guidance for procurers/users of AI-enabled hiring tools.

Others expressed greater uncertainty about whether standards are an appropriate venue to engage in substantive normative work. One comment, collected during an anonymous survey we conducted live during Workshop 2, expressed the concern that a standards-forward governance approach could involve “displacing value-laden decisions with technocratic decisions”, which would be problematic “without involvement of…impacted persons” in decision-making.

2. Helping employers to comply with regulation

Given the robust infrastructure of laws and regulations that apply to employers, standards in this context may provide more targeted implementation guidance to facilitate compliance with these requirements. Pointing to widely recognised problems around enforcement of GDPR compliance, an academic stakeholder suggested that standards might provide an easier route for companies to comply with similarly complex digital regulation, while noting that mechanisms like strategic litigation would also need to play a role.

Some concerns were raised around the enforceability of the AI Act. One industry stakeholder argued in Workshop 2 that robust enforcement mechanisms would be difficult to implement in practice. A trade union researcher stressed that trade unions and advocates should act now to strengthen national laws and to negotiate collective agreements with employers.

Horizontal and domain-specific AI standards

Finally, we were interested to see whether stakeholders thought horizontal AI standards – which codify implementation guidance for cross-cutting AI governance principles – should be applied to vertical or domain-specific governance concerns, or whether there might be a need to develop targeted, domain-specific standards to codify best practice guidance for a particular lifecycle phase or use case.

Some participants were less familiar with procedural distinctions between horizontal and domain-specific standards, but in general their perceptions of the potential benefits of standards aligned more closely with the nature and function of horizontal standards, especially in the light of the AI Act’s harmonised standards. Participants expressed interest in standards that could resolve contested understandings of concepts like fairness, codify procedures for transparency and reporting, and, more generally, provide guidance for the responsible development and use of AI systems.

One workshop participant with experience in human resource management (HRM) standards development noted that a potential advantage of developing targeted standards for the use of AI in hiring or management is that they allow horizontal guidance to be translated into the specific organisational processes implemented by recruiters and employers.

Some workshop participants argued against the development of domain-specific AI standards – largely because of the degree to which such standards would need to codify ethical judgments about best practices for recruiters and employers, an activity they considered inappropriate for standards bodies to undertake.

Next steps

It is clear that AI standards are poised to play an important regulatory function in both the UK and the EU, and that stakeholders across both jurisdictions share an interest in governing the use of AI in recruitment and employment specifically. Our preliminary findings on the appropriate role for standards in shaping AI governance in this domain set the stage for future work. In our next blog, we outline potential future directions for our work.
