Updates from our AI in recruitment and employment workshop series: Refining the scope of future work
Blog post by:
Nala

In February and March 2024, the AI Standards Hub held two workshops examining the role of standards in governing the use of AI in recruitment and employment, convening more than 40 participants from the UK and EU in total. This exploratory workshop series aimed to identify strategic priorities for the AI Standards Hub in studying and shaping the role of standards in governing AI in recruitment and employment.

The workshops sought stakeholder insights within three broad categories: (1) use cases and phases of the employment lifecycle, (2) risks, and (3) normative functions of standards. This blog provides a summary of our findings.

1. Use cases and phases of the employment lifecycle

Workshop aim

One key question we sought to address was whether future work should (1) focus on a single high-profile AI use case in the recruitment or employment context (e.g., CV screening, wage setting), (2) take a broader view and look at several key use cases within a particular phase of the employment lifecycle (e.g., recruitment, employment), or (3) attempt to consider use cases across the employment lifecycle in its entirety. We anticipated that the deciding factors for this question would include the degree of industry adoption of various use cases and the level of risk they pose.

Workshop outcome

Stakeholders recommended that the choice of whether to focus on uses in recruitment, employment, or both should be determined by factors such as the scope of existing work, what is achievable given resource and time constraints, and a further determination about the appropriate (or preferred) normative function for standards. Specific use cases of interest include CV screening and matching (within recruitment), as well as monitoring and surveillance, and algorithmic decision-making (across the lifecycle), due to high levels of both industry adoption and the potential for harm.

2. Risks

Workshop aim

We also wanted to understand whether work should focus narrowly on specific areas of risk relating to particular ethical concerns (e.g., fairness or transparency) or consider the full range of risks relating to ethical AI holistically. Here, we were primarily looking to ascertain whether the various risks these applications pose could be considered in isolation, or whether future research would need to take an integrative approach from the outset.

Workshop outcome

Stakeholders indicated that future work should take all four types of risks we initially identified – fairness, privacy, transparency, and dignity and autonomy – into account. We also found that digital exclusion should be considered as a kind of fairness risk, alongside bias and discrimination; that transparency should be understood as a foundational, cross-cutting construct; and that environmental risks may need to be considered as well. Feedback further highlighted that more work is needed to refine our conceptual approach to defining and assessing risks and harms in this space.

3. Normative functions of standards

Workshop aim

Finally, we were interested in stakeholders’ views on the different normative functions that standards can play within a broader governance ecosystem. Stakeholders considered the role of standards for (1) filling substantive gaps in law and regulation, and (2) providing technical and process guidance to support compliance with existing legal and regulatory requirements. We also inquired about the comparative merits of horizontal and vertical standards. We wanted to gauge whether stakeholders saw greater potential in standards that play one of these roles or take on one of these forms over another, and to capture some of the reasons behind their views.

Workshop outcome

Standards are likely a better fit for a gap-filling role in recruitment, where fewer laws, regulations, and institutional resources apply. In employment, by contrast, standards are more likely to be useful in facilitating compliance with existing legal and regulatory requirements – insofar as such requirements exist. Where employment law leaves normative gaps, standards are likely to take on the same gap-filling role there as well. Stakeholders were divided about which role might be more ethically appropriate or practically feasible to focus on, and for now we remain open to work that considers standards in terms of either (or both) of their typical normative functions.

Next steps

In future blogs, we will share a more detailed analysis of the potential role of standards in governing use cases in this area, along with our ideas for potential future projects.
