New work on the use of AI in recruitment and employment

Blog post by: Nala Sharadjaya

The AI Standards Hub is kicking off a research and engagement workstream led by Turing researchers, which will focus on the use of AI in recruitment and employment. Bringing together UK and European stakeholders, this project aims to investigate the role that consensus-based standards can play in advancing cross-jurisdictional coherence on the responsible use of AI in recruitment and employment contexts.


Why recruitment and employment?

The use of AI systems to make employment decisions by recruiters, employers, and managers of workers has become widespread across sectors and throughout the employment life cycle—from hiring through day-to-day management and monitoring to, in some cases, firing. Below, we provide an overview of key AI use cases.

  • Job description software
  • Targeted online advertising
  • Recruiting chatbots
  • Headhunting software
  • CV screening and matching
  • Psychometric testing
  • Video interview analysis
  • Asynchronous interview tools
  • Background check software
  • Scheduling/shift allocation
  • Time management and control tools
  • Task assignment
  • Wage setting
  • On-the-job monitoring and surveillance
  • Performance review and discipline
  • Dismissal

AI systems can be used to automate a number of recruitment and employment functions, with the potential to improve efficiency and reduce the role of human error in employment decisions. At the same time, these uses pose a variety of risks. They can perpetuate structural and historical biases and make errors that are difficult to detect and trace, generating concerns about fairness and discrimination, transparency, and contestability and redress. The use of AI systems to make or inform employment decisions may also adversely impact workers’ dignity and autonomy, especially if such decisions appear to be handed down automatically—without justification or human oversight.

Critically, if an AI-powered management or monitoring tool relies on workers’ personal data, workers may face additional privacy and fairness risks related to the type of data collected, the way it is stored, and the way it is used or analysed to make employment decisions. Without safeguards, this data collection raises the potential for workplace surveillance, which can exert a “chilling effect” on the rights of workers.


The case for UK-EU collaboration

Clear best practice guidance on the responsible use of AI systems in recruitment and employment contexts, coupled with flexible and interoperable governance tools like consensus-based standards, could enable employers to integrate AI systems into their management operations while supporting the mitigation of these risks.

Stakeholders in the UK and the EU face an important, shared opportunity to align on priorities and approaches for developing this guidance, promoting responsible innovation when it comes to the use of AI in employment contexts. This topic has received substantial attention in both jurisdictions, where ongoing work reflects complementary areas of expertise and development.

Progress in the EU has included the draft Platform Work Directive and the classification of workplace AI applications as high-risk in the forthcoming EU AI Act. In the UK, no novel legislative or regulatory requirements specifically aimed at the use of AI in employment have yet been introduced, but legislators (House of Commons), regulators (EHRC), expert bodies (CDEI), professional bodies (REC), and civil society groups (IFOW, TUC) have begun to analyse needs and advance best-practice thinking in this context.

Additionally, organisations and officials in both jurisdictions have recognised the importance of standards developed by standards development organisations (SDOs) for facilitating globally interoperable AI governance. The UK government’s recent white paper, “A pro-innovation approach to AI regulation”, highlights the role that consensus-based standards can play in facilitating flexible and interoperable AI governance, while in the EU, standards are poised to play a crucial role in supporting the implementation of regulatory requirements in the forthcoming EU AI Act.


Looking ahead: Directions for future work

We are looking forward to learning from stakeholders in the UK and Europe who are already engaging with work in these areas. Our project will begin with a set of workshops aiming to surface views on the following topics, first from UK stakeholders and then from European stakeholders:

  • Key risks and benefits associated with the use of AI in recruitment and employment contexts.
  • The suitability of various approaches to addressing and managing these risks.
  • Workers’ rights and the legal and regulatory context in which uses of these tools are governed.

Through these workshops, we’ll be able to gather stakeholders’ views on the following substantive issues:

  • Similarities and differences between AI use in recruitment and employment contexts in the UK and European landscape, and current approaches to governance and risk management.
  • The suitability of horizontal versus sector-specific approaches to addressing needs in this area.

These engagements will help us to refine the direction and scope of an AI Standards Hub research project aimed at addressing the role of consensus-based standards – and the possible interplay between law, regulation, and standards – for achieving responsible and interoperable governance of AI in recruitment and employment contexts, across jurisdictional borders.
