Content Type: Research and analysis item

Understanding bias in facial recognition technologies

Abstract

Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities.

Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring.

In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments. I then explain how the use of biased FDRTs can lead to distributional and recognitional injustices, and describe how certain complacent attitudes of innovators and users toward redressing these harms raise serious concerns about expanding future adoption.

The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies.

Key Information

Name of organisation: The Alan Turing Institute
Type of organisation: Research institution

Date published: 21 Feb 2024

Categorisation

Domain: Horizontal
Type: Report
