
Deepfakes and the promise of algorithmic detectability

2024 · 3 citations
Benjamin N. Jacobsen
European Journal of Cultural Studies


Abstract

Deepfakes, as a sociocultural and technical phenomenon, have engendered two distinct yet intimately interwoven sets of responses. On the one hand, they have created widespread anxieties concerning their potentially harmful impact. On the other hand, they have given rise to a new regime of detection: tools, models, and methods developed and used to determine whether something is a deepfake or not. However, the ways in which machine learning algorithms are framed as the solution to the problem of deepfake detection have not received sufficient critical attention. Drawing on the 2019 Deepfake Detection Challenge organised by Meta, and finding resonances in the work of Eyal Weizman, this article seeks to problematise and unsettle what I call the promise of algorithmic detectability: the claim that machine learning algorithms render the issue of deepfake detection knowable, tractable, and resolvable. Examining the themes of training data, thresholds, and certainty, I emphasise the inherent difficulties, intractabilities, and contingencies of deepfake detection models. Ultimately, I argue that the promise of algorithmic detectability falls short and that the ethico-politics of deepfakes cannot be reduced solely to a framework of detection algorithms.