On the Truth Claims of Deepfakes: Indexing Images and Semantic Forensics

by Rebecca Uliasz

DOI: https://doi.org/10.59547/26911566.3.1.04


Abstract:
When news media shared a video of outgoing president Donald Trump acknowledging the victory of president-elect Joe Biden, some social media users speculated, despite evidence to the contrary, that it was a deepfake: a synthetic image made with machine learning (ML) algorithms. Taking this example as a starting point, I focus on how images generate veracity through the interrelated actions of humans and ML algorithms. I argue that ML presents an opportunity to revisit the semiotic infrastructures of images, asking how photorealistic images produce truth claims in ways that exceed the purely visual. Drawing on photographic theories of the image index and diagrammatic understandings of ML, I argue that meaning, described here as what images do in the world, is a product of negotiation among multiple technological processes and social registers, spanning data sets, engineering decisions, and human biases. Focusing on Generative Adversarial Networks (GANs), I analyze sociopolitical and scientific discourses around deepfakes to understand the ways in which ML affords hegemonic ways of seeing. I conclude that ML operationalizes the evidentiary power of images, generating new thresholds of visibility to manage uncertainty. My aim is to critically challenge post-truth paranoias by analyzing how ML algorithms come to have ethicopolitical agency in visual culture, with implications for how images are made to matter in post-truth media ecologies.

Keywords: deepfake; machine learning; generative adversarial network; truth claims; indexicality; diagram; post-truth


How to cite: Uliasz, Rebecca. “On the Truth Claims of Deepfakes: Indexing Images and Semantic Forensics.” MAST, vol. 3, no. 1, April 2022, pp. 63-84.



Copyright is retained by the authors.

© 2022 Rebecca Uliasz

 

Issue: vol. 3 no. 1 (2022): Special Issue: Automating Visuality
Section: Article
Guest Editors: Dominique Routhier, Lila Lee-Morrison, and Kathrin Maurer
Published: 25 April 2022