
DeFaking Deepfakes: Understanding Journalists’ Needs for Deepfake Detection

11 Citations · 2020
Saniat Javid Sohrawardi, Akash Chintha
journal unavailable

A study of the perceptions, current procedures, and expectations of journalists regarding a system for detecting deepfake videos, combining technical knowledge of media forensics with the study's findings to design a system usable by, and useful for, journalists.

Abstract

Although the concern over deliberately inaccurate news is not new in media, the emergence of deepfakes—manipulated audio and video generated using artificial intelligence—changes the landscape of the problem. As these manipulations become more convincing, they can be used to place public figures into manufactured scenarios, effectively making it appear that anybody could say anything. Even if the public does not believe these are real, they will generally make video evidence appear less reliable as a source of validation, such that people no longer trust anything they see. This increases the pressure on trusted agents in the media to help validate video and audio for the general public. To support this, we propose to develop a robust and intuitive system to help journalists detect deepfakes. This paper presents a study of the perceptions, current procedures, and expectations of journalists regarding such a tool. We then combine technical knowledge of media forensics with the findings of the study to design a system for detection of deepfake videos that is usable by, and useful for, journalists.