Deepfake detection by human crowds, machines, and machine-informed crowds
Manipulations designed to disrupt visual processing of faces hinder human participants' performance while leaving the model's performance largely unaffected, suggesting that specialized cognitive capacities help explain human deepfake detection performance.
Abstract
Significance
The recent emergence of deepfake videos raises theoretical and practical questions. Are humans or the leading machine learning model more capable of detecting algorithmic visual manipulations of videos? How should content moderation systems be designed to detect and flag video-based misinformation? We present data showing that ordinary humans perform in the range of the leading machine learning model on a large set of minimal-context videos. While we find that a system integrating human and model predictions is more accurate than either humans or the model alone, we show that inaccurate model predictions often lead humans to incorrectly update their responses. Finally, we demonstrate that specialized face processing and the ability to consider context may specially equip humans for deepfake detection.