
Multimodal Deepfake Detection

2024 · 1 citation
Prof. Sneha G, Prof. Divya S, Lavanya R
International Journal for Research in Applied Science and Engineering Technology

Abstract

Deepfake technology has become a significant threat to the integrity of multimedia content, posing challenges to cybersecurity, media forensics, and information authenticity. To address this, our research introduces a Multimodal Deepfake Detection system that identifies manipulated content by combining visual and auditory cues. The model employs convolutional neural networks (CNNs) to analyse video frames and audio spectrograms, providing a comprehensive approach to detecting deepfake content. Experimental results demonstrate the system's effectiveness in accurately identifying both visual and auditory indicators of deepfake manipulation. This solution shows promise in combating the spread of deepfake content on digital platforms, thereby preserving the integrity and trustworthiness of multimedia content in the digital era.
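The pipeline the abstract describes — features from video frames, features from audio spectrograms, fused into one decision — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the two "branches" return simple summary statistics where trained CNNs would sit, the spectrogram is a plain windowed FFT magnitude, and the fusion weights `w` are placeholders for a learned classifier head.

```python
import numpy as np

def frame_features(frames):
    """Visual branch stand-in: a real system would run a CNN over each
    video frame; here we return simple intensity statistics."""
    frames = np.asarray(frames, dtype=np.float64)  # shape (T, H, W)
    return np.array([frames.mean(), frames.std()])

def audio_features(signal, n_fft=64):
    """Audio branch stand-in: build a magnitude spectrogram from
    non-overlapping windows, then summarise it. A real system would
    feed the spectrogram image to a CNN."""
    signal = np.asarray(signal, dtype=np.float64)
    n_windows = len(signal) // n_fft
    windows = signal[: n_windows * n_fft].reshape(n_windows, n_fft)
    spec = np.abs(np.fft.rfft(windows, axis=1))  # (n_windows, n_fft//2 + 1)
    return np.array([spec.mean(), spec.std()])

def fused_score(frames, signal, w=None):
    """Late fusion: concatenate both embeddings and apply a linear head
    with a sigmoid, yielding a pseudo 'fake' probability in (0, 1).
    `w` is a placeholder; training would supply real weights."""
    feats = np.concatenate([frame_features(frames), audio_features(signal)])
    if w is None:
        w = np.ones_like(feats) / feats.size
    return 1.0 / (1.0 + np.exp(-feats @ w))
```

Late fusion (combining per-modality embeddings before a single classifier) is one common design for such systems; the paper's exact fusion strategy is not specified in the abstract.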