Deepfake face-swapping techniques have recently become widespread, making it easy to create realistic fake videos. Determining the authenticity of a video is therefore increasingly important, given the destructive impact such videos can have. In this work, we applied several detection techniques, including YOLO-CRNN and LSTM-based approaches, and compared them. In the pipeline evaluated here, EfficientNet-B5 extracts the spatial features of the detected faces, which are then fed as a batch of input sequences into a bidirectional long short-term memory network (BiLSTM) to extract temporal characteristics. The scheme is evaluated on a large new dataset, CelebDF-FaceForensics++ (c23), built by merging two well-known datasets: FaceForensics++ (c23) and Celeb-DF. On the merged dataset, it achieves an Area Under the Receiver Operating Characteristic (AUROC) curve of 89.35%, 89.38% accuracy, 83.13% recall, 85.54% precision, and an 84.23% F1-measure.
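The described pipeline (per-frame spatial features from a CNN, followed by a BiLSTM over the frame sequence) can be sketched in PyTorch as below. This is a minimal illustration, not the paper's implementation: a tiny stand-in CNN replaces EfficientNet-B5 so the example stays self-contained, and the class and layer names (`SpatioTemporalDetector`, `feat_dim`, `hidden_dim`) are hypothetical.

```python
import torch
import torch.nn as nn

class SpatioTemporalDetector(nn.Module):
    """Sketch of a CNN + BiLSTM deepfake detector: a backbone extracts
    per-frame spatial features; a bidirectional LSTM models temporal
    dependencies across the face-frame sequence."""
    def __init__(self, feat_dim=64, hidden_dim=32):
        super().__init__()
        # Stand-in for EfficientNet-B5; in practice its pooled features
        # would be used here instead of this toy convolutional stack.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.bilstm = nn.LSTM(feat_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)  # real/fake logit

    def forward(self, clips):
        # clips: (batch, frames, 3, H, W) -- sequences of cropped faces
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))  # (b*t, feat_dim)
        feats = feats.view(b, t, -1)                # (b, t, feat_dim)
        seq, _ = self.bilstm(feats)                 # (b, t, 2*hidden_dim)
        return self.head(seq[:, -1])                # (b, 1)

model = SpatioTemporalDetector()
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
print(logits.shape)  # torch.Size([2, 1])
```

In this sketch the classification head reads only the final BiLSTM time step; pooling over all time steps is an equally common design choice.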