Title
MFF-Net: Deepfake Detection Network Based on Multi-Feature Fusion
Abstract
Significant progress has been made in generating counterfeit images and videos. Forged videos produced by deepfake techniques have spread widely and caused severe societal impacts, stirring public concern about automatic deepfake detection technology. Recently, many deepfake detection methods based on forged features have been proposed. Among the popular forged features, textural features are widely used. However, most current texture-based detection methods extract textures directly from RGB images, ignoring mature spectral analysis methods. Therefore, this research proposes MFF-Net, a deepfake detection network that fuses RGB features with textural information extracted by neural networks and signal processing methods. Specifically, it consists of four key components: (1) a feature extraction module that further extracts textural and frequency information using Gabor convolution and residual attention blocks; (2) a texture enhancement module that zooms into subtle textural features in shallow layers; (3) an attention module that forces the classifier to focus on the forged parts; (4) two instances of feature fusion, which first fuse the textural features from the shallow RGB branch and the feature extraction module, and then fuse these textural features with semantic information. Moreover, we introduce a new diversity loss that forces the feature extraction module to learn features of different scales and directions. The experimental results show that MFF-Net generalizes well and achieves state-of-the-art performance on various deepfake datasets.
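The abstract names two technical ingredients of the feature extraction module: Gabor convolution for textural/frequency cues and a diversity loss over the learned responses. The sketch below is a minimal PyTorch illustration, not the authors' implementation: the module name GaborConv2d, the filter-bank parameters, and the cosine-similarity form of the diversity penalty are assumptions used only to show how a fixed bank of multi-orientation, multi-scale Gabor kernels can be applied as a convolution and how a diversity term can push the resulting response maps apart.

```python
# Minimal sketch (assumed PyTorch implementation); names and parameters are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Build a single real-valued Gabor kernel of shape (ksize, ksize)."""
    half = ksize // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_rot = xs * math.cos(theta) + ys * math.sin(theta)
    y_rot = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_rot ** 2 + (gamma * y_rot) ** 2) / (2 * sigma ** 2))
    carrier = torch.cos(2 * math.pi * x_rot / lambd + psi)
    return envelope * carrier


class GaborConv2d(nn.Module):
    """Convolution whose kernels form a fixed bank of Gabor filters
    spanning several orientations and scales (texture/frequency cues)."""

    def __init__(self, in_channels, n_orientations=4, n_scales=2, ksize=7):
        super().__init__()
        kernels = []
        for s in range(n_scales):
            sigma, lambd = 2.0 + s, 4.0 + 2 * s  # illustrative scale schedule
            for o in range(n_orientations):
                theta = o * math.pi / n_orientations
                kernels.append(gabor_kernel(ksize, sigma, theta, lambd))
        bank = torch.stack(kernels).unsqueeze(1)          # (F, 1, k, k)
        # Tile the bank so every filter is applied to every input channel
        # via a grouped (depthwise-style) convolution.
        self.register_buffer("weight", bank.repeat(in_channels, 1, 1, 1))
        self.in_channels = in_channels
        self.ksize = ksize

    def forward(self, x):
        return F.conv2d(x, self.weight, padding=self.ksize // 2,
                        groups=self.in_channels)


def diversity_loss(feature_maps, eps=1e-8):
    """Penalize pairwise cosine similarity between per-filter response maps,
    encouraging filters of different scales and directions (illustrative form)."""
    b, c, _, _ = feature_maps.shape
    flat = feature_maps.view(b, c, -1)
    flat = flat / (flat.norm(dim=-1, keepdim=True) + eps)
    sim = torch.bmm(flat, flat.transpose(1, 2))           # (B, C, C) similarities
    off_diag = sim - torch.eye(c, device=sim.device)
    return off_diag.abs().mean()


if __name__ == "__main__":
    x = torch.randn(2, 3, 64, 64)                         # dummy RGB face crops
    gabor = GaborConv2d(in_channels=3)
    feats = gabor(x)
    print(feats.shape, diversity_loss(feats).item())
```

In this sketch the Gabor kernels are fixed buffers rather than learned weights; in MFF-Net the diversity loss is described as acting on the feature extraction module, so the exact point where the penalty is applied is an assumption here.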
Year: 2021
DOI: 10.3390/e23121692
Venue: ENTROPY
Keywords: deepfake, feature fusion, attention, generative adversarial network
DocType: Journal
Volume: 23
Issue: 12
ISSN: 1099-4300
Citations: 0
PageRank: 0.34
References: 0
Authors: 4

Name             Order  Citations  PageRank
Lei Zhao         1      0          0.34
Mingcheng Zhang  2      0          0.34
H. Ding          3      0          1.69
Xiaohui Cui      4      0          0.34