Title: Modality and Event Adversarial Networks for Multi-Modal Fake News Detection
Abstract: With the popularity of news on social media, fake news has become a serious concern for both the public and governments. Existing fake news detection methods focus on exploring and exploiting information from multiple modalities, e.g., text and image. However, effectively learning discriminant features that are both modality-invariant and event-invariant remains a challenge. In this paper, we propose a novel approach named Modality and Event Adversarial Networks (MEAN) for fake news detection. It consists of two parts: a multi-modal generator and a dual discriminator. The multi-modal generator extracts latent discriminant feature representations of the text and image modalities, and a decoder is adopted for each modality to reduce information loss during generation. The dual discriminator comprises a modality discriminator and an event discriminator; each learns to classify the modality or the event of the features, and network training is guided by an adversarial scheme. Experiments on two widely used datasets show that MEAN outperforms state-of-the-art multi-modal fake news detection methods.
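The generator/dual-discriminator objective the abstract describes can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: all dimensions, linear heads, and loss weights (`lam_rec`, `lam_adv`) are assumptions, and simple linear layers stand in for the real networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy over a batch."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy batch of fused text+image features from the multi-modal generator
# (dimensions are illustrative).
batch, feat_dim = 8, 16
features = rng.standard_normal((batch, feat_dim))

# Hypothetical linear heads standing in for the fake-news classifier,
# the modality discriminator, and the event discriminator.
W_cls   = rng.standard_normal((feat_dim, 2))   # fake / real
W_mod   = rng.standard_normal((feat_dim, 2))   # text / image
W_event = rng.standard_normal((feat_dim, 5))   # 5 toy events

y_cls   = rng.integers(0, 2, batch)
y_mod   = rng.integers(0, 2, batch)
y_event = rng.integers(0, 5, batch)

# Decoder reconstruction loss: penalize information lost when the latent
# features are decoded back (a noisy copy stands in for the decoder output).
reconstructed = features + 0.1 * rng.standard_normal(features.shape)
loss_rec = np.mean((reconstructed - features) ** 2)

loss_cls   = cross_entropy(features @ W_cls, y_cls)
loss_mod   = cross_entropy(features @ W_mod, y_mod)
loss_event = cross_entropy(features @ W_event, y_event)

# Adversarial scheme: the generator minimizes the detection and reconstruction
# losses while maximizing the dual discriminator's losses, pushing the learned
# features toward modality- and event-invariance (weights are assumptions).
lam_rec, lam_adv = 1.0, 0.1
loss_generator = loss_cls + lam_rec * loss_rec - lam_adv * (loss_mod + loss_event)
print(float(loss_generator))
```

In practice this min-max objective is typically realized with a gradient reversal layer or alternating updates between the generator and the two discriminators; the sign flip on the discriminator losses above captures the generator's side of that game.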
Year: 2022
DOI: 10.1109/LSP.2022.3181893
Venue: IEEE Signal Processing Letters
Keywords: Feature extraction, Fake news, Blogs, Image reconstruction, Generators, Social networking (online), Deep learning, Dual discriminator, fake news detection, multi-modal generator
DocType: Journal
Volume: 29
ISSN: 1070-9908
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Authors (in order):
1. Weidong Pan
2. Bing-Fei Wu
3. Yanfei Sun
4. H. Zhou
5. X. Y. Jing