Title
Multi-modal classification of Twitter data during disasters for humanitarian response
Abstract
In recent years, humanitarian organizations have come to rely on social media platforms such as Twitter for situational awareness during disasters. Millions of tweets are posted in the form of text, images, or both, and existing work has shown that image and text content provide complementary information during a disaster. Detecting informative multi-modal tweets is valuable to both government and non-government organizations, yet it remains a challenging task, and most existing work has focused on either text or image content alone, not both. In this paper, we propose a novel method that combines fine-tuned BERT and DenseNet models to identify informative multi-modal tweets during disasters. The fine-tuned BERT model extracts the linguistic, syntactic, and semantic features needed for a deep understanding of the informative text in a multi-modal tweet, while the fine-tuned DenseNet model extracts sophisticated features from the image. Experiments are performed on several large datasets, including Hurricane Harvey, Hurricane Irma, Hurricane Maria, the California wildfire, the Sri Lanka floods, and the Iraq–Iran earthquake. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods on different metrics. To the best of our knowledge, this is the first attempt to detect multi-modal informative tweets, in which at least one of the text or the image is informative during a disaster, using a combination of fine-tuned BERT and DenseNet models.
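The abstract does not give implementation details, so the following is only a minimal sketch of the kind of late-fusion architecture it describes: a fine-tuned BERT encoder for the tweet text, a DenseNet encoder for the attached image, and a classifier over their concatenated features. The DenseNet variant (densenet121), the feature dimensions, and the fusion head are assumptions for illustration, not the authors' exact configuration.

# Hypothetical sketch of a BERT + DenseNet late-fusion classifier for
# informative-tweet detection; architecture details are assumed, not
# taken from the paper.
import torch
import torch.nn as nn
from torchvision import models
from transformers import BertModel, BertTokenizer


class MultiModalInformativeClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Text branch: pre-trained BERT, fine-tuned end to end.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Image branch: pre-trained DenseNet with its classifier removed.
        densenet = models.densenet121(weights="IMAGENET1K_V1")
        self.image_encoder = densenet.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Fusion head over concatenated text (768-d) and image (1024-d) features.
        self.classifier = nn.Sequential(
            nn.Linear(768 + 1024, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, input_ids, attention_mask, image):
        # Pooled [CLS] representation summarizes the tweet text.
        text_feat = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output                                   # (batch, 768)
        # ReLU + global average pooling, as in DenseNet's own forward pass.
        img_feat = self.pool(torch.relu(self.image_encoder(image))).flatten(1)
        return self.classifier(torch.cat([text_feat, img_feat], dim=1))


if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    batch = tokenizer(
        ["Bridge collapsed on the highway, people trapped"],
        return_tensors="pt", padding=True, truncation=True,
    )
    model = MultiModalInformativeClassifier()
    logits = model(batch["input_ids"], batch["attention_mask"],
                   torch.randn(1, 3, 224, 224))  # dummy 224x224 RGB image
    print(logits.shape)  # torch.Size([1, 2])

Training such a model end to end (cross-entropy loss over informative/not-informative labels) fine-tunes both encoders jointly, which is consistent with the abstract's emphasis on fine-tuned BERT and DenseNet features.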
Year
2021
DOI
10.1007/s12652-020-02791-5
Venue
Journal of Ambient Intelligence and Humanized Computing
Keywords
Text, Images, DenseNet, BERT
DocType
Journal
Volume
12
Issue
11
ISSN
1868-5137
Citations
1
PageRank
0.35
References
0
Authors
3
Name                      Order  Citations  PageRank
Sreenivasulu Madichetty   1      5          1.45
M. Sridevi                2      16         5.07
P. Jayadev                3      1          0.35