Title
A social emotion classification approach using multi-model fusion
Abstract
With the proliferation of online video publishing, the amount of multimodal content on the Internet has grown exponentially. Emotion analysis research has accordingly developed from traditional single-mode analysis to complex multimodal analysis. However, even among recent studies that consider multimodality, most have paid little attention to visual emotion information or to merging visual and audio emotional cues at the feature or decision level. In this paper, we extract visual, textual, and audio information from video and propose a multimodal emotion classification framework to capture the emotions of users in social networks. We design a 3DCLS (3D Convolutional-Long Short Term Memory) hybrid model that classifies visual emotions, as well as a CNN-RNN hybrid model that classifies text-based emotions. Finally, the visual, audio, and text modalities are combined to produce the final emotion classification. Experiments on the MOUD and IEMOCAP emotion datasets show that the proposed framework outperforms existing models in multimodal emotion analysis.
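The abstract's final fusion step combines the per-modality classifiers' outputs. As a minimal sketch of one common decision-level scheme (weighted averaging of class probabilities; the paper's exact fusion rule, class set, and weights are not specified here, so all names and values below are illustrative assumptions):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = np.asarray(logits, dtype=float)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def decision_level_fusion(modality_logits, weights=None):
    """Fuse per-modality emotion logits by a weighted average of
    their class probabilities; returns (fused_probs, predicted_class).
    Equal weights by default; weights are normalized to sum to 1."""
    probs = [softmax(l) for l in modality_logits]
    if weights is None:
        weights = np.ones(len(probs))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * p for wi, p in zip(w, probs))
    return fused, int(np.argmax(fused))

# Hypothetical logits from visual (3DCLS), text (CNN-RNN), and audio models
# over three emotion classes:
visual = [2.0, 0.5, 0.1]
text = [1.5, 1.0, 0.2]
audio = [0.3, 2.2, 0.1]
fused_probs, label = decision_level_fusion([visual, text, audio])
```

Feature-level fusion, the other option the abstract mentions, would instead concatenate the modalities' intermediate representations before a shared classifier.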
Year: 2020
DOI: 10.1016/j.future.2019.07.007
Venue: Future Generation Computer Systems
Keywords: Multimodal fusion, Emotion analysis, 3D convolutional neural network, Recurrent neural network
Field: Mood, Multimodality, Social network, Computer science, Emotion classification, Real-time computing, Natural language processing, Artificial intelligence, Online video, Short-term memory, Merge (version control), The Internet
DocType: Journal
Volume: 102
ISSN: 0167-739X
Citations: 2
PageRank: 0.49
References: 0
Authors: 3
Name | Order | Citations | PageRank
Guangxia Xu | 1 | 42 | 9.46
Weifeng Li | 2 | 2 | 0.49
Jun Liu | 3 | 235 | 68.22