Title
Towards an intelligent framework for multimodal affective data analysis.
Abstract
An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. To cope with this growth of multimodal data, there is an urgent need for an intelligent multimodal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent that infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human–computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach, exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments on the eNTERFACE dataset, the proposed multimodal system achieves an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or, in relative terms, a 56% reduction in error rate.
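As a rough illustration of the tri-modal feature-level fusion described in the abstract, the sketch below concatenates pre-extracted text, audio and video feature vectors into a single joint vector and trains a generic classifier on it. This is not the authors' implementation: the feature dimensions, the random toy data and the SVM classifier are illustrative assumptions only.

    # Illustrative sketch of tri-modal feature-level fusion (not the paper's code).
    # Assumes pre-extracted per-utterance feature vectors for text, audio and video.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def fuse_features(text_feats, audio_feats, video_feats):
        # Concatenate the three modality-specific vectors into one joint vector.
        return np.concatenate([text_feats, audio_feats, video_feats], axis=-1)

    # Toy data: 200 utterances, hypothetical feature dimensions per modality.
    rng = np.random.default_rng(0)
    text = rng.normal(size=(200, 100))     # e.g. textual concept/sentiment features
    audio = rng.normal(size=(200, 50))     # e.g. prosodic/spectral features
    video = rng.normal(size=(200, 60))     # e.g. facial-expression features
    labels = rng.integers(0, 6, size=200)  # six emotion classes, as in eNTERFACE

    X = fuse_features(text, audio, video)
    clf = SVC(kernel="rbf")
    print(cross_val_score(clf, X, labels, cv=5).mean())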
Year
2015
DOI
10.1016/j.neunet.2014.10.005
Venue
Neural Networks
Keywords
Multimodal, Multimodal sentiment analysis, Facial expressions, Speech, Text, Emotion analysis, Affective computing
Field
Intelligent agent, Social media, Computer science, Word error rate, Feature extraction, Information extraction, Facial expression, Artificial intelligence, Affective computing, Affect (psychology), Machine learning
DocType
Journal
Volume
63
Issue
1
ISSN
0893-6080
Citations
51
PageRank
1.36
References
65
Authors
4
Name             Order  Citations  PageRank
Soujanya Poria   1      1336       60.98
Erik Cambria     2      3873       183.70
Amir Hussain     3      705        29.16
Guang-Bin Huang  4      11303      470.52