Title
Multimodal emotional state recognition using sequence-dependent deep hierarchical features.
Abstract
Emotional state recognition has become an important topic for human-robot interaction in recent years. By recognizing emotion expressions, robots can identify important variables of human behavior, communicate in a more human-like fashion, and thereby extend the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the unstructured nature of spontaneous expressions, create difficulties for existing approaches in the literature, which rely on explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for use in an HRI scenario. Our experiments show that hierarchical features and multimodal information yield a significant improvement in recognition accuracy: on a benchmark dataset of spontaneous emotion expressions, our model raises the accuracy of state-of-the-art approaches from the 82.5% reported in the literature to 91.3%.
Year
2015
DOI
10.1016/j.neunet.2015.09.009
Venue
Neural Networks
Keywords
Emotion recognition, Deep learning, Convolutional Neural Networks, Hierarchical features, Human Robot Interaction
Field
State recognition, Expression (mathematics), Convolutional neural network, Feature extraction, Artificial intelligence, Deep learning, Robot, Mathematics, Machine learning, Human–robot interaction, Robotics
DocType
Journal
Volume
72
Issue
C
ISSN
1879-2782
Citations
14
PageRank
0.52
References
24
Authors
4
Name                  Order  Citations  PageRank
Pablo V. A. Barros    1      1192       2.02
Doreen Jirak          2      29         3.59
Cornelius Weber       3      3184       1.92
Stefan Wermter        4      21         1.60