Title
Developing crossmodal expression recognition based on a deep neural model.
Abstract
A robot capable of understanding emotion expressions can improve its own problem-solving by using these expressions as part of its decision-making, in a similar way to humans. Evidence shows that the perception of human interaction starts with an innate perception mechanism, in which interactions between different entities are perceived and categorized into two clear directions: positive or negative. As a person develops during childhood, this perception evolves and is shaped by the observation of human interaction, creating the capability to learn different categories of expressions. In the context of human-robot interaction, we propose a model that simulates the innate perception of audio-visual emotion expressions with deep neural networks and learns new expressions by categorizing them into emotional clusters with a self-organizing layer. The proposed model is evaluated with three different corpora: the Surrey Audio-Visual Expressed Emotion (SAVEE) database, the visual Bi-modal Face and Body (FABO) benchmark database, and the multimodal corpus of the Emotion Recognition in the Wild (EmotiW) challenge. We use these corpora to evaluate the model's performance in recognizing emotional expressions and compare it with state-of-the-art research.
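The abstract describes a deep network front end combined with a self-organizing layer that clusters learned expressions. As a rough illustration of that idea only (not the paper's actual architecture), the sketch below pairs a small, hypothetical convolutional encoder with a plain self-organizing map (SOM) that groups feature vectors into emotion clusters; all layer sizes, names, and parameters are assumptions.

```python
# Minimal sketch (assumption, not the paper's implementation): a small
# convolutional encoder produces feature vectors for face crops, and a
# self-organizing map clusters those vectors into emotion groups.
import numpy as np
import torch
import torch.nn as nn


class TinyVisualEncoder(nn.Module):
    """Hypothetical CNN front end; layer sizes are illustrative only."""
    def __init__(self, feature_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feature_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SimpleSOM:
    """Plain NumPy self-organizing map used as the clustering layer."""
    def __init__(self, rows: int, cols: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.normal(size=(rows, cols, dim))
        self.rows, self.cols = rows, cols

    def winner(self, v: np.ndarray):
        # Best-matching unit: grid position closest to the input vector.
        d = np.linalg.norm(self.weights - v, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data: np.ndarray, epochs: int = 10,
              lr: float = 0.5, sigma: float = 1.0) -> None:
        grid = np.stack(np.meshgrid(np.arange(self.rows),
                                    np.arange(self.cols), indexing="ij"), axis=-1)
        for _ in range(epochs):
            for v in data:
                bmu = np.array(self.winner(v))
                # Gaussian neighborhood around the best-matching unit.
                dist2 = ((grid - bmu) ** 2).sum(axis=-1)
                h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
                self.weights += lr * h * (v - self.weights)


if __name__ == "__main__":
    encoder = TinyVisualEncoder()
    frames = torch.randn(32, 1, 64, 64)          # stand-in face crops
    with torch.no_grad():
        feats = encoder(frames).numpy()
    som = SimpleSOM(rows=5, cols=5, dim=feats.shape[1])
    som.train(feats, epochs=5)
    print("cluster of first sample:", som.winner(feats[0]))
```

In this sketch the SOM plays the role of the self-organizing layer: each feature vector is assigned to its best-matching unit, so neighboring units on the grid come to represent similar expressions and can be read as emotional clusters.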
Year
2016
DOI
10.1177/1059712316664017
Venue
Adaptive Behavior
Keywords
Crossmodal learning, convolutional neural network, emotion expression recognition, self-organizing maps
Field
Expressed emotion, Crossmodal, Expression (mathematics), Computer science, Convolutional neural network, Self-organizing map, Emotional expression, Artificial intelligence, Robot, Perception, Machine learning
DocType
Journal
Volume
24
Issue
5
ISSN
1059-7123
Citations
20
PageRank
0.78
References
25
Authors
2
Name                 Order  Citations  PageRank
Pablo V. A. Barros   1      119        22.02
Stefan Wermter       2      1100       151.62