Title
Face Expression Recognition by Cross Modal Data Association
Abstract
We present a novel facial expression recognition framework that uses audio-visual information analysis. We propose to model the correlation between the two modalities while allowing them to be treated as asynchronous data streams. We also show that our framework can improve recognition performance while significantly reducing computational cost: by incorporating auditory information, it avoids processing redundant or insignificant frames. In particular, we construct a single representative image from an image sequence as a weighted sum of registered face images, where the weights are derived from auditory features. We use a still-image-based technique for the expression recognition task; our framework, however, can be generalized to work with dynamic features as well. We performed experiments on the eNTERFACE'05 audio-visual emotional database, which contains six archetypal emotion classes: Happy, Sad, Surprise, Fear, Anger, and Disgust. We present one-to-one binary classification as well as multi-class classification performance, evaluated with both subject-dependent and subject-independent strategies. Furthermore, we compare our multi-class classification accuracies with those of previously published work that uses the same database. Our analyses show promising results.
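The audio-weighted fusion described in the abstract can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the weighting scheme shown here (normalized per-frame audio energy) and the `fused_face_image` helper are hypothetical, and the face frames are assumed to be already registered (aligned and of equal size).

```python
import numpy as np

def audio_weights(frame_energies):
    """Hypothetical weighting: normalize per-frame audio energy so the weights sum to 1."""
    e = np.clip(np.asarray(frame_energies, dtype=float), 0.0, None)
    total = e.sum()
    if total == 0:
        # Fall back to uniform weights if the audio carries no energy.
        return np.full_like(e, 1.0 / len(e))
    return e / total

def fused_face_image(registered_faces, frame_energies):
    """Collapse an image sequence into a single representation as a weighted sum
    of registered face frames, with weights derived from auditory features."""
    faces = np.asarray(registered_faces, dtype=float)  # shape: (T, H, W)
    w = audio_weights(frame_energies)                  # shape: (T,)
    return np.tensordot(w, faces, axes=1)              # shape: (H, W)

# Usage with synthetic data: 10 registered 64x64 face frames and matching audio energies.
faces = np.random.rand(10, 64, 64)
energies = np.random.rand(10)
single_image = fused_face_image(faces, energies)
print(single_image.shape)  # (64, 64)
```

The fused image can then be fed to any still-image expression classifier, which is where the computational saving comes from: only one image per sequence is processed downstream.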
Year
2013
DOI
10.1109/TMM.2013.2266635
Venue
IEEE Transactions on Multimedia
Keywords
image classification, face recognition
Field
Binary classification, Computer science, Artificial intelligence, Surprise, Contextual image classification, Asynchronous communication, Computer vision, Facial recognition system, Three-dimensional face recognition, Pattern recognition, Speech recognition, Facial expression, Modal
DocType
Journal
Volume
15
Issue
7
ISSN
1520-9210
Citations
14
PageRank
0.52
References
28
Authors
2
Name | Order | Citations | PageRank
Ashish Tawari | 1 | 219 | 16.07
Mohan M. Trivedi | 2 | 6564 | 475.50