Title: Feature Level Fusion For Bimodal Facial Action Unit Recognition

Abstract:
Recognizing facial actions from spontaneous facial displays is difficult due to subtle and complex facial deformations, frequent head movements, and partial occlusions. It is especially challenging when the facial activities are accompanied by speech. Instead of employing information solely from the visual channel, this paper presents a novel fusion framework that exploits information from both the visual and audio channels to recognize speech-related facial action units (AUs). In particular, features are first extracted from the visual and audio channels independently. The audio features are then aligned with the visual features to handle the difference in time scales and the time shift between the two signals. Finally, the aligned audio and visual features are integrated via a feature-level fusion framework and used to recognize AUs. Experimental results on a new audiovisual AU-coded dataset demonstrate that the proposed feature-level fusion framework outperforms a state-of-the-art visual-based method in recognizing speech-related AUs, especially those that are "invisible" in the visual channel during speech. The improvement is even more pronounced when the facial images are partially occluded, since occlusions do not affect the audio channel.
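The pipeline described in the abstract (extract features per channel, temporally align audio to the visual frame rate, then concatenate for feature-level fusion) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, linear interpolation for alignment, and simple concatenation are all assumptions standing in for the paper's actual feature extractors and alignment scheme.

```python
import numpy as np

def align_audio_to_visual(audio_feats, n_visual_frames):
    """Resample audio features onto the visual timeline.

    Audio descriptors (e.g. MFCCs) are typically computed at a higher
    frame rate than video; here we linearly interpolate each audio
    dimension onto the visual frame times. (Linear interpolation is an
    illustrative choice, not the paper's alignment method.)
    """
    n_audio, d = audio_feats.shape
    src_t = np.linspace(0.0, 1.0, n_audio)      # audio frame times
    dst_t = np.linspace(0.0, 1.0, n_visual_frames)  # visual frame times
    aligned = np.empty((n_visual_frames, d))
    for j in range(d):
        aligned[:, j] = np.interp(dst_t, src_t, audio_feats[:, j])
    return aligned

def fuse_features(visual_feats, audio_feats):
    """Feature-level fusion: concatenate the per-frame descriptors
    of both channels after temporal alignment."""
    aligned_audio = align_audio_to_visual(audio_feats, visual_feats.shape[0])
    return np.hstack([visual_feats, aligned_audio])

# Toy example: 30 visual frames of 10-D features,
# 100 audio frames of 13-D features (dimensions are made up).
visual = np.random.rand(30, 10)
audio = np.random.rand(100, 13)
fused = fuse_features(visual, audio)
print(fused.shape)  # (30, 23)
```

The fused per-frame vectors would then be fed to an AU classifier; that final stage is omitted here since the abstract does not specify the classifier used.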
Year: 2015
DOI: 10.1109/ISM.2015.116
Venue: 2015 IEEE International Symposium on Multimedia (ISM)
Keywords: facial action unit recognition, feature-level information fusion
Field: Computer vision, Face hallucination, Three-dimensional face recognition, Head movements, Computer science, Fusion, Communication channel, Speech recognition, Artificial intelligence
DocType: Conference
Citations: 1
PageRank: 0.35
References: 8
Authors: 4

Name           Order  Citations  PageRank
Zibo Meng      1      248        13.60
Shizhong Han   2      244        9.80
Min Chen       3      244        14.75
Yan Tong       4      409        21.36