Title
Audio-visual keyword spotting based on adaptive decision fusion under noisy conditions for human-robot interaction
Abstract
Keyword spotting (KWS) deals with the identification of keywords in unconstrained speech, which is a natural, straightforward and friendly way to realize human-robot interaction (HRI). Most keyword spotters share a common lack of noise robustness when applied in real-world environments with dramatically changing noise. Since visual information is not affected by acoustic noise, it can be exploited as complementary information to improve noise robustness. In this paper, a novel audio-visual keyword spotting approach based on adaptive decision fusion under noisy conditions is proposed. In order to accurately represent the appearance and movement of the mouth region, an improved local binary pattern from three orthogonal planes (ILBP-TOP) is proposed. In addition, a parallel two-step recognition based on acoustic and visual keyword candidates is conducted, generating corresponding acoustic and visual scores for each keyword candidate. Optimal weights for combining the acoustic and visual contributions under diverse noise conditions are generated by a neural network based on the reliabilities of the two modalities. Experiments show that the proposed audio-visual keyword spotting based on decision fusion significantly improves noise robustness and outperforms a feature-fusion-based audio-visual spotter. Additionally, ILBP-TOP yields better performance than LBP-TOP. © 2014 IEEE.
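To make the decision-fusion step concrete, the sketch below illustrates the general idea of weighting per-candidate acoustic and visual scores with a weight predicted from modality reliabilities. It is a minimal illustration, not the paper's implementation: the names (ReliabilityNet, fuse_scores), the PyTorch network, and the reliability inputs are assumptions introduced here for clarity.

```python
# Minimal sketch of reliability-driven decision fusion for AV keyword spotting.
# All class/function names and the network architecture are illustrative
# assumptions, not the authors' actual method.
import numpy as np
import torch
import torch.nn as nn


class ReliabilityNet(nn.Module):
    """Small network mapping modality reliability measures to an acoustic weight."""

    def __init__(self, in_dim: int = 2, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # acoustic weight alpha in [0, 1]
        )

    def forward(self, reliabilities: torch.Tensor) -> torch.Tensor:
        return self.net(reliabilities)


def fuse_scores(acoustic_scores, visual_scores, reliabilities, weight_net):
    """Combine per-candidate acoustic and visual scores with an adaptive weight.

    acoustic_scores, visual_scores: sequences of length num_candidates.
    reliabilities: tensor of shape (1, 2), e.g. an SNR-based acoustic
                   reliability and a visual confidence (assumed inputs).
    """
    alpha = weight_net(reliabilities).item()  # acoustic weight
    fused = alpha * np.asarray(acoustic_scores) + (1.0 - alpha) * np.asarray(visual_scores)
    return fused, alpha


if __name__ == "__main__":
    weight_net = ReliabilityNet()
    a = [0.7, 0.2, 0.5]               # acoustic scores for three keyword candidates
    v = [0.4, 0.6, 0.3]               # visual scores for the same candidates
    rel = torch.tensor([[0.3, 0.8]])  # noisy audio (low reliability), clean video
    fused, alpha = fuse_scores(a, v, rel, weight_net)
    print("acoustic weight:", alpha, "best candidate:", int(np.argmax(fused)))
```

In this reading, the keyword candidate with the highest fused score is accepted; under heavy acoustic noise the predicted weight shifts toward the visual stream, which is the behavior the abstract attributes to the adaptive fusion.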
Year
2014
DOI
10.1109/ICRA.2014.6907840
Venue
Proceedings - IEEE International Conference on Robotics and Automation
DocType
Conference
ISSN
1050-4729
Citations
0
PageRank
0.34
References
0
Authors
3
Name          Order  Citations  PageRank
Hong Liu      1      747        82.65
Fan Ting      2      8          1.50
Wu Pingping   3      32         4.36