Title
Multichannel ASR with Knowledge Distillation and Generalized Cross Correlation Feature.
Abstract
Multi-channel signal processing techniques have played an important role in far-field automatic speech recognition (ASR) as a separate front-end enhancement stage. However, they often suffer from a mismatch between the enhancement front-end and the acoustic model. In this paper, we propose a novel acoustic model architecture that uses the multi-channel speech directly, without any preprocessing. In addition, knowledge distillation and generalized cross correlation (GCC) adaptation are employed. Knowledge distillation is used to transfer knowledge from a well-trained close-talking model to the distant-talking scenario at every frame of the multi-channel distant speech. Moreover, the GCC between microphones, which carries spatial information, is supplied as an auxiliary input to the neural network. We observe that the two techniques complement each other well. Evaluated on the AMI and ICSI meeting corpora, the proposed methods achieve relative WER improvements of 7.7% and 7.5%, respectively, over a model trained directly on the concatenated multi-channel speech.
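The abstract mentions two ingredients: per-frame knowledge distillation from a close-talking teacher, and GCC features between microphone pairs as an auxiliary network input. The sketch below illustrates both in Python under stated assumptions; the function names, FFT size, number of retained lags, temperature, and the interpolation weight `alpha` are illustrative choices, not the authors' exact configuration, and the GCC variant shown uses the common PHAT weighting.

```python
# Minimal sketch (assumptions noted above): GCC-PHAT between two channels as an
# auxiliary spatial feature, and a frame-level distillation loss that mixes
# cross-entropy on hard labels with KL divergence to teacher posteriors.

import numpy as np
import torch
import torch.nn.functional as F


def gcc_phat_frame(x1: np.ndarray, x2: np.ndarray,
                   n_fft: int = 512, max_lag: int = 16) -> np.ndarray:
    """GCC (PHAT-weighted) for one frame of two microphone channels.

    Returns the 2*max_lag+1 central lags of the cross-correlation, which
    encode the inter-channel time delay (spatial) information.
    """
    X1 = np.fft.rfft(x1, n=n_fft)
    X2 = np.fft.rfft(x2, n=n_fft)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-8                 # keep phase only (PHAT)
    cc = np.fft.irfft(cross, n=n_fft)
    # reorder so that lag 0 sits in the middle of the returned vector
    cc = np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])
    return cc.astype(np.float32)


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      hard_labels: torch.Tensor,
                      alpha: float = 0.5,
                      temperature: float = 1.0) -> torch.Tensor:
    """Frame-level KD: interpolate cross-entropy on hard targets with KL
    divergence to the close-talking teacher's per-frame posteriors."""
    T = temperature
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, hard_labels)
    return alpha * kd + (1.0 - alpha) * ce
```

In such a setup the GCC vectors for each microphone pair would be appended to the per-frame acoustic features of the student network, while the teacher is run on the parallel close-talking channel to produce the soft targets.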
Year: 2018
DOI: 10.1109/SLT.2018.8639600
Venue: SLT
Keywords: Microphones, Data models, Adaptation models, Neural networks, Speech recognition, Training, Correlation
Field: Cross-correlation, Spatial analysis, Data modeling, Signal processing, Pattern recognition, Computer science, Speech recognition, Preprocessor, Concatenation, Artificial intelligence, Artificial neural network, Acoustic model
DocType: Conference
ISSN: 2639-5479
ISBN: 978-1-5386-4334-1
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name, Order, Citations, PageRank
Wenjie Li, 1, 3685, 9.74
Yu Zhang, 2, 2949, 8.00
Pengyuan Zhang, 3, 501, 9.46
Fengpei Ge, 4, 13, 5.52