Title
Distilling Knowledge for Distant Speech Recognition via Parallel Data
Abstract
To improve the performance of distant speech recognition, this paper proposes distilling knowledge from a close-talking model to a distant model using parallel data. The close-talking model serves as the teacher and the distant model as the student. The student model is trained to imitate the output distributions of the teacher model, which is realized by minimizing the Kullback-Leibler (KL) divergence between the output distributions of the student and the teacher. Experimental results on the AMI datasets show that the best student model achieves up to an 8.5% relative word error rate (WER) reduction compared with conventionally trained baseline models.
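The abstract describes teacher-student training in which the student's output distribution is pulled toward the teacher's by a KL-divergence term computed over parallel close-talking and distant data. Below is a minimal sketch of such a distillation loss, assuming a PyTorch setup; the function name, temperature T, and interpolation weight alpha are illustrative assumptions and are not taken from the paper.

```python
# Sketch of KL-based teacher-student distillation (assumed PyTorch setup).
# The teacher is a close-talking acoustic model; the student sees the parallel
# distant-channel features of the same utterance. Hyperparameters (T, alpha)
# are illustrative, not values reported in the paper.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Interpolate hard-label cross-entropy with KL(teacher || student)."""
    # Soft targets from the close-talking teacher (no gradient through the teacher).
    with torch.no_grad():
        teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    # KL divergence between the teacher and student output distributions.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  teacher_probs, reduction="batchmean") * (T * T)
    # Conventional cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce

# Usage example: one parallel batch of distant (student) and close-talking (teacher) frames.
B, C = 8, 1000                       # batch size, number of output classes (assumed)
student_logits = torch.randn(B, C, requires_grad=True)
teacher_logits = torch.randn(B, C)   # produced by the frozen teacher model
targets = torch.randint(0, C, (B,))
loss = distillation_loss(student_logits, teacher_logits, targets)
loss.backward()
```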
Year
2019
DOI
10.1109/APSIPAASC47483.2019.9023121
Venue
2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
Keywords
parallel data, distant speech recognition tasks, distant model, teacher model, student model, output distribution, knowledge distillation, Kullback-Leibler divergence, AMI datasets
DocType
Conference
ISSN
2640-009X
ISBN
978-1-7281-3249-5
Citations
0
PageRank
0.34
References
6
Authors
2
Name, Order, Citations, PageRank
Jiangyan Yi, 1, 19, 17.99
Jianhua Tao, 2, 848, 138.00