Title
Multi-Objective Multi-Task Learning on RNNLM for Speech Recognition.
Abstract
The cross entropy (CE) loss function is commonly adopted for neural network language model (NNLM) training. Although this criterion has been largely successful, as evidenced by the rapid progress of NNLMs, minimizing CE only maximizes the likelihood of the training data. When training data are insufficient, the resulting LM generalizes poorly to test data. In this paper, we propose to integrate a pairwise ranking (PR) loss with the CE loss for multi-objective training of a recurrent neural network language model (RNNLM). The PR loss emphasizes discrimination between target and non-target words and also reserves probability mass for low-frequency correct words, which complements the distribution-learning role of the CE loss. Combining the two losses may therefore improve the performance of the RNNLM. In addition, we incorporate multi-task learning (MTL) into the proposed multi-objective learning to regularize the primary RNNLM task with an auxiliary task of part-of-speech (POS) tagging. The proposed approach to RNNLM training has been evaluated on the WSJ and AMI speech recognition tasks, with encouraging word error rate reductions.
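The abstract does not give the exact form of the PR loss. As a rough illustration only, the following minimal PyTorch sketch shows one way a CE loss and a hinge-style pairwise ranking term can be combined into a single multi-objective criterion; the function name combined_loss and the hyperparameters alpha and margin are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F


def combined_loss(logits, targets, alpha=0.5, margin=1.0):
    """Sketch of a CE + pairwise-ranking objective (assumed form, not from the paper).

    logits:  (batch, vocab) unnormalized RNNLM scores for the next word
    targets: (batch,) indices of the correct next words
    alpha, margin: illustrative hyperparameters
    """
    # Distribution-learning objective: standard cross entropy.
    ce = F.cross_entropy(logits, targets)

    # Pairwise ranking objective: push the target word's score above every
    # non-target word's score by at least `margin` (hinge loss).
    target_scores = logits.gather(1, targets.unsqueeze(1))      # (batch, 1)
    hinge = F.relu(margin - target_scores + logits)             # (batch, vocab)

    # Mask out the target-vs-itself comparison, then average over word pairs.
    mask = torch.ones_like(logits)
    mask.scatter_(1, targets.unsqueeze(1), 0.0)
    pr = (hinge * mask).sum(dim=1).mean() / (logits.size(1) - 1)

    return ce + alpha * pr


# Toy usage on random scores for a 10-word vocabulary.
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(0, 10, (4,))
loss = combined_loss(logits, targets)
loss.backward()

In the multi-task setting described above, an analogous weighted sum would additionally include the auxiliary POS-tagging loss alongside the primary language modeling objective.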
Year
2018
DOI
10.1109/SLT.2018.8639649
Venue
SLT
Keywords
Task analysis, Training, Training data, Speech recognition, History, Entropy, Artificial neural networks
Field
Cross entropy, Multi-task learning, Task analysis, Computer science, Word error rate, Recurrent neural network, Speech recognition, Test data, Artificial neural network, Language model
DocType
Conference
ISSN
2639-5479
ISBN
978-1-5386-4334-1
Citations
0
PageRank
0.34
References
0
Authors
3
Name            Order    Citations    PageRank
Minguang Song   1        0            2.37
Yunxin Zhao     2        807          121.74
Shaojun Wang    3        468          38.96