Title
A comparative study of RPCL and MCE based discriminative training methods for LVCSR
Abstract
This paper presents a comparative study of two discriminative training methods, Rival Penalized Competitive Learning (RPCL) and Minimum Classification Error (MCE), for Large Vocabulary Continuous Speech Recognition (LVCSR). MCE aims at minimizing a smoothed sentence error on the training data, while RPCL focuses on avoiding misclassification by enforcing the learning of the correct class while de-learning its best rival class. For a fair comparison, both discriminative mechanisms are implemented at the level of phones and/or hidden Markov states using the same training corpus. The results show that both the MCE and RPCL based methods outperform the Maximum Likelihood Estimation (MLE) based method. Compared with the MCE based method, the RPCL based methods show better discriminative and generalization abilities at both levels.
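To make the contrast concrete, the sketch below illustrates the two mechanisms on a toy classifier; it is not the paper's phone- or state-level HMM implementation, and the function names, learning rates (eta, gamma) and smoothing constant (alpha) are illustrative assumptions. The RPCL step follows the classic rule of Xu et al. (1993): pull the correct class toward the sample while pushing its best rival away with a smaller de-learning rate. The MCE loss is the standard sigmoid-smoothed misclassification measure of Juang and Katagiri, shown here with the max-competitor special case.

```python
import numpy as np

def rpcl_update(means, x, label, eta=0.1, gamma=0.05):
    """One RPCL step: learn the correct class, de-learn its best rival.

    means : (K, D) array of class mean vectors (toy stand-in for HMM params)
    x     : (D,) training sample; label : index of its correct class
    eta   : learning rate; gamma * eta : smaller de-learning rate for the rival
    """
    dists = np.linalg.norm(means - x, axis=1)
    dists[label] = np.inf                        # exclude the correct class
    rival = int(np.argmin(dists))                # best (closest) rival class
    means[label] += eta * (x - means[label])          # pull winner toward x
    means[rival] -= gamma * eta * (x - means[rival])  # push rival away from x
    return means

def mce_loss(scores, label, alpha=1.0):
    """Sigmoid-smoothed MCE loss; d > 0 means the sample is misclassified.

    scores : (K,) discriminant scores g_k(x); label : correct class index.
    Uses the max over rivals, the limiting special case of the usual
    soft-max misclassification measure.
    """
    d = -scores[label] + np.max(np.delete(scores, label))
    return 1.0 / (1.0 + np.exp(-alpha * d))      # smooth 0/1 error
```

In the paper both mechanisms operate on HMM phone and state scores rather than Gaussian means, but the contrast carries over: RPCL acts locally on a winner/rival pair, whereas MCE differentiates a smoothed overall error of the whole score vector.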
Year
2014
DOI
10.1016/j.neucom.2013.05.060
Venue
Neurocomputing
Keywords
training data, comparative study, training corpus, discriminative mechanism, discriminative method, large vocabulary continuous speech, best rival class, maximum likelihood estimation, correct class, better discriminative, discriminative training method, minimum classification error
Field
Competitive learning, Computer science, Maximum likelihood, Artificial intelligence, Discriminative model, Training set, Pattern recognition, Generalization, Speech recognition, Hidden Markov model, Vocabulary, Sentence, Machine learning
DocType
Journal
Volume
134
ISSN
0925-2312
Citations
0
PageRank
0.34
References
9
Authors
4
Name        Order  Citations  PageRank
Zaihu Pang  1      11         1.96
Shikui Tu   2      391        4.25
Xihong Wu   3      2795       3.02
Lei Xu      4      35903      87.32