Title
Improvements to Deep Convolutional Neural Networks for LVCSR
Abstract
Deep Convolutional Neural Networks (CNNs) are more powerful than Deep Neural Networks (DNNs), as they are better able to reduce spectral variation in the input signal. This has also been confirmed experimentally, with CNNs showing relative improvements in word error rate (WER) of 4-12% over DNNs across a variety of LVCSR tasks. In this paper, we describe different methods to further improve CNN performance. First, we conduct a deep analysis comparing limited weight sharing and full weight sharing with state-of-the-art features. Second, we apply various pooling strategies that have shown improvements in computer vision to an LVCSR speech task. Third, we introduce a method to effectively incorporate speaker adaptation, namely fMLLR, into log-mel features. Fourth, we introduce an effective strategy to use dropout during Hessian-free sequence training. We find that with these improvements, particularly with fMLLR and dropout, we are able to achieve an additional 2-3% relative improvement in WER on a 50-hour Broadcast News task over our previous best CNN baseline. On a larger 400-hour BN task, we find an additional 4-5% relative improvement over our previous best CNN baseline.
Year: 2013
DOI: 10.1109/asru.2013.6707749
Venue: 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)
Keywords: neural nets, speech recognition
DocType: Conference
Volume: abs/1309.1501
Citations: 57
PageRank: 3.67
References: 13
Authors: 9
Name                    Order  Citations  PageRank
Tara N. Sainath         1      3497       232.43
B. Kingsbury            2      4175       335.43
Abdel-rahman Mohamed    3      3772       266.13
George E. Dahl          4      4734       416.42
George Saon             5      825        80.99
Hagen Soltau            6      795        67.33
Tomás Beran             7      58         4.70
Aleksandr Y. Aravkin    8      252        32.68
Bhuvana Ramabhadran     9      1779       153.83