Title
Deep networks with stochastic depth for acoustic modelling.
Abstract
Training very deep neural networks is difficult because of gradient degradation. However, the expressiveness of many deep layers is highly desirable at test time and usually leads to better performance. Recently, training techniques such as residual networks, which enable training very deep networks, have proved a great success. In this paper, we study the application of the recently proposed deep networks with stochastic depth (DNSD) to train deeper acoustic models for speech recognition. By randomly dropping a subset of layers during training, stochastic depth training reduces training time substantially, yet the resulting networks are much deeper since all layers are kept during testing. We investigated this approach on the TIMIT data set. Our preliminary experimental results show that when training data are limited, stochastic depth helps very little. However, when more training data are available, DNSD significantly improves recognition accuracy compared with conventional deep neural networks.
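The mechanism described in the abstract, dropping whole residual blocks at random during training while keeping every block (with its residual branch rescaled) at test time, can be sketched as follows. This is a minimal illustrative sketch of the general stochastic-depth rule, not the authors' acoustic-model code; the function name `stochastic_depth_block` and the scalar example are assumptions for illustration.

```python
import random

def stochastic_depth_block(x, residual_fn, p_survive, training, rng=random):
    """One residual block trained with stochastic depth.

    During training, the residual branch survives with probability
    p_survive; otherwise the block reduces to the identity shortcut.
    During testing, every branch is kept but scaled by p_survive so
    its output matches its expected value under training.
    """
    if training:
        if rng.random() < p_survive:
            return x + residual_fn(x)  # branch survives this pass
        return x                       # branch dropped: identity only
    return x + p_survive * residual_fn(x)  # test time: all layers kept
```

Because dropped blocks contribute no forward or backward computation, the expected training cost falls with the drop rate, which is the source of the training-time savings mentioned above.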
Year
2016
Venue
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference
Field
Training set,Residual,Data modeling,TIMIT,Computer science,Stochastic process,Artificial intelligence,Artificial neural network,Deep neural networks,Machine learning,Expressivity
DocType
Conference
ISSN
2309-9402
Citations
0
PageRank
0.34
References
0
Authors
4
Name           Order  Citations  PageRank
Duisheng Chen  1      0          0.34
Weibin Zhang   2      31         10.03
Xiangmin Xu    3      100        17.62
Xiaofeng Xing  4      0          0.34