Title
Wise teachers train better DNN acoustic models.
Abstract
Automatic speech recognition is becoming more ubiquitous as recognition performance improves, capable devices increase in number, and new areas of application open up. Neural network acoustic models that utilize speaker-adaptive features, have deep and wide layers, or use more computationally expensive architectures, for example, often obtain the best recognition accuracy, but may not fit the computational and storage budget or the latency requirements of the deployed system. We explore a straightforward training approach that takes advantage of highly accurate but expensive-to-evaluate neural network acoustic models by using their outputs to relabel training examples for easier-to-deploy models. Experiments on a large-vocabulary continuous speech recognition task yield relative reductions in word error rate of up to 16.7% over training with the hard aligned labels, by effectively making use of large amounts of additional untranscribed data. Somewhat remarkably, the approach works well even when only two output classes are present: experiments on a voice activity detection task give relative reductions in equal error rate of up to 11.5% when a convolutional neural network relabels training examples for a feedforward neural network. An investigation of the hidden-layer weight matrices finds that soft-target-trained networks tend to produce weight matrices with fuller rank and slower decay in singular values than their hard-target-trained counterparts, suggesting that more of the network's capacity is used to learn additional information, giving better accuracy.
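The relabeling approach the abstract describes (often called soft-target training or knowledge distillation) can be sketched as follows. This is a minimal, self-contained illustration, not the paper's implementation: the "teacher" here is a hypothetical linear softmax model standing in for an expensive acoustic model, and the data are random stand-ins for acoustic feature frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer targets."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical setup: 200 frames of 10-dim features, 4 output classes.
X = rng.normal(size=(200, 10))
W_teacher = rng.normal(size=(10, 4))          # stand-in for an expensive teacher
soft_targets = softmax(X @ W_teacher, T=2.0)  # teacher relabels the training data

# Student: a single softmax layer trained by gradient descent on the
# cross-entropy between its outputs and the teacher's soft targets
# (instead of the hard aligned labels).
W_student = np.zeros((10, 4))
for _ in range(500):
    p = softmax(X @ W_student)
    grad = X.T @ (p - soft_targets) / len(X)  # gradient of soft cross-entropy
    W_student -= 0.5 * grad

# The student now approximates the teacher's output distribution.
p_student = softmax(X @ W_student)
kl = np.mean(np.sum(soft_targets * np.log(soft_targets / p_student), axis=1))
print(f"mean KL(teacher || student) = {kl:.4f}")
```

The soft targets carry more information per example than one-hot labels (relative class similarities), which is why, as the abstract notes, additional untranscribed data can be exploited: the teacher can label it without human transcription.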
Year
2016
DOI
10.1186/s13636-016-0088-7
Venue
EURASIP J. Audio, Speech and Music Processing
Keywords
Soft targets, Deep neural networks, Online speech recognition, Speaker-adaptive features, Model compression
Field
Feedforward neural network, Singular value, Computer science, Voice activity detection, Convolutional neural network, Word error rate, Speech recognition, Time delay neural network, Artificial intelligence, Artificial neural network, Vocabulary, Machine learning
DocType
Journal
Volume
2016
Issue
1
ISSN
1687-4722
Citations
2
PageRank
0.37
References
31
Authors
3
Name            Order  Citations  PageRank
Ryan Price      1      2          0.71
Ken-ichi Iso    2      35         5.35
Koichi Shinoda  3      4636       5.14