Title
Deep Least Squares Regression For Speaker Adaptation
Abstract
Recently, speaker adaptation methods for deep neural networks (DNNs) have been widely studied for automatic speech recognition. However, almost all DNN adaptation methods must contend with various heuristic conditions, such as mini-batch size, learning-rate scheduling, stopping criteria, and initialization, because of the inherent properties of stochastic gradient descent (SGD)-based training. Unfortunately, these heuristic conditions are hard to tune properly. To alleviate these difficulties, in this paper we propose a least squares regression-based speaker adaptation method for a DNN framework that utilizes the posterior mean of each class. We also show that the proposed method yields a unique solution that is easy and fast to compute without SGD. The proposed method was evaluated on the TED-LIUM corpus. Experimental results showed that it achieved up to a 4.6% relative improvement over a speaker-independent DNN. In addition, we report further performance improvements of the proposed method with speaker-adapted features.
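Illustration (not from the paper): the abstract describes estimating the adaptation parameters in closed form by least squares regression against class-dependent posterior means, with no SGD. The sketch below shows one way such a closed-form estimate can look: a per-speaker linear transform fitted by ridge-regularized least squares on a layer's activations, with posterior-mean vectors as regression targets. The function name, shapes, bias handling, and ridge term are illustrative assumptions, not the authors' exact formulation.

# Minimal sketch of closed-form least squares adaptation (assumed setup,
# not the paper's exact method).
import numpy as np

def estimate_adaptation_transform(H, targets, ridge=1e-3):
    """Solve W = argmin_W ||[H, 1] W - targets||^2 + ridge * ||W||^2.

    H       : (n_frames, d) hidden activations from the speaker's adaptation data.
    targets : (n_frames, d) class-dependent posterior-mean vectors assigned to
              each frame (e.g. from a first-pass decoding or alignment).
    Returns W of shape (d + 1, d); the extra row acts as a bias term.
    """
    n = H.shape[0]
    H_aug = np.hstack([H, np.ones((n, 1))])               # append bias column
    A = H_aug.T @ H_aug + ridge * np.eye(H_aug.shape[1])  # regularized normal equations
    B = H_aug.T @ targets
    return np.linalg.solve(A, B)                          # unique closed-form solution

# Usage sketch: adapt activations X (n, d) of a test speaker with the fitted W.
# W = estimate_adaptation_transform(H_adapt, M_targets)
# X_adapted = np.hstack([X, np.ones((len(X), 1))]) @ W

Because the objective is a regularized quadratic, the solution is unique and obtained by a single linear solve, which is what removes the mini-batch, learning-rate, and stopping-criterion choices that SGD-based adaptation requires.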
Year
2017
DOI
10.21437/Interspeech.2017-783
Venue
18TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2017), VOLS 1-6: SITUATED INTERACTION
Keywords
deep neural network, speaker adaptation, class-dependent posterior mean, deep least squares regression
Field
Least squares, Pattern recognition, Computer science, Speech recognition, Artificial intelligence, Speaker adaptation
DocType
Conference
ISSN
2308-457X
Citations
0
PageRank
0.34
References
12
Authors
4
Name           Order  Citations  PageRank
Younggwan Kim  1      17         6.11
Hyungjun Lim   2      31         7.66
Jahyun Goo     3      0          1.35
Hoirin Kim     4      0          1.01