Title
Unsupervised training of subspace Gaussian mixture models for conversational telephone speech recognition
Abstract
This paper presents our preliminary work on unsupervised training of subspace Gaussian mixture models (SGMMs) for an under-resourced conversational telephone speech (CTS) recognition task. The subspace model yields better performance than the conventional GMM model, particularly on small or medium-sized training sets. As an effective way to save human effort, unsupervised learning is often applied to automatically transcribe large amounts of archived speech, and the additional auto-transcribed data may help to improve model accuracy. Experiments are carried out on two publicly available English conversational telephone speech corpora, and both GMM and SGMM models in combination with unsupervised learning are examined and compared.
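For context, the unsupervised training recipe outlined in the abstract (auto-transcribe a speech archive with a seed model, filter by confidence, then retrain on the enlarged set) roughly follows the generic self-training pattern sketched below. This is only a minimal illustrative sketch in Python; every class and function name in it is a hypothetical placeholder rather than the authors' implementation or any real toolkit API, and the 0.8 confidence threshold is an assumed value.

```python
# Conceptual sketch of the generic self-training loop implied by the abstract:
# a seed acoustic model auto-transcribes an untranscribed speech archive, and
# confident hypotheses are added to the training set before retraining.
# All names below (Utterance, train_acoustic_model, decode_with_confidence)
# are hypothetical placeholders, not the authors' code or any toolkit API.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Utterance:
    audio_path: str
    transcript: Optional[str] = None  # None for untranscribed archive data


def train_acoustic_model(data: List[Utterance]) -> object:
    """Placeholder: train a GMM or SGMM acoustic model on transcribed data."""
    return object()


def decode_with_confidence(model: object, utt: Utterance) -> Tuple[str, float]:
    """Placeholder: decode one utterance, returning (hypothesis, confidence)."""
    return "", 0.0


def self_train(manual: List[Utterance],
               archive: List[Utterance],
               confidence_threshold: float = 0.8) -> object:
    # 1. Train a seed model on the small, manually transcribed set.
    seed_model = train_acoustic_model(manual)

    # 2. Auto-transcribe the archive and keep only confident hypotheses.
    auto = []
    for utt in archive:
        hyp, conf = decode_with_confidence(seed_model, utt)
        if conf >= confidence_threshold:
            auto.append(Utterance(utt.audio_path, hyp))

    # 3. Retrain on the combined manual + auto-transcribed data.
    return train_acoustic_model(manual + auto)
```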
Year: 2012
DOI: 10.1109/ICASSP.2012.6289000
Venue: ICASSP
Keywords: auto-transcribed data, subspace acoustic model, speech recognition with low resources, speech archives, speech recognition, conversational telephone speech recognition, learning (artificial intelligence), subspace Gaussian mixture models, under-resourced CTS recognition task, middle-sized training set, English conversational telephone speech corpora, Gaussian processes, unsupervised learning, SGMM model, unsupervised training, acoustics, speech, data models, hidden Markov models
Field: Speech corpus, Training set, Pattern recognition, Subspace topology, Computer science, Speech recognition, Unsupervised learning, Artificial intelligence, Gaussian process, Machine learning, Mixture model
DocType: Conference
Volume: null
Issue: null
ISSN: 1520-6149
E-ISBN: 978-1-4673-0044-5
ISBN: 978-1-4673-0044-5
Citations: 0
PageRank: 0.34
References: 6
Authors: 3
Name | Order | Citations | PageRank
Zejun Ma | 1 | 1 | 1.41
Xiaorui Wang | 2 | 19 | 6.13
Bo Xu | 3 | 241 | 36.59