Title
Deep-level acoustic-to-articulatory mapping for DBN-HMM based phone recognition
Abstract
In this paper we experiment with methods based on Deep Belief Networks (DBNs) to recover measured articulatory data from speech acoustics. Our acoustic-to-articulatory mapping (AAM) processes go through multi-layered and hierarchical (i.e., deep) representations of the acoustic and the articulatory domains obtained through unsupervised learning of DBNs. The unsupervised learning of DBNs can serve two purposes: (i) pre-training of the Multi-layer Perceptrons that perform AAM; (ii) transformation of the articulatory domain that is recovered from acoustics through AAM. The recovered articulatory features are combined with MFCCs to compute phone posteriors for phone recognition. Tested on the MOCHA-TIMIT corpus, the recovered articulatory features, when combined with MFCCs, yield up to a 16.6% relative phone error rate reduction with respect to a phone recognizer that uses MFCCs only.
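The pre-training idea the abstract mentions (purpose (i)) can be illustrated with a toy sketch: stack Restricted Boltzmann Machines trained layer-wise with one-step contrastive divergence, then use the learned weights as the hidden layers of a regression network mapping acoustic frames to articulatory targets. This is not the paper's implementation; the dimensions, hyperparameters, and the use of random data are illustrative assumptions, and fine-tuning is reduced to fitting a linear output layer for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def cd1_update(self, v0):
        h0 = sigmoid(v0 @ self.W + self.b_hid)            # positive phase
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b_vis)    # one Gibbs step
        h1 = sigmoid(v1 @ self.W + self.b_hid)
        # CD-1 gradient approximation
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_vis += self.lr * (v0 - v1).mean(axis=0)
        self.b_hid += self.lr * (h0 - h1).mean(axis=0)

    def transform(self, v):
        return sigmoid(v @ self.W + self.b_hid)

# Toy stand-ins: "acoustic" frames (e.g. MFCC vectors) and "articulatory"
# targets (e.g. EMA coordinates). Shapes are illustrative only.
X = rng.random((200, 39))
Y = rng.random((200, 14))

# Greedy layer-wise unsupervised pre-training of a two-hidden-layer stack.
layers, data = [], X
for n_hid in (50, 50):
    rbm = RBM(data.shape[1], n_hid)
    for _ in range(20):
        rbm.cd1_update(data)
    layers.append(rbm)
    data = rbm.transform(data)

# "Fine-tuning" here is just a least-squares fit of a linear output layer
# on top of the pre-trained features (full backprop omitted for brevity).
H = np.hstack([data, np.ones((len(data), 1))])
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)
pred = H @ W_out
mse = float(np.mean((pred - Y) ** 2))
```

The greedy layer-wise scheme is the standard DBN recipe: each RBM models the hidden activities of the layer below, and the resulting weights initialize an MLP that is then trained discriminatively.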
Year
2012
DOI
10.1109/SLT.2012.6424252
Venue
Spoken Language Technology Workshop
Keywords
multilayer perceptrons, speech recognition, unsupervised learning, AAM, DBN-HMM based phone recognition, MFCCs, articulatory data, deep-level acoustic-to-articulatory mapping, phone recognition, speech acoustics, deep belief networks
Field
Pattern recognition, Computer science, Deep belief network, Speech recognition, Unsupervised learning, Phone, Artificial intelligence, Hidden Markov model, Deep level, Perceptron, Speech acoustics
DocType
Conference
ISSN
2639-5479
ISBN
978-1-4673-5124-9
Citations
11
PageRank
0.70
References
8
Authors
4
Name              Order  Citations  PageRank
Leonardo Badino   1      67         10.95
Claudia Canevari  2      11         0.70
Luciano Fadiga    3      235        19.90
Giorgio Metta     4      2515       198.59