Title
A Multimodal Real-Time MRI Articulatory Corpus for Speech Research
Abstract
We present MRI-TIMIT: a large-scale database of synchronized audio and real-time magnetic resonance imaging (rtMRI) data for speech research. The database currently consists of speech data acquired from two male and two female speakers of American English. Subjects' upper airways were imaged in the midsagittal plane while they read the same 460-sentence corpus used in the MOCHA-TIMIT corpus [1]. Accompanying acoustic recordings were phonemically transcribed using forced alignment. Vocal tract tissue boundaries were automatically identified in each video frame, allowing for dynamic quantification of each speaker's midsagittal articulation. The database and companion toolset provide a unique resource with which to examine articulatory-acoustic relationships in speech production.
Year
2011
Venue
12th Annual Conference of the International Speech Communication Association (INTERSPEECH 2011), Vols 1-5
Keywords
speech production, speech corpora, real-time MRI, multi-modal database, large-scale phonetic tools
Field
Speech corpus, Computer science, Speech recognition, American English, Natural language processing, Artificial intelligence, Real-time MRI, Sentence, Speech production, Vocal tract
DocType
Conference
Citations
24
PageRank
1.04
References
4
Authors
10
Name (in author order)
1. Narayanan Shrikanth
2. Erik Bresch
3. Prasanta Kumar Ghosh
4. Louis Goldstein
5. Athanasios Katsamanis
6. Yoon Kim
7. Adam C. Lammert
8. Michael I. Proctor
9. Vikram Ramanarayanan
10. Yinghua Zhu