| Abstract |
|---|
| Studying tongue motion during speech using ultrasound is a standard procedure; however, automatic ultrasound image labelling remains a challenge, as standard tongue shape extraction methods typically require human intervention. This article presents a method based on deep neural networks to automatically extract tongue contours from speech ultrasound images. We use a deep autoencoder trained to learn the relationship between an image and its related contour, so that the model is able to automatically reconstruct contours from the ultrasound image alone. During training, we use an automatic labelling algorithm instead of time-consuming hand labelling. We then estimate the performance of both automatic labelling and contour extraction as compared to hand labelling. The observed results show quality scores comparable to the state of the art. |
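The abstract describes a deep autoencoder that maps an ultrasound image to a tongue-contour curve. The following is a minimal NumPy sketch of such an image-to-contour forward pass; all layer sizes, the number of contour points, and the use of untrained random weights are illustrative assumptions, not details from the paper, which would train the network on automatically labelled contours.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights; the real model would learn these during training.
    return rng.standard_normal((n_in, n_out)) * 0.01, np.zeros(n_out)

# Encoder compresses the image to a bottleneck; the decoder emits
# contour coordinates. Sizes below are hypothetical.
sizes = [64 * 64, 512, 64, 512, 2 * 100]  # image -> bottleneck -> 100 (x, y) points
params = [layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def predict_contour(image):
    """Map a 2-D ultrasound frame to an array of (x, y) contour points."""
    h = image.reshape(-1)
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.tanh(h)  # hidden-layer nonlinearity
    return h.reshape(-1, 2)

contour = predict_contour(rng.random((64, 64)))
print(contour.shape)  # (100, 2)
```

With trained weights, each output row would be one point on the extracted tongue contour, so the network replaces the human intervention step of standard shape-extraction methods.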
| Year | Venue | DocType |
|---|---|---|
| 2015 | ICPhS | Conference |
| Volume | Citations | PageRank |
|---|---|---|
| abs/1605.05912 | 3 | 0.44 |
| References | Authors |
|---|---|
| 7 | 5 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| A. Jaumard-Hakoun | 1 | 7 | 1.78 |
| Kele Xu | 2 | 46 | 21.80 |
| Pierre Roussel-Ragot | 3 | 45 | 4.38 |
| Gérard Dreyfus | 4 | 475 | 58.97 |
| B. Denby | 5 | 268 | 26.69 |