Title
Acoustic Rendering of Data Tables Using Earcons and Prosody for Document Accessibility
Abstract
Earlier works show that using a prosody specification derived from natural human spoken renditions increases the naturalness and overall acceptance of speech-synthesised complex visual structures by conveying to audio semantic information otherwise hidden in the visual structure. However, although prosody alone yields significant improvement, it cannot perform adequately for very large, complex data tables browsed in a linear manner. This work reports on the use of earcons and spearcons combined with a prosodically enriched aural rendition of simple and complex tables. Three spoken combinations (earcons+prosody, spearcons+prosody, and prosody alone) were evaluated to examine how the resulting acoustic output improves the document-to-audio semantic correlation throughput from the visual modality. The results show that non-speech sounds can further improve certain qualities, such as listening effort, a crucial parameter when vocalising any complex visual structure contained in a document.
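The abstract describes interleaving non-speech audio cues (earcons or spearcons) with prosodically enriched synthetic speech when rendering a table. As a rough illustration only, the following Python sketch shows one plausible way to serialise a table into SSML-style markup combining both channels; the earcon file names, prosody values, and SSML structure here are illustrative assumptions, not the paper's actual specification.

```python
# A minimal, hypothetical sketch of the idea in the abstract: interleave
# non-speech cues (earcons) with prosodically marked-up cell text.
# All file names and prosody settings below are illustrative assumptions.

table = [
    ["Product", "Q1", "Q2"],
    ["Widgets", "120", "135"],
    ["Gadgets", "98", "110"],
]

ROW_EARCON = "row_start.wav"  # hypothetical earcon marking a new row
COL_EARCON = "col_step.wav"   # hypothetical earcon marking a column step


def cell_ssml(text, is_header):
    # Header cells get higher pitch and slower rate, a stand-in for a
    # prosody specification derived from human spoken renditions.
    if is_header:
        return f'<prosody pitch="+15%" rate="90%">{text}</prosody>'
    return text


def render_table(rows):
    # Build an SSML document: an earcon before each row, a shorter
    # earcon between columns, and a brief pause after each cell.
    parts = ["<speak>"]
    for r, row in enumerate(rows):
        parts.append(f'<audio src="{ROW_EARCON}"/>')
        for c, cell in enumerate(row):
            if c > 0:
                parts.append(f'<audio src="{COL_EARCON}"/>')
            parts.append(cell_ssml(cell, is_header=(r == 0)))
            parts.append('<break time="200ms"/>')
    parts.append("</speak>")
    return "\n".join(parts)


if __name__ == "__main__":
    print(render_table(table))
```

A spearcon variant would replace the fixed earcon files with sped-up speech renderings of the row or column headers; the linear row-by-row traversal mirrors the linear browsing mode the abstract identifies as problematic for prosody alone.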
Year
2009
DOI
10.1007/978-3-642-02713-0_62
Venue
HCI (7)
Keywords
certain quality, combinations earcons, document-to-audio semantic correlation throughput, visual structure, document accessibility, large complex data table, visual modality, complex table, audio certain semantic information, acoustic rendering, prosody specification, complex visual structure, complex data, text to speech
Field
Prosody, Visual structure, Speech synthesis, Computer science, Naturalness, Active listening, Complex data type, Speech recognition, Semantic information, Natural language processing, Artificial intelligence, Rendering (computer graphics)
DocType
Conference
Volume
5616
ISSN
0302-9743
Citations
5
PageRank
0.44
References
22
Authors
3
Name                        Order  Citations  PageRank
Dimitris Spiliotopoulos     1      78         16.93
Panagiota Stavropoulou      2      5          0.44
Georgios Kouroupetroglou    3      167        28.90