Title
Considering multi-modal speech visualization for deaf and hard of hearing people
Abstract
Supporting deaf and hard of hearing (D/HH) people in understanding natural conversation is an important social welfare activity. However, communication support for D/HH people in Japan is currently insufficient. Although existing communication methods, such as sign language and lipreading, are effective in one-to-one conversation, they have several disadvantages in one-to-many settings such as meetings or conventions. To support D/HH people in understanding conversation, this paper proposes a multi-modal visualization application that presents many aspects of speech content. Concrete examples of visualization modes include displaying subtitles generated by voice recognition and showing the speaker's mouth to assist lipreading.
Year
2015
DOI
10.1109/APSITT.2015.7217102
Venue
2015 10th Asia-Pacific Symposium on Information and Telecommunication Technologies (APSITT)
Keywords
Multi-modal visualization, deaf and hard of hearing people, supporting understanding conversation, lip-reading, voice recognition
Field
Conversation, Visualization, Computer science, Gesture recognition, Cued speech, Speech recognition, Sign language, Modal, Speech technology
DocType
Conference
Citations
0
PageRank
0.34
References
1
Authors
8