Abstract |
---|
Supporting deaf and hard of hearing (D/HH) people in understanding natural conversation is an important social welfare activity. However, communication support for D/HH people in Japan is currently insufficient. Although existing communication methods, such as sign language and lip-reading, are effective in one-to-one conversation, they have several disadvantages in one-to-many settings such as meetings or conventions. To support D/HH people in understanding conversation, this paper proposes a multi-modal visualization application that presents multiple aspects of the speech content. Concrete examples of visualization modes include displaying subtitles generated by voice recognition and showing the speaker's mouth to assist lip-reading. |
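As an illustration of the "subtitles by voice recognition" mode mentioned in the abstract, below is a minimal sketch of a speech-to-subtitle loop. This is not the authors' implementation: it assumes the third-party Python SpeechRecognition package and Google's free web speech API as stand-ins for whatever recognizer the paper's application uses, and it simply prints each recognized utterance as a subtitle line.

```python
# Minimal sketch of a "subtitles by voice recognition" mode.
# Assumptions (not from the paper): the SpeechRecognition package
# (pip install SpeechRecognition pyaudio) and Google's web speech API.
import speech_recognition as sr

def run_subtitles(language: str = "ja-JP") -> None:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        # Calibrate the energy threshold against background noise.
        recognizer.adjust_for_ambient_noise(source, duration=1)
        print("Listening... (Ctrl+C to stop)")
        while True:
            # Capture one utterance, bounded by silence or a time limit.
            audio = recognizer.listen(source, phrase_time_limit=10)
            try:
                # Hypothetical engine choice: any recognizer that
                # returns text would serve the same role here.
                text = recognizer.recognize_google(audio, language=language)
                print(f"[subtitle] {text}")
            except sr.UnknownValueError:
                print("[subtitle] (inaudible)")
            except sr.RequestError as e:
                print(f"[subtitle] (recognition service error: {e})")

if __name__ == "__main__":
    run_subtitles()
```

In a multi-modal application like the one proposed, this text stream would be one visualization mode among several, displayed alongside other channels such as a live view of the speaker's mouth.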
Year | DOI | Venue
---|---|---
2015 | 10.1109/APSITT.2015.7217102 | 2015 10th Asia-Pacific Symposium on Information and Telecommunication Technologies (APSITT)
Keywords | Field | DocType
---|---|---
Multi-modal visualization, deaf and hard of hearing people, supporting understanding conversation, lip-reading, voice recognition | Conversation, Visualization, Computer science, Gesture recognition, Cued speech, Speech recognition, Sign language, Modal, Speech technology | Conference
Citations | PageRank | References
---|---|---
0 | 0.34 | 1
Authors |
---|
8 |
Name | Order | Citations | PageRank |
---|---|---|---
Yusuke Toba | 1 | 0 | 0.34 |
Hiroyasu Horiuchi | 2 | 4 | 1.29 |
Shinsuke Matsumoto | 3 | 205 | 33.53 |
Sachio Saiki | 4 | 55 | 24.46 |
Masahide Nakamura | 5 | 526 | 72.51 |
Tomohito Uchino | 6 | 0 | 0.34 |
Tomohiro Yokoyama | 7 | 0 | 0.34 |
Yasuhiro Takebayashi | 8 | 0 | 0.34 |