Abstract |
---|
Gaze information has the potential to benefit Human-Computer Interaction (HCI) tasks, particularly when combined with speech. Gaze can improve our understanding of the user's intention as a secondary input modality, or it can serve as the main input modality for users with permanent or temporary impairments. In this paper we describe a multimodal HCI system prototype that supports speech, gaze, and the combination of both. The system has been developed for Active Assisted Living scenarios. |
Year | DOI | Venue |
---|---|---|
2015 | 10.1145/2700648.2811369 | ACM SIGACCESS Conference on Computers and Accessibility (ASSETS)
Keywords | Field | DocType
---|---|---|
Multimodal, Gaze, Speech, Fusion | Gaze, Computer science, Speech interaction, Speech recognition, Human–computer interaction | Conference
Citations | PageRank | References
---|---|---|
1 | 0.39 | 7
Authors |
---|
8 |
Name | Order | Citations | PageRank |
---|---|---|---|
Diogo Vieira | 1 | 1 | 0.39 |
João Dinis Freitas | 2 | 1 | 0.39 |
Cengiz Acartürk | 3 | 34 | 7.16 |
António J. S. Teixeira | 4 | 152 | 35.26 |
Luís Sousa | 5 | 1 | 0.39 |
Samuel Silva | 6 | 7 | 1.55 |
Sara Candeias | 7 | 43 | 11.10 |
Miguel Sales Dias | 8 | 133 | 24.96 |