Abstract |
---|
Continuing advances in multimodal technology, machine learning, and virtual reality are providing the means to explore and develop multimodal interfaces that are faster, more accurate, and more meaningful in the interactions they support. This paper describes an ongoing effort to develop an interface using input from voice, hand gestures, and eye gaze to interact with information in a virtual environment. A definition for a virtual environment tailored for the presentation and manipulation of information is introduced along with a new metaphor for multimodal interactions within a virtual environment. |

Year | DOI | Venue |
---|---|---|
2019 | 10.1007/978-3-030-21607-8_5 | VIRTUAL, AUGMENTED AND MIXED REALITY: MULTIMODAL INTERACTION, PT I |
Keywords | DocType | Volume |
---|---|---|
Multimodal interface, Gesture recognition, Virtual environment | Conference | 11574 |

ISSN | Citations | PageRank |
---|---|---|
0302-9743 | 0 | 0.34 |

References | Authors |
---|---|
0 | 6 |

Name | Order | Citations | PageRank |
---|---|---|---|
Jeffrey T. Hansberger | 1 | 0 | 0.68 |
Chao Peng | 2 | 0 | 1.35 |
Victoria R. Blakely | 3 | 0 | 0.34 |
Sarah C. Meacham | 4 | 0 | 0.34 |
Lizhou Cao | 5 | 4 | 2.49 |
Nicholas Diliberti | 6 | 0 | 0.34 |