Abstract |
---|
Embedded close-to-the-eye gaze tracking permits new types of interaction in see-through augmented and virtual environments. It is, however, unclear how gaze input can be used to select and confirm commands when the tracking technology is located in close proximity to the user's eyes and cannot rely on fixed geometry as in screen-based environments. We conducted a study in a simulated image-guided medical environment in which users employed gaze input to control an on-screen display. The current hand-based interaction with such views is a frequent source of interruption, so the feasibility of alternative input modalities has to be evaluated. We created a three-stage gaze-based confirmation mechanism and evaluated its robustness and the limits of target size. Two target sizes for command selection were evaluated, occupying 12 and 6 degrees of visual angle at a 30 cm viewing distance. The results show that the time to perform an action using gaze input is shorter than in hand-based interaction with the real-world device, confirming that this input modality is feasible. Target size has little effect on the interaction, and the completion error rate is low. The findings have implications for the design of future gaze-based input methods for these devices. |
Year | DOI | Venue |
---|---|---|
2018 | 10.1145/3174910.3174940 | AH |
Keywords | Field | DocType
---|---|---|
Gaze interaction, VR, surgical image-guided techniques, gaze-based interaction, surgical microscope | Modalities, Virtual image, Surgical microscope, Computer vision, Visual angle, Gaze, Computer science, Robustness (computer science), Artificial intelligence | Conference
Citations | PageRank | References
---|---|---|
0 | 0.34 | 13
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Hoorieh Afkari | 1 | 6 | 3.31 |
David Gil de Gómez Pérez | 2 | 0 | 0.68 |
Roman Bednarik | 3 | 561 | 48.77 |