Title
FEETICHE: FEET Input for Contactless Hand gEsture Interaction
Abstract
Foot input has been proposed to support hand gestures in many interactive contexts; however, little attention has been given to contactless 3D object manipulation. This is important because many applications, namely sterile surgical theaters, require contactless operation. However, relying solely on hand gestures makes it difficult to specify precise interactions, since hand movements are hard to segment into command and interaction modes. The unfortunate results range from unintended activations to noisy interactions and misrecognized commands. In this paper, we present FEETICHE, a novel set of multi-modal interactions that combine hand and foot input, driven by foot tapping and heel rotation, to support contactless 3D manipulation tasks while standing in front of large displays. We use depth-sensing cameras to capture both hand and foot gestures, and we developed a simple yet robust motion-capture method to track input from the dominant foot. Through two experiments, we assess how well foot gestures support mode switching and how this frees the hands to perform accurate manipulation tasks. Results indicate that users effectively rely on foot gestures to improve mode switching, and reveal improved accuracy on both rotation and translation tasks.
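The abstract describes foot tapping as the mode-switching mechanism, driven by depth-sensed foot tracking. As a rough illustration of how such a detector could work, here is a minimal sketch assuming a per-frame foot-height signal; the thresholds, function names, and mode labels are hypothetical and are not taken from the paper.

    # Illustrative sketch only (not the authors' implementation): detect a foot
    # tap from a depth-tracked foot-height signal and use it to toggle between
    # hand-gesture "manipulation" and "command" modes. Thresholds are assumed
    # values that would need calibration per sensor and user.

    LIFT_THRESHOLD_M = 0.05   # foot must rise at least 5 cm above the floor
    DROP_THRESHOLD_M = 0.02   # ...and return within 2 cm to count as a tap

    def detect_taps(foot_heights_m):
        """Yield the sample indices at which a foot tap completes."""
        lifted = False
        for i, h in enumerate(foot_heights_m):
            if not lifted and h > LIFT_THRESHOLD_M:
                lifted = True                 # foot left the floor
            elif lifted and h < DROP_THRESHOLD_M:
                lifted = False                # foot came back down: one tap
                yield i

    def toggle_modes(foot_heights_m, modes=("manipulation", "command")):
        """Map each detected tap to a mode switch, starting in the first mode."""
        current = 0
        for _ in detect_taps(foot_heights_m):
            current = (current + 1) % len(modes)
        return modes[current]

    if __name__ == "__main__":
        # Synthetic height trace: flat, one tap, flat again.
        trace = [0.0, 0.01, 0.06, 0.08, 0.03, 0.01, 0.0, 0.0]
        print(list(detect_taps(trace)))   # -> [5]
        print(toggle_modes(trace))        # -> "command"

A real system would read the foot position from the depth camera's skeletal tracking each frame; the hysteresis between the lift and drop thresholds is one simple way to keep small tracking jitter from triggering spurious mode switches.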
Year
2019
DOI
10.1145/3359997.3365704
Venue
VRCAI '19: The 17th International Conference on Virtual-Reality Continuum and its Applications in Industry, Brisbane, QLD, Australia, November 2019
Field
Computer vision, Gesture, Computer science, Artificial intelligence
DocType
Conference
ISBN
978-1-4503-7002-8
Citations
0
PageRank
0.34
References
0
Authors
6
Name                Order    Citations    PageRank
Daniel S. Lopes     1        32           7.16
Filipe Relvas       2        9            0.83
Soraia Paulo        3        0            0.34
Yosra Rekik         4        0            0.34
Laurent Grisoni     5        314          23.57
Joaquim A. Jorge    6        1008         81.51