Abstract |
---|
Direct touch gestures are becoming popular as an input modality for mobile and tabletop interaction. However, the finger-touch interface is considered less accurate than pen-based interfaces. One of the main reasons is that visual feedback at the touch point is occluded by the fingertip, which makes it difficult to perceive and correct errors. We propose utilizing another modality to provide information about the occluded area: spatial information on the visual channel is transformed into temporal and frequency information on another channel. We use the sound modality to illustrate the proposed trans-modality. Results show that for drawing tasks, where visual information is important, performance with the additional modality is better than with visual feedback alone. |
Year | DOI | Venue
---|---|---
2015 | 10.1145/2702613.2732817 | CHI Extended Abstracts
Keywords | Field | DocType
---|---|---
miscellaneous, finger gesture, visual occlusion, touch gestural interface, sound feedback | Spatial analysis, Direct touch, Computer vision, Gesture, Computer science, Communication channel, Human–computer interaction, Artificial intelligence, Multimedia, Visual occlusion | Conference
Citations | PageRank | References
---|---|---
0 | 0.34 | 10
Authors |
---|
5 |
Name | Order | Citations | PageRank
---|---|---|---
BoYu Gao | 1 | 2 | 3.09 |
HyungSeok Kim | 2 | 116 | 22.09 |
Hasup Lee | 3 | 6 | 1.77 |
Jooyoung Lee | 4 | 573 | 46.13 |
Jee-In Kim | 5 | 72 | 19.78 |