Abstract |
---|
Multimodal input is known to be advantageous for graphical user interfaces, but its benefits for non-visual interaction are unknown. To explore this issue, an exploratory study was conducted with fourteen sighted subjects on a system that allows speech input and hand input on a touchpad. Findings include: (1) Users chose between these two input modalities based on the types of operations undertaken. Navigation operations were done primarily with touchpad input, while non-navigation instructions were carried out primarily using speech input. (2) Multimodal error correction was not prevalent. Repeating a failed operation until it succeeded and trying other methods in the same input modality were the dominant error-correction strategies. (3) The modality learned first was not necessarily the primary modality used later, but a training-order effect existed. These empirical results provide guidelines for designing non-visual multimodal input and create a comparison baseline for a subsequent study with blind users. |
Year | DOI | Venue |
---|---|---|
2006 | 10.1109/HICSS.2006.377 | HICSS |
Keywords | Field | DocType
---|---|---
non-visual information navigation, multimodal input, input modality, speech input, non-visual interaction, hand input, exploratory study, touchpad input, primary modality, multimodal input usage, non-visual multimodal input, multimodal error correction, shape, information systems, speech recognition, navigation, error correction, graphical user interfaces, graphic user interface | Information system, Modalities, Computer science, Error detection and correction, Human–computer interaction, Graphical user interface, Touchpad, Multimedia, Exploratory research | Conference
Volume | ISSN | ISBN
---|---|---
6 | 1530-1605 | 0-7695-2507-5
Citations | PageRank | References
---|---|---
4 | 0.47 | 15
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Xiaoyu Chen | 1 | 4 | 0.81 |
Marilyn Tremaine | 2 | 387 | 64.54 |