| Abstract |
|---|
| Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing the semantic contributions of the different modes. This integration method allows the component modalities to mutually compensate for each other's errors. It is implemented in QuickSet, a multimodal (pen/voice) system that enables users to set up and control distributed interactive simulations. |
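The abstract's central mechanism, unification of typed feature structures from the two modes, can be sketched as follows. This is a minimal illustration using plain Python dicts, not the QuickSet implementation; the feature names (`type`, `object`, `location`) and values are hypothetical examples.

```python
def unify(a, b):
    """Unify two feature structures (nested dicts); return the merged
    structure, or None if any shared feature carries clashing values."""
    if isinstance(a, dict) and isinstance(b, dict):
        merged = dict(a)
        for key, value in b.items():
            if key in merged:
                sub = unify(merged[key], value)
                if sub is None:
                    return None  # feature clash: unification fails
                merged[key] = sub
            else:
                merged[key] = value  # feature only in b: carry it over
        return merged
    # Atomic values unify only if identical
    return a if a == b else None

# Speech contributes the command and object type; the pen gesture
# contributes the location -- each mode fills in what the other lacks.
speech = {"type": "create_unit", "object": {"unit": "platoon"}}
gesture = {"object": {"location": (47.6, -122.3)}}

print(unify(speech, gesture))
# {'type': 'create_unit', 'object': {'unit': 'platoon', 'location': (47.6, -122.3)}}
```

Because unification is symmetric and fails on conflicting features, partial or noisy input from one recognizer can be completed or rejected by the other, which is the mutual-compensation property the abstract describes.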
| Year | DOI | Venue |
|---|---|---|
| 1997 | 10.3115/976909.979653 | Conference of the European Chapter of the Association for Computational Linguistics |
| Keywords | DocType | Volume |
|---|---|---|
| gestural input, component modality, multimodal interaction, simultaneous input, conclusive advantage, unification-based multimodal integration, multimodal language processing architecture, feature structure, different mode, speech-only interaction, integration method | Conference | P97-1 |
| Citations | PageRank | References |
|---|---|---|
| 104 | 16.23 | 9 |
| Authors |
|---|
| 6 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Michael J. G. Johnston | 1 | 447 | 59.76 |
| Phil Cohen | 2 | 3203 | 668.11 |
| David McGee | 3 | 546 | 65.27 |
| Sharon Oviatt | 4 | 3197 | 439.42 |
| James A. Pittman | 5 | 237 | 63.89 |
| Ira Smith | 6 | 206 | 26.39 |