Title
Unification-based multimodal integration
Abstract
Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing the semantic contributions of the different modes. This integration method allows the component modalities to mutually compensate for each other's errors. It is implemented in QuickSet, a multimodal (pen/voice) system that enables users to set up and control distributed interactive simulations.
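As a rough illustration of the integration method the abstract describes, the sketch below unifies two toy feature structures, one contributed by speech and one by gesture, under a small type hierarchy. The hierarchy, field names, and the barbed-wire example values are illustrative assumptions, not QuickSet's actual representation or API.

```python
# Minimal sketch of typed-feature-structure unification (assumed toy setup,
# not QuickSet's actual implementation).

# Toy subtype relation: a type unifies with itself or an ancestor.
TYPE_PARENTS = {
    "line": "located_object",
    "point": "located_object",
    "located_object": "object",
}

def subsumes(general, specific):
    """True if `general` is `specific` or one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = TYPE_PARENTS.get(specific)
    return False

def unify_types(t1, t2):
    """Return the more specific of two compatible types, else None."""
    if subsumes(t1, t2):
        return t2
    if subsumes(t2, t1):
        return t1
    return None

def unify(fs1, fs2):
    """Recursively unify two feature structures (dicts); None on a clash."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = {}
        for key in set(fs1) | set(fs2):
            if key in fs1 and key in fs2:
                if key == "type":
                    value = unify_types(fs1[key], fs2[key])
                else:
                    value = unify(fs1[key], fs2[key])
                if value is None:
                    return None  # feature clash: no unifier exists
                result[key] = value
            else:
                # Feature present in only one structure: carry it over.
                result[key] = fs1.get(key, fs2.get(key))
        return result
    # Atomic values unify only if equal.
    return fs1 if fs1 == fs2 else None

# Speech ("barbed wire") supplies the object label and a general type;
# the pen gesture supplies coordinates and a compatible, more specific type.
speech = {"object": {"type": "located_object", "label": "barbed wire"}}
gesture = {"object": {"type": "line", "coords": [(10, 20), (30, 40)]}}

print(unify(speech, gesture))
# -> {'object': {'type': 'line', 'label': 'barbed wire',
#     'coords': [(10, 20), (30, 40)]}}  (key order may vary)
```

Mutual compensation falls out of this scheme: an incompatible pairing (e.g., a gesture typed `point` where speech demands `line`) fails to unify, so it can be discarded in favor of a lower-ranked recognition hypothesis that does unify.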
Year
1997
DOI
10.3115/976909.979653
Venue
Conference of the European Chapter of the Association for Computational Linguistics
Keywords
gestural input, component modality, multimodal interaction, simultaneous input, conclusive advantage, unification-based multimodal integration, multimodal language processing architecture, feature structure, different mode, speech-only interaction, integration method
DocType
Conference
Volume
P97-1
Citations
104
PageRank
16.23
References
9
Authors
6
Name                    Order  Citations  PageRank
Michael J. G. Johnston  1      447        59.76
Phil Cohen              2      3203       668.11
David McGee             3      546        65.27
Sharon Oviatt           4      3197       439.42
James A. Pittman        5      237        63.89
Ira Smith               6      206        26.39