Title
Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality
Abstract
In collaborative tasks, people rely on verbal and nonverbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed data acquired from mixed reality experiments and formulated the hypothesis that modelling temporal dependencies of events across these three modalities increases the model's predictive power. We evaluated our model within a Bayesian framework for interpreting referring expressions, with and without exploiting the temporal prior.
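As a loose illustration of the idea summarised in the abstract (not the authors' actual model), the sketch below assumes hypothetical per-object likelihood scores from the speech, head-movement, and gesture modalities and combines them in a simple Bayesian update, once with a uniform prior and once with an assumed temporal prior that up-weights the object attended to around the time of the utterance.

```python
# Illustrative sketch only; all scores and the temporal weighting are assumptions.
import numpy as np

def posterior(likelihoods, prior):
    """Combine independent per-modality likelihoods with a prior over candidate objects."""
    post = prior.copy()
    for lik in likelihoods:
        post = post * lik
    return post / post.sum()

# Hypothetical scores for three candidate objects from each modality.
speech  = np.array([0.5, 0.3, 0.2])   # e.g. evidence from "the red mug"
head    = np.array([0.4, 0.4, 0.2])   # head-orientation evidence
gesture = np.array([0.3, 0.5, 0.2])   # pointing evidence

uniform_prior  = np.ones(3) / 3
# An assumed temporal prior favouring the object gazed at when the deictic word was spoken.
temporal_prior = np.array([0.5, 0.3, 0.2])

print(posterior([speech, head, gesture], uniform_prior))   # without temporal prior
print(posterior([speech, head, gesture], temporal_prior))  # with temporal prior
```

In this toy setting, the temporal prior shifts probability mass toward the object whose gaze and speech events co-occur, which is the kind of effect the paper's hypothesis attributes to modelling temporal dependencies.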
Year
2019
DOI
10.1007/978-3-030-21565-1_8
Venue
VIRTUAL, AUGMENTED AND MIXED REALITY: APPLICATIONS AND CASE STUDIES, VAMR 2019, PT II
Keywords
Multimodal interaction, Human-robot interaction, Referring expressions, Mixed reality
Field
Modalities, Expression (mathematics), Predictive power, Computer science, Head movements, Gesture, Human–computer interaction, Mixed reality, Robot, Bayesian probability
DocType
Journal
Volume
11575
ISSN
0302-9743
Citations
1
PageRank
0.36
References
14
Authors
5
Name              Order  Citations  PageRank
Elena Sibirtseva  1      1          1.38
Ali Ghadirzadeh   2      13         4.32
Iolanda Leite     3      833        58.84
Mårten Björkman   4      202        13.90
Danica Kragic     5      2070       142.17