Title
What's “up”? — Resolving interaction ambiguity through non-visual cues for a robotic dressing assistant
Abstract
Robots that assist with activities of daily living (ADL), such as dressing, need to be capable of intuitive and safe interaction. Vision systems are often used to provide information on the position and movement of the robot and user. In a dressing context, however, technical complexity, occlusion and concerns over user privacy push research towards other approaches to human-robot interaction (HRI). We analysed verbal, proprioceptive and force feedback from 18 participants in a human-human dressing experiment in which users received dressing assistance from a researcher mimicking robot behaviour. This paper investigates the occurrence of deictic speech in an assisted-dressing task and how its ambiguity could be resolved to ensure safe and reliable HRI. We focus on one of the most frequently occurring deictic words, “up”, which was captured over 300 times during the experiments and serves as an example of an ambiguous command. We attempt to resolve the ambiguity of these commands through predictive models, which were used to predict end-effector choice and the direction in which the garment should move. The model predicting end-effector choice achieved 70.4% accuracy based on the user's head orientation; the model predicting garment direction used the angle of the user's arm and achieved 87.8% accuracy. We also found that additional features, such as the starting position of the user's arms and the end-effector height, may improve the accuracy of a predictive model. We present suggestions on how these inputs may be obtained through non-visual means, for example through haptic perception of end-effector position, proximity sensors and acoustic source localisation.
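The paper does not publish its model code, but the abstract's idea of predicting garment direction from the user's arm angle can be illustrated with a minimal sketch. The threshold classifier, the angle values and the labels below are all invented for illustration; they are not the authors' actual model or data.

```python
# Hypothetical sketch: a one-feature threshold classifier that maps the
# user's arm angle (degrees from horizontal) to a garment direction,
# analogous in spirit to the predictor described in the abstract.

def train_threshold(samples):
    """Given (angle_deg, label) pairs with labels 'up'/'forward',
    place the decision boundary at the midpoint of the class means."""
    ups = [a for a, label in samples if label == "up"]
    fwds = [a for a, label in samples if label == "forward"]
    return (sum(ups) / len(ups) + sum(fwds) / len(fwds)) / 2.0

def predict(threshold, angle_deg):
    """An arm raised above the threshold suggests the garment moves 'up'."""
    return "up" if angle_deg >= threshold else "forward"

# Invented training data: steep arm angles labelled 'up', shallow 'forward'.
data = [(80, "up"), (70, "up"), (65, "up"),
        (30, "forward"), (20, "forward"), (15, "forward")]
t = train_threshold(data)
print(predict(t, 75))  # high arm angle -> up
print(predict(t, 10))  # low arm angle -> forward
```

In practice the paper suggests obtaining such inputs non-visually, e.g. from proprioceptive sensing of end-effector position rather than a camera.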
Year: 2017
DOI: 10.1109/ROMAN.2017.8172315
Venue: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Keywords: verbal force feedback analysis, ambiguous command, deictic words, reliable HRI, safe HRI, assisted-dressing task, researcher mimicking robot behaviour, human-human dressing experiment, proprioceptive force feedback, human-robot interaction, user privacy, technical complexity, vision systems, intuitive interaction, daily living, robotic dressing assistant, nonvisual cues, interaction ambiguity, end-effector position, nonvisual means, predictive model, end-effector height, end effector choice
Field: Sensory cue, Computer vision, Proximity sensor, Haptic perception, Computer science, Robot end effector, Artificial intelligence, Deixis, Robot, Ambiguity, Haptic technology
DocType: Conference
ISSN: 1944-9445
ISBN: 978-1-5386-3519-3
Citations: 0
PageRank: 0.34
References: 15
Authors: 4
Name                   Order  Citations  PageRank
Greg Chance            1      3          2.07
Praminda Caleb-Solly   2      117        17.51
Aleksandar Jevtic      3      82         10.40
Sanja Dogramadzi       4      43         16.60