Title
Using natural language feedback in a neuro-inspired integrated multimodal robotic architecture
Abstract
In this paper we present a multimodal human-robot interaction architecture that combines information coming from different sensory inputs and generates feedback that implicitly teaches the user how to interact with the robot. The system combines vision, speech, and language with inference and feedback. The system environment consists of a Nao robot that has to learn objects situated on a table solely by understanding absolute and relative object locations uttered by the user, and that afterwards points at a desired object to show what it has learned. The results of a user study and a performance test show the usefulness of the feedback produced by the system and also justify its use in real-world applications, as its classification accuracy on multimodal input is around 80.8%. In the experiments, the system detected inconsistent input coming from different sensory modules in all cases and could generate useful feedback for the user from this information.
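The abstract describes the cross-modal consistency check only at a high level. The following minimal Python sketch illustrates the general idea of fusing hypotheses from two sensory modules and flagging disagreement as feedback for the user; all names here (Hypothesis, fuse, the confidence fields) are illustrative assumptions, not the paper's actual neuro-inspired implementation.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str         # e.g. "red cup on the left"
    confidence: float  # module confidence score in [0, 1]

def fuse(vision: Hypothesis, speech: Hypothesis) -> tuple[str, str]:
    """Return (decision, feedback) for one interaction step."""
    if vision.label == speech.label:
        # Modules agree: accept the shared hypothesis.
        return vision.label, "ok"
    # Modules disagree: the inconsistency itself is useful feedback,
    # telling the user which modality to repeat or rephrase.
    weaker = "vision" if vision.confidence < speech.confidence else "speech"
    return "", f"Inconsistent input: please repeat ({weaker} module is uncertain)."

decision, feedback = fuse(Hypothesis("red cup", 0.9),
                          Hypothesis("blue ball", 0.7))
print(decision or feedback)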
Year
2016
DOI
10.1109/ROMAN.2016.7745090
Venue
2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)
Keywords
natural language feedback, neuro-inspired integrated multimodal robotic architecture, multimodal human robot interaction architecture, sensory inputs, user feedback, vision, speech, inference, system environment, Nao robot, object locations, sensory modules
Field
Situated, Computer vision, Architecture, Inference, Computer science, Simulation, Knowledge-based systems, Natural language, Artificial intelligence, Artificial neural network, Robot, Human–robot interaction
DocType
Conference
ISSN
1944-9445
ISBN
978-1-5090-3930-2
Citations
6
PageRank
0.55
References
5
Authors
5
Name                Order   Citations   PageRank
Johannes Twiefel    1       12          4.48
Xavier Hinaut       2       36          7.93
Marcelo Borghetti   3       6           0.55
Erik Strahl         4       31          5.23
Stefan Wermter      5       1100        151.62