Title
iSocioBot: A Multimodal Interactive Social Robot
Abstract
We present one way of constructing a social robot that can interact with humans using multiple modalities. The robotic system directs attention towards the dominant speaker using sound source localization and face detection; it identifies persons using face recognition and speaker identification; and it communicates and engages in dialog with humans using speech recognition, speech synthesis, and different facial expressions. The software is built upon the open-source Robot Operating System (ROS) framework and is made publicly available. Furthermore, the electrical parts (sensors, laptop, base platform, etc.) are standard components, allowing the system to be replicated. The design of the robot is unique, and we justify why this design is suitable for our robot and its intended use. By making the software, hardware, and design accessible to everyone, we make research in social robotics available to a broader audience. To evaluate the properties and appearance of the robot, we invited users to interact with it in pairs (active interaction partner/observer) and collected their responses via an extended version of the Godspeed Questionnaire. Results suggest an overall positive impression of the robot and the interaction experience, as well as significant differences in responses based on type of interaction and gender.
Year: 2018
DOI: https://doi.org/10.1007/s12369-017-0426-7
Venue: I. J. Social Robotics
Keywords: Social robot, Human robot interaction, Speech processing, Image processing
Field: Robot learning, Social robot, Speech synthesis, Simulation, Personal robot, Psychology, Face detection, Mobile robot navigation, Robot, Human–robot interaction
DocType: Journal
Volume: 10
Issue: 1
ISSN: 1875-4791
Citations: 0
PageRank: 0.34
References: 18
Authors: 7