Title
Wearable multi-modal interface for human multi-robot interaction
Abstract
A complete prototype for multi-modal interaction between humans and multi-robot systems is described, with an application focus on search and rescue missions. On the human side, speech and arm and hand gestures are combined to select and localize robots and to communicate task requests and spatial information to one or more robots in the field. On the robot side, LEDs and vocal messages provide feedback to the human. The robots also employ coordinated autonomy to implement group behaviors for mixed-initiative interaction. The system has been tested with different robotic platforms over a number of useful interaction patterns.
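The paper itself provides no code; the following is a minimal sketch in Python of the kind of multi-modal fusion the abstract describes: pairing a spoken task keyword with a roughly simultaneous pointing gesture to select one robot and issue it a task, with a placeholder for the LED/vocal feedback channel. All names here (SpeechEvent, PointingEvent, fuse_command, the thresholds) are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: hypothetical types and thresholds,
# not the system described in the paper.
from dataclasses import dataclass
import math

@dataclass
class SpeechEvent:
    """A recognized utterance reduced to a task keyword, e.g. 'come_here'."""
    verb: str
    timestamp: float

@dataclass
class PointingEvent:
    """Arm/hand pointing ray in the world frame (2-D origin and direction)."""
    origin: tuple
    direction: tuple
    timestamp: float

@dataclass
class Robot:
    name: str
    position: tuple

def angle_to_robot(p: PointingEvent, robot: Robot) -> float:
    """Angle between the pointing ray and the bearing towards the robot."""
    vx = robot.position[0] - p.origin[0]
    vy = robot.position[1] - p.origin[1]
    dx, dy = p.direction
    denom = math.hypot(vx, vy) * math.hypot(dx, dy)
    dot = (vx * dx + vy * dy) / denom
    return math.acos(max(-1.0, min(1.0, dot)))

def fuse_command(speech, pointing, robots,
                 max_skew=1.0, max_angle=math.radians(20)):
    """Fuse speech and gesture if they are close in time; select the robot
    best aligned with the pointing ray. Returns (robot, task) or None."""
    if abs(speech.timestamp - pointing.timestamp) > max_skew:
        return None
    best = min(robots, key=lambda r: angle_to_robot(pointing, r))
    if angle_to_robot(pointing, best) > max_angle:
        return None
    return best, speech.verb

if __name__ == "__main__":
    robots = [Robot("r1", (2.0, 0.5)), Robot("r2", (-1.0, 3.0))]
    speech = SpeechEvent(verb="come_here", timestamp=10.2)
    pointing = PointingEvent(origin=(0.0, 0.0),
                             direction=(1.0, 0.25), timestamp=10.0)
    match = fuse_command(speech, pointing, robots)
    if match:
        robot, task = match
        # Feedback channel: the selected robot would acknowledge the request,
        # e.g. by blinking its LEDs and playing a vocal message.
        print(f"{robot.name} acknowledges task '{task}' (LED blink + vocal message)")
```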
Year
2016
DOI
10.1109/SSRR.2016.7784305
Venue
2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)
Keywords
wearable multi-modal interface, human multi-robot interaction, search and rescue missions, coordinated autonomy
Field
Spatial analysis, Computer vision, Search and rescue, Wearable computer, Simulation, Computer science, Gesture, Human–computer interaction, Artificial intelligence, Robot, Robotics, Modal
DocType
Conference
ISSN
2374-3247
ISBN
978-1-5090-4350-7
Citations
2
PageRank
0.40
References
8
Authors
3
Name                      Order  Citations  PageRank
Boris Gromov              1      16         4.00
Luca Maria Gambardella    2      7926       726.40
Gianni A. Di Caro         3      721        51.79