Abstract |
---|
In robotic systems developed for urban search and rescue, information overload can cause operator disorientation and missed detection of victims. Most systems carry multiple sensor modalities on the robot, and the operator is presented with multiple displays of information that require frequent refocusing of attention. We propose a method in which multiple sensor inputs are layered into a single, integrated visual display. Such a display might eliminate missed detections and alleviate operator disorientation by allowing the operator to focus naturally on a single display, with visual cues added to aid in the use of sound and other non-visual sensor modalities. We have conducted initial trials with multiple sensors in the National Institute of Standards and Technology (NIST) reference test arena for mobile robots to validate the concept and to aid in the construction of appropriate display modes and operations. |
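The abstract's central idea is mapping a non-visual cue (such as the bearing of a detected sound) onto the operator's single video display. As a minimal sketch of that layering, the snippet below maps a sound-source bearing onto a marker position inside the video frame; the function names, the 90-degree horizontal field of view, and the text-grid "frame" are illustrative assumptions, not details from the paper.

```python
def audio_cue_position(frame_w, frame_h, bearing_deg, margin=10):
    """Map a sound-source bearing (degrees; 0 = straight ahead,
    positive = right) to an on-screen marker position, so the
    operator sees where a sound came from without leaving the
    video display. Assumes a 90-degree horizontal camera FOV."""
    half_fov = 45.0
    # Clamp bearings outside the FOV to the nearest frame edge.
    b = max(-half_fov, min(half_fov, bearing_deg))
    # Linear map: -45 deg -> left edge, +45 deg -> right edge.
    x = int((b + half_fov) / (2 * half_fov) * (frame_w - 1))
    y = frame_h - margin  # draw the cue near the bottom edge
    return x, y

def layer_cue(frame, pos, glyph="*"):
    """Overlay a one-character cue onto a text 'frame'
    (a list of lists of characters) at position pos."""
    x, y = pos
    frame[y][x] = glyph
    return frame

# Usage: a 40x10 character "frame" with a sound heard 30 degrees
# to the operator's right; the cue lands toward the right edge.
frame = [[" "] * 40 for _ in range(10)]
pos = audio_cue_position(40, 10, 30.0, margin=1)
layer_cue(frame, pos)
```

In a real system the same mapping would place a sprite or icon over the live camera stream rather than a character grid; the point is only that each non-visual modality reduces to a drawing operation on the one display the operator is already watching.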
Year | DOI | Venue |
---|---|---|
2004 | 10.1109/ICSMC.2004.1400784 | 2004 IEEE International Conference on Systems, Man and Cybernetics |
Keywords | Field | DocType |
---|---|---|
man-machine systems, mobile robots, robot vision, sensor fusion, telerobotics, human-robot interaction, integrated visual display, layered sensor modalities, multiple sensor modalities, urban search and rescue task | Urban search and rescue, Computer vision, Computer science, Sensor fusion, Artificial intelligence, Robot, User interface, Telerobotics, Mobile robot, Human–robot interaction, Robotics | Conference |
Volume | ISSN | ISBN |
---|---|---|
3 | 1062-922X | 0-7803-8566-7 |
Citations | PageRank | References |
---|---|---|
4 | 1.97 | 2 |
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Hestand, D. | 1 | 4 | 1.97 |
Holly A. Yanco | 2 | 174 | 18.48 |