Abstract |
---|
Research in assistive systems for travelers who are blind or low vision (B/LV) has largely focused on basic map information. We present NavCue, an intelligent system module that provides rich, multi-sensory, context-based information through speech guidance and physical robot gestures. This approach is motivated by our previous user studies with people who are B/LV. This rich information should enhance users' location awareness and confidence when traveling through unfamiliar locations. |
Year | Venue | Keywords
---|---|---
2016 | HRI | assistive robots, context immersive navigation, human-robot interaction, blind and low vision

Field | DocType | ISSN
---|---|---
Computer science, Gesture, Simulation, Navigation assistance, Feature extraction, Human–computer interaction, Immersion (virtual reality), Mobile robot navigation, Robot, Location awareness, Human–robot interaction | Conference | 2167-2121

ISBN | Citations | PageRank
---|---|---
978-1-4673-8370-7 | 0 | 0.34

References | Authors
---|---
2 | 5
Name | Order | Citations | PageRank
---|---|---|---
Kangwei Chen | 1 | 0 | 0.34 |
Victoria Plaza-Leiva | 2 | 0 | 0.34 |
Byung-Cheol Min | 3 | 99 | 20.16 |
Aaron Steinfeld | 4 | 486 | 46.01 |
Mary Bernardine Dias | 5 | 0 | 0.34 |