Abstract |
---|
The realization of mobile robots that can operate autonomously in real environments has become increasingly important. Dense three-dimensional (3D) maps created with 3D depth sensors, such as light detection and ranging (LiDAR), are popular in research on autonomous mobile robots. However, this approach has a few disadvantages: the high price of 3D sensing devices and the limited robustness of localization in practical scenarios with many movable obstacles. To address these problems, this paper proposes a vision-based navigation scheme that enables autonomous movement in indoor scenes using only a webcam as an external sensor. Experimental results obtained in a university building demonstrate that a robot can move around a floor autonomously. |
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/SITIS.2019.00015 | 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) |
Keywords | DocType | ISBN
---|---|---|
Visual navigation, semantic segmentation, obstacle avoidance, road following | Conference | 978-1-7281-5687-3
Citations | PageRank | References
---|---|---|
0 | 0.34 | 0
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Miho Adachi | 1 | 0 | 0.34 |
Sara Shatari | 2 | 0 | 0.34 |
Ryusuke Miyamoto | 3 | 16 | 10.14 |