Abstract
---
We present TartanAir, a challenging dataset for robot navigation tasks and more. The data is collected in photo-realistic simulation environments under various lighting conditions, weather, and moving objects. By collecting data in simulation, we are able to obtain multi-modal sensor data and precise ground-truth labels, including stereo RGB images, depth images, segmentation, optical flow, camera poses, and LiDAR point clouds. We set up a large number of environments with various styles and scenes, covering challenging viewpoints and diverse motion patterns that are difficult to achieve with physical data collection platforms.
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/IROS45743.2020.9341801 | IROS |

DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 9

Name | Order | Citations | PageRank |
---|---|---|---|
Wenshan Wang | 1 | 24 | 9.00 |
Delong Zhu | 2 | 3 | 7.48 |
Xiangwei Wang | 3 | 1 | 1.05 |
Yaoyu Hu | 4 | 2 | 2.40 |
Yuheng Qiu | 5 | 1 | 1.72 |
Chen Wang | 6 | 141 | 46.56 |
Yafei Hu | 7 | 6 | 1.14 |
Ashish Kapoor | 8 | 1833 | 119.72 |
Sebastian Scherer | 9 | 522 | 57.76 |