Title
Improving 3D Object Detection for Pedestrians with Virtual Multi-View Synthesis Orientation Estimation
Abstract
Accurately estimating the orientation of pedestrians is an important and challenging task for autonomous driving, because this information is essential for tracking and predicting pedestrian behavior. This paper presents a flexible Virtual Multi-View Synthesis module that can be adopted by 3D object detection methods to improve orientation estimation. The module uses a multi-step process to acquire the fine-grained semantic information required for accurate orientation estimation. First, the scene's point cloud is densified using a structure-preserving depth completion algorithm, and each point is colorized with its corresponding RGB pixel. Next, virtual cameras are placed around each object in the densified point cloud to generate novel viewpoints that preserve the object's appearance. We show that this module greatly improves orientation estimation for the challenging pedestrian class on the KITTI benchmark. When used with the open-source 3D detector AVOD-FPN, our method outperforms all other published methods on the pedestrian Orientation, 3D, and Bird's Eye View benchmarks.
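The pipeline described in the abstract (densify the depth, colorize the points, render each object from virtual viewpoints) can be sketched in a few dozen lines. The following is a minimal illustrative sketch only, not the authors' implementation: it assumes a pinhole camera model with intrinsics K, a simple nearest-point splatting renderer in NumPy, and hypothetical function names and parameters (`radius`, `num_views`, `focal`).

```python
import numpy as np


def backproject_colored_points(depth, rgb, K):
    """Lift a dense depth map (H, W) and an RGB image (H, W, 3) into an
    (N, 6) array of [x, y, z, r, g, b] points using pinhole intrinsics K."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (us[valid] - K[0, 2]) * z / K[0, 0]
    y = (vs[valid] - K[1, 2]) * z / K[1, 1]
    return np.column_stack([x, y, z, rgb[valid]])


def virtual_views(points, center, radius=2.0, num_views=5,
                  image_size=(128, 128), focal=200.0):
    """Render the colored points from `num_views` virtual cameras placed on a
    horizontal arc of the given radius around `center`, each aimed at the object."""
    center = np.asarray(center, dtype=float)
    h, w = image_size
    views = []
    for angle in np.linspace(-np.pi / 4, np.pi / 4, num_views):
        # Camera position on an arc around the object (world y is assumed "up").
        cam_pos = center + radius * np.array([np.sin(angle), 0.0, -np.cos(angle)])
        # Look-at rotation: camera x right, y down, z forward toward the object.
        forward = center - cam_pos
        forward /= np.linalg.norm(forward)
        right = np.cross([0.0, 1.0, 0.0], forward)
        right /= np.linalg.norm(right)
        down = np.cross(right, forward)
        R = np.stack([right, down, forward])        # world -> camera rotation
        pts_cam = (points[:, :3] - cam_pos) @ R.T   # points in the camera frame
        keep = pts_cam[:, 2] > 0.1                  # drop points behind the camera
        pts_cam, colors = pts_cam[keep], points[keep, 3:]
        # Pinhole projection followed by nearest-point splatting into the image.
        u = (focal * pts_cam[:, 0] / pts_cam[:, 2] + w / 2).astype(int)
        v = (focal * pts_cam[:, 1] / pts_cam[:, 2] + h / 2).astype(int)
        img = np.zeros((h, w, 3), dtype=points.dtype)
        depth_buf = np.full((h, w), np.inf)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        for ui, vi, zi, ci in zip(u[inside], v[inside], pts_cam[inside, 2], colors[inside]):
            if zi < depth_buf[vi, ui]:              # keep the closest point per pixel
                depth_buf[vi, ui] = zi
                img[vi, ui] = ci
        views.append(img)
    return views
```

In the paper, renderings like these around each detection feed the orientation estimation stage; the depth completion, rendering, and network details of the actual method differ from this simplified sketch.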
Year
2019
DOI
10.1109/IROS40897.2019.8968242
Venue
2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Field
Computer vision, Object detection, Pedestrian, Computer science, Viewpoints, View synthesis, RGB color model, Artificial intelligence, Pixel, Point cloud, Detector
DocType
Conference
ISSN
2153-0858
Citations
0
PageRank
0.34
References
0
Authors
4
Name | Order | Citations | PageRank
Jason Ku | 1 | 81 | 3.89
Alex D. Pon | 2 | 11 | 1.14
Sean Walsh | 3 | 18 | 4.32
Steven Lake Waslander | 4 | 443 | 46.89