Title
SINGLE IMAGE 3D VEHICLE POSE ESTIMATION FOR AUGMENTED REALITY
Abstract
This paper introduces a novel method for 3D vehicle pose estimation, a critical component of augmented reality. The proposed method recovers the location of a specific object from a single image by combining reliable pre-trained semantic segmentation with improved single-image depth estimation. Our method exploits a novel pose estimation technique that generates new 2D images from the projections of rotated point clouds, from which the rotation of the specific object is predicted. Augmented objects can then be shifted, rotated, and scaled correctly based on the recovered orientation and localization, so that through accurate vehicle pose estimation, virtual vehicles can be augmented accurately in place of real vehicles. The effectiveness of our method is verified by comparison with other recent pose estimation methods on the challenging KITTI 3D benchmark, and further experiments on the Cityscapes dataset demonstrate its robustness. Without requiring ground-truth 3D vehicle pose labels for training, our model produces competitive and robust performance in 3D vehicle pose estimation.
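Note: The pipeline summarized in the abstract (mask a vehicle with semantic segmentation, back-project its depth into a point cloud, rotate the cloud, and re-project it to 2D images from which the rotation is predicted) can be illustrated geometrically. The sketch below is not the authors' implementation; the camera intrinsics, depth map, segmentation mask, and candidate rotation angles are placeholder assumptions, and only the back-projection, rotation, and projection steps are shown.

```python
import numpy as np

def backproject(depth, mask, K):
    """Lift masked depth pixels into a 3D point cloud in the camera frame.

    depth: HxW metric depth map; mask: HxW boolean vehicle mask;
    K: 3x3 camera intrinsic matrix.
    """
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)            # N x 3 points

def rotate_y(points, theta):
    """Rotate the point cloud by theta radians about the vertical (y) axis, around its centroid."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    centroid = points.mean(axis=0)
    return (points - centroid) @ R.T + centroid

def project(points, K, shape):
    """Project 3D points back into a binary 2D image of the given (H, W) shape."""
    img = np.zeros(shape, dtype=np.uint8)
    z = np.clip(points[:, 2], 1e-6, None)
    u = np.round(points[:, 0] * K[0, 0] / z + K[0, 2]).astype(int)
    v = np.round(points[:, 1] * K[1, 1] / z + K[1, 2]).astype(int)
    keep = (u >= 0) & (u < shape[1]) & (v >= 0) & (v < shape[0])
    img[v[keep], u[keep]] = 1
    return img

if __name__ == "__main__":
    H, W = 128, 256
    K = np.array([[200.0, 0.0, W / 2],            # placeholder intrinsics
                  [0.0, 200.0, H / 2],
                  [0.0, 0.0, 1.0]])
    depth = np.full((H, W), 10.0)                 # placeholder depth map (flat 10 m plane)
    mask = np.zeros((H, W), dtype=bool)
    mask[40:90, 80:180] = True                    # placeholder vehicle segmentation mask

    cloud = backproject(depth, mask, K)
    # Candidate rotations; in the paper's setting, a network scores such projected views.
    for deg in (0, 15, 30, 45):
        view = project(rotate_y(cloud, np.deg2rad(deg)), K, (H, W))
        print(f"rotation {deg:2d} deg -> {view.sum()} projected pixels")
```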
Year
2019
DOI
10.1109/GlobalSIP45357.2019.8969201
Venue
IEEE Global Conference on Signal and Information Processing
Keywords
Vehicle pose estimation, Augmented reality, Depth estimation, Convolutional neural network
Field
Computer vision, Convolutional neural network, Computer science, Segmentation, Robustness (computer science), Augmented reality, Pose, Ground truth, Artificial intelligence, Point cloud
DocType
Conference
ISSN
2376-4066
Citations
0
PageRank
0.34
References
0
Authors
5
Name, Order, Citations, PageRank
Yawen Lu, 1, 1, 6.12
Sophia Kourian, 2, 0, 0.34
Carl Salvaggio, 3, 5, 4.83
Chenliang Xu, 4, 434, 28.73
Guoyu Lu, 5, 16, 6.62