Title: Visual Localization and Target Perception Based on Panoptic Segmentation

Abstract:
Visual localization is a core component of many computer vision and geospatial perception applications; however, changes over time and in the environment present challenges. Moreover, increasingly rich spatial data types and sensors create new conditions for visual localization. Building on a prior 3D model and a location sensor, this study proposes a visual localization method that uses semantic information. The method integrates panoptic segmentation with a matching network to refine the sensor's position and orientation and to complete target perception. First, panoptic segmentation and the matching network are used together to segment and match the 3D-model-rendered image against the real image, and the matching results are then optimized using the semantic results. Second, a semantic consistency score is introduced into the RANSAC process to estimate the optimal six-degree-of-freedom (6DOF) pose. Finally, the estimated 6DOF pose, the instance segmentation results, and depth information are used to locate the target. Experimental results show that the proposed method significantly outperforms state-of-the-art methods on the long-term visual localization benchmark dataset. The method also provides improved localization accuracy and accurately perceives targets in self-collected data.
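The semantic-consistency RANSAC step described in the abstract can be illustrated with a minimal toy sketch in Python. Everything here is an illustrative assumption, not the paper's implementation: the function names, the linear blend of geometric and semantic scores, and the 1D "pose" (a scalar shift) standing in for a 6DOF pose.

```python
import random

def semantic_ransac(corrs, labels_a, labels_b, fit, error,
                    thresh=1.0, iters=200, alpha=0.5, seed=0):
    """RANSAC variant (illustrative): each hypothesis is scored by a blend of
    its geometric inlier ratio and a semantic consistency term, here taken as
    the fraction of inlier correspondences whose panoptic labels agree."""
    rng = random.Random(seed)
    best_model, best_score = None, -1.0
    for _ in range(iters):
        model = fit(rng.sample(corrs, 3))           # minimal sample
        inliers = [i for i, c in enumerate(corrs) if error(model, c) < thresh]
        geometric = len(inliers) / len(corrs)
        semantic = (sum(labels_a[i] == labels_b[i] for i in inliers)
                    / max(len(inliers), 1))
        score = (1 - alpha) * geometric + alpha * semantic
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score

# Toy "pose": a scalar shift t with y = x + t; 10 true matches plus 2 outliers
# whose panoptic labels also disagree between the rendered and real image.
corrs = [(float(x), x + 5.0) for x in range(10)] + [(20.0, 0.0), (21.0, 100.0)]
labels_a = ["building"] * 12
labels_b = ["building"] * 10 + ["sky", "car"]
t, score = semantic_ransac(corrs, labels_a, labels_b,
                           fit=lambda s: sum(y - x for x, y in s) / len(s),
                           error=lambda m, c: abs(c[1] - c[0] - m))
print(round(t, 2))   # recovers the true shift of 5.0
```

Hypotheses supported by semantically consistent inliers are preferred over ones that fit geometry alone, which is the intuition the abstract attributes to the semantic consistency score.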
Year: 2022
DOI: 10.3390/rs14163983
Venue: REMOTE SENSING
Keywords: visual localization, target perception, panoptic segmentation, semantic consistency
DocType: Journal
Volume: 14
Issue: 16
ISSN: 2072-4292
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name             Order  Citations  PageRank
Kefeng Lv        1      0          0.34
Yongsheng Zhang  2      204        43.58
Ying Yu          3      0          0.34
Zhenchao Zhang   4      0          0.68
Lei Li           5      0          1.35