Title
Vision-based global localization for mobile robots with hybrid maps of objects and spatial layouts
Abstract
This paper presents a novel vision-based global localization method that uses hybrid maps of objects and spatial layouts. We model indoor environments with a stereo camera using the following visual cues: local invariant features for object recognition and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes; this data is similar to that from a 2D laser range finder. This allows us to build a topological node composed of a horizontal depth map and an object location map. The horizontal depth map describes the explicit spatial layout of each local space and provides metric information for computing the spatial relationships between adjacent spaces, while the object location map contains the pose information of the objects found in each local space and the visual features for object recognition. Based on this map representation, we propose a coarse-to-fine strategy for global localization: a coarse pose is estimated by means of object recognition and SVD-based point-cloud fitting, and is then refined by stochastic scan matching. Experimental results show that our approach provides both an effective vision-based map representation and an effective global localization method.
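The coarse-pose step described in the abstract rests on SVD-based point-cloud fitting, i.e. rigid alignment of corresponding points (the classic Kabsch approach). As an illustrative sketch only, not the paper's implementation, here is a minimal 2D version in Python; the function name and example values are our own:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    over corresponding point pairs, via SVD (Kabsch algorithm).
    src, dst: (N, 2) or (N, 3) arrays of matched points."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    D = np.diag([1.0] * (len(src_c) - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known 2D pose (heading 30 degrees, offset (1.0, -0.5))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -0.5])
pts = np.random.default_rng(0).uniform(-2.0, 2.0, size=(20, 2))
R_est, t_est = fit_rigid_transform(pts, pts @ R_true.T + t_true)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

With noise-free correspondences, as here, the pose is recovered exactly; in the paper's setting the correspondences come from recognized objects, and the resulting coarse pose is refined by scan matching against the horizontal depth map.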
Year
2009
DOI
10.1016/j.ins.2009.06.030
Venue
Inf. Sci.
Keywords
depth information, object recognition, map representation, horizontal depth map, global localization, effective vision-based map representation, object location map, mobile robot, explicit spatial layout, vision-based global localization, hybrid map, local space
DocType
Journal
Volume
179
Issue
10
ISSN
0020-0255
Citations
24
PageRank
0.61
References
17
Authors
4
Name            Order  Citations  PageRank
Soon-Yong Park  1      175        24.50
SooHwan Kim     2      60         8.05
Mignon Park     3      759        70.43
Sung-Kee Park   4      298        23.99