Title
σMCL: Monte-Carlo Localization for Mobile Robots with Stereo Vision
Abstract
This paper presents Monte-Carlo localization (MCL) (1) with a mixture proposal distribution for mobile robots with stereo vision. We combine filtering with the Scale Invariant Feature Transform (SIFT) image descriptor to accurately and efficiently estimate the robot's location given a map of 3D point landmarks. Our approach completely decouples the motion model from the robot's mechanics and is general enough to solve for unconstrained 6-degree-of-freedom camera motion. We call our approach σMCL. Compared to other MCL approaches, σMCL is more accurate, without requiring that the robot move large distances and make many measurements. More importantly, our approach is not limited to robots constrained to planar motion. Its strength derives from its robust vision-based motion and observation models. σMCL is general, robust, efficient and accurate, utilizing the best of Bayesian filtering, invariant image features and multiple-view geometry techniques.

I. INTRODUCTION

Global localization is the problem of a robot estimating its position by considering its motion and observations with respect to a previously learned map. Bayesian filtering is a general method applicable to this problem that recursively estimates the robot's belief about its current pose. Monte-Carlo localization provides an efficient method for representing and updating this belief using a set of weighted samples, or particles. Previous MCL approaches have relied on the assumption that the robot traverses a planar world, and they use a motion model that is a function of the robot's odometric hardware. Because their sensor measurements are relatively uninformative, they suffer from the perceptual aliasing problem (2), requiring that the robot move for several meters and make many observations before its location can be established. They also demand a large number of particles in order to converge. MCL has been demonstrated to be accurate for laser-based robots, but it has failed to achieve similar results for vision-based ones.
In this paper, we present Monte-Carlo localization for robots with stereo vision. We call it σMCL, and it differs from other approaches in several ways. First, it is not limited to robots executing planar motion. We solve for unconstrained 3D motion (6 degrees of freedom) by decoupling the motion model from the robot's hardware, deriving an estimate of the robot's motion from visual measurements using stereo vision. Second, we use sparse maps of 3D natural landmarks based on the Scale Invariant Feature Transform (3), which is invariant to image translation, scaling and rotation, and partially invariant to illumination changes. The choice of SIFT reduces perceptual aliasing, enabling σMCL to converge quickly after the robot has traveled only a short distance. Finally, our method is more accurate than other constrained vision-based approaches and requires only a small number of particles.
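The predict/weight/resample cycle of Monte-Carlo localization described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's σMCL: it uses a 1D pose, a generic Gaussian motion model, and a hypothetical signed-offset landmark sensor in place of the paper's 6-DoF stereo-vision models; all names and noise parameters here are assumptions.

```python
import math
import random

def mcl_step(particles, control, measurement, landmark,
             motion_noise=0.1, sensor_noise=0.5):
    """One predict-weight-resample cycle of Monte-Carlo localization.

    particles: list of 1D poses (illustrative; the paper solves 6-DoF poses).
    control: commanded displacement along the line.
    measurement: observed signed offset from the robot to the landmark
    (a toy stand-in for the paper's SIFT-based observation model).
    """
    # Predict: propagate each particle through a noisy motion model.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]

    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = []
    for p in moved:
        expected = landmark - p
        w = math.exp(-0.5 * ((measurement - expected) / sensor_noise) ** 2)
        weights.append(w + 1e-12)  # guard against all-zero weights

    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]  # global uncertainty
landmark, true_pose = 8.0, 2.0
for _ in range(20):
    true_pose += 0.1                 # robot drives forward
    z = landmark - true_pose         # noiseless measurement, for brevity
    particles = mcl_step(particles, 0.1, z, landmark)
estimate = sum(particles) / len(particles)
```

After a handful of updates the initially uniform particle set collapses around the true pose; informative observations (here a direct offset, in the paper SIFT landmark matches) are what allow convergence without long traverses.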
Year: 2005
Venue: Robotics: Science and Systems
Keywords: stereo vision, scale invariant feature transform, degree of freedom, mobile robot, image features, monte carlo localization
Field: Computer vision, Stereo cameras, Computer science, Stereopsis, Artificial intelligence, Sigma, Monte Carlo localization, Mobile robot, Machine learning, Computer stereo vision
DocType: Conference
Citations: 21
PageRank: 1.17
References: 20
Authors: 2

Authors
Name              Order  Citations  PageRank
Pantelis Elinas   1      175        13.21
James J. Little   2      2430       269.59