Title
Learning to calibrate: Reinforcement learning for guided calibration of visual–inertial rigs
Abstract
We present a new approach to assisted intrinsic and extrinsic calibration: an observability-aware visual–inertial calibration system that guides the user through the calibration procedure by suggesting easy-to-perform motions that render the calibration parameters observable. This is done by identifying which subset of the parameter space is rendered observable via a rank-revealing decomposition of the Fisher information matrix, modeling calibration as a Markov decision process, and using reinforcement learning to establish which discrete sequence of motions optimizes the regression of the desired parameters. The goal is to address an assumption common to most calibration solutions: that sufficiently informative motions are provided by the operator. We do not make use of a process model; instead, we leverage an experience-based approach that is broadly applicable to any platform in the context of simultaneous localization and mapping. This is a step toward long-term autonomy and “power-on-and-go” robotic systems, making repeatable and reliable calibration accessible to the non-expert operator.
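The core idea of the abstract — identifying the observable subset of the parameter space through a rank-revealing decomposition of the Fisher information matrix — can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the function name `observable_subset`, the toy Jacobian, and the additive-Gaussian-noise assumption (under which the Fisher information is J^T J / sigma^2) are all assumptions introduced here for illustration. The SVD serves as the rank-revealing decomposition.

```python
import numpy as np

def observable_subset(jacobian, noise_sigma=1.0, tol=1e-8):
    """Hypothetical helper: use an SVD of the Fisher information matrix
    as a rank-revealing decomposition to find which directions of the
    parameter space are observable from the collected measurements."""
    # Fisher information for measurements with additive Gaussian noise.
    fim = jacobian.T @ jacobian / noise_sigma**2
    U, s, _ = np.linalg.svd(fim)
    # Numerical rank: singular values above a relative tolerance.
    rank = int(np.sum(s > tol * s.max()))
    # Columns of U beyond `rank` span the unobservable subspace;
    # motions should be chosen to excite exactly these directions.
    return rank, U[:, rank:]

# Toy example: 3 calibration parameters, but the motions performed so
# far only constrain the first two (third Jacobian column is zero).
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 1.0, 0.0]])
rank, null_dirs = observable_subset(J)
# rank == 2; null_dirs spans the third parameter axis, flagging it
# as unobservable until more informative motions are suggested.
```

In the paper's setting, the decision of which motion to suggest next is then framed as a Markov decision process whose reward reflects how much a candidate motion increases the rank (or conditioning) of this matrix; the sketch above covers only the observability test itself.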
Year
2019
DOI
10.1177/0278364919844824
Venue
The International Journal of Robotics Research
Keywords
Calibration, reinforcement learning, observability, extrinsic, intrinsic
Field
Inertial frame of reference, Observability, Control engineering, Calibration, Mathematics, Reinforcement learning
DocType
Journal
Volume
38
Issue
12-13
ISSN
0278-3649
Citations
2
PageRank
0.37
References
0
Authors
2

Name                     Order  Citations  PageRank
Fernando Nobre           1      3          1.40
Christoffer R. Heckman   2      12         10.78