Title
Autonomous learning of active multi-scale binocular vision
Abstract
We present a method for autonomously learning representations of visual disparity between the images from the left and right eyes, as well as appropriate vergence movements to fixate objects with both eyes. A sparse coding model (perception) encodes sensory information using binocular basis functions, while a reinforcement learner (behavior) generates eye movements according to the sensed disparity. Perception and behavior develop in parallel by minimizing the same cost function: the reconstruction error of the stimulus under the generative model. To cope efficiently with multiple disparity ranges, sparse coding models are learned at multiple scales, encoding disparities at various resolutions. Similarly, vergence commands are defined on a logarithmic scale to allow both coarse and fine actions. We demonstrate the efficacy of the proposed method on the humanoid robot iCub and show that the model is fully self-calibrating, requiring no prior information about the camera parameters or the system dynamics.
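The abstract's core idea, a sparse coding model and a behavior module sharing the reconstruction error as their cost, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, the greedy matching-pursuit encoder, and the names (`encode`, `reconstruction_error`, `vergence_actions`) are all assumptions for the sketch. The logarithmically spaced action set mirrors the paper's coarse-to-fine vergence commands.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a binocular patch (left + right concatenated) and a dictionary.
PATCH_DIM, N_BASES, N_ACTIVE = 32, 64, 5

# Binocular basis functions (dictionary), columns normalized to unit length.
D = rng.standard_normal((PATCH_DIM, N_BASES))
D /= np.linalg.norm(D, axis=0)

def encode(x, dictionary, k):
    """Greedy sparse code: select the k best-matching bases (matching pursuit)."""
    residual, coeffs = x.copy(), np.zeros(dictionary.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(dictionary.T @ residual))   # best-matching basis
        a = dictionary[:, j] @ residual                   # its coefficient
        coeffs[j] += a
        residual -= a * dictionary[:, j]                  # peel it off
    return coeffs, residual

def reconstruction_error(x, dictionary, k):
    """Shared cost for perception and behavior: squared norm of the residual."""
    _, residual = encode(x, dictionary, k)
    return float(residual @ residual)

# Behavior side: logarithmically spaced vergence commands (coarse and fine steps).
vergence_actions = [0.0] + [s * 2.0**e for e in range(-3, 2) for s in (-1, 1)]

# The reinforcement learner would be rewarded with the negative shared cost.
x = rng.standard_normal(PATCH_DIM)
reward = -reconstruction_error(x, D, N_ACTIVE)
```

Because perfectly verged eyes yield easily encodable (low-disparity) input, maximizing this reward drives the system toward accurate fixation while the dictionary adapts to the resulting statistics, which is the self-calibrating property the abstract claims.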
Year
2013
DOI
10.1109/DevLrn.2013.6652541
Venue
Development and Learning and Epigenetic Robotics
Keywords
active vision, humanoid robots, image coding, learning (artificial intelligence), stereo image processing, visual perception, active multiscale binocular vision, autonomous learning, binocular basis functions, camera parameters, cost function, eye movement, generative model, humanoid robot, iCub, perception, reconstruction error, sensory information, sparse coding models, system dynamics, vergence commands, visual disparity
Field
Computer vision, Binocular vision, Active vision, iCub, Vergence, Computer science, Neural coding, Artificial intelligence, Visual perception, Generative model, Humanoid robot
DocType
Conference
Citations
9
PageRank
0.80
References
4
Authors
5
Name                      Order  Citations  PageRank
Luca Lonini               1      27         2.78
Yu Zhao                   2      27         2.92
Pramod Chandrashekhariah  3      10         2.53
Bertram E. Shi            4      396        56.91
Jochen Triesch            5      690        73.73