Abstract |
---|
The human brain does not remember images seen through the eyes merely from the influx of light onto the retina, but through the sensory-motor interaction between human movement (including eye movement) and the environment. We propose a system for autonomous mobile robot localization that draws on these behavioral aspects of the human brain, in particular the saccadic movement of the eyes. The hypothesis for the robot's location in the environment is built not from the image itself, but from a collection of saccadic sensory-motor pairs of image observations that form a mental representation of all perceived objects in the environment. The approach is implemented and tested on a dataset of images taken from a real environment. The localization results show that comparing saccadic sequences of descriptors outperforms naive descriptor matching of whole images. |
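The abstract's core idea, comparing sequences of saccadic sensory-motor pairs rather than raw image descriptors, can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the `(motor_shift, descriptor)` representation, the combined distance, and the function names are all assumptions made for the example.

```python
# Hedged sketch: each saccadic observation is a (motor_shift, descriptor)
# pair, where motor_shift is the eye-movement vector between fixations and
# descriptor is the visual feature extracted at the fixation point.

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def saccadic_sequence_distance(seq_a, seq_b):
    """Compare two saccadic sequences element-wise: both the motor
    (eye-movement) component and the visual descriptor must agree
    for a good match. Lower is more similar."""
    total = 0.0
    for (shift_a, desc_a), (shift_b, desc_b) in zip(seq_a, seq_b):
        total += euclidean(shift_a, shift_b) + euclidean(desc_a, desc_b)
    return total / max(min(len(seq_a), len(seq_b)), 1)

def localize(query_seq, location_hypotheses):
    """Pick the stored location whose saccadic sequence best matches
    the query sequence observed by the robot."""
    return min(
        location_hypotheses,
        key=lambda name: saccadic_sequence_distance(
            query_seq, location_hypotheses[name]
        ),
    )
```

In contrast to naive descriptor matching, which treats the descriptors as an unordered set, the sequence distance above also penalizes mismatched motor trajectories, which is the behavioral cue the paper argues improves localization.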
Year | DOI | Venue |
---|---|---|
2010 | 10.1007/978-3-642-19325-5_24 | ICT INNOVATIONS 2010 |
Keywords | Field | DocType |
---|---|---|
Active Perception, Mobile Robot Localization, Saccades, Behaviorism | Computer vision, Active perception, Eye movement, Artificial intelligence, Saccadic masking, Robot, Geography, Mobile robot, Mental representation | Conference |
Volume | ISSN | Citations |
---|---|---|
83 | 1865-0929 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 8 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Petre Lameski | 1 | 61 | 13.84 |
Andrea Kulakov | 2 | 98 | 14.79 |