Title
An egocentric perspective on active vision and visual object learning in toddlers.
Abstract
Toddlers quickly learn to recognize thousands of everyday objects despite the seemingly suboptimal training conditions of a visually cluttered world. One reason for this success may be that toddlers do not just passively perceive visual information but actively explore and manipulate the objects around them. This paper is based on the idea that active viewing and exploration create "clean" egocentric scenes that serve as high-quality training data for the visual system. We tested this idea by collecting first-person video of free toy play between toddler-parent pairs. We used the raw frames from these videos, weakly annotated with toy object labels, to train state-of-the-art machine learning models for object recognition (convolutional neural networks, or CNNs). We ran several training simulations, varying the quantity and quality of the training data. Our results show that scenes captured by parents and toddlers have different properties, and that toddler scenes lead to models that learn more robust visual representations of the toy objects in them.
Year: 2017
Venue: Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
Field: Training set, Data modeling, Active vision, Visualization, Convolutional neural network, Toddler, Computer science, Human–computer interaction, Cognitive neuroscience of visual object recognition
DocType: Conference
ISSN: 2161-9484
Citations: 2
PageRank: 0.37
References: 5
Authors: 4
Name             Order  Citations  PageRank
Sven Bambach     1      64         5.77
D. Crandall      2      2111       168.58
Linda B. Smith   3      2          0.71
Chen Yu          4      185        22.81