Title
Learning to relate images.
Abstract
A fundamental operation in many vision tasks, including motion understanding, stereopsis, visual odometry, and invariant recognition, is establishing correspondences between images or between images and data from other modalities. Recently, there has been increasing interest in learning to infer correspondences from data using relational, spatiotemporal, and bilinear variants of deep learning methods. These methods use multiplicative interactions between pixels or between features to represent correlation patterns across multiple images. In this paper, we review recent work on relational feature learning and provide an analysis of the role that multiplicative interactions play in learning to encode relations. We also discuss how square-pooling and complex cell models can be viewed as a way to represent multiplicative interactions and, thereby, as a way to encode relations.
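The abstract's central idea can be illustrated with a minimal NumPy sketch. The filters below are random and purely illustrative (not learned as in the paper): each "gated" feature pools the product of one filter response on image x and one on image y, and the same products can be recovered from squared responses of sum and difference filters, which is the square-pooling (complex cell / energy model) view, since a·b = ((a+b)² − (a−b)²)/4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two flattened "images"; y is a shifted copy of x, so the
# relation between them is a translation.
x = rng.standard_normal(64)
y = np.roll(x, 1)

# Factored bilinear (gated) features: each hidden unit multiplies
# one filter response on x with one filter response on y.
U = rng.standard_normal((16, 64))  # illustrative random filters for x
V = rng.standard_normal((16, 64))  # illustrative random filters for y
gated = (U @ x) * (V @ y)          # multiplicative interactions, shape (16,)

# Square-pooling ("energy model") view: recover the same products
# from squares of sum and difference filter responses.
a, b = U @ x, V @ y
squared = ((a + b) ** 2 - (a - b) ** 2) / 4.0

print(np.allclose(gated, squared))  # the two views agree
```

The identity holds exactly for any filters, which is why square-pooling models can be read as implicitly computing multiplicative interactions between the two images.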
Year
2013
DOI
10.1109/TPAMI.2013.53
Venue
IEEE Trans. Pattern Anal. Mach. Intell.
Keywords
computer vision,correlation methods,image coding,inference mechanisms,learning (artificial intelligence),spatiotemporal phenomena,bilinear deep-learning method,complex cell model,correlation pattern representation,image features,image pixels,inference framework,multiplicative interaction representation,relation encoding,relational deep-learning method,relational feature learning,spatiotemporal deep-learning method,square-pooling model,vision tasks,Learning image relations,complex cells,energy models,mapping units,spatiotemporal features
Field
Visual odometry,Multiplicative function,Computer science,Stereopsis,Artificial intelligence,Deep learning,Computer vision,ENCODE,Pattern recognition,Invariant (mathematics),Feature learning,Machine learning,Bilinear interpolation
DocType
Journal
Volume
35
Issue
8
ISSN
1939-3539
Citations
44
PageRank
2.28
References
32
Authors
1
Name
Roland Memisevic
Order
1
Citations
1116
PageRank
65.87