Title
Learning Disentangled Representations with Reference-Based Variational Autoencoders
Abstract
Learning disentangled representations from visual data, where different high-level generative factors are independently encoded, is important for many computer vision tasks. Solving this problem, however, typically requires explicitly labeling all the factors of interest in training images. To alleviate the annotation cost, we introduce a learning setting which we refer to as reference-based disentangling. Given a pool of unlabeled images, the goal is to learn a representation where a set of target factors is disentangled from the others. The only supervision comes from an auxiliary reference set containing images in which the factors of interest are constant. To address this problem, we propose reference-based variational autoencoders, a novel deep generative model designed to exploit the weak supervision provided by the reference set. By addressing tasks such as feature learning, conditional image generation, and attribute transfer, we validate the ability of the proposed model to learn disentangled representations from this minimal form of supervision.
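To make the setting concrete, below is a minimal sketch, assuming PyTorch, of how weak supervision from a reference set could be wired into a VAE-style model: the latent code is split into shared factors z and target factors e, and on reference images (where the factors of interest are constant) e is clamped to its prior mean. The RefVAE class, its dimensions, and this clamping scheme are illustrative assumptions made for this sketch; the paper's actual model and training objective differ.

```python
# Minimal sketch of a reference-based VAE setup, assuming PyTorch.
# The latent code is split into z (factors shared by all images) and
# e (target factors); on reference images, where the target factors
# are constant, e is replaced by the prior mean (zero). This is an
# illustrative reading of the setting, not the paper's exact objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RefVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, e_dim=8, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.to_z = nn.Linear(h_dim, 2 * z_dim)  # mean / log-variance of z
        self.to_e = nn.Linear(h_dim, 2 * e_dim)  # mean / log-variance of e
        self.dec = nn.Sequential(nn.Linear(z_dim + e_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    @staticmethod
    def reparam(stats):
        # Reparameterization trick: sample from N(mu, sigma^2).
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    @staticmethod
    def kl(mu, logvar):
        # KL divergence between N(mu, sigma^2) and the standard normal prior.
        return -0.5 * (1.0 + logvar - mu.pow(2) - logvar.exp()).sum(-1)

    def loss(self, x, is_ref):
        h = self.enc(x)
        z, z_mu, z_lv = self.reparam(self.to_z(h))
        e, e_mu, e_lv = self.reparam(self.to_e(h))
        # Reference images: target factors are constant, so decode from
        # the prior mean of e and skip its KL term.
        e = torch.where(is_ref.unsqueeze(-1), torch.zeros_like(e), e)
        rec = F.mse_loss(self.dec(torch.cat([z, e], dim=-1)), x,
                         reduction="none").sum(-1)
        kl_e = torch.where(is_ref, torch.zeros_like(rec), self.kl(e_mu, e_lv))
        return (rec + self.kl(z_mu, z_lv) + kl_e).mean()


model = RefVAE()
x = torch.rand(32, 784)        # mixed batch: unlabeled and reference images
is_ref = torch.rand(32) < 0.5  # True where the image is from the reference set
model.loss(x, is_ref).backward()
```

The clamping of e on reference images is only one simple way to encode the constraint that their target factors do not vary; the paper's training procedure for exploiting the reference set is more involved than this sketch.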
Year
2019
Venue
arXiv: Computer Vision and Pattern Recognition
Field
Image generation, Annotation, Computer science, Exploit, Artificial intelligence, Generative grammar, Machine learning, Feature learning, Generative model
DocType
Journal
Volume
abs/1901.08534
Citations
1
PageRank
0.35
References
26
Authors
4
Name | Order | Citations | PageRank
Adria Ruiz | 1 | 23 | 4.71
Oriol Martínez | 2 | 1 | 0.35
Xavier Binefa | 3 | 224 | 33.71
J. J. Verbeek | 4 | 3944 | 181.44