Title
Learning Invariances using the Marginal Likelihood
Abstract
Generalising well in supervised learning tasks relies on correctly extrapolating the training data to a large region of the input space. One way to achieve this is to constrain the predictions to be invariant to transformations of the input that are known to be irrelevant (e.g. translation). Commonly, this is done through data augmentation, where the training set is enlarged by applying hand-crafted transformations to the inputs. We argue that invariances should instead be incorporated in the model structure, and learned using the marginal likelihood, which correctly rewards the reduced complexity of invariant models. We demonstrate this for Gaussian process models, due to the ease with which their marginal likelihood can be estimated. Our main contribution is a variational inference scheme for Gaussian processes containing invariances described by a sampling procedure. We learn the sampling procedure by backpropagating through it to maximise the marginal likelihood.
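The abstract's core idea, a kernel made invariant by averaging a base kernel over sampled input transformations, and a marginal likelihood that can then be evaluated and optimised, can be sketched as a toy. The sketch below is illustrative only and is not the paper's implementation: the paper uses a variational inference scheme and backpropagates through the sampling procedure, whereas this sketch computes the exact GP log marginal likelihood for a small dataset. The sign-flip transformation, the RBF base kernel, and all function names are assumptions chosen for brevity.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential base kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def invariant_kernel(X1, X2, n_samples=8, seed=0):
    """Monte Carlo estimate of an invariant kernel,
    k_inv(x, x') = E_{t, t'}[k(t(x), t'(x'))],
    here with a toy transformation t(x) = s * x, s in {-1, +1}
    (approximate invariance to sign flips).  Using the SAME sampled
    transformations for both arguments makes the estimate the Gram
    matrix of an averaged feature map, so it stays symmetric and PSD.
    """
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n_samples)
    K = sum(rbf(s1 * X1, s2 * X2) for s1 in signs for s2 in signs)
    return K / n_samples ** 2

def log_marginal_likelihood(X, y, kernel_fn, noise=0.1):
    """Exact GP log marginal likelihood, log N(y | 0, K + sigma^2 I)."""
    K = kernel_fn(X, X) + noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return float(-0.5 * y @ alpha
                 - np.log(np.diag(L)).sum()
                 - 0.5 * len(X) * np.log(2 * np.pi))
```

In the paper's setting the transformation parameters would themselves be learned, by backpropagating through the sampling of `signs` (or a richer transformation family) to maximise the marginal likelihood; an autodiff framework would replace the fixed sampler here.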
Year
2018
Venue
Advances in Neural Information Processing Systems 31 (NIPS 2018)
Keywords
probability distribution, Gaussian process, data augmentation, training data, variational inference, model structure, marginal likelihood
DocType
Conference
Volume
31
ISSN
1049-5258
Citations
1
PageRank
0.36
References
5
Authors
4
Name                Order  Citations  PageRank
van der Wilk, Mark  1      65         9.35
Matthias Bauer      2      25         10.40
S. T. John          3      1          0.36
James Hensman       4      265        20.05