Title
Optimal reverse prediction: a unified perspective on supervised, unsupervised and semi-supervised learning
Abstract
Training principles for unsupervised learning are often derived from motivations that appear independent of supervised learning. In this paper we present a simple unification of several supervised and unsupervised training principles through the concept of optimal reverse prediction: predict the inputs from the target labels, optimizing both over model parameters and any missing labels. In particular, we show how supervised least squares, principal components analysis, k-means clustering and normalized graph-cut can all be expressed as instances of the same training principle. Natural forms of semi-supervised regression and classification are then automatically derived, yielding semi-supervised algorithms that, surprisingly, are novel and refine the state of the art. These algorithms can all be combined with standard regularizers and made non-linear via kernels.
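The reverse-prediction principle described in the abstract can be illustrated with a small sketch: minimize ||X - Z U||^2 over the reverse model U and any missing rows of the label matrix Z, where the missing rows are constrained to be cluster indicators (the k-means instance of the principle). This is a minimal illustration under those assumptions, not code from the paper; the function names, variable names, and the simple alternating scheme are illustrative choices.

```python
import numpy as np

def reverse_loss(X, Z):
    """Reverse-prediction loss min_U ||X - Z U||_F^2 for fixed labels Z (illustrative helper)."""
    U, *_ = np.linalg.lstsq(Z, X, rcond=None)
    return np.sum((X - Z @ U) ** 2), U

def semi_supervised_reverse_kmeans(X, Z_lab, k, iters=50, seed=0):
    """Hypothetical sketch: minimize ||X - Z U||^2 jointly over the reverse model U
    and the missing rows of Z, which are constrained to be one-hot cluster indicators.
    X: (n, d) inputs; Z_lab: (m, k) known one-hot labels for the first m rows of X."""
    rng = np.random.default_rng(seed)
    n, m = X.shape[0], Z_lab.shape[0]
    # Start the unlabeled rows with random one-hot assignments.
    Z = np.vstack([Z_lab, np.eye(k)[rng.integers(k, size=n - m)]])
    for _ in range(iters):
        # Optimal U for the current labels; its rows play the role of centroids.
        U, *_ = np.linalg.lstsq(Z, X, rcond=None)
        # Re-optimize the missing labels: assign each unlabeled point to the
        # row of U that minimizes its reconstruction error.
        dists = ((X[m:, None, :] - U[None, :, :]) ** 2).sum(axis=-1)
        Z[m:] = np.eye(k)[dists.argmin(axis=1)]
    return Z, U
```

With all rows labeled the loop leaves Z fixed and the procedure reduces to plain reverse least squares; with no labels it is ordinary k-means, which is the kind of reduction between supervised and unsupervised cases that the abstract describes.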
Year
2009
DOI
10.1145/1553374.1553519
Venue
ICML
Keywords
missing label, semi-supervised regression, unsupervised training principle, unified perspective, training principle, model parameter, semi-supervised learning, normalized graph-cut, natural form, optimal reverse prediction, k-means clustering, unsupervised learning, supervised learning, dimensionality reduction, least square, graph cut, principal component analysis
Field
Least squares, Competitive learning, Stability (learning theory), Semi-supervised learning, Pattern recognition, Computer science, Supervised learning, Unsupervised learning, Artificial intelligence, Cluster analysis, Machine learning, Principal component analysis
DocType
Conference
Citations
8
PageRank
0.47
References
14
Authors
3
Name, Order, Citations, PageRank
Linli Xu, 1, 790, 42.51
Martha White, 2, 198, 27.75
Dale Schuurmans, 3, 27603, 17.49