Abstract |
---|
Integrating generative models and discriminative models in a hybrid scheme has shown some success in recognition tasks. In such a scheme, a generative model is used to derive feature maps that output a set of fixed-length features, which a discriminative model then uses to perform classification. In this paper, we present a method, called posterior divergence, to derive feature maps from the log-likelihood function implied in the incremental expectation-maximization algorithm. These feature maps evaluate a sample with three complementary measures: (1) how much the sample affects the model; (2) how well the sample fits the model; and (3) how uncertain the fit is. We prove that the linear classification error rate using the outputs of the derived feature maps is at least as low as that of plug-in estimation. We present efficient algorithms for computing these feature maps for semi-supervised and supervised learning. We evaluate the proposed method on three typical applications, i.e., scene recognition, face/non-face classification, and protein sequence analysis, and demonstrate improvements over related methods. |
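The hybrid scheme the abstract describes can be illustrated with a minimal sketch. Note the assumptions: this uses a standard Gaussian mixture fit by batch EM (scikit-learn's `GaussianMixture`), not the paper's posterior-divergence feature maps or incremental EM, and the data, component count, and feature choice (per-sample log likelihood plus component responsibilities, loosely mirroring "how well the sample fits" and "how uncertain the fit is") are hypothetical stand-ins.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

# Toy two-class data (hypothetical stand-in for image or sequence features).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(2.0, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

# Step 1: fit a generative model on all samples. Labels are not needed here,
# which is what makes this style of hybrid scheme amenable to
# semi-supervised learning.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

# Step 2: map each sample to a fixed-length feature vector derived from the
# log-likelihood function: its log likelihood under the model (goodness of
# fit) and its component responsibilities (uncertainty of the fit).
feats = np.hstack([gmm.score_samples(X)[:, None], gmm.predict_proba(X)])

# Step 3: a linear discriminative classifier consumes the fixed-length
# features, as in the hybrid generative-discriminative pipeline.
clf = LogisticRegression().fit(feats, y)
print(feats.shape, clf.score(feats, y))
```

The point of the sketch is the division of labor: the generative model supplies a fixed-length representation of each sample, and the discriminative model only ever sees that representation.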
Year | DOI | Venue |
---|---|---|
2011 | 10.1109/CVPR.2011.5995584 | CVPR |
Keywords | Field | DocType
---|---|---|
posterior divergence,supervised learning,fixed length feature,expectation-maximisation algorithm,hybrid scheme,hybrid generative-discriminative classification,log likelihood function,learning (artificial intelligence),fixed length features,semi-supervised learning,feature maps,linear classification error rate,generative model,image reconstruction,related method,image classification,expectation maximization algorithm,discriminative models,plug-in estimation,recognition task,non-face classification,self-organising feature maps,generative models,discriminative model,feature map,estimation,random variables,likelihood function,feature extraction,mathematical model,semi supervised learning,learning artificial intelligence | Semi-supervised learning,Pattern recognition,Computer science,Feature (computer vision),Expectation–maximization algorithm,Feature extraction,Supervised learning,Artificial intelligence,Linear classifier,Contextual image classification,Discriminative model,Machine learning | Conference |
Volume | Issue | ISSN
---|---|---|
2011 | 1 | 1063-6919 |
ISBN | Citations | PageRank
---|---|---|
978-1-4577-0394-2 | 17 | 0.74 |
References | Authors
---|---|
13 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Xiong Li | 1 | 68 | 8.63
Tai Sing Lee | 2 | 794 | 88.73 |
Yuncai Liu | 3 | 1234 | 185.16 |