Title
What Makes A Good Model Of Natural Images?
Abstract
Many low-level vision algorithms assume a prior probability over images, and there has been great interest in trying to learn this prior from examples. Since images are very non-Gaussian, high-dimensional, continuous signals, learning their distribution presents a tremendous computational challenge. Perhaps the most successful recent algorithm is the Fields of Experts (FOE) [20] model, which has shown impressive performance by modeling image statistics with a product of potentials defined on filter outputs. However, as in previous models of images based on filter outputs [30], calculating the probability of an image given the model requires evaluating an intractable partition function. This makes learning very slow (it requires Monte Carlo sampling at every step) and makes it virtually impossible to compare the likelihood of two different models. Given this computational difficulty, it is hard to say whether the non-intuitive features learned by such models represent a true property of natural images or an artifact of the approximations used during learning. In this paper we present (1) tractable lower and upper bounds on the partition function of models based on filter outputs and (2) efficient learning algorithms that do not require any sampling. Our results build on recent results in machine learning that deal with Gaussian potentials; we extend these results to non-Gaussian potentials and derive a novel basis rotation algorithm for approximating the maximum likelihood filters. Our results allow us to (1) rigorously compare the likelihood of different models and (2) calculate high-likelihood models of natural image statistics in a matter of minutes. Applying our results to previous models shows that the non-intuitive features are not an artifact of the learning process but rather capture robust properties of natural images.
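To make the model class in the abstract concrete, the sketch below (not code from the paper; the filters, the quadratic potential, and all numerical values are illustrative assumptions) writes down an FoE-style unnormalized log-density defined on filter outputs, and evaluates the log partition function in closed form for the Gaussian special case, which is the tractable case that the paper's bounds for non-Gaussian potentials build on.

```python
# Minimal sketch, assuming quadratic (Gaussian) potentials on filter outputs.
# For general non-Gaussian potentials the partition function Z is intractable;
# the paper derives tractable lower and upper bounds for that case.
import numpy as np

rng = np.random.default_rng(0)
d = 16                              # dimension of a flattened 4x4 image patch
k = 8                               # number of linear filters
W = rng.standard_normal((d, k))     # columns are filters w_i (illustrative)
lam = rng.uniform(0.5, 2.0, k)      # per-filter precision weights (illustrative)

def unnormalized_log_p(x, W, lam):
    """FoE-style energy: sum of potentials applied to filter responses w_i^T x."""
    r = W.T @ x
    return -0.5 * np.sum(lam * r ** 2)

def gaussian_log_partition(W, lam, eps=1e-6):
    """Closed-form log Z for the Gaussian case:
    Z = integral of exp(-0.5 x^T A x) dx with A = W diag(lam) W^T,
    so log Z = (d/2) log(2*pi) - (1/2) log det(A)."""
    d = W.shape[0]
    A = W @ np.diag(lam) @ W.T + eps * np.eye(d)   # small ridge keeps A invertible
    _, logdet = np.linalg.slogdet(A)
    return 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

x = rng.standard_normal(d)
log_p = unnormalized_log_p(x, W, lam) - gaussian_log_partition(W, lam)
print(f"log p(x) under the Gaussian special case: {log_p:.3f}")
```

Because log Z has this closed form only when the potentials are Gaussian, comparing the likelihood of two learned filter sets is straightforward here; for the non-Gaussian potentials used in practice, one would substitute the paper's bounds for the exact log partition term.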
Year
2007
DOI
10.1109/CVPR.2007.383092
Venue
2007 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-8
Keywords
upper bound, probability, statistics, statistical distributions, statistical analysis, monte carlo sampling, machine learning, distributed computing, image processing, monte carlo methods, partition function
Field
Computer vision, Monte Carlo method, Pattern recognition, Partition function (statistical mechanics), Computer science, Image processing, Maximum likelihood, Gaussian, Sampling (statistics), Artificial intelligence, Prior probability, Vision algorithms
DocType
Conference
Volume
2007
Issue
1
ISSN
1063-6919
Citations
84
PageRank
8.93
References
17
Authors
2
Name | Order | Citations | PageRank
Yair Weiss | 1 | 10240 | 834.60
William T. Freeman | 2 | 17382 | 1968.76