Title
On Greedy Maximization of Entropy
Abstract
Submodular function maximization is one of the key problems that arise in many machine learning tasks. Greedy selection algorithms are the method of choice for solving such problems, and prior theoretical work guarantees a (1 - 1/e) approximation ratio. However, it has been empirically observed that greedy selection provides almost optimal solutions in practice. The main goal of this paper is to explore and answer why greedy selection does significantly better than the theoretical guarantee of (1 - 1/e). Applications include, but are not limited to, sensor selection tasks which use both entropy and mutual information as maximization criteria. We give a theoretical justification for the nearly optimal approximation ratio via a detailed analysis of the curvature of these objective functions for Gaussian RBF kernels.
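The greedy procedure referred to in the abstract can be illustrated on the entropy objective it studies. The sketch below (not taken from the paper; function names such as greedy_max_entropy and the gamma and noise parameters are illustrative assumptions) greedily selects sensor locations to maximize the log-determinant entropy of a Gaussian with an RBF kernel.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian RBF kernel matrix for points X (n x d)."""
    sq = np.sum(X**2, axis=1)
    sq_dists = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq_dists)

def entropy(K, S, noise=1e-6):
    """Differential entropy (up to additive constants) of the Gaussian
    restricted to subset S: 0.5 * log det K_S, with a small ridge term
    added for numerical stability."""
    if not S:
        return 0.0
    K_S = K[np.ix_(S, S)] + noise * np.eye(len(S))
    return 0.5 * np.linalg.slogdet(K_S)[1]

def greedy_max_entropy(K, k):
    """Greedily pick k indices, each time adding the element with the
    largest marginal entropy gain (the standard (1 - 1/e) greedy rule)."""
    n = K.shape[0]
    S = []
    for _ in range(k):
        candidates = [i for i in range(n) if i not in S]
        gains = [entropy(K, S + [i]) - entropy(K, S) for i in candidates]
        S.append(candidates[int(np.argmax(gains))])
    return S

# Hypothetical usage: choose 5 sensor locations out of 100 random points.
rng = np.random.default_rng(0)
X = rng.random((100, 2))
K = rbf_kernel(X, gamma=10.0)
print(greedy_max_entropy(K, 5))
```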
Year
2015
Venue
International Conference on Machine Learning
Field
Mathematical optimization, Curvature, Computer science, Entropy maximization, Submodular set function, Greedy algorithm, Gaussian, Artificial intelligence, Mutual information, Greedy randomized adaptive search procedure, Machine learning, Maximization
DocType
Conference
Citations
0
PageRank
0.34
References
5
Authors
3
Name              Order  Citations  PageRank
Dravyansh Sharma  1      0          0.34
Ashish Kapoor     2      1833       119.72
Amit Deshpande    3      0          0.34