Title
Greedy metrics in orthogonal greedy learning.
Abstract
Orthogonal greedy learning (OGL) is a stepwise learning scheme that, in each greedy step, adds a new atom from a dictionary via the steepest gradient descent and builds the estimator by orthogonally projecting the target function onto the space spanned by the selected atoms. Here, "greed" means choosing a new atom according to the steepest gradient descent principle. OGL then avoids overfitting/underfitting by selecting an appropriate number of iterations. In this paper, we point out that overfitting/underfitting can also be avoided by redefining "greed" in OGL. To this end, we introduce a new greedy metric, called $\delta$-greedy thresholds, to refine "greed" and theoretically verify its feasibility. Furthermore, we reveal that such a greedy metric yields an adaptive termination rule while maintaining the prominent learning performance of OGL. Our results show that the steepest gradient descent is not the unique greedy metric for OGL, and that other, more suitable metrics may lessen the hassle of model selection in OGL.
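To make the scheme described above concrete, the following is a minimal Python/NumPy sketch of an OGL-style iteration, assuming a finite dictionary whose columns are unit-norm atoms. The stopping test comparing the best atom's correlation against delta times the residual norm is an illustrative form of a greedy threshold, not necessarily the paper's exact $\delta$-greedy rule; the function name ogl and all parameters are hypothetical.

import numpy as np

def ogl(D, y, delta=0.05, max_steps=50):
    """OGL-style greedy fit: D is an (n, p) dictionary of unit-norm
    column atoms, y the (n,) target samples. delta is an illustrative
    greedy threshold (an assumption, not the paper's exact rule)."""
    selected = []
    residual = y.copy()
    estimate = np.zeros_like(y)
    for _ in range(max_steps):
        # "Greed": pick the atom most correlated with the residual,
        # i.e. the steepest-descent direction for the squared loss.
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        if np.abs(corr[k]) < delta * np.linalg.norm(residual):
            break  # adaptive termination via the greedy threshold
        if k in selected:
            break  # numerically, no new direction improves the fit
        selected.append(k)
        # Orthogonally project y onto the span of the selected atoms.
        A = D[:, selected]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        estimate = A @ coef
        residual = y - estimate
    return estimate, selected

# Example: recover a sparse combination of random unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 200))
D /= np.linalg.norm(D, axis=0)
y = 2.0 * D[:, 3] - 1.5 * D[:, 17]
fit, atoms = ogl(D, y)

Note the design point the abstract emphasizes: with a plain iteration-count stopping rule, the loop length must be tuned by model selection, whereas a greedy threshold of this kind terminates adaptively once no atom correlates strongly enough with the residual.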
Year
2014
Venue
CoRR
Field
Gradient descent, Mathematical optimization, Artificial intelligence, Overfitting, Greedy randomized adaptive search procedure, Mathematics, Machine learning, Estimator
DocType
Journal
Volume
abs/1411.3553
Citations
2
PageRank
0.41
References
10
Authors
4
Name          Order  Citations  PageRank
Lin Xu        1      36         7.52
Shaobo Lin    2      184        20.02
Jinshan Zeng  3      236        18.82
Zongben Xu    4      3203       198.88