Title: Greedy Algorithms for Sparse Reinforcement Learning
Abstract: Feature selection and regularization are becoming increasingly prominent tools in the efforts of the reinforcement learning (RL) community to expand the reach and applicability of RL. One approach to the problem of feature selection is to impose a sparsity-inducing form of regularization on the learning method. Recent work on $L_1$ regularization has adapted techniques from the supervised learning literature for use with RL. Another approach that has received renewed attention in the supervised learning community is that of using a simple algorithm that greedily adds new features. Such algorithms have many of the good properties of the $L_1$ regularization methods, while also being extremely efficient and, in some cases, allowing theoretical guarantees on recovery of the true form of a sparse target function from sampled data. This paper considers variants of orthogonal matching pursuit (OMP) applied to reinforcement learning. The resulting algorithms are analyzed and compared experimentally with existing $L_1$ regularized approaches. We demonstrate that perhaps the most natural scenario in which one might hope to achieve sparse recovery fails; however, one variant, OMP-BRM, provides promising theoretical guarantees under certain assumptions on the feature dictionary. Another variant, OMP-TD, empirically outperforms prior methods both in approximation accuracy and efficiency on several benchmark problems.
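Since the abstract only names the algorithms, a minimal sketch of the greedy selection loop it describes may help. This is an illustrative reading of OMP-TD, not the paper's own code: the function name `omp_td`, its signature, and the stopping tolerance are assumptions. At each step the feature most correlated with the current TD (Bellman) residual is added to the active set, and the value-function weights are refit over that set via the LSTD fixed point.

```python
import numpy as np

def omp_td(Phi, Phi_next, rewards, gamma, k, tol=1e-6):
    """Hypothetical sketch of OMP-TD greedy feature selection.

    Phi      : (n, p) feature matrix at sampled states
    Phi_next : (n, p) feature matrix at successor states
    rewards  : (n,) sampled rewards
    gamma    : discount factor
    k        : maximum number of features to select
    """
    n, p = Phi.shape
    active = []                      # indices of selected features
    w = np.zeros(0)                  # weights over the active set
    for _ in range(k):
        # TD (Bellman) residual of the current value estimate.
        residual = rewards + gamma * (Phi_next[:, active] @ w) - Phi[:, active] @ w
        # Correlation of each feature with the residual; ignore already-active ones.
        corr = Phi.T @ residual / n
        corr[active] = 0.0
        j = int(np.argmax(np.abs(corr)))
        if abs(corr[j]) < tol:
            break                    # no remaining feature is usefully correlated
        active.append(j)
        # Refit weights: LSTD fixed-point solution over the active features.
        A = Phi[:, active].T @ (Phi[:, active] - gamma * Phi_next[:, active])
        b = Phi[:, active].T @ rewards
        w = np.linalg.solve(A, b)
    return active, w
```

On this reading, OMP-BRM would differ only in the refitting step, minimizing the Bellman residual directly instead of solving for the LSTD fixed point; the greedy selection loop is the same.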
Year: 2012
Venue: ICML
Field: Matching pursuit, Semi-supervised learning, Pattern recognition, Feature selection, Computer science, Supervised learning, Greedy algorithm, Regularization (mathematics), Artificial intelligence, Machine learning, Feature learning, Reinforcement learning
DocType: Conference
Citations: 21
PageRank: 0.89
References: 16
Authors: 2
Name                           Order  Citations  PageRank
Christopher Painter-Wakefield  1      170        7.96
Ronald Parr                    2      2428       186.85