Abstract |
---|
In compressed sensing, a primary problem is to reconstruct a high-dimensional sparse signal from a small number of observations. In this work, we develop a new sparse signal recovery algorithm using reinforcement learning (RL) and Monte Carlo Tree Search (MCTS). As in orthogonal matching pursuit (OMP), our RL+MCTS algorithm selects the support of the signal sequentially. The key novelty is that the proposed algorithm learns how to choose the next support element, as opposed to following a pre-designed rule as in OMP. Empirical results demonstrate the superior performance of the proposed RL+MCTS algorithm over existing sparse signal recovery algorithms. |
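The abstract contrasts the learned support-selection policy against OMP's fixed greedy rule: at each step, OMP picks the column of the sensing matrix most correlated with the current residual, then re-fits by least squares. As a point of reference for that baseline (not the paper's RL+MCTS method), here is a minimal OMP sketch in NumPy; the function name and test setup are illustrative, not from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k support indices.

    A: (m, n) sensing matrix, y: (m,) observations, k: sparsity level.
    Returns the estimated sparse coefficient vector x_hat of length n.
    """
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pre-designed rule: pick the column most correlated with the residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = -np.inf  # exclude already-chosen indices
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat
```

The RL+MCTS approach replaces the fixed correlation rule in the loop above with a learned policy whose candidate choices are evaluated by tree search.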
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/ALLERTON.2019.8919947 | 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton) |

Keywords | Field | DocType |
---|---|---|
Compressed Sensing, Reinforcement Learning, Monte Carlo Tree Search, Basis Pursuit, Orthogonal Matching Pursuit | Compressed sensing, Sparse matrix, Reinforcement learning, Monte Carlo tree search, Mathematical optimization, Computer science, Algorithm | Conference |

ISSN | Citations | PageRank |
---|---|---|
2474-0195 | 0 | 0.34 |

References | Authors |
---|---|
0 | 3 |

Name | Order | Citations | PageRank |
---|---|---|---|
Sichen Zhong | 1 | 0 | 0.34 |
Yue Zhao | 2 | 186 | 33.54 |
Jianshu Chen | 3 | 883 | 52.94 |