Abstract |
---|
We study how representation learning can improve the efficiency of bandit problems. We consider the setting where we play T linear bandits with dimension d concurrently, and these T bandit tasks share a common k (≪ d)-dimensional linear representation. For the finite-action setting, we present a new algorithm which achieves Õ(T√(kN) + d√(kNT)) regret, where N is the number of rounds played for each bandit. When T is sufficiently large, our algorithm significantly outperforms the naive algorithm (playing the T bandits independently), which achieves Õ(T√(dN)) regret. We also provide an Ω(T√(kN) + d√(kNT)) regret lower bound, showing that our algorithm is minimax-optimal up to poly-logarithmic factors. Furthermore, we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound that demonstrates the benefit of representation learning in certain regimes. Finally, we present experiments on synthetic and real-world data to illustrate our theoretical findings and demonstrate the effectiveness of the proposed algorithms. |
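As a rough back-of-the-envelope comparison (not stated in the abstract itself), dividing both regret bounds by the number of tasks T shows when sharing the representation helps; the threshold T ≳ dk below is our own illustrative derivation from those bounds:

```latex
% Per-task regret, i.e. both bounds divided by T:
%   shared representation:  \sqrt{kN} + d\sqrt{kN/T}
%   naive (independent):    \sqrt{dN}
%
% The second term of the shared bound falls below the naive bound when
%   d\sqrt{kN/T} \le \sqrt{dN}
%   \iff d^2 k N / T \le d N
%   \iff T \ge dk.
% Since k \ll d already gives \sqrt{kN} \ll \sqrt{dN}, the shared bound
% improves on playing the bandits independently once T exceeds roughly dk.
```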
Year | Venue | DocType |
---|---|---|
2021 | ICLR | Conference |
Citations | PageRank | References |
0 | 0.34 | 0 |
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jiaqi Yang | 1 | 0 | 0.34 |
Wei Hu | 2 | 69 | 8.37 |
Jason D. Lee | 3 | 711 | 48.29
Simon Du | 4 | 210 | 29.79 |