Title |
---|
Improving Offline Value-Function Approximations for POMDPs by Reducing Discount Factors |
Abstract |
---|
A common solution criterion for partially observable Markov decision processes (POMDPs) is to maximize the expected sum of exponentially discounted rewards, for which a variety of approximate methods have been proposed. Those that plan in the belief space typically provide tighter performance guarantees, but those that plan over the state space (e.g., QMDP and FIB) often require much less memory and computation. This paper presents an encouraging result showing that reducing the discount factor while planning in the state space can actually improve performance significantly when evaluated on the original problem. This phenomenon is confirmed by both a theoretical analysis and a series of empirical studies on benchmark problems. As predicted by the theory and confirmed empirically, the phenomenon is most prominent when the observation model is noisy or rewards are sparse. |
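The abstract's core idea can be illustrated with a minimal QMDP sketch: value-iterate Q(s, a) on the underlying fully observable MDP with a reduced discount factor, then act greedily on the belief. This is a hypothetical illustration only, not the authors' code; the toy 2-state, 2-action problem (`T`, `R`, the belief `b`) and the function names are invented for the example.

```python
import numpy as np

def qmdp_q_values(T, R, gamma, iters=500):
    """Value iteration for Q(s, a) on the fully observable MDP.

    T[s, a, s'] are transition probabilities, R[s, a] are rewards.
    """
    n_s, n_a = R.shape
    Q = np.zeros((n_s, n_a))
    for _ in range(iters):
        V = Q.max(axis=1)                            # V(s') = max_a Q(s', a)
        Q = R + gamma * np.einsum("sat,t->sa", T, V)  # Bellman backup
    return Q

def qmdp_action(belief, Q):
    """Greedy QMDP action for belief b: argmax_a sum_s b(s) Q(s, a)."""
    return int(np.argmax(belief @ Q))

# Toy 2-state, 2-action problem (numbers are illustrative only).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

Q_full    = qmdp_q_values(T, R, gamma=0.95)  # original discount factor
Q_reduced = qmdp_q_values(T, R, gamma=0.80)  # reduced factor, as studied
b = np.array([0.6, 0.4])                     # current belief over states
a = qmdp_action(b, Q_reduced)
```

The paper's observation is that, despite planning with the "wrong" (smaller) discount factor, the resulting belief-greedy policy can perform better on the original problem, especially under noisy observations or sparse rewards.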
Year | DOI | Venue |
---|---|---|
2018 | 10.1109/IROS.2018.8594418 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
Field | DocType | ISSN |
---|---|---|
Observability, Mathematical optimization, Markov process, Discounting, Computer science, Markov decision process, Bellman equation, Control engineering, Memory management, State space, Benchmark (computing) | Conference | 2153-0858 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yi-Chun Chen | 1 | 2 | 0.75 |
Mykel J. Kochenderfer | 2 | 423 | 68.51 |
Matthijs T.J. Spaan | 3 | 863 | 63.84 |