Abstract |
---|
In a deterministic world, a planning agent can be certain of the consequences of its planned sequence of actions. Not so, however, in dynamic, stochastic domains where Markov decision processes are commonly used. Unfortunately these suffer from the 'curse of dimensionality': if the state space is a Cartesian product of many small sets ('dimensions'), planning is exponential in the number of those dimensions. Our new technique exploits the intuitive strategy of selectively ignoring various dimensions in different parts of the state space. The resulting non-uniformity has strong implications, since the approximation is no longer Markovian, requiring the use of a modified planner. We also use a spatial and temporal proximity measure, which responds to continued planning as well as movement of the agent through the state space, to dynamically adapt the abstraction as planning progresses. We present qualitative and quantitative results across a range of experimental domains showing that an agent exploiting this novel approximation method successfully finds solutions to the planning problem using much less than the full state space. We assess and analyse the features of domains which our method can exploit. |
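The core idea in the abstract — keeping full state detail near the agent while ignoring selected dimensions in distant regions — can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy construction under assumed names (`proximity`, `abstract_state`, `coarse_dims` are all hypothetical), using Manhattan distance as a stand-in proximity measure:

```python
from itertools import product

# Illustrative sketch only (not the authors' method): a non-uniform
# abstraction that keeps states exact near the agent but collapses the
# dimensions in `coarse_dims` to a wildcard far from it.

IGNORED = "*"  # wildcard marking an ignored dimension

def proximity(state, agent_state):
    """Manhattan distance between two states (assumes numeric dimensions)."""
    return sum(abs(a - b) for a, b in zip(state, agent_state))

def abstract_state(state, agent_state, radius, coarse_dims):
    """Map a concrete state to its abstract state.

    Within `radius` of the agent the state is kept exact; outside it,
    the dimensions listed in `coarse_dims` are replaced by a wildcard,
    so many distant concrete states share one abstract state.
    """
    if proximity(state, agent_state) <= radius:
        return state
    return tuple(IGNORED if i in coarse_dims else v
                 for i, v in enumerate(state))

# Toy state space: 3 dimensions of 5 values each = 125 concrete states.
dims = [range(5)] * 3
agent = (0, 0, 0)
abstract = {abstract_state(s, agent, radius=2, coarse_dims={1, 2})
            for s in product(*dims)}
print(len(abstract))  # prints 15: 10 exact nearby states + 5 coarse ones
```

Even in this toy example the abstract state space shrinks from 125 states to 15, while full resolution is preserved in the agent's neighbourhood; the non-uniformity (the same dimension is sometimes present, sometimes ignored) is exactly what breaks the Markov property and motivates the modified planner the abstract mentions.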
Year | DOI | Venue |
---|---|---|
2012 | 10.1613/jair.3414 | Journal of Artificial Intelligence Research
Keywords | DocType | Volume |
different part, markov decision process, cartesian product, deterministic world, full state space, planning agent, state space, approximate planning, proximity-based non-uniform abstraction, planning problem, novel approximation method, continued planning | Journal | 43
Issue | Pages | Citations |
1 | 477-522 | 1
arXiv | PageRank | References | Authors |
abs/1401.4592 | 0.34 | 29 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Jiří Baum | 1 | 4 | 0.75 |
Ann E. Nicholson | 2 | 692 | 88.01 |
Trevor I. Dix | 3 | 233 | 26.19 |