Abstract |
---|
We consider a decision-making problem where the environment varies both in space and time. Such problems arise naturally, for example, in the navigation of an underwater robot amid ocean currents or of an aerial vehicle in wind. To model such spatiotemporal variation, we extend the standard Markov Decision Process (MDP) to a new framework called the Time-Varying Markov Decision Process (TVMDP). The TVMDP has a time-varying state transition model and transforms the standard MDP, which considers only *immediate* and *static* uncertainty descriptions of state transitions, into a framework that is able to adapt to future time-varying uncertainty over some horizon. We show how to solve a TVMDP by redesigning the MDP value propagation mechanism to incorporate the introduced dynamics along the temporal dimension. We validate our framework in a marine robotics navigation setting using real spatiotemporal ocean data and show that it outperforms prior efforts that accommodate time explicitly by including it in the state. |
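The abstract describes redesigning value propagation to sweep along the temporal dimension, since the transition model changes with time. A minimal sketch of what such a time-indexed backward induction could look like (the function name, array shapes, and toy problem below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def tv_value_iteration(P_t, R, gamma=0.95):
    """Finite-horizon backward induction with a time-indexed transition model.

    P_t: array of shape (T, A, S, S); P_t[t, a, s, s2] is the probability
         of moving from state s to s2 under action a at time step t.
    R:   array of shape (S,) with per-state rewards.
    Returns V of shape (T+1, S) and a greedy policy of shape (T, S).
    """
    T, A, S, _ = P_t.shape
    V = np.zeros((T + 1, S))          # terminal values V[T] = 0
    pi = np.zeros((T, S), dtype=int)
    for t in range(T - 1, -1, -1):    # sweep backward in time
        # Q[a, s] = R[s] + gamma * sum_s2 P_t[t, a, s, s2] * V[t+1, s2]
        Q = R[None, :] + gamma * P_t[t] @ V[t + 1]
        pi[t] = Q.argmax(axis=0)
        V[t] = Q.max(axis=0)
    return V, pi

# Toy problem: 5 time steps, 2 actions, 3 states, random time-varying
# transitions (a stand-in for, e.g., shifting ocean currents).
rng = np.random.default_rng(0)
P_t = rng.dirichlet(np.ones(3), size=(5, 2, 3))   # shape (5, 2, 3, 3)
R = np.array([0.0, 0.0, 1.0])                     # reward for reaching state 2
V, pi = tv_value_iteration(P_t, R)
```

The key difference from stationary value iteration is that `P_t[t]` is looked up per time step rather than reused, so the computed policy is itself time-dependent.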
Year | Venue | Field |
---|---|---|
2016 | arXiv: Robotics | Simulation, Partially observable Markov decision process, Horizon, Spacetime, Markov decision process, Uncertain data, Artificial intelligence, Underwater robot, Engineering, Robotics |

DocType | Volume | Citations |
---|---|---|
Journal | abs/1605.01018 | 0 |

PageRank | References | Authors |
---|---|---|
0.34 | 24 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Lantao Liu | 1 | 157 | 16.49 |
Gaurav S. Sukhatme | 2 | 5469 | 548.13 |