Title
Improving State-Action Space Exploration In Reinforcement Learning Using Geometric Properties
Abstract
Learning a model, or learning a policy that optimizes some objective function, relies on data sets that describe the behavior of the system. When such data sets are unavailable or insufficient, additional data may be generated through new experiments (if feasible) or through simulations (if an accurate model is available). In this paper, we describe a third alternative based on the availability of a qualitative model of the physical system. In particular, we show how the number of experiments used in reinforcement learning can be reduced by leveraging geometric properties of the system; these properties are independent of any particular instantiation of the qualitative model. As an illustrative example, we apply our approach to a cart-pole system.
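The abstract does not state which geometric property is exploited; purely as an illustration, the sketch below assumes it is the left-right reflection symmetry of cart-pole dynamics, under which negating the state and the applied force maps one valid transition to another. The function names, state layout, and transition format are hypothetical and not taken from the paper.

```python
import numpy as np

def mirror_transition(state, action, next_state, reward):
    """Reflect a cart-pole transition about the vertical axis.

    Assumed state layout: [x, x_dot, theta, theta_dot]; the action is assumed
    to be a signed force and the reward symmetric (standard balancing task).
    Under these assumptions, negating state, action, and next state yields a
    second valid transition without running a new experiment.
    """
    return -np.asarray(state), -np.asarray(action), -np.asarray(next_state), reward

def augment(transitions):
    """Double a set of observed (s, a, s', r) transitions via the symmetry."""
    augmented = list(transitions)
    for s, a, s_next, r in transitions:
        augmented.append(mirror_transition(s, a, s_next, r))
    return augmented
```

Each recorded transition then yields a mirrored counterpart at no experimental cost, which is one concrete way a geometric property of the system could cut the number of physical experiments needed for learning.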
Year
2017
Venue
2017 IEEE 56th Annual Conference on Decision and Control (CDC)
Field
Approximation algorithm, Data modeling, Mathematical optimization, Markov process, Computer science, Physical system, Space exploration, Artificial intelligence, Process control, Trajectory, Reinforcement learning
DocType
Conference
ISSN
0743-1546
Citations
0
PageRank
0.34
References
0
Authors
4
Name             Order  Citations  PageRank
Ion Matei        1      149        13.66
Raj Minhas       2      0          0.68
Johan De Kleer   3      2839       764.82
Anurag Ganguli   4      154        15.72