Title |
---|
Q-Surfing: Exploring a World Model by Significance Values in Reinforcement Learning Tasks |
Abstract |
---|
Reinforcement Learning addresses the problem of learning to select actions in unknown environments. Because Reinforcement Learning performs poorly in more complex, and thus more realistic, tasks with large state spaces and sparse reinforcement, much effort is devoted to speeding up learning and to finding structure in problem spaces [11, 12]. Models are introduced to improve learning by allowing the agent to plan on an internal world model. Directed exploration of the model is therefore an important factor for better learning results. In this paper we present an algorithm that explores the model by computing so-called Significance Values for each state. When these values are used for model planning, knowledge propagation is enhanced during early stages; during later stages, important states retain higher values and might therefore be useful for future decomposition of state spaces. Empirical results in a simple grid navigation task demonstrate this process. |
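The abstract does not spell out how Significance Values are computed or used for planning. The sketch below is one plausible reading, not the authors' implementation: a state's significance is taken to be the magnitude of its most recent temporal-difference error (an assumption), and planning backups on the learned model are drawn from the currently most significant states, in the spirit of prioritized sweeping. The grid task, hyperparameters, and all names are illustrative.

```python
import random

# Speculative sketch of significance-driven model planning: each state
# keeps a significance value (here: |TD error| of its latest backup, an
# assumption -- the abstract does not define the measure), and planning
# replays model transitions whose source states are most significant.

random.seed(0)

N = 5                                     # 5x5 grid world
GOAL = (N - 1, N - 1)                     # reward 1 only at the goal
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
ALPHA, GAMMA, EPSILON = 0.5, 0.95, 0.2

def step(s, a):
    """Deterministic dynamics: move, clamped to the grid."""
    ns = (min(max(s[0] + a[0], 0), N - 1),
          min(max(s[1] + a[1], 0), N - 1))
    return ns, (1.0 if ns == GOAL else 0.0)

Q = {(x, y): [0.0] * len(ACTIONS) for x in range(N) for y in range(N)}
model = {}                                # learned model: (s, a) -> (s', r)
sig = {s: 0.0 for s in Q}                 # Significance Value per state

def backup(s, a, r, ns):
    """One Q-learning backup; returns |TD error| as the significance."""
    target = r + GAMMA * max(Q[ns]) * (ns != GOAL)
    td = target - Q[s][a]
    Q[s][a] += ALPHA * td
    return abs(td)

for episode in range(200):
    s = (0, 0)
    while s != GOAL:
        if random.random() < EPSILON:     # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        ns, r = step(s, ACTIONS[a])
        model[(s, a)] = (ns, r)
        sig[s] = backup(s, a, r, ns)
        # Plan on the internal model, preferring significant states.
        for _ in range(10):
            cands = [sa for sa in model if sig[sa[0]] > 1e-6]
            if not cands:
                break
            ps, pa = max(cands, key=lambda sa: sig[sa[0]])
            pns, pr = model[(ps, pa)]
            sig[ps] = backup(ps, pa, pr, pns)
        s = ns

# After training, the greedy policy should walk straight to the goal.
final_state, steps = (0, 0), 0
while final_state != GOAL and steps < 4 * N:
    a = max(range(len(ACTIONS)), key=lambda i: Q[final_state][i])
    final_state, _ = step(final_state, ACTIONS[a])
    steps += 1
```

Under this reading, the planning loop is what speeds up early knowledge propagation: each real transition triggers several model backups seeded at high-significance states, so reward information spreads through the value table faster than one-step Q-learning alone would allow.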
Year | Venue | Keywords |
---|---|---|
2000 | FRONTIERS IN ARTIFICIAL INTELLIGENCE AND APPLICATIONS | internal model, reinforcement learning, state space
Field | DocType | Volume
---|---|---|
Robot learning, Temporal difference learning, Instance-based learning, Multi-task learning, Active learning (machine learning), Computer science, Q-learning, Unsupervised learning, Artificial intelligence, Machine learning, Reinforcement learning | Conference | 54
ISSN | Citations | PageRank
---|---|---|
0922-6389 | 1 | 0.35
References | Authors
---|---|
6 | 2
Name | Order | Citations | PageRank |
---|---|---|---|
Frank Kirchner | 1 | 64 | 9.48 |
Corinna Richter | 2 | 1 | 0.69 |