| Abstract |
|---|
| Relational reinforcement learning is the application of reinforcement learning to structured state descriptions. Model-based methods learn a policy from a known model that comprises a description of the actions and their effects as well as the reward function. If the model is initially unknown, one can learn the model first and then apply the model-based method (indirect reinforcement learning). In this paper, we propose a model-learning method based on a combination of several SVMs using graph kernels. Indeterministic processes can be handled by combining the kernel approach with a clustering technique. We demonstrate the validity of the approach through a range of experiments on various Blocksworld scenarios. |
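The abstract's core idea, learning over structured (graph-like) states with an SVM via a graph kernel, can be illustrated with a minimal sketch. This is not the paper's method: the states, labels, the simple edge-intersection kernel, and the use of scikit-learn's `SVC` with a precomputed Gram matrix are all illustrative assumptions.

```python
# Hypothetical sketch, NOT the paper's implementation: Blocksworld-like
# states encoded as sets of labeled "on(x, y)" edges, compared with a
# simple intersection kernel, and classified by an SVM over a precomputed
# Gram matrix (scikit-learn).
from sklearn.svm import SVC

# Each state is a set of on(x, y) relations.
states = [
    frozenset({("a", "b"), ("b", "table")}),      # a on b, b on table
    frozenset({("b", "a"), ("a", "table")}),      # b on a, a on table
    frozenset({("a", "table"), ("b", "table")}),  # both blocks on table
    frozenset({("a", "b"), ("b", "table")}),      # duplicate of state 0
]
# Illustrative target, e.g. whether some action effect applies in the state.
labels = [1, 0, 0, 1]

def edge_kernel(g1, g2):
    """Intersection kernel: number of shared labeled edges.

    This equals an inner product of binary edge-indicator vectors,
    so it is a valid (positive semidefinite) kernel.
    """
    return len(g1 & g2)

def gram(graphs_a, graphs_b):
    """Gram matrix of pairwise kernel values."""
    return [[edge_kernel(a, b) for b in graphs_b] for a in graphs_a]

clf = SVC(kernel="precomputed")
clf.fit(gram(states, states), labels)

# Predict for a query state: one kernel row against the training states.
query = frozenset({("a", "b"), ("b", "table")})
pred = clf.predict(gram([query], states))
```

Using a precomputed Gram matrix keeps the kernel choice separate from the learner, so a richer graph kernel (e.g. a walk-based one) could be swapped in without touching the SVM code.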
| Year | DOI | Venue |
|---|---|---|
| 2007 | 10.1007/978-3-540-76631-5_39 | MICAI |
| Keywords | Field | DocType |
|---|---|---|
| indirect reinforcement learning, graph kernel, structured state description, relational MDPs, clustering technique, indeterministic process, relational reinforcement learning, known model, reward function, kernel approach, model-based method, reinforcement learning | Temporal difference learning, Instance-based learning, Semi-supervised learning, Pattern recognition, Active learning (machine learning), Computer science, Statistical relational learning, Unsupervised learning, Artificial intelligence, Machine learning, Reinforcement learning, Learning classifier system | Conference |
| Volume | ISSN | ISBN |
|---|---|---|
| 4827 | 0302-9743 | 3-540-76630-8 |
| Citations | PageRank | References |
|---|---|---|
| 5 | 0.47 | 18 |
| Authors |
|---|
| 2 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Florian Halbritter | 1 | 9 | 0.95 |
| Peter Geibel | 2 | 286 | 26.62 |