Title
Modeling of autonomous problem solving process by dynamic construction of task models in multiple tasks environment.
Abstract
Traditional reinforcement learning (RL) assumes a single, possibly complex, task to be solved. When an RL agent faces a task similar to one it has already learned, it must relearn that task from scratch because it does not reuse its previously learned results. This hinders quick action learning, which is fundamental to decision making in the real world. In this paper, we consider agents that solve a set of mutually similar tasks in a multiple-task environment, where various problems are encountered one after another, and we propose an action-learning technique that quickly solves similar tasks by reusing previously learned knowledge. In our method, a model-based RL agent uses a task model constructed by combining primitive local predictors that predict the task and environmental dynamics. To evaluate the proposed method, we performed a computer simulation using a simple ping-pong game with variations.
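The abstract gives only this high-level description and the paper publishes no source code, so the following Python sketch is purely illustrative: the names (LocalPredictor, TaskModel, greedy_action), the Gaussian gating weights, and the one-step lookahead planner are all assumptions, meant only to show what "combining primitive local predictors into a task model" for model-based RL could look like.

    # Illustrative sketch only; not the authors' implementation.
    import numpy as np

    class LocalPredictor:
        """A primitive predictor that is reliable only near its own centre in state space."""

        def __init__(self, center, dynamics_fn, radius=1.0):
            self.center = np.asarray(center, dtype=float)
            self.dynamics_fn = dynamics_fn  # local model: (state, action) -> next state
            self.radius = radius

        def responsibility(self, state):
            # Gaussian weight: how well this predictor covers the given state.
            d = np.linalg.norm(np.asarray(state, dtype=float) - self.center)
            return np.exp(-(d / self.radius) ** 2)

        def predict(self, state, action):
            return self.dynamics_fn(np.asarray(state, dtype=float), action)

    class TaskModel:
        """Task model assembled from reusable local predictors via soft gating."""

        def __init__(self, predictors):
            self.predictors = list(predictors)

        def predict(self, state, action):
            # Weighted combination of the local predictions.
            weights = np.array([p.responsibility(state) for p in self.predictors])
            weights = weights / (weights.sum() + 1e-12)
            preds = np.array([np.atleast_1d(p.predict(state, action))
                              for p in self.predictors])
            return (weights[:, None] * preds).sum(axis=0)

    def greedy_action(model, state, actions, reward_fn):
        """One-step model-based lookahead: choose the action whose predicted
        next state yields the highest reward under the task model."""
        scores = [reward_fn(model.predict(state, a)) for a in actions]
        return actions[int(np.argmax(scores))]

    if __name__ == "__main__":
        # Toy usage: two local predictors for a 1-D "move the paddle toward the ball" task.
        slow = LocalPredictor(center=[0.0], dynamics_fn=lambda s, a: s + 0.1 * a)
        fast = LocalPredictor(center=[5.0], dynamics_fn=lambda s, a: s + 0.5 * a)
        model = TaskModel([slow, fast])
        ball, paddle = 4.0, np.array([1.0])
        act = greedy_action(model, paddle, actions=[-1.0, 0.0, 1.0],
                            reward_fn=lambda s_next: -abs(float(s_next[0]) - ball))
        print("chosen action:", act)

Under this reading, reuse across similar tasks would come from keeping the already-learned local predictors and only reweighting or recombining them when a new task is encountered, rather than relearning the dynamics from scratch.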
Year
DOI
Venue
2006
10.1016/j.neunet.2006.05.037
Neural Networks
Keywords
Field
DocType
quick action learning, multiple tasks, single task, action learning, autonomous problem, dynamic construction, similar task, traditional reinforcement learning, model-based reinforcement learning, multiple tasks environment, task model, reuse of knowledge, RL agent, model-based RL, computer simulation, reinforcement learning
Multi-task learning, Task analysis, Reuse, Computer science, Decision support system, Action learning, Autonomous system (mathematics), Artificial intelligence, Artificial neural network, Machine learning, Reinforcement learning
Journal
Volume
Issue
ISSN
19
8
0893-6080
Citations 
PageRank 
References 
0
0.34
11
Authors
2
Name
Order
Citations
PageRank
Yu Ohigashi, 1, 1, 0.70
Takashi Omori, 2, 2, 1.41