Abstract

In this study, a novel intelligent agent model is proposed by introducing a dynamic emotion model into the conventional action selection policy of reinforcement learning. Compared with conventional Q-learning, the proposed method adds two emotional factors to the state-action value function: an "arousal value" factor, which affects the motivation for action, and a "pleasure value" factor, which influences the probability of action selection. The emotional factors are affected by other agents when multiple agents exist in the perception area. Computer simulations of pursuit problems with static and dynamic prey were performed, and all results showed the effectiveness of the proposed method, i.e., faster learning convergence was confirmed compared with the conventional Q-learning method.
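The abstract describes Q-learning whose action selection is modulated by two emotional factors. A minimal sketch of that idea is given below; the paper's exact update rules are not reproduced here, so the specific way the pleasure factor biases a softmax policy and the arousal factor scales the effective learning rate is an illustrative assumption, not the authors' formulation.

```python
import math

def softmax_policy(q_values, pleasure, tau=1.0):
    """Softmax action selection in which a per-action 'pleasure' bias
    shifts selection probabilities (illustrative placement of the factor)."""
    prefs = [(q + p) / tau for q, p in zip(q_values, pleasure)]
    m = max(prefs)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def q_update(q, state, action, reward, next_q_values, arousal,
             alpha=0.1, gamma=0.9):
    """One Q-learning step; here 'arousal' scales the effective learning
    rate as a stand-in for motivation (an assumption, not the paper's rule)."""
    old = q.get((state, action), 0.0)
    target = reward + gamma * max(next_q_values)
    q[(state, action)] = old + alpha * arousal * (target - old)
```

A higher pleasure value for an action raises its selection probability, while a larger arousal value makes the agent update its value estimates more strongly, which is one plausible reading of "faster learning convergence."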
Year | DOI | Venue |
---|---|---|
2013 | 10.1007/978-3-642-39479-9_3 | ICIC (1) |
Keywords | Field | DocType
---|---|---
emotional factor, emotional intelligent agent, conventional q-learning method, cooperative goal exploration, conventional action selection policy, reinforcement learning, action selection, state-action value function, pleasure value, conventional q-learning, arousal value, q learning, intelligent agent | Convergence (routing), Intelligent agent, Computer science, Q-learning, Bellman equation, Artificial intelligence, Emotional intelligence, Action selection, Perception, Machine learning, Reinforcement learning | Conference
Volume | ISSN | Citations
---|---|---
7995 | 0302-9743 | 1
PageRank | References | Authors
---|---|---
0.36 | 6 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Takashi Kuremoto | 1 | 196 | 27.73 |
Tetsuya Tsurusaki | 2 | 4 | 0.86 |
Kunikazu Kobayashi | 3 | 173 | 21.96 |
Shingo Mabu | 4 | 493 | 77.00 |
Masanao Obayashi | 5 | 198 | 26.10 |