Title: CGAR: Critic Guided Action Redistribution in Reinforcement Learning
Abstract
Training a game-playing reinforcement learning agent requires a large number of interactions with the environment. Uninformed random exploration wastes time and resources, so alleviating such waste is essential. In this paper, under the setting of off-policy actor-critic algorithms, we demonstrate that the critic can bring expected discounted rewards greater than or at least equal to those of the actor. Thus, the Q value predicted by the critic is a better signal for redistributing the action originally sampled from the policy distribution predicted by the actor. This paper introduces the novel Critic Guided Action Redistribution (CGAR) algorithm and tests it on the OpenAI MuJoCo tasks. The experimental results demonstrate that our method improves sample efficiency and achieves state-of-the-art performance. Our code can be found at https://github.com/tairanhuang/CGAR.
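The abstract describes the core mechanism: actions are first sampled from the actor's policy, then redistributed according to the critic's Q values. Below is a minimal sketch of one plausible reading of that idea, assuming a SAC-style Gaussian actor and a single Q critic; the network architectures, the softmax-over-Q reweighting, the temperature parameter, and all function names here are illustrative assumptions rather than the paper's exact procedure (see the linked repository for the authors' implementation).

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for an off-policy actor-critic setup
# (not the paper's architecture).
class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Linear(state_dim, action_dim * 2)

    def forward(self, state):
        # Return the policy distribution pi(.|s) as a diagonal Gaussian.
        mean, log_std = self.net(state).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Linear(state_dim + action_dim, 1)

    def forward(self, state, action):
        # Predict Q(s, a).
        return self.net(torch.cat([state, action], dim=-1))

def cgar_select_action(actor, critic, state, num_candidates=8, temperature=1.0):
    """Sample candidate actions from the actor, then resample one of them
    with probabilities given by a softmax over the critic's Q values."""
    with torch.no_grad():
        dist = actor(state)                       # policy distribution pi(.|s)
        actions = dist.sample((num_candidates,))  # (K, action_dim) candidates
        states = state.unsqueeze(0).expand(num_candidates, -1)
        q_values = critic(states, actions).squeeze(-1)        # (K,)
        # Redistribute: actions with higher predicted Q are favored.
        weights = torch.softmax(q_values / temperature, dim=0)
        idx = torch.multinomial(weights, 1)
    return actions[idx].squeeze(0)

# Usage: pick an action for a single MuJoCo-style state vector.
actor, critic = Actor(17, 6), Critic(17, 6)
action = cgar_select_action(actor, critic, torch.randn(17))
```

As the temperature grows, the redistribution approaches plain sampling from the actor; as it shrinks, it approaches greedily taking the candidate with the highest predicted Q.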
Year: 2022
DOI: 10.1109/CoG51982.2022.9893666
Venue: 2022 IEEE Conference on Games (CoG)
Keywords: Reinforcement Learning, Soft actor-critic, MuJoCo tasks
DocType: Conference
ISSN: 2325-4270
ISBN: 978-1-6654-5990-7
Citations: 0
PageRank: 0.34
References: 1
Authors: 5
Name           Order  Citations  PageRank
Tairan Huang   1      0          0.34
Xu Li          2      0          0.34
Hao Li         3      0          0.34
Mingming Sun   4      0          0.34
Ping Li        5      1672       127.72