Title
Learning Agents With Prioritization And Parameter Noise In Continuous State And Action Space
Abstract
Among the many variants of reinforcement learning (RL), an important class of problems is those with continuous state and action spaces: autonomous robots, autonomous vehicles, and optimal control are all examples that lend themselves naturally to RL-based algorithms. In this paper, we introduce a prioritized form of a combination of state-of-the-art approaches, Deep Q-learning (DQN) and Deep Deterministic Policy Gradient (DDPG), that outperforms earlier results on continuous state and action space problems. Our experiments also use parameter noise during training, yielding more robust deep RL models that significantly outperform earlier results. We believe these results are a valuable addition to work on continuous state and action space problems.
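The abstract names prioritized learning as the key addition to DDPG. The paper itself is not reproduced here, so as a hedged illustration only, the sketch below shows a standard proportional prioritized replay buffer (sampling probability proportional to priority^alpha, with importance-sampling weights), which is the usual mechanism behind "prioritized" deep RL training; the class name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized replay: P(i) ~ p_i^alpha.

    Illustrative sketch only; not the paper's implementation.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.buffer = []            # stored transitions
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                # next write position (ring buffer)

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[: len(self.buffer)] ** self.alpha
        probs = prios / prios.sum()
        idxs = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by
        # non-uniform sampling; normalized so the largest weight is 1.
        weights = (len(self.buffer) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        return [self.buffer[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority = |TD error| + eps, so no transition has zero probability.
        self.priorities[idxs] = np.abs(td_errors) + eps
```

After each gradient step, the sampled transitions' priorities would be refreshed with their new absolute TD errors via `update_priorities`, so surprising transitions are replayed more often.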
Year
2019
DOI
10.1007/978-3-030-22796-8_22
Venue
ADVANCES IN NEURAL NETWORKS - ISNN 2019, PT I
Keywords
Reinforcement learning, Policy search, Prioritized learning, Parameter noise, RL, Deep learning, Mujoco, Policy gradient, DDPG
DocType
Conference
Volume
11554
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
2
Name                              Order  Citations  PageRank
Rajesh Mangannavar                1      0          0.34
Gopalakrishnan Srinivasaraghavan  2      0          0.34