Title
Regularization in DQN for Parameter-Varying Control Learning Tasks
Abstract
As an important technique for preventing overfitting, regularization is widely used in supervised learning. However, regularization has not been systematically studied in deep reinforcement learning (deep RL). In this paper, we study the generalization of the deep Q-network (DQN) when combined with mainstream regularization approaches, including l1, l2 and dropout. We pay attention to the agent's performance not only in the original environments but also in parameter-varying environments, which differ in their parameters but belong to the same task type. Furthermore, dropout is modified to make it more suitable for DQN, and a new dropout variant is proposed to speed up the optimization of DQN. Experiments show that regularization helps deep RL achieve better performance in both original and parameter-varying environments when the number of samples is insufficient.
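The abstract describes applying l1/l2 penalties and dropout to DQN training. A minimal sketch of what a regularized Q-network loss could look like; the class name, layer sizes, and coefficients are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyQNet:
    """Hypothetical one-hidden-layer Q-network with dropout and an l2 penalty."""

    def __init__(self, n_obs, n_act, hidden=32, l2=1e-4, drop_p=0.5):
        self.W1 = rng.normal(0.0, 0.1, (n_obs, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_act))
        self.l2 = l2          # l2 regularization coefficient (assumed value)
        self.drop_p = drop_p  # dropout probability (assumed value)

    def forward(self, x, train=True):
        h = np.maximum(0.0, x @ self.W1)  # ReLU hidden layer
        if train and self.drop_p > 0.0:
            # inverted dropout: zero out units, rescale survivors so the
            # expected activation matches evaluation mode
            mask = (rng.random(h.shape) >= self.drop_p) / (1.0 - self.drop_p)
            h = h * mask
        return h @ self.W2  # Q-values, one per action

    def td_loss(self, q_pred, q_target):
        # squared TD error plus an l2 penalty on both weight matrices
        mse = np.mean((q_pred - q_target) ** 2)
        reg = self.l2 * (np.sum(self.W1 ** 2) + np.sum(self.W2 ** 2))
        return mse + reg

net = TinyQNet(n_obs=4, n_act=2)
obs = rng.normal(size=(8, 4))
q = net.forward(obs, train=False)          # evaluation mode: no dropout
loss = net.td_loss(q, np.zeros_like(q))    # TD target of 0 just for the demo
```

In a full DQN the target would come from a frozen target network and a replay buffer; here a zero target stands in so the loss computation is self-contained.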
Year: 2019
DOI: 10.1007/978-3-030-22808-8_4
Venue: ADVANCES IN NEURAL NETWORKS - ISNN 2019, PT II
Keywords: Regularization, Deep RL, Control learning task
Field: Computer science, Supervised learning, Regularization (mathematics), Artificial intelligence, Overfitting, Machine learning, Reinforcement learning, Speedup
DocType: Conference
Volume: 11555
ISSN: 0302-9743
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name | Order | Citations | PageRank
Dazi Li | 1 | 2 | 16.13
Chengjia Lei | 2 | 0 | 0.34
Qibing Jin | 3 | 19 | 11.28
Min Han | 4 | 761 | 68.01