Title
L2NAS: Learning to Optimize Neural Architectures via Continuous-Action Reinforcement Learning
Abstract
Neural architecture search (NAS) has achieved remarkable results in deep neural network design. Differentiable architecture search converts the search over discrete architectures into a hyperparameter optimization problem that can be solved by gradient descent. However, questions have been raised regarding the effectiveness and generalizability of gradient methods for solving non-convex architecture hyperparameter optimization problems. In this paper, we propose L2NAS, which learns to intelligently optimize and update architecture hyperparameters via an actor neural network based on the distribution of high-performing architectures in the search history. We introduce a quantile-driven training procedure that efficiently trains L2NAS in an actor-critic framework via continuous-action reinforcement learning. Experiments show that L2NAS achieves state-of-the-art results on the NAS-Bench-201 benchmark as well as on the DARTS and Once-for-All MobileNetV3 search spaces. We also show that search policies generated by L2NAS are generalizable and transferable across different training datasets with minimal fine-tuning.
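The abstract describes optimizing continuous architecture hyperparameters with an actor network and a quantile over the search history. The toy loop below is only an illustrative sketch of that general idea, not the paper's actual algorithm: the state-free "actor", the Gaussian exploration noise, the 90th-percentile cutoff, and the stand-in reward function are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4                      # number of architecture hyperparameters (assumed)

def reward(theta):
    """Stand-in for validation accuracy of the architecture encoded by theta.

    Illustrative assumption only; the real signal would come from training or
    evaluating candidate architectures. Peak is at theta = 0.7 in every dim.
    """
    return -np.sum((theta - 0.7) ** 2)

actor = np.zeros(DIM)        # actor's current continuous proposal
history = []                 # (theta, reward) pairs from the search so far

for step in range(300):
    # Explore: perturb the actor's proposal with Gaussian noise.
    theta = actor + rng.normal(scale=0.3, size=DIM)
    history.append((theta, reward(theta)))

    # Quantile-driven selection: keep samples above the 90th percentile
    # of all rewards observed so far...
    rewards = np.array([r for _, r in history])
    cutoff = np.quantile(rewards, 0.9)
    elites = np.array([t for t, r in history if r >= cutoff])

    # ...and move the actor toward the mean of that elite distribution.
    actor += 0.1 * (elites.mean(axis=0) - actor)

print(np.round(actor, 2))
```

In this sketch the actor drifts toward the high-reward region (near 0.7 in each dimension); the paper instead trains a neural actor with a critic, but the quantile-over-history filtering follows the same spirit.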
Year
2021
DOI
10.1145/3459637.3482360
Venue
Conference on Information and Knowledge Management
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
9
Name                        Order  Citations  PageRank
Keith Mills                 1      10         2.30
Fred X. Han                 2      0          1.01
Mohammad Salameh            3      0          0.68
Seyed Saeed Changiz Rezaei  4      51         6.30
Linglong Kong               5      42         11.37
Wei Lu                      6      0          1.01
Shuo Lian                   7      4          3.46
Shang-Ling Jui              8      12         3.95
Di Niu                      9      453        41.73