Title
Neural architecture tuning with policy adaptation
Abstract
Neural architecture search (NAS) aims to automatically design task-specific neural architectures, whose performance has already surpassed that of many manually designed neural networks. Existing NAS techniques focus on searching for a neural architecture and training the optimal network weights from scratch. Nevertheless, in some scenarios it can be essential to study how to tune a given neural architecture instead of producing a completely new one, which may lead to a better solution by combining human experience with the advantages of the machine's automatic search. This paper proposes to learn to tune architectures at hand to achieve better performance. The proposed Neural Architecture Tuning (NAT) algorithm trains a deep Q-network that, starting from a random architecture, tunes it to achieve better performance within a reduced search space. We then apply an adversarial autoencoder so that the learned policy generalizes to a different search space in real-world applications. The proposed algorithm is evaluated on the NAS-Bench-101 dataset. The results indicate that our NAT framework achieves state-of-the-art performance on the NAS-Bench-101 benchmark, and that the learned policy can be adapted to a different search space while maintaining performance.
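The tuning loop described in the abstract can be illustrated with a deliberately simplified sketch. The paper trains a deep Q-network on NAS-Bench-101; here, tabular Q-learning over a tiny synthetic operation space stands in for it, and the operation names, reward function, and search space are all hypothetical illustrations rather than the authors' actual setup.

```python
import random

# Simplified stand-in for NAT's setting: states are architectures (tuples of
# operations), actions replace the operation at one slot ("tuning" a given
# architecture), and the reward is the improvement of a synthetic accuracy
# proxy. The real method uses a deep Q-network and NAS-Bench-101 accuracies.

OPS = ["conv3x3", "conv1x1", "maxpool"]  # hypothetical candidate operations
SLOTS = 4                                # architecture = tuple of SLOTS ops

def synthetic_accuracy(arch):
    # Hypothetical proxy reward: prefers conv3x3, more so in early slots.
    return sum((SLOTS - i) * (op == "conv3x3") for i, op in enumerate(arch)) / 10.0

def actions(arch):
    # Every single-slot replacement that changes the architecture.
    return [(i, op) for i in range(SLOTS) for op in OPS if arch[i] != op]

def step(arch, action):
    i, op = action
    new = list(arch)
    new[i] = op
    return tuple(new)

def q_learn(episodes=500, horizon=5, eps=0.2, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    Q = {}  # (state, action) -> value; the paper uses a deep network instead
    for _ in range(episodes):
        # Start each episode from a random architecture, as NAT does.
        arch = tuple(rng.choice(OPS) for _ in range(SLOTS))
        for _ in range(horizon):
            acts = actions(arch)
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda act: Q.get((arch, act), 0.0))
            nxt = step(arch, a)
            # Reward = improvement in the accuracy proxy after the edit.
            r = synthetic_accuracy(nxt) - synthetic_accuracy(arch)
            best_next = max(Q.get((nxt, b), 0.0) for b in actions(nxt))
            q = Q.get((arch, a), 0.0)
            Q[(arch, a)] = q + alpha * (r + gamma * best_next - q)
            arch = nxt
    return Q

def tune(arch, Q, steps=5):
    # Greedily apply the learned policy to tune a given architecture.
    for _ in range(steps):
        a = max(actions(arch), key=lambda act: Q.get((arch, act), 0.0))
        arch = step(arch, a)
    return arch
```

Starting from a poor architecture such as `("maxpool",) * 4`, the greedy policy learned by `q_learn` should edit it toward higher proxy accuracy, mirroring how NAT improves a randomly chosen starting architecture rather than searching from scratch.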
Year: 2022
DOI: 10.1016/j.neucom.2021.10.095
Venue: Neurocomputing
Keywords: Neural architecture search, Reinforcement learning, Transfer learning
DocType: Journal
Volume: 485
ISSN: 0925-2312
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name          Order  Citations  PageRank
Yanxi Li      1      0          2.70
Minjing Dong  2      0          0.34
Yixing Xu     3      9          5.09
Yunhe Wang    4      113        22.76
Chang Xu      5      781        47.60