Abstract

If NAS methods are solutions, what is the problem? Most existing NAS methods require two-stage parameter optimization; however, the performance of the same architecture correlates poorly between the two stages. Based on this observation, we propose a new problem definition for NAS: task-specific end-to-end. We argue that, given a computer vision task for which a NAS method is expected, this definition reduces the vaguely defined NAS evaluation to i) the accuracy on this task and ii) the total computation consumed to finally obtain a model with satisfying accuracy. Since most existing methods do not solve this problem directly, we propose DSNAS, an efficient differentiable NAS framework that simultaneously optimizes architecture and parameters with a low-biased Monte Carlo estimate. Child networks derived from DSNAS can be deployed directly without parameter retraining. Compared with two-stage methods, DSNAS discovers networks with comparable accuracy (74.4%) on ImageNet in 420 GPU hours, reducing the total time by more than 34%.
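The core idea in the abstract — searching by sampling a single child network per step and updating both the architecture distribution and the network from one Monte Carlo sample — can be illustrated with a deliberately tiny sketch. This is a hypothetical toy, not the paper's implementation: the candidate "operations" are scalar functions, the architecture distribution is a softmax over logits, and the gradient uses a plain one-sample score-function (REINFORCE-style) estimate rather than DSNAS's actual estimator.

```python
import math
import random

# Candidate ops for a single "layer" (hypothetical toy search space).
OPS = [lambda x: x, lambda x: x * x, lambda x: math.sin(x)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample(probs):
    """Draw one index from a categorical distribution."""
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def train(target_op=1, steps=2000, lr=0.1, seed=0):
    """Learn architecture logits that favour the op matching a target
    function, evaluating only ONE sampled path per step."""
    random.seed(seed)
    logits = [0.0, 0.0, 0.0]
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        probs = softmax(logits)
        k = sample(probs)  # sample a single child architecture
        loss = (OPS[k](x) - OPS[target_op](x)) ** 2
        # One-sample score-function estimate of d E[loss] / d logits:
        # loss * d log p(k) / d logit_i, with no baseline for brevity.
        for i in range(len(logits)):
            grad_logp = (1.0 if i == k else 0.0) - probs[i]
            logits[i] -= lr * loss * grad_logp
    return logits

logits = train()
print(max(range(3), key=lambda i: logits[i]))  # index of the preferred op
```

Because only the sampled op is ever evaluated, the per-step cost matches a single child network's forward pass — the efficiency property that single-path differentiable NAS methods rely on. A practical system would add a baseline (or use a pathwise/straight-through estimator, as SNAS-style methods do) to reduce the variance of this one-sample gradient.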
Year | DOI | Venue
---|---|---
2020 | 10.1109/CVPR42600.2020.01210 | CVPR

DocType | Citations | PageRank
---|---|---
Conference | 2 | 0.39

References | Authors
---|---
22 | 7
Name | Order | Citations | PageRank |
---|---|---|---
Shoukang Hu | 1 | 6 | 10.90 |
Sirui Xie | 2 | 44 | 3.34 |
Hehui Zheng | 3 | 39 | 1.91 |
Chunxiao Liu | 4 | 259 | 12.60 |
Jianping Shi | 5 | 920 | 43.57 |
Xunying Liu | 6 | 330 | 52.46 |
Dahua Lin | 7 | 1117 | 72.62 |