Title
FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions
Abstract
Differentiable Neural Architecture Search (DNAS) has demonstrated great success in designing state-of-the-art, efficient neural networks. However, the search space of DARTS-based DNAS is small compared to that of other search methods, since all candidate network layers must be explicitly instantiated in memory. To address this bottleneck, we propose a memory- and computation-efficient DNAS variant: DMaskingNAS. This algorithm expands the search space by up to $10^{14}\times$ over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters. We propose a masking mechanism for feature map reuse, so that memory and computational costs stay nearly constant as the search space expands. Furthermore, we employ effective shape propagation to maximize per-FLOP or per-parameter accuracy. The searched FBNetV2s yield state-of-the-art performance compared with all previous architectures. With up to 421$\times$ lower search cost, DMaskingNAS finds models with 0.9% higher accuracy and 15% fewer FLOPs than MobileNetV3-Small, and with similar accuracy but 20% fewer FLOPs than EfficientNet-B0. Furthermore, our FBNetV2 outperforms MobileNetV3 by 2.6% in accuracy at an equivalent model size. FBNetV2 models are open-sourced at https://github.com/facebookresearch/mobile-vision.
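The channel-masking idea described in the abstract (one shared feature map reused by all channel-count candidates, gated by a weighted sum of binary masks) can be sketched as follows. This is a minimal illustration, not the authors' released code: the class name ChannelMaskedConv, the candidate_channels argument, and the use of a Gumbel-softmax over architecture logits are assumptions made for the example.

```python
# Minimal sketch (not the official FBNetV2 implementation) of channel masking:
# build one block at maximum width and gate its output with a differentiable
# weighted sum of binary channel masks, so memory/compute stay nearly constant
# as more channel-count candidates are added to the search space.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelMaskedConv(nn.Module):
    def __init__(self, in_channels, candidate_channels, kernel_size=3):
        super().__init__()
        self.max_channels = max(candidate_channels)
        # One shared convolution at maximum width; its feature map is reused
        # by every channel-count candidate instead of instantiating each one.
        self.conv = nn.Conv2d(in_channels, self.max_channels, kernel_size,
                              padding=kernel_size // 2)
        # Binary masks: mask k keeps the first candidate_channels[k] channels.
        masks = torch.zeros(len(candidate_channels), self.max_channels)
        for k, c in enumerate(candidate_channels):
            masks[k, :c] = 1.0
        self.register_buffer("masks", masks)
        # Architecture parameters: one logit per channel-count candidate.
        self.alpha = nn.Parameter(torch.zeros(len(candidate_channels)))

    def forward(self, x, temperature=1.0):
        y = self.conv(x)  # computed once, shared across all candidates
        # Differentiable sampling of candidate weights.
        weights = F.gumbel_softmax(self.alpha, tau=temperature)
        # Weighted sum of masks collapses to a single per-channel gate.
        mask = (weights[:, None] * self.masks).sum(dim=0)
        return y * mask.view(1, -1, 1, 1)


# Usage: search over output widths {8, 12, 16} with only one conv in memory.
block = ChannelMaskedConv(in_channels=3, candidate_channels=[8, 12, 16])
out = block(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```

After the search, the candidate with the largest architecture weight would determine the block's final channel count; this sketch only illustrates why the memory cost does not grow with the number of candidates.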
Year
2020
DOI
10.1109/CVPR42600.2020.01298
Venue
CVPR
DocType
Conference
Citations
8
PageRank
0.47
References
24
Authors
12
Name | Order | Citations | PageRank
Alvin Wan | 1 | 15 | 2.59
Xiaoliang Dai | 2 | 126 | 8.58
Peizhao Zhang | 3 | 252 | 12.47
Zijian He | 4 | 11 | 1.85
Yuandong Tian | 5 | 703 | 43.06
Saining Xie | 6 | 231 | 12.45
Bichen Wu | 7 | 96 | 7.15
Matt C. Yu | 8 | 89 | 8.62
Tao Xu | 9 | 187 | 11.21
Chen Kan | 10 | 131 | 9.28
Peter Vajda | 11 | 80 | 5.14
Joseph E. Gonzalez | 12 | 2219 | 102.68