Title
A-ViT: Adaptive Tokens for Efficient Vision Transformer
Abstract
We introduce A-ViT, a method that adaptively adjusts the inference cost of vision transformers (ViT) for images of different complexity. A-ViT achieves this by automatically reducing the number of tokens processed in the network as inference proceeds. We reformulate Adaptive Computation Time (ACT [17]) for this task, extending halting to discard redundant spatial tokens. The appealing architectural properties of vision transformers enable our adaptive token reduction mechanism to speed up inference without modifying the network architecture or inference hardware. We demonstrate that A-ViT requires no extra parameters or sub-network for halting, as we base the learning of adaptive halting on the original network parameters. We further introduce a distributional prior regularization that stabilizes training compared to prior ACT approaches. On the image classification task (ImageNet-1K), we show that the proposed A-ViT is highly effective at filtering informative spatial features and cutting down the overall compute. The method improves the throughput of DeiT-Tiny by 62% and DeiT-Small by 38% with only a 0.3% accuracy drop, outperforming prior art by a large margin.
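The halting mechanism the abstract describes can be sketched roughly as follows. This is a minimal PyTorch sketch, not the paper's implementation: the class name AdaptiveTokenHalting, the use of the first embedding channel as the halting score (one way to satisfy "no extra parameters"), and the eps threshold are illustrative assumptions based on the ACT formulation; details such as keeping the class token always active and the halting-weighted output accumulation are omitted.

    import torch
    import torch.nn as nn

    class AdaptiveTokenHalting(nn.Module):
        """ACT-style token halting: accumulate a per-token halting score
        across transformer blocks and mask tokens once their cumulative
        score crosses 1 - eps, so later blocks skip them."""

        def __init__(self, blocks: nn.ModuleList, eps: float = 0.01):
            super().__init__()
            self.blocks = blocks  # existing ViT blocks, architecture unmodified
            self.eps = eps

        def forward(self, tokens: torch.Tensor):
            B, N, _ = tokens.shape
            cum_score = tokens.new_zeros(B, N)  # cumulative halting score per token
            active = torch.ones(B, N, dtype=torch.bool, device=tokens.device)

            for block in self.blocks:
                # Zero out halted tokens so they no longer contribute.
                tokens = tokens * active.unsqueeze(-1)
                tokens = block(tokens)

                # Halting probability read off the first embedding channel,
                # reusing the network's own activations (assumption here).
                h = torch.sigmoid(tokens[:, :, 0])
                cum_score = cum_score + h * active.float()

                # Halt tokens whose cumulative score exceeds 1 - eps.
                active = active & (cum_score < 1.0 - self.eps)
            return tokens, cum_score

A hypothetical usage with DeiT-Tiny-like dimensions (192-dim tokens, 12 blocks), purely for shape checking:

    blocks = nn.ModuleList(
        nn.TransformerEncoderLayer(d_model=192, nhead=3, batch_first=True)
        for _ in range(12)
    )
    model = AdaptiveTokenHalting(blocks)
    out, scores = model(torch.randn(2, 197, 192))  # (batch, tokens, dim)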
Year
2022
DOI
10.1109/CVPR52688.2022.01054
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Deep learning architectures and techniques, Efficient learning and inferences
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
6
Name                 Order  Citations  PageRank
Hongxu Yin           1      0          0.34
Arash Vahdat         2      353        18.20
José María Álvarez   3      468        48.77
Arun Mallya          4      102        9.33
Jan Kautz            5      3615       198.77
Pavlo O. Molchanov   6      198        11.96