Title
MLP-Mixer: An all-MLP Architecture for Vision
Abstract
Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them is necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e., "mixing" the per-location features), and one with MLPs applied across patches (i.e., "mixing" spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference costs comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well-established CNNs and Transformers.
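To make the two layer types described in the abstract concrete, the following NumPy sketch implements one Mixer block. The overall structure (pre-LayerNorm, GELU nonlinearity, skip connections around each MLP) follows the published MLP-Mixer design; the function names, parameter layout, and dimensions below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each patch embedding over the channel (last) dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with a GELU nonlinearity (tanh approximation).
    h = x @ w1 + b1
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2 + b2

def mixer_block(x, params):
    # x: (num_patches, channels) table of per-patch embeddings.
    # Token-mixing MLP: applied across patches (columns of x),
    # so spatial information is mixed.
    y = layer_norm(x)
    y = mlp(y.T, *params["token"]).T  # transpose so the MLP acts over patches
    x = x + y                         # skip connection
    # Channel-mixing MLP: applied independently to each patch (rows of x),
    # so per-location features are mixed.
    y = layer_norm(x)
    y = mlp(y, *params["channel"])
    return x + y                      # skip connection

# Usage with hypothetical sizes: 196 patches, 512 channels,
# token-MLP width 256, channel-MLP width 2048.
P, C, Ds, Dc = 196, 512, 256, 2048
rng = np.random.default_rng(0)
params = {
    "token":   (rng.normal(size=(P, Ds)) * 0.02, np.zeros(Ds),
                rng.normal(size=(Ds, P)) * 0.02, np.zeros(P)),
    "channel": (rng.normal(size=(C, Dc)) * 0.02, np.zeros(Dc),
                rng.normal(size=(Dc, C)) * 0.02, np.zeros(C)),
}
x = rng.normal(size=(P, C))
out = mixer_block(x, params)  # shape (196, 512)
```

Note how the only difference between the two layer types is which axis the MLP acts on: the full architecture simply stacks such blocks on top of a patch-embedding layer, with no convolutions or attention anywhere.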
Year
2021
Venue
Annual Conference on Neural Information Processing Systems
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
11

Name                   Order   Citations   PageRank
Ilya Tolstikhin        1       0           0.34
Neil Houlsby           2       153         14.73
Alexander Kolesnikov   3       152         11.94
Lucas Beyer            4       232         13.50
Xiaohua Zhai           5       209         13.00
Thomas Unterthiner     6       714         28.63
Jessica Yung           7       0           0.34
Daniel Keysers         8       0           1.35
Jakob Uszkoreit        9       1107        39.31
Mario Lucic            10      231         16.10
Alexey Dosovitskiy     11      1797        80.48