Title
How far can we go without convolution: Improving fully-connected networks.
Abstract
We propose ways to improve the performance of fully-connected networks. We found that two approaches in particular have a strong effect on performance: linear bottleneck layers and unsupervised pre-training using autoencoders without hidden-unit biases. We show how both approaches can be related to improving gradient flow and reducing sparsity in the network. We show that a fully-connected network can yield approximately 70% classification accuracy on the permutation-invariant CIFAR-10 task, which is much higher than the previous state of the art. By adding deformations to the training data, the fully-connected network achieves 78% accuracy, which is just about 10% short of a decent convolutional network.
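The linear bottleneck idea from the abstract can be sketched as follows: a narrow layer that applies only a matrix multiply (no bias, no nonlinearity) is inserted between ReLU layers of a fully-connected network. This is a minimal illustration, not the paper's implementation; the layer sizes, initialization, and batch size are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def weights(n_in, n_out):
    # Hypothetical small-Gaussian initialization, not taken from the paper.
    return rng.normal(0.0, 0.01, size=(n_in, n_out))

# Fully-connected stack with a linear bottleneck between two ReLU layers.
# The bottleneck is purely linear, which the abstract relates to improved
# gradient flow and reduced sparsity in the network.
W1 = weights(3072, 1000)       # 3072 = flattened 32x32x3 CIFAR-10 image
W_bneck = weights(1000, 200)   # linear bottleneck; 200 units is an assumed size
W2 = weights(200, 1000)
W_out = weights(1000, 10)

def forward(x):
    h1 = relu(x @ W1)
    z = h1 @ W_bneck           # linear: no activation, no bias
    h2 = relu(z @ W2)
    return h2 @ W_out          # class scores for the 10 CIFAR-10 categories

x = rng.normal(size=(4, 3072))  # batch of 4 fake flattened images
scores = forward(x)
print(scores.shape)
```

Because the bottleneck has no nonlinearity, the gradient passes through it as a plain matrix product, avoiding the zeroed gradients that a saturated or sparse activation would introduce at that layer.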
Year
2015
Venue
arXiv: Learning
Field
Training set, Bottleneck, Computer science, Convolution, Artificial intelligence, Balanced flow, Machine learning
DocType
Journal
Volume
abs/1511.02580
Citations
3
PageRank
0.46
References
9
Authors
3
Name, Order, Citations, PageRank
Zhouhan Lin, 1, 419, 17.51
Roland Memisevic, 2, 1116, 65.87
Kishore Reddy Konda, 3, 428, 18.22