Title
Tight Sample Complexity of Learning One-hidden-layer Convolutional Neural Networks
Abstract
We study the sample complexity of learning one-hidden-layer convolutional neural networks (CNNs) with non-overlapping filters. We propose a novel algorithm called approximate gradient descent for training CNNs, and show that, with high probability, the proposed algorithm with random initialization converges linearly to the ground-truth parameters up to statistical precision. Compared with existing work, our result applies to general non-trivial, monotonic, and Lipschitz continuous activation functions, including ReLU, Leaky ReLU, Sigmoid, and Softplus. Moreover, our sample complexity improves upon existing results in its dependence on the number of hidden nodes and the filter size. In fact, our result matches the information-theoretic lower bound for learning one-hidden-layer CNNs with linear activation functions, suggesting that our sample complexity is tight. Our theoretical analysis is backed up by numerical experiments.
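The abstract names the model class but not its exact parameterization; the following is a minimal illustrative sketch of a one-hidden-layer CNN with non-overlapping filters, assuming a single shared filter w applied patch-wise and a linear output layer with weights v. The function names, the ReLU default, and the absence of normalization are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def one_hidden_layer_cnn(x, w, v, sigma=lambda z: np.maximum(z, 0.0)):
    """Forward pass of a one-hidden-layer CNN with non-overlapping filters.

    Sketch under the stated assumptions: the input x (length d = k * r) is
    split into k non-overlapping patches of size r, the shared filter w
    (length r) is applied to every patch, the activation sigma (ReLU by
    default) is taken patch-wise, and the k hidden outputs are combined by
    the second-layer weights v (length k).
    """
    r = w.shape[0]
    k = x.shape[0] // r
    patches = x.reshape(k, r)      # non-overlapping patches of size r
    hidden = sigma(patches @ w)    # shared filter applied to each patch
    return hidden @ v              # linear output layer

# Example usage with random data (filter size r = 4, k = 8 hidden nodes).
rng = np.random.default_rng(0)
x = rng.normal(size=32)
w = rng.normal(size=4)
v = rng.normal(size=8)
print(one_hidden_layer_cnn(x, w, v))
```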
Year
2019
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)
Keywords
sample complexity
Field
Convolutional neural network, Computer science, Artificial intelligence, Sample complexity, Machine learning
DocType
Conference
Volume
32
ISSN
1049-5258
Citations
0
PageRank
0.34
References
0
Authors
2
Name          Order  Citations  PageRank
Yuan Cao      1      20         6.45
Quanquan Gu   2      1116       78.25