Abstract
---
We introduce a sparse scattering deep convolutional neural network, which provides a simple model to analyze properties of deep representation learning for classification. Learning a single dictionary matrix with a classifier yields a higher classification accuracy than AlexNet over the ImageNet 2012 dataset. The network first applies a scattering transform that linearizes variabilities due to geometric transformations such as translations and small deformations.
A sparse $\ell^1$ dictionary coding reduces intra-class variability while preserving class separation through projections over unions of linear spaces. It is implemented in a deep convolutional network with a homotopy algorithm having exponential convergence. A convergence proof is given in a general framework that includes ALISTA. Classification results are analyzed on ImageNet.
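The abstract's sparse $\ell^1$ dictionary coding with a homotopy (continuation) scheme can be illustrated with a short ISTA-style iteration in which the soft-threshold decreases geometrically toward a target value. This is only a minimal sketch of that general idea, not the paper's actual architecture or algorithm; the function names (`homotopy_ista`, `soft_threshold`), the dictionary `D`, and the parameters `n_iters` and `decay` are hypothetical placeholders for illustration.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||z||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def homotopy_ista(D, x, lam_final, n_iters=50, decay=0.7):
    """Approximately minimize 0.5*||x - D z||^2 + lam_final*||z||_1.

    Sketch of a homotopy/continuation variant of ISTA: the threshold starts
    at a level where z = 0 is optimal and decays geometrically toward
    lam_final, which is the continuation idea referenced in the abstract.
    """
    L = np.linalg.norm(D, ord=2) ** 2        # Lipschitz constant of the quadratic gradient
    z = np.zeros(D.shape[1])
    lam = np.max(np.abs(D.T @ x))            # at this threshold the zero vector is optimal
    for _ in range(n_iters):
        grad = D.T @ (D @ z - x)             # gradient of 0.5*||x - D z||^2
        z = soft_threshold(z - grad / L, lam / L)
        lam = max(lam * decay, lam_final)    # geometric decrease of the threshold
    return z

# Toy usage with random data (illustration only, not an ImageNet experiment).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
x = D @ (rng.standard_normal(256) * (rng.random(256) < 0.05))
z = homotopy_ista(D, x, lam_final=0.01)
print("non-zero coefficients:", np.count_nonzero(z))
```

In the paper's setting such a sparse code would be computed on scattering coefficients and fed to a classifier; here the sketch only shows the thresholded iteration itself.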
Year | Venue | Keywords |
---|---|---|
2020 | ICLR | dictionary learning, scattering transform, sparse coding, imagenet |
DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
19 | 4
Name | Order | Citations | PageRank |
---|---|---|---
John Zarka | 1 | 0 | 1.01 |
Louis Thiry | 2 | 3 | 2.42 |
Tomás Angles | 3 | 4 | 1.43 |
Stéphane Mallat | 4 | 14 | 3.35