Title
Scale-Regularized Filter Learning.
Abstract
We start out by demonstrating that an elementary learning task, corresponding to the training of a single linear neuron in a convolutional neural network, can be solved for feature spaces of very high dimensionality. In a second step, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization and arguing that the problem of scale has not been dealt with in a satisfactory manner, we address both of these shortcomings at once by proposing a form of scale regularization. Moreover, using variational methods, this regularization problem can be solved rather efficiently, and we demonstrate the capabilities of our basic linear neuron on an artificial filter learning problem. From a more general standpoint, we see this work as a prime example of how learning and variational methods could, or even should, work to their mutual benefit.
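The abstract refers to learning a single linear neuron, i.e., a convolution filter fitted to data in a high-dimensional feature space with an added regularization term. As a rough illustration of that setup only, the sketch below fits such a filter by regularized least squares; the function name, the toy data, and the use of a standard Tikhonov (ridge) penalty are assumptions for illustration and do not reproduce the paper's scale regularizer or its variational solver.

import numpy as np

def learn_linear_filter(patches, targets, reg_weight=1e-2):
    """Fit a filter w minimizing ||X w - y||^2 + reg_weight * ||w||^2.

    patches    : (n_samples, patch_dim) matrix X of vectorized image patches
    targets    : (n_samples,) desired filter responses y
    reg_weight : strength of the placeholder (ridge) regularizer
    """
    X, y = patches, targets
    d = X.shape[1]
    # Closed-form ridge solution: (X^T X + lambda I) w = X^T y
    return np.linalg.solve(X.T @ X + reg_weight * np.eye(d), X.T @ y)

# Toy usage: recover a known 5x5 filter from noisy responses on random patches.
rng = np.random.default_rng(0)
true_filter = rng.standard_normal(25)
X = rng.standard_normal((2000, 25))                     # random vectorized patches
y = X @ true_filter + 0.1 * rng.standard_normal(2000)   # noisy linear responses
w_hat = learn_linear_filter(X, y)
print("relative error:", np.linalg.norm(w_hat - true_filter) / np.linalg.norm(true_filter))

The closed-form solve is only practical for moderate patch dimensions; for the very high-dimensional feature spaces the abstract mentions, an iterative or variational scheme (as the authors advocate) would be the natural choice.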
Year: 2017
Venue: CoRR
DocType: Journal
Volume: abs/1707.02813
Citations: 0
PageRank: 0.34
References: 0
Authors: 2
Name             Order   Citations   PageRank
Marco Loog       1       1796        154.31
François Lauze   2       306         29.69