Abstract
---
Training non-linear neural networks is a challenging task, but over the years, various approaches coming from different perspectives have been proposed to improve performance. However, insight into what fundamentally constitutes *optimal* network parameters remains obscure. Similarly, what properties of data we can hope for a non-linear network to learn is also not well studied. In order to address these challenges, we take a novel approach by analysing neural networks from a data-generating perspective, where we assume hidden layers generate the observed data. This perspective allows us to connect seemingly disparate approaches explored independently in the machine learning community, such as batch normalization, Independent Component Analysis, and orthogonal weight initialization, as parts of a bigger picture, and to provide insights into non-linear networks in terms of the properties of parameters and data that lead to better performance.
Year | Venue | Field
---|---|---
2016 | arXiv: Machine Learning | Nonlinear system, Normalization (statistics), Computer science, Independent component analysis, Artificial intelligence, Initialization, Artificial neural network, Machine learning

DocType | Volume | Citations
---|---|---
Journal | abs/1605.07145 | 0

PageRank | References | Authors
---|---|---
0.34 | 6 | 5
Name | Order | Citations | PageRank
---|---|---|---
Devansh Arpit | 1 | 146 | 14.24 |
Hung Q. Ngo | 2 | 670 | 56.62 |
Yingbo Zhou | 3 | 263 | 19.43 |
Nils Napp | 4 | 122 | 16.71 |
Venu Govindaraju | 5 | 3521 | 422.00 |