Title
Failures of Gradient-Based Deep Learning.
Abstract
In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming the state of the art. However, it is important for both theoreticians and practitioners to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four types of simple problems for which the gradient-based algorithms commonly used in deep learning either fail or suffer from significant difficulties. We illustrate the failures through practical experiments and provide theoretical insights explaining their source and how they might be remedied.
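One of the four problem families studied in the paper is learning parities of input bits, where the gradient carries vanishingly little information about the target. The snippet below is a minimal illustrative sketch of that kind of experiment, not the authors' code; the architecture, dimensions, and hyperparameters are assumptions chosen only to show the effect. A small MLP trained by SGD on a random parity typically sees its loss stay near chance level (log 2 ≈ 0.693).

```python
# Minimal sketch (not from the paper's code) of a parity-learning experiment:
# a small MLP trained by SGD on a random parity of half the input bits.
# The gradient carries almost no signal about which parity is the target,
# so the training loss typically stays close to chance level (~0.693).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 30                                  # input dimension (assumed)
subset = torch.randperm(d)[: d // 2]    # hidden parity: XOR over a random subset of bits

def sample_batch(n):
    x = torch.randint(0, 2, (n, d)).float()
    y = x[:, subset].sum(dim=1) % 2     # parity label in {0, 1}
    return x * 2 - 1, y                 # map inputs from {0, 1} to {-1, +1}

model = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2001):
    x, y = sample_batch(256)
    loss = loss_fn(model(x).squeeze(1), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(f"step {step:5d}  loss {loss.item():.4f}")  # hovers near log(2)
```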
Year
2017
Venue
ICML
Field
Computer science, Artificial intelligence, Deep learning, Machine learning
DocType
Conference
Citations
25
PageRank
0.99
References
19
Authors
3
Name                  Order  Citations  PageRank
Shai Shalev-Shwartz   1      3681       276.32
Ohad Shamir           2      1627       119.03
Shaked Shammah        3      25         0.99