Title
IAPR keynote lecture IV: Deep learning
Abstract
Deep learning arose around 2006 as a renewal of neural network research that allowed such models to have many more layers. Theoretical investigations have shown that functions obtained as deep compositions of simpler functions (which include both deep and recurrent nets) can express highly varying functions (with many ups and downs, and many distinguishable input regions) much more efficiently, i.e., with fewer parameters, than shallower architectures. Empirical work in a variety of applications has demonstrated that, when well trained, such deep architectures can be highly successful, remarkably breaking through the previous state of the art in many areas, including speech recognition, object recognition, language models, and transfer learning. This talk summarizes the advances that have made these breakthroughs possible, and ends with questions about some of the major challenges still ahead of researchers in the continuing climb towards AI-level competence.
Year
2015
DOI
10.1109/ACPR.2015.7486451
Venue
2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR)
Field
Computer science, Transfer of learning, Artificial intelligence, Deep learning, Artificial neural network, Language model, Cognitive neuroscience of visual object recognition
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
1
Name, Order, Citations, PageRank
Yoshua Bengio1426773039.83