Title
Local To Global Learning: Gradually Adding Classes For Training Deep Neural Networks
Abstract
We propose a new learning paradigm, Local to Global Learning (LGL), for Deep Neural Networks (DNNs) to improve performance on classification problems. The core of LGL is to learn a DNN model gradually from fewer categories (local) to more categories (global) within the entire training set. LGL is most closely related to the Self-Paced Learning (SPL) algorithm, but its formulation differs: SPL trains on data from simple to complex, while LGL trains from local to global. In this paper, we incorporate the idea of LGL into the learning objective of DNNs and explain from an information-theoretic perspective why LGL works better. Experiments on toy data, CIFAR-10, CIFAR-100, and ImageNet show that LGL outperforms the baseline and SPL-based algorithms.
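To make the training schedule concrete, below is a minimal sketch of an LGL-style loop in PyTorch: the active class set grows in stages from a small "local" subset to the full "global" label space. The function name lgl_train, the stage counts, and the simple label-order class schedule are hypothetical illustrations; the paper's actual method modifies the learning objective itself and is not reproduced here.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def lgl_train(model, dataset, num_classes, stages=4, epochs_per_stage=10,
              batch_size=128, lr=0.1, device="cpu"):
    """Train on a gradually growing set of classes: local -> global (sketch)."""
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    # Hypothetical ordering: classes are added in plain label order.
    all_classes = list(range(num_classes))
    for stage in range(1, stages + 1):
        # Grow the active ("local") class set toward the full ("global") set.
        k = max(1, num_classes * stage // stages)
        active = set(all_classes[:k])
        # Keep only samples whose label is currently active.
        idx = [i for i, (_, y) in enumerate(dataset) if y in active]
        loader = DataLoader(Subset(dataset, idx), batch_size=batch_size,
                            shuffle=True)
        for _ in range(epochs_per_stage):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                optimizer.zero_grad()
                # The classifier keeps all num_classes outputs; only labels
                # from the active subset appear at this stage.
                loss = criterion(model(x), y)
                loss.backward()
                optimizer.step()
    return model

For CIFAR-10, for instance, lgl_train(model, trainset, num_classes=10, stages=4) would first train on a few classes and finish with all ten, matching the local-to-global progression described above.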
Year: 2019
DOI: 10.1109/CVPR.2019.00488
Venue: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field: Pattern recognition, Computer science, Artificial intelligence, Deep neural networks
DocType: Conference
ISSN: 1063-6919
Citations: 1
PageRank: 0.35
References: 0
Authors: 6

Name           Order   Citations   PageRank
Hao Cheng      1       6           1.77
Dongze Lian    2       32          5.90
Bowen Deng     3       1           0.35
Shenghua Gao   4       1607        66.89
Tao Tan        5       46          10.25
Yanlin Geng    6       74          8.63