Abstract |
---|
We propose a new learning paradigm, Local to Global Learning (LGL), for Deep Neural Networks (DNNs) to improve performance on classification problems. The core of LGL is to train a DNN model gradually from fewer categories (local) to more categories (global) within the entire training set. LGL is most closely related to the Self-Paced Learning (SPL) algorithm, but its formulation differs: SPL orders training data from simple to complex, while LGL proceeds from local to global. In this paper, we incorporate the idea of LGL into the learning objective of DNNs and explain why LGL works better from an information-theoretic perspective. Experiments on toy data and the CIFAR-10, CIFAR-100, and ImageNet datasets show that LGL outperforms the baseline and SPL-based algorithms. |
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/CVPR.2019.00488 | 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) |
Field | DocType | ISSN |
---|---|---|
Pattern recognition, Computer science, Artificial intelligence, Deep neural networks | Conference | 1063-6919 |
Citations | PageRank | References |
---|---|---|
1 | 0.35 | 0 |
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Hao Cheng | 1 | 6 | 1.77 |
Dongze Lian | 2 | 32 | 5.90 |
Bowen Deng | 3 | 1 | 0.35 |
Shenghua Gao | 4 | 1607 | 66.89 |
Tao Tan | 5 | 46 | 10.25 |
Yanlin Geng | 6 | 74 | 8.63 |