Title
CRSSC: Salvage Reusable Samples from Noisy Data for Robust Learning
Abstract
Due to the existence of label noise in web images and the high memorization capacity of deep neural networks, training deep fine-grained (FG) models directly on web images tends to yield inferior recognition ability. In the literature, loss correction methods alleviate this issue by estimating the noise transition matrix, but inevitable false corrections cause severe accumulated errors. Sample selection methods identify clean ("easy") samples by their small losses, which alleviates the accumulated errors. However, "hard" and mislabeled examples, both of which can boost the robustness of FG models, are also dropped. To this end, we propose a certainty-based reusable sample selection and correction approach, termed CRSSC, for coping with label noise when training deep FG models with web images. Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the network. We demonstrate the superiority of the proposed approach from both theoretical and experimental perspectives.
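The abstract contrasts small-loss ("easy") sample selection with the paper's certainty-based salvaging of reusable samples. The sketch below illustrates both ideas in a minimal, generic form; it is not the authors' implementation, and the `keep_ratio` and `entropy_thresh` hyperparameters, the entropy-based certainty measure, and the function names are illustrative assumptions.

```python
import numpy as np

def select_small_loss(losses, keep_ratio=0.5):
    """Small-loss ("easy") selection: keep the keep_ratio fraction of
    samples with the smallest per-sample loss, treating them as clean.
    keep_ratio is an assumed hyperparameter, not taken from the paper."""
    losses = np.asarray(losses, dtype=float)
    n_keep = max(1, int(round(keep_ratio * len(losses))))
    return np.argsort(losses)[:n_keep]

def salvage_reusable(losses, probs, keep_ratio=0.5, entropy_thresh=0.3):
    """One plausible reading of certainty-based reuse: among samples NOT
    selected as clean, treat those whose prediction entropy is low (i.e.
    the model is certain) as reusable, and relabel them with the argmax
    prediction. The entropy criterion and threshold are assumptions."""
    clean = set(select_small_loss(losses, keep_ratio).tolist())
    probs = np.asarray(probs, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    reusable, new_labels = [], []
    for i in range(len(losses)):
        if i not in clean and entropy[i] < entropy_thresh:
            reusable.append(i)                       # salvage this sample
            new_labels.append(int(np.argmax(probs[i])))  # corrected label
    return np.array(reusable), np.array(new_labels)
```

For example, with losses `[0.9, 0.1, 0.5, 0.2]` and `keep_ratio=0.5`, samples 1 and 3 are kept as clean; a high-loss sample with a confident prediction (e.g. probabilities `[0.98, 0.02]`) is then salvaged and relabeled, while an uncertain one (`[0.5, 0.5]`) is dropped.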
Year: 2020
DOI: 10.1145/3394171.3413978
Venue: MM '20: The 28th ACM International Conference on Multimedia, Seattle, WA, USA, October 2020
DocType: Conference
ISBN: 978-1-4503-7988-5
Citations: 4
PageRank: 0.40
References: 31
Authors: 6
Name            Order  Citations  PageRank
Zeren Sun       1      7          2.47
Xian-Sheng Hua  2      6566       328.17
Yazhou Yao      3      86         16.61
Xiu-Shen Wei    4      331        18.84
Guosheng Hu     5      176        16.88
Jian Zhang      6      1305       100.05