Title
Few-Shot Learning Via Saliency-Guided Hallucination Of Samples
Abstract
Learning new concepts from a few samples is a standard challenge in computer vision. The main directions for improving the learning ability of few-shot models are (i) robust similarity learning and (ii) generating or hallucinating additional data from the limited existing samples. In this paper, we follow the latter direction and present a novel data hallucination model. Most current datapoint generators rely on a specialized network (e.g., a GAN) tasked with hallucinating new datapoints, and therefore require large amounts of annotated data for their own training in the first place. We instead propose a less costly hallucination method for few-shot learning which utilizes saliency maps. To this end, we employ a saliency network to obtain the foregrounds and backgrounds of the available image samples and feed the resulting maps into a two-stream network that hallucinates datapoints directly in the feature space from viable foreground-background combinations. To the best of our knowledge, we are the first to leverage saliency maps for such a task, and we demonstrate their usefulness in hallucinating additional datapoints for few-shot learning. Our proposed network achieves state-of-the-art results on publicly available datasets.
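The pipeline described in the abstract (saliency-driven foreground/background separation, followed by a two-stream network that mixes parts in feature space) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the module names, feature dimension, the simple concatenation-based mixer, and the all-pairs mixing scheme are illustrative assumptions, and the saliency maps are assumed to come from an external saliency network.

```python
# Minimal sketch (assumed, not the authors' code) of saliency-guided feature
# hallucination: a saliency map splits an image into foreground and background,
# each part is encoded by its own stream, and new datapoints are hallucinated
# in feature space by pairing foregrounds with backgrounds of other samples.
import torch
import torch.nn as nn


class TwoStreamHallucinator(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # One small encoder per stream; a shared backbone would also be plausible.
        self.fg_stream = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.bg_stream = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Mixer fuses a (foreground, background) pair into one hallucinated feature.
        self.mixer = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, images: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # saliency: (B, 1, H, W) in [0, 1], produced by an external saliency network.
        fg = self.fg_stream(images * saliency)          # (B, D) foreground features
        bg = self.bg_stream(images * (1.0 - saliency))  # (B, D) background features
        # Pair every foreground with every background to hallucinate B*B features.
        B, D = fg.shape
        fg_rep = fg.unsqueeze(1).expand(B, B, D)
        bg_rep = bg.unsqueeze(0).expand(B, B, D)
        pairs = torch.cat([fg_rep, bg_rep], dim=-1).reshape(B * B, 2 * D)
        return self.mixer(pairs)                        # (B*B, D) hallucinated features


# Usage with random tensors standing in for images and saliency maps.
model = TwoStreamHallucinator()
imgs = torch.randn(4, 3, 84, 84)
sal = torch.rand(4, 1, 84, 84)
hallucinated = model(imgs, sal)
print(hallucinated.shape)  # torch.Size([16, 64])
```

In a few-shot setting, the hallucinated feature vectors would augment the handful of real support features before training the classifier, which is the role the abstract assigns to the hallucinated datapoints.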
Year
2019
DOI
10.1109/CVPR.2019.00288
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Similarity learning, Feature vector, Pattern recognition, Salience (neuroscience), Computer science, Artificial intelligence, Machine learning, Hallucinating, Hallucinate
DocType
Journal
Volume
abs/1904.03472
ISSN
1063-6919
Citations
7
PageRank
0.44
References
0
Authors
3
Name             Order  Citations  PageRank
Hongguang Zhang  1      106        16.70
Jing Zhang       2      24         6.36
Piotr Koniusz    3      173        16.64