Title
Concept Learning through Deep Reinforcement Learning with Memory-Augmented Neural Networks.
Abstract
Deep neural networks achieve strong performance in many settings where large amounts of data allow familiar patterns to be memorized. However, the standard supervised deep learning paradigm remains limited when new concepts must be learned efficiently from scarce data. In this paper, we present a memory-augmented neural network motivated by the process of human concept learning. The training procedure, imitating how humans form concepts, learns to distinguish samples from different classes and to aggregate samples of the same kind. To better exploit these advantages drawn from human behavior, we propose a sequential process in which the network decides, at every step, how to remember each sample. Within this sequential process, a stable and interactive memory serves as a key module. We validate our model on several standard one-shot learning tasks and on an exploratory outlier detection problem. In all experiments, our model is highly competitive, matching or outperforming strong baselines.
Year
2019
DOI
10.1016/j.neunet.2018.10.018
Venue
Neural Networks
Keywords
One-shot learning, Memory, Attention, Deep reinforcement learning, Neural networks
DocType
Journal
Volume
110
Issue
1
ISSN
0893-6080
Citations
1
PageRank
0.35
References
11
Authors
4
Name        Order  Citations  PageRank
Jing Shi    1      5          5.80
Jiaming Xu  2      284        35.34
Yiqun Yao   3      1          1.70
Bo Xu       4      241        36.59