Title
Improving and Understanding Variational Continual Learning
Abstract
In the continual learning setting, tasks are encountered sequentially. The goal is to learn whilst i) avoiding catastrophic forgetting, ii) using model capacity efficiently, and iii) employing forward and backward transfer learning. In this paper, we explore how the Variational Continual Learning (VCL) framework achieves these desiderata on two continual learning benchmarks: split MNIST and permuted MNIST. We first report significantly improved results on what was already a competitive approach. The improvements are achieved by establishing a new best-practice approach to mean-field variational Bayesian neural networks. We then examine the solutions in detail, which allows us to understand why VCL performs as it does, and we compare its solution to what an 'ideal' continual learning solution might be.
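The abstract builds on the Variational Continual Learning objective of Nguyen et al. (2018), in which the posterior learned after each task becomes the prior for the next. As a point of reference, below is a minimal sketch of the KL regularizer at the heart of mean-field VCL, assuming diagonal-Gaussian posteriors over the network weights; all names are illustrative and this is not the authors' code.

import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) between diagonal Gaussians, summed over all weights.

    In VCL, q is the posterior q_t being fit on task t and p is the
    previous posterior q_{t-1}, which acts as the prior. Training
    maximizes the expected log-likelihood on task t's data minus this term.
    """
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Toy usage: a posterior that has drifted slightly from the previous one
# incurs a small penalty, anchoring new-task training to old-task knowledge.
rng = np.random.default_rng(0)
mu_prev = rng.normal(size=100)
sigma_prev = np.full(100, 0.1)
mu_curr = mu_prev + 0.01 * rng.normal(size=100)
print(kl_diag_gaussians(mu_curr, np.full(100, 0.1), mu_prev, sigma_prev))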
Year: 2019
Venue: arXiv: Machine Learning
DocType: Journal
Volume: abs/1905.02099
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name                Order   Citations   PageRank
Siddharth Swaroop   1       3           3.06
Cuong Nguyen        2       2073        5.89
Thang D. Bui        3       57          5.77
Richard E. Turner   4       3223        7.95