Title
Incremental Learning of Multi-Domain Image-to-Image Translations
Abstract
Current multi-domain image-to-image translation models assume a fixed set of domains and that the data for all domains are always available during training. However, over time, we may want to add new domains to the model. Existing methods either require re-training the whole model with data from all domains or require training several additional modules to accommodate the new domains. To address these limitations, we present IncrementalGAN, a multi-domain image-to-image translation model that can incrementally learn new domains using only a single generator. Our approach first decouples the domain label representation from the generator, so that the generator can be re-used for new domains without any architectural modification. Next, we introduce a distillation loss that prevents the model from forgetting previously learned domains. Our model compares favorably against several state-of-the-art baselines.
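The abstract describes a distillation term that keeps the single generator from forgetting previously learned domains when a new one is added. Below is a minimal, hypothetical sketch of one way such a term could be computed, assuming a PyTorch-style conditional generator G(x, c) and a frozen copy G_old saved before the new domain is introduced; the function name, signature, and the L1 penalty are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn.functional as F

def distillation_loss(G, G_old, x, old_domain_labels):
    # Illustrative sketch: the frozen generator G_old provides soft targets
    # so that G's translations to previously learned domains do not drift.
    with torch.no_grad():
        # No gradients flow into the frozen copy.
        targets = [G_old(x, c) for c in old_domain_labels]
    loss = torch.zeros((), device=x.device)
    for c, y_old in zip(old_domain_labels, targets):
        # Penalize deviation from the old model's translation for each old domain.
        loss = loss + F.l1_loss(G(x, c), y_old)
    return loss / len(old_domain_labels)

# During incremental training on a new domain, such a term would typically be
# added to the usual adversarial and translation objectives, e.g.:
#   total_loss = gan_loss + lambda_distill * distillation_loss(G, G_old, x, old_labels)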
Year
2021
DOI
10.1109/TCSVT.2020.3005311
Venue
IEEE Transactions on Circuits and Systems for Video Technology
Keywords
Generators, Data models, Generative adversarial networks, Training, Training data, Gallium nitride, Task analysis
DocType
Journal
Volume
31
Issue
4
ISSN
1051-8215
Citations
5
PageRank
0.46
References
5
Authors
3
Name, Order, Citations, PageRank
Daniel Stanley Tan, 1, 16, 5.04
Yong-Xiang Lin, 2, 6, 1.17
Kai-Lung Hua, 3, 265, 42.99