Abstract |
---|
Image translation converts an image from one domain to another. Many existing GAN-based methods learn a mapping function via an adversarial loss and other constraints. However, the learned mapping function cannot express the detailed information of the generated images, and its generalization capability is limited. To address this problem, we propose an unpaired generative adversarial network model with an augmented auxiliary domain. The proposed model jointly models the augmented auxiliary domain and the domains to be learned. In particular, we design multiple generators and discriminators to achieve unpaired cross-domain learning. The generators and discriminators are subject to multiple adversarial losses and full cycle-constraint losses, which allow them to learn the information of the augmented auxiliary domain and reduce their mapping space. Finally, we conduct experiments on seven cases, and the results show that our model outperforms other unpaired cross-domain methods. |
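The cycle constraint mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation (the paper adds an augmented auxiliary domain and multiple generator/discriminator pairs); it shows only the standard CycleGAN-style idea that translating an image forward and back should reconstruct it, which reduces the mapping space. The toy generators `G` and `F` below are hypothetical stand-ins.

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L1 reconstruction error over both cycle directions.

    G maps domain X -> Y, F maps Y -> X. If the mappings are consistent,
    F(G(x)) should reconstruct x and G(F(y)) should reconstruct y.
    """
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> G(x) -> F(G(x)) ≈ x
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> F(y) -> G(F(y)) ≈ y
    return forward + backward

# Toy "generators" (hypothetical): a pair of exact inverses gives zero loss.
G = lambda img: img + 1.0
F = lambda img: img - 1.0
x = np.zeros((4, 4))
y = np.ones((4, 4))
print(cycle_consistency_loss(x, y, G, F))  # → 0.0
```

In practice this loss is added to the adversarial losses of each generator/discriminator pair; the full model in the paper applies such constraints across the auxiliary domain as well.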
Year | DOI | Venue |
---|---|---|
2018 | 10.1016/j.neucom.2018.07.057 | Neurocomputing |
Keywords | Field | DocType |
Image translation, GAN, Auxiliary domain, Cycle constraint, Cross domain | Image translation, Pattern recognition, Theoretical computer science, Full cycle, Artificial intelligence, Generative grammar, Mathematics | Journal |
Volume | ISSN | Citations |
316 | 0925-2312 | 2 |
PageRank | References | Authors |
0.38 | 18 | 5 |