Abstract
---

The Generative Adversarial Net is a frontier method among generative models for images, audio, and video. In this paper, we focus on conditional image generation and introduce the conditional Feature-Matching Generative Adversarial Net to generate images from category labels. By visualizing state-of-the-art discriminative conditional generative models, we find that these networks do not learn clear semantic concepts. We therefore design the loss function in the light of metric learning to measure semantic distance. The proposed model is evaluated on several well-known datasets and is shown to have higher perceptual quality and better diversity than existing generative models.
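The abstract does not specify the exact form of the feature-matching loss; a common formulation (following the feature-matching idea for GANs) compares the mean intermediate discriminator features of real and generated batches. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual loss.

```python
import numpy as np

def feature_matching_loss(features_real: np.ndarray, features_fake: np.ndarray) -> float:
    """Squared L2 distance between the batch-mean intermediate features
    of real and generated samples (shape: batch x feature_dim).

    This is an illustrative stand-in for a feature-matching objective;
    the paper's metric-learning-based loss may differ.
    """
    mu_real = features_real.mean(axis=0)  # mean feature vector of real batch
    mu_fake = features_fake.mean(axis=0)  # mean feature vector of fake batch
    return float(np.sum((mu_real - mu_fake) ** 2))
```

When the generator's feature statistics match the real data's, this loss is zero; it grows as the two batch means diverge.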
Year | Venue | Keywords
---|---|---
2017 | 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI) | Generative Adversarial Net, Image Generation, Deep Generative Model
Field | DocType | Citations
---|---|---
Semantic similarity, Data modeling, Image generation, Pattern recognition, Computer science, Feature matching, Artificial intelligence, Generative grammar, Discriminative model, Perception, Semantics | Conference | 0
PageRank | References | Authors
---|---|---
0.34 | 0 | 3
Name | Order | Citations | PageRank
---|---|---|---
Yuzhong Liu | 1 | 0 | 0.34 |
Zhao, Q. | 2 | 2 | 1.71 |
Cheng Jiang | 3 | 1 | 2.72 |