Title: Zero-Shot Text-to-Image Generation
Abstract: Text-to-image generation has traditionally focused on finding better modeling assumptions for training on a fixed dataset. These assumptions might involve complex architectures, auxiliary losses, or side information such as object part labels or segmentation masks supplied during training. We describe a simple approach for this task based on a transformer that autoregressively models the text and image tokens as a single stream of data. With sufficient data and scale, our approach is competitive with previous domain-specific models when evaluated in a zero-shot fashion.
Year: 2021
Venue: INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139
DocType: Conference
Volume: 139
ISSN: 2640-3498
Citations: 0
PageRank: 0.34
References: 0
Authors: 8
Name             Order  Citations  PageRank
Aditya Ramesh    1      0          1.01
Mikhail Pavlov   2      0          0.34
Gabriel Goh      3      0          0.34
Scott Gray       4      45         2.12
Chelsea Voss     5      0          0.34
Alec Radford     6      21657      5.60
Mark Chen        7      0          1.35
Ilya Sutskever   8      258141     120.24