Title
Learning to Generate Radar Image Sequences Using Two-Stage Generative Adversarial Networks
Abstract
While quantitative precipitation estimation (QPE) using weather radar is widely adopted in operations, precipitation data sets are often highly imbalanced. In particular, extreme precipitation is usually underrepresented, which may introduce a bottleneck for radar QPE with machine learning models. Discovering the intrinsic characteristics of extreme precipitation from few samples is challenging. In this letter, we focus on radar reflectivity data and aim to generate synthetic radar image sequences for extreme precipitation. Because of the relatively long interval between consecutive radar images imposed by the radar volume scan, traditional video generation methods are not suitable. To address this problem, we propose Two-stage Generative Adversarial Networks (TsGANs). In general, our TsGAN sets up an adversarial process between generators and discriminators: the generator produces samples similar to real data, while the discriminator judges whether a sample is real or generated. In Stage I, we generate an image sequence containing content and motion features. In Stage II, we design an enhanced network structure that enriches the adversarial processes and further improves the motion features. Experiments are performed within the radar coverage of Shenzhen, China, on rainfall events from 2014 to 2016. Results show that our TsGAN is superior to previous works.
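The adversarial process described in the abstract (a generator producing samples that mimic real data while a discriminator learns to tell them apart) can be sketched with a toy single-stage GAN on 1-D data. This is only an illustrative sketch, not the paper's TsGAN: the linear generator, logistic discriminator, Gaussian stand-in data, and all hyperparameters here are hypothetical.

```python
import numpy as np

# Toy single-stage GAN on 1-D data; purely illustrative, NOT the TsGAN
# from the letter. All models and hyperparameters are hypothetical.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(3, 0.5), a stand-in for real measurements.
def sample_real(n):
    return rng.normal(3.0, 0.5, size=n)

# Generator G(z) = wg*z + bg maps noise to synthetic samples.
wg, bg = 1.0, 0.0
# Discriminator D(x) = sigmoid(wd*x + bd) scores how "real" a sample looks.
wd, bd = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(500):
    z = rng.normal(size=batch)
    x_fake = wg * z + bg
    x_real = sample_real(batch)

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    grad_wd = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    grad_bd = np.mean(1 - d_real) - np.mean(d_fake)
    wd += lr * grad_wd
    bd += lr * grad_bd

    # Generator ascent step: maximize log D(fake) (non-saturating loss).
    d_fake = sigmoid(wd * x_fake + bd)
    grad_wg = np.mean((1 - d_fake) * wd * z)
    grad_bg = np.mean((1 - d_fake) * wd)
    wg += lr * grad_wg
    bg += lr * grad_bg

# Draw synthetic samples from the trained generator.
samples = wg * rng.normal(size=1000) + bg
print(float(samples.mean()))
```

The generator update uses the non-saturating loss (maximize log D(G(z)) rather than minimize log(1 - D(G(z)))), a common choice because it gives stronger gradients early in training when the discriminator easily rejects generated samples.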
Year: 2020
DOI: 10.1109/LGRS.2019.2922326
Venue: IEEE Geoscience and Remote Sensing Letters
Keywords: Deep learning, extreme precipitation, generative adversarial networks (GANs), radar image sequences
Field: Computer vision, Radar imaging, Artificial intelligence, Generative grammar, Mathematics, Adversarial system
DocType: Journal
Volume: 17
Issue: 3
ISSN: 1545-598X
Citations: 0
PageRank: 0.34
References: 0
Authors: 4

Name             Order  Citations  PageRank
Chenyang Zhang   1      0          0.68
Xuebing Yang     2      8          4.55
Wensheng Zhang   3      323        28.76
Wensheng Zhang   4      38         9.48