Title
Performance Benchmarking of Parallel Hyperparameter Tuning for Deep Learning Based Tornado Predictions
Abstract
Predicting violent storms and dangerous weather conditions with current models can take a long time due to the immense complexity associated with weather simulation. Machine learning has the potential to classify tornadic weather patterns much more rapidly, thus allowing for more timely alerts to the public. To deal with class imbalance challenges in machine learning, different data augmentation approaches have been proposed. In this work, we examine the difference in wall time between live data augmentation methods and the use of pre-augmented data when training a convolutional neural network for tornado prediction. We also compare CPU- and GPU-based training over varying sizes of augmented data sets. Additionally, we examine the impact that varying the number of GPUs used to train a convolutional neural network has on wall time and accuracy. We conclude that using multiple GPUs to train a single network has no significant advantage over using a single GPU. The number of GPUs used during training should be kept as small as possible for maximum search throughput, as the native Keras multi-GPU model provides little speedup with optimal learning parameters.
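The abstract contrasts live data augmentation (transformations applied on the fly while training) with pre-augmented data (the enlarged data set built once, ahead of time), and mentions a native Keras multi-GPU model (presumably Keras' multi_gpu_model utility, since superseded by tf.distribute.MirroredStrategy). Below is a minimal sketch of how such a wall-time comparison could be set up; it is not the authors' code, and the CNN architecture, the synthetic 48x48 single-channel inputs, the flip transforms, and the use of MirroredStrategy are illustrative assumptions.

# Minimal sketch (assumptions, not the authors' code): compares wall time for
# live, on-the-fly augmentation against a pre-augmented array, with the model
# replicated across available GPUs via tf.distribute.MirroredStrategy.
import time
import numpy as np
import tensorflow as tf

def build_cnn(input_shape=(48, 48, 1)):
    # Small illustrative CNN; the real network and input shape differ.
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# Synthetic stand-in for the radar-derived training images and tornado labels.
x = np.random.rand(1024, 48, 48, 1).astype("float32")
y = np.random.randint(0, 2, size=(1024,)).astype("float32")

def augment(image, label):
    # Live augmentation: random flips applied every time a sample is drawn.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    return image, label

live_ds = (tf.data.Dataset.from_tensor_slices((x, y))
           .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(64)
           .prefetch(tf.data.AUTOTUNE))

# Pre-augmented alternative: the transform is applied once, offline, and the
# enlarged array is fed to training directly.
x_pre = np.concatenate([x, x[:, :, ::-1, :]], axis=0)
y_pre = np.concatenate([y, y], axis=0)

# Replicates the model on every visible GPU; falls back to CPU if none exist.
strategy = tf.distribute.MirroredStrategy()

def compiled_cnn():
    with strategy.scope():
        model = build_cnn()
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
    return model

model = compiled_cnn()
start = time.time()
model.fit(live_ds, epochs=2, verbose=0)
print("live augmentation wall time: %.1f s" % (time.time() - start))

model = compiled_cnn()
start = time.time()
model.fit(x_pre, y_pre, batch_size=64, epochs=2, verbose=0)
print("pre-augmented wall time:     %.1f s" % (time.time() - start))

A fresh model is compiled before each timed fit so that the two measurements start from the same untrained state rather than the second run continuing from the first.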
Year
2021
DOI
10.1016/j.bdr.2021.100212
Venue
BIG DATA RESEARCH
Keywords
Deep learning, Data augmentation, Parallel performance, TensorFlow, Keras, GPU programming
DocType
Journal
Volume
25
ISSN
2214-5796
Citations
0
PageRank
0.34
References
0
Authors
4
Name                    Order  Citations  PageRank
Jonathan N. Basalyga    1      0          0.34
Carlos A. Barajas       2      0          0.34
Matthias K. Gobbert     3      31         10.72
Jianwu Wang             4      215        26.72