Title
Integrating Deep Learning in Domain Sciences at Exascale
Abstract
This paper presents some of the current challenges in designing deep learning artificial intelligence (AI) and integrating it with traditional high-performance computing (HPC) simulations. We evaluate existing packages for their ability to run deep learning models and applications on large-scale HPC systems efficiently, identify challenges, and propose new asynchronous parallelization and optimization techniques for current large-scale heterogeneous systems and upcoming exascale systems. These developments, along with existing HPC AI software capabilities, have been integrated into MagmaDNN, an open-source HPC deep learning framework. Many deep learning frameworks are targeted at data scientists and fall short in providing quality integration into existing HPC workflows. This paper discusses the requirements of an HPC deep learning framework and how they can be met (e.g., as in MagmaDNN) through deep integration with existing HPC libraries, such as MAGMA and its modular memory management, MPI, cuBLAS, cuDNN, MKL, and HIP. Advancements are also illustrated through the use of algorithmic enhancements in reduced- and mixed-precision, as well as asynchronous optimization methods. Finally, we present examples and potential solutions for enhancing traditional compute- and data-intensive applications at ORNL and UTK with AI. The approaches and future challenges are illustrated in materials science, imaging, and climate applications.
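The abstract highlights asynchronous optimization and reduced/mixed-precision techniques. The sketch below is purely illustrative and is not taken from MagmaDNN or the paper: it shows a Hogwild-style asynchronous SGD loop in C++ in which worker threads update a shared weight vector without locks, accumulating a reduced-precision (float) residual into double-precision weights. All names, sizes, and hyperparameters are made up for the example.

```cpp
// Illustrative sketch only (not MagmaDNN code): Hogwild-style asynchronous SGD
// on a tiny synthetic least-squares problem. Worker threads read and update a
// shared weight vector without locks, and a reduced-precision (float) residual
// is accumulated into double-precision weights to echo mixed-precision training.
#include <cstdio>
#include <random>
#include <thread>
#include <vector>

int main() {
    const int n_samples = 1024, n_features = 8, n_workers = 4, sweeps = 50;
    const double lr = 0.01;  // hypothetical step size for this toy problem

    // Synthetic data: y = x . w_true, with w_true = [1, 2, ..., n_features].
    std::mt19937 gen(42);
    std::normal_distribution<double> dist(0.0, 1.0);
    std::vector<double> X(n_samples * n_features), y(n_samples, 0.0);
    for (int i = 0; i < n_samples; ++i)
        for (int j = 0; j < n_features; ++j) {
            X[i * n_features + j] = dist(gen);
            y[i] += X[i * n_features + j] * (j + 1);
        }

    // Shared parameters; workers update them concurrently and tolerate stale
    // reads. (The races are deliberate in Hogwild-style methods; a stricter
    // variant would use std::atomic with relaxed ordering.)
    std::vector<double> w(n_features, 0.0);

    auto worker = [&](int id) {
        std::mt19937 rng(id);
        std::uniform_int_distribution<int> pick(0, n_samples - 1);
        for (int step = 0; step < sweeps * n_samples / n_workers; ++step) {
            const int i = pick(rng);
            double pred = 0.0;
            for (int j = 0; j < n_features; ++j)
                pred += X[i * n_features + j] * w[j];
            // Reduced-precision residual, accumulated into double weights.
            const float err = static_cast<float>(pred - y[i]);
            for (int j = 0; j < n_features; ++j)
                w[j] -= lr * static_cast<double>(err) * X[i * n_features + j];
        }
    };

    std::vector<std::thread> pool;
    for (int t = 0; t < n_workers; ++t) pool.emplace_back(worker, t);
    for (auto& th : pool) th.join();

    for (int j = 0; j < n_features; ++j)
        std::printf("w[%d] = %.3f (target %d)\n", j, w[j], j + 1);
    return 0;
}
```

A full HPC implementation such as the one the abstract describes would combine asynchronous updates of this kind with MPI for inter-node communication and GPU libraries (MAGMA, cuBLAS, cuDNN) for the on-node linear algebra; the sketch above only demonstrates the shared-memory, lock-free update pattern.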
Year: 2020
DOI: 10.1007/978-3-030-63393-6_3
Venue: SMC
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 11
Name                   Order   Citations   PageRank
Rick Archibald         1       0           0.34
Edmond Chow            2       428         40.58
Eduardo F. D'Azevedo   3       0           0.34
Jack J. Dongarra       4       17625       2615.79
Markus Eisenbach       5       9           4.17
Rocco Febbo            6       0           0.34
Florent Lopez          7       0           0.34
Daniel Nichols         8       0           0.34
Stanimire Tomov        9       1214        102.02
Kwai Wong              10      0           0.34
Junqi Yin              11      11          5.45