Title: Learning Everywhere: Pervasive Machine Learning for Effective High-Performance Computation
Abstract: The convergence of HPC and data-intensive methodologies provides a promising approach to major performance improvements. This paper gives a general description of the interaction between traditional HPC and ML approaches and motivates the "Learning Everywhere" paradigm for HPC. We introduce the concept of "effective performance" that one can achieve by combining learning methodologies with simulation-based approaches, and distinguish it from traditional performance as measured by benchmark scores. To support the promise of integrating HPC and learning methods, the paper examines specific examples and opportunities across a series of domains. It concludes with a series of open challenges in software systems, methods, and infrastructure that the Learning Everywhere paradigm presents.
Year: 2019
DOI: 10.1109/IPDPSW.2019.00081
Venue: 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
Keywords: Effective Performance, Machine learning driven HPC
Field: Convergence (routing), Data science, Computer science, Cyberinfrastructure, Computation, Distributed computing
DocType: Journal
Volume: abs/1902.10810
ISSN: 2164-7062
ISBN: 978-1-7281-3511-3
Citations: 6
PageRank: 0.49
References: 8
Authors: 13
1. Geoffrey Fox
2. James Glazier
3. J. C. S. Kadupitiya
4. Vikram Jadhao
5. Minje Kim
6. Judy Qiu
7. James P. Sluka
8. Endre T. Somogyi
9. Madhav Marathe
10. Madhav Marathe
11. Abhijin Adiga
12. Jiangzhuo Chen
13. Oliver Beckstein