Abstract |
---|
In this paper, we propose cuWide, an efficient GPU training framework for large-scale wide models. To fully benefit from the memory hierarchy of the GPU, cuWide applies a new flow-based training scheme that leverages the spatial and temporal locality of wide models to drastically reduce the amount of communication with GPU global memory. Comprehensive experiments show that cuWide can be more than 20x faster than state-of-the-art GPU and multi-core CPU solutions. |
Year | DOI | Venue
---|---|---
2021 | 10.1109/ICDE51399.2021.00251 | 2021 IEEE 37th International Conference on Data Engineering (ICDE 2021)

DocType | ISSN | Citations
---|---|---
Conference | 1084-4627 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 7

Name | Order | Citations | PageRank |
---|---|---|---
Xupeng Miao | 1 | 14 | 3.33 |
Lingxiao Ma | 2 | 11 | 2.86 |
Zhi Yang | 3 | 371 | 41.32 |
Yingxia Shao | 4 | 213 | 24.25 |
Bin Cui | 5 | 1843 | 124.59 |
Lele Yu | 6 | 70 | 6.93 |
Jiawei Jiang | 7 | 89 | 14.60 |