Title
Parallelizing Back Propagation Neural Network On Speculative Multicores
Abstract
Applications typically exhibit very different performance characteristics depending on the accelerator. The back propagation neural network (BPNN) has been parallelized on various platforms, but it has not yet been thoroughly explored on speculative multicore architectures. This paper presents a study of parallelizing BPNN on a speculative multicore architecture, covering its speculative execution model, hardware design, and programming model. The implementation is analyzed with seven well-known benchmark data sets, and the study further examines the trade-offs among several important design factors for upcoming speculative multicore architectures. The experimental results show that: (1) BPNN performs well on the speculative multicore platform, achieving speedups (17.7x to 57.4x) comparable to graphics processors (GPUs) while offering friendlier programmability; (2) the computing resources of 64 cores can be used efficiently, and 4k is the proper speculative buffer capacity in the model.
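For intuition only: the sketch below is a minimal sequential C implementation of a three-layer BPNN trained on XOR, not the paper's benchmark code; the network sizes, learning rate, and data set are illustrative assumptions. The comments mark the per-neuron forward and backward loops that a thread-level speculation (TLS) runtime of the kind studied in the paper would be expected to run in parallel.

/*
 * Minimal BPNN sketch (illustrative, not the paper's code): a 3-layer
 * network with sigmoid units trained on XOR. The per-neuron loops
 * marked below are the natural candidates for speculative
 * parallelization on a TLS multicore.
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_IN  2
#define N_HID 4
#define N_OUT 1
#define ETA   0.5   /* learning rate (illustrative) */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void) {
    double w_ih[N_HID][N_IN + 1], w_ho[N_OUT][N_HID + 1];
    double in[4][N_IN]   = {{0,0},{0,1},{1,0},{1,1}};
    double tgt[4][N_OUT] = {{0},{1},{1},{0}};

    srand(1);
    for (int h = 0; h < N_HID; h++)
        for (int i = 0; i <= N_IN; i++)
            w_ih[h][i] = (double)rand() / RAND_MAX - 0.5;
    for (int o = 0; o < N_OUT; o++)
        for (int h = 0; h <= N_HID; h++)
            w_ho[o][h] = (double)rand() / RAND_MAX - 0.5;

    for (int epoch = 0; epoch < 10000; epoch++) {
        for (int p = 0; p < 4; p++) {
            double hid[N_HID], out[N_OUT], d_out[N_OUT], d_hid[N_HID];

            /* Forward pass: each hidden neuron is independent, so a TLS
             * runtime could speculate across iterations of h. */
            for (int h = 0; h < N_HID; h++) {
                double s = w_ih[h][N_IN];             /* bias term */
                for (int i = 0; i < N_IN; i++) s += w_ih[h][i] * in[p][i];
                hid[h] = sigmoid(s);
            }
            for (int o = 0; o < N_OUT; o++) {
                double s = w_ho[o][N_HID];            /* bias term */
                for (int h = 0; h < N_HID; h++) s += w_ho[o][h] * hid[h];
                out[o] = sigmoid(s);
                d_out[o] = (tgt[p][o] - out[o]) * out[o] * (1.0 - out[o]);
            }

            /* Backward pass: hidden-layer deltas and weight updates are
             * also per-neuron independent, another speculative candidate. */
            for (int h = 0; h < N_HID; h++) {
                double e = 0.0;
                for (int o = 0; o < N_OUT; o++) e += d_out[o] * w_ho[o][h];
                d_hid[h] = e * hid[h] * (1.0 - hid[h]);
            }
            for (int o = 0; o < N_OUT; o++) {
                for (int h = 0; h < N_HID; h++) w_ho[o][h] += ETA * d_out[o] * hid[h];
                w_ho[o][N_HID] += ETA * d_out[o];
            }
            for (int h = 0; h < N_HID; h++) {
                for (int i = 0; i < N_IN; i++) w_ih[h][i] += ETA * d_hid[h] * in[p][i];
                w_ih[h][N_IN] += ETA * d_hid[h];
            }
        }
    }

    /* Report the trained network's outputs on the four XOR patterns. */
    for (int p = 0; p < 4; p++) {
        double hid[N_HID], s;
        for (int h = 0; h < N_HID; h++) {
            s = w_ih[h][N_IN];
            for (int i = 0; i < N_IN; i++) s += w_ih[h][i] * in[p][i];
            hid[h] = sigmoid(s);
        }
        s = w_ho[0][N_HID];
        for (int h = 0; h < N_HID; h++) s += w_ho[0][h] * hid[h];
        printf("%g %g -> %.3f\n", in[p][0], in[p][1], sigmoid(s));
    }
    return 0;
}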
Year
2016
DOI
10.1109/ICPADS.2016.119
Venue
2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS)
Keywords
thread level speculation, parallel programming, back propagation, multicore
Field
Computer science, Back propagation neural network, Speculative multithreading, Real-time computing, Multi-core processor, Distributed computing, Speedup, Graphics, Computer architecture, Programming paradigm, Speculative execution, Parallel computing, Backpropagation
DocType
Conference
ISSN
1521-9097
Citations
0
PageRank
0.34
References
0
Authors
5
Name           Order  Citations  PageRank
Yaobin Wang    1      0          0.34
Hong An        2      1          1.73
Zhi-qin Liu    3      12         4.93
Tao Liu        4      0          0.34
Dongmei Zhao   5      0          0.34