Title
Training Large Scale Deep Neural Networks on the Intel Xeon Phi Many-Core Coprocessor
Abstract
As a new area of machine learning research, deep learning has attracted much attention from the research community, and it promises to raise the level at which machines can understand data. Its unsupervised pre-training step can discover high-dimensional representations, or abstract features, that work much better than those obtained with principal component analysis (PCA). However, deep learning becomes problematic on large-scale data because of the intensive computation required to train many layers over large datasets: sequential deep learning algorithms usually cannot finish the computation in an acceptable time. In this paper, we propose a parallel many-core algorithm for the Intel Xeon Phi coprocessor that speeds up the unsupervised training of the Sparse Autoencoder and the Restricted Boltzmann Machine (RBM). Using the sequential training algorithm as a baseline, we applied several optimization methods to parallelize it. The experimental results show that our fully optimized algorithm achieves a speedup of more than 300x for the parallelized Sparse Autoencoder on the Intel Xeon Phi coprocessor over the original sequential algorithm. We also ran the fully optimized code on both the Intel Xeon Phi coprocessor and a high-end Intel Xeon CPU; for this application, the Xeon Phi version is 7 to 10 times faster than the Xeon CPU. In addition, we compared our fully optimized Xeon Phi code with a Matlab implementation running on a single Intel Xeon CPU; our method runs 16 times faster. These results suggest that, compared with GPUs, the Intel Xeon Phi offers an efficient yet more general-purpose way to parallelize deep learning algorithms, and that it delivers higher speed and better parallelism than the Intel Xeon CPU.
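The record contains no code, but as a rough illustration of the kind of OpenMP-based parallelization the abstract describes, the minimal sketch below computes the hidden-layer activations of a sparse autoencoder for a mini-batch, with independent training examples spread across cores and a SIMD hint for the coprocessor's 512-bit vector units. All names, sizes, and data layouts here are invented for illustration; this is not the authors' implementation.

#include <cmath>
#include <cstdio>
#include <vector>
#include <omp.h>

static inline double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

// Illustrative sketch (not from the paper): hidden activations
// H = sigmoid(W * X + b) for a mini-batch. Examples are independent,
// so the batch and hidden dimensions parallelize cleanly across the
// Xeon Phi's cores; the inner dot product is a SIMD candidate.
void hidden_activations(const std::vector<double>& W,  // n_hidden x n_visible, row-major
                        const std::vector<double>& b,  // n_hidden
                        const std::vector<double>& X,  // n_visible x batch, column-major
                        std::vector<double>& H,        // n_hidden x batch, column-major
                        int n_hidden, int n_visible, int batch)
{
    #pragma omp parallel for collapse(2) schedule(static)
    for (int m = 0; m < batch; ++m) {
        for (int j = 0; j < n_hidden; ++j) {
            double z = b[j];
            #pragma omp simd reduction(+:z)  // hint for the Phi's wide vector units
            for (int i = 0; i < n_visible; ++i)
                z += W[j * n_visible + i] * X[m * n_visible + i];
            H[m * n_hidden + j] = sigmoid(z);
        }
    }
}

int main() {
    const int n_visible = 784, n_hidden = 400, batch = 256;  // hypothetical sizes
    std::vector<double> W(n_hidden * n_visible, 0.01), b(n_hidden, 0.0);
    std::vector<double> X(n_visible * batch, 0.5), H(n_hidden * batch);
    hidden_activations(W, b, X, H, n_hidden, n_visible, batch);
    std::printf("H[0] = %f (threads: %d)\n", H[0], omp_get_max_threads());
    return 0;
}

Compiled natively for the coprocessor (e.g., icpc -qopenmp -mmic), such a loop nest can scale across the Phi's roughly 60 cores and 240 hardware threads; the same source also runs on a regular Xeon with g++ -fopenmp, which matches the abstract's point that the Phi offers a more general-purpose programming model than a GPU.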
Year
2014
DOI
10.1109/IPDPSW.2014.194
Venue
IPDPS Workshops
Keywords
restricted boltzmann machine, optimisation, sequential training algorithm, parallel processing, intel xeon phi coprocessor, machine learning research, optimization methods, parallelized sparse autoencoder, intel xeon phi many-core coprocessor, learning (artificial intelligence), sequential deep learning algorithms, many-core, deep neural network training, matlab code, fully-optimized algorithm, boltzmann machines, unsupervised training process, deep architecture, intel xeon phi, deep learning, parallel algorithm, unsupervised learning, sparse autoencoder, coprocessors, parallel method, rbm, feature extraction, vectors, algorithm design and analysis, neural networks
Field
MMX, x86, Computer science, Xeon Phi, Parallel computing, Hyper-threading, Machine Check Architecture, Pentium, Coprocessor, Xeon
DocType
Conference
Citations
10
PageRank
0.62
References
12
Authors
5
Name            Order  Citations  PageRank
Lei Jin         1      10         0.96
Zhaokang Wang   2      17         5.17
Rong Gu         3      110        17.77
Chunfeng Yuan   4      418        30.84
Yihua Huang     5      167        22.07