Title
mmCNN: A Novel Method for Large Convolutional Neural Network on Memory-Limited Devices.
Abstract
Deep learning has recently been widely used in many interactive application fields, including object recognition, speech recognition, and natural language processing. At the same time, more and more attractive interactive applications, such as face recognition and augmented reality, are becoming available on wearable and mobile devices. However, traditional deep learning models such as CNNs consume large amounts of memory, which makes it difficult to apply these powerful methods on memory-limited mobile platforms. In this paper, we present a novel memory management strategy called mmCNN to address this problem. mmCNN allows a trained large-scale CNN to be deployed on platforms of virtually any memory size, including GPUs, FPGAs, and memory-limited mobile devices. In our experiments, we run a feed-forward CNN pass within an extremely small memory budget (as low as 5 MB) on a GPU platform. The results show that our method saves more than 98% of memory compared to a traditional CNN implementation, and more than 90% compared to the state-of-the-art related work vDNN. Our work improves the computing scalability of interactive applications and breaks the memory bottleneck of applying deep learning on memory-limited devices.
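The abstract does not describe mmCNN's internal mechanism, so the following is only a minimal sketch of the general idea it evaluates: bounding the resident memory of a feed-forward pass by keeping just one layer's weights in device memory at a time and loading each layer's parameters on demand. The file layout, the load_layer_weights helper, and the fully connected ReLU layers are illustrative assumptions, not the paper's actual algorithm or API.

```python
# Minimal sketch (not the paper's mmCNN algorithm): run a feed-forward pass
# while keeping only the current layer's weights resident in memory.
# Assumption: each layer's weight matrix is stored in its own .npy file
# ("layer0.npy", "layer1.npy", ...) and layers are fully connected with ReLU.
import numpy as np

def load_layer_weights(layer_idx):
    # Hypothetical on-demand loader; a real system might stream weights
    # from flash storage or host memory instead of keeping them on-device.
    return np.load(f"layer{layer_idx}.npy")

def forward(x, num_layers):
    for i in range(num_layers):
        w = load_layer_weights(i)    # bring in only this layer's weights
        x = np.maximum(x @ w, 0.0)   # compute the layer output (ReLU)
        del w                        # release the weights before the next layer
    return x
```

With such a scheme, peak resident memory is roughly one layer's weights plus its input and output activations rather than the whole network, which is the kind of trade-off (extra transfers in exchange for a small memory footprint) suggested by the 5 MB budget reported in the abstract.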
Year
2018
Venue
COMPSAC
Field
Bottleneck, Computer architecture, Computer science, Convolutional neural network, Wearable computer, Real-time computing, Augmented reality, Mobile device, Memory management, Artificial intelligence, Deep learning, Scalability
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
5
Name, Order, Citations, PageRank
Shijie Li, 1, 41, 5.56
Yong Dou, 2, 632, 89.67
Jinwei Xu, 3, 24, 5.00
Qiang Wang, 4, 6, 3.54
Xin Niu, 5, 56, 11.39