Title
A Survey of Near-Data Processing Architectures for Neural Networks
Abstract
Data-intensive workloads and applications, such as machine learning (ML), are fundamentally limited by traditional computing systems based on the von Neumann architecture. As data movement and energy consumption become key bottlenecks in the design of computing systems, interest in unconventional approaches such as Near-Data Processing (NDP) has grown significantly, especially for machine learning and, in particular, neural network (NN) accelerators. Emerging memory technologies, such as ReRAM and 3D-stacked memory, are promising candidates for efficiently architecting NDP-based NN accelerators, since they can serve both as high-density, low-energy storage and as in/near-memory computation and search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques according to the memory technology they employ, we highlight their similarities and differences. Finally, we discuss open challenges and future directions that need to be explored to improve and extend the adoption of NDP architectures in future computing platforms. This survey will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
Year
2022
DOI
10.3390/make4010004
Venue
Machine Learning and Knowledge Extraction
Keywords
machine learning, deep neural networks, near-data processing, near-memory processing, processing-in-memory, conventional memory technology, emerging memory technology, hardware architecture
DocType
Journal
Volume
4
Issue
1
ISSN
2504-4990
Citations
0
PageRank
0.34
References
0
Authors
3
Name                Order  Citations  PageRank
Mehdi Hassanpour    1      0          0.34
Marc Riera          2      22         2.54
Antonio González    3      3178       229.66