Abstract |
---|
The recent emergence of the IoT has led to a substantial increase in the amount of data processed. Today, a large number of applications are data intensive, involving massive data transfers between the processing core and memory. These transfers act as a bottleneck, mainly due to the limited data bandwidth between memory and the processing core. Processing in memory (PIM) avoids this latency problem by performing computations at the source of the data.
In this paper, we propose designs that enable PIM in the three major memory technologies, i.e., SRAM, DRAM, and the newly emerging non-volatile memories (NVMs). We exploit the analog properties of the different memories to implement simple logic functions, namely OR, AND, and majority, inside memory. We then extend these primitives to implement in-memory addition and multiplication. We compare the three memory technologies against a GPU by running general-purpose applications on them. Our evaluations show that SRAM, NVM, and DRAM are 29.8x (36.3x), 17.6x (20.3x), and 1.7x (2.7x) better in performance (energy consumption) than an AMD GPU.
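The abstract states that in-memory OR, AND, and majority primitives are extended to addition. A standard way to do this, sketched below under the assumption that the paper uses the usual majority-logic full adder, is to note that the carry-out of a full adder is exactly the 3-input majority of the operand bits and the carry-in; all function and variable names here are illustrative, not taken from the paper.

```python
# Sketch: ripple-carry addition built from the logic primitives the
# abstract mentions (OR, AND, majority). Bits are modeled as ints 0/1.

def maj(a, b, c):
    """3-input majority: 1 iff at least two inputs are 1 (AND/OR form)."""
    return (a & b) | (b & c) | (a & c)

def full_adder(a, b, cin):
    # Carry-out is exactly the 3-input majority of a, b, and cin.
    cout = maj(a, b, cin)
    # Sum bit is the 3-input parity (XOR).
    s = a ^ b ^ cin
    return s, cout

def add(x, y, width=8):
    """Add two unsigned integers bit-serially, LSB first."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

For example, `add(13, 29)` ripples a carry through the low bits and returns 42. In a PIM setting, each `maj` call corresponds to one analog in-memory majority operation rather than a CPU instruction.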
Year | DOI | Venue |
---|---|---|
2019 | 10.1145/3299874.3317977 | Proceedings of the 2019 on Great Lakes Symposium on VLSI |
Keywords | Field | DocType |
---|---|---|
analog computing, dram, energy efficiency, memristors, non-volatile memories, processing in memory, sram | Dram,Bottleneck,Memristor,Computer science,Efficient energy use,Latency (engineering),Real-time computing,Static random-access memory,Bandwidth (signal processing),Energy consumption,Embedded system | Conference |
ISSN | ISBN | Citations |
---|---|---|
1066-1395 | 978-1-4503-6252-8 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 0 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Saransh Gupta | 1 | 101 | 11.58 |
Mohsen Imani | 2 | 341 | 48.13 |
Tajana Simunic | 3 | 3198 | 266.23 |