A Runtime Reconfigurable Design of Compute-in-Memory based Hardware Accelerator | 0 | 0.34 | 2021 |
RRAM for Compute-in-Memory: From Inference to Training | 9 | 0.55 | 2021 |
Impact of Multilevel Retention Characteristics on RRAM based DNN Inference Engine | 0 | 0.34 | 2021 |
DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-Chip Training | 1 | 0.37 | 2021 |
Exploiting Process Variations to Protect Machine Learning Inference Engine from Chip Cloning | 0 | 0.34 | 2021 |
Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks | 3 | 0.39 | 2021 |
Cryogenic Performance for Compute-in-Memory based Deep Neural Network Accelerator | 0 | 0.34 | 2021 |
A Runtime Reconfigurable Design of Compute-in-Memory–Based Hardware Accelerator for Deep Learning Inference | 0 | 0.34 | 2021 |
Compute-in-RRAM with Limited On-chip Resources | 0 | 0.34 | 2021 |
Secure XOR-CIM Engine: Compute-In-Memory SRAM Architecture With Embedded XOR Encryption | 0 | 0.34 | 2021 |
NeuroSim Validation with 40nm RRAM Compute-in-Memory Macro | 1 | 0.35 | 2021 |
MINT: Mixed-Precision RRAM-Based In-Memory Training Architecture | 0 | 0.34 | 2020 |
Benchmark of the Compute-in-Memory-Based DNN Accelerator With Area Constraint | 0 | 0.34 | 2020 |
Architectural Design of 3D NAND Flash based Compute-in-Memory for Inference Engine | 1 | 0.35 | 2020 |
A Two-Way SRAM Array Based Accelerator For Deep Neural Network On-Chip Training | 0 | 0.34 | 2020 |
CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays | 7 | 0.49 | 2020 |
XOR-CIM: Compute-In-Memory SRAM Architecture with Embedded XOR Encryption | 1 | 0.37 | 2020 |
Compute-in-Memory with Emerging Nonvolatile-Memories: Challenges and Prospects | 0 | 0.34 | 2020 |
A Variation Robust Inference Engine Based on STT-MRAM with Parallel Read-Out | 0 | 0.34 | 2020 |
Optimizing Weight Mapping and Data Flow for Convolutional Neural Networks on Processing-in-Memory Architectures | 7 | 0.50 | 2020 |
MAX2: An ReRAM-based Neural Network Accelerator that Maximizes Data Reuse and Area Utilization | 0 | 0.34 | 2019 |
Design Guidelines of RRAM based Neural-Processing-Unit: A Joint Device-Circuit-Algorithm Analysis | 2 | 0.50 | 2019 |
MLP+NeuroSimV3.0: Improving On-chip Learning Performance with Device to Algorithm Optimizations | 0 | 0.34 | 2019 |
Inference engine benchmarking across technological platforms from CMOS to RRAM | 0 | 0.34 | 2019 |
CIMAT: a transpose SRAM-based compute-in-memory architecture for deep neural network on-chip training | 0 | 0.34 | 2019 |
Fully parallel RRAM synaptic array for implementing binary neural network with (+1, -1) weights and (+1, 0) neurons | 4 | 0.45 | 2018 |
NeuroSim: A Circuit-Level Macro Model for Benchmarking Neuro-Inspired Architectures in Online Learning | 18 | 0.90 | 2018 |
XNOR-RRAM: A Scalable and Parallel Resistive Synaptic Architecture for Binary Neural Networks | 6 | 0.49 | 2018 |
X-Point PUF: Exploiting Sneak Paths for a Strong Physical Unclonable Function Design | 0 | 0.34 | 2018 |
Benchmark of RRAM based Architectures for Dot-Product Computation | 0 | 0.34 | 2018 |
A Versatile ReRAM-based Accelerator for Convolutional Neural Networks | 1 | 0.36 | 2018 |