Name: XIAOCHEN PENG
Affiliation: Arizona State University
Papers: 31
Collaborators: 51
Citations: 61
PageRank: 12.17
Referers: 260
Referees: 112
References: 27
Title | Citations | PageRank | Year
A Runtime Reconfigurable Design of Compute-in-Memory based Hardware Accelerator | 0 | 0.34 | 2021
RRAM for Compute-in-Memory: From Inference to Training | 9 | 0.55 | 2021
Impact of Multilevel Retention Characteristics on RRAM based DNN Inference Engine | 0 | 0.34 | 2021
DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-Chip Training | 1 | 0.37 | 2021
Exploiting Process Variations to Protect Machine Learning Inference Engine from Chip Cloning | 0 | 0.34 | 2021
Structured Pruning of RRAM Crossbars for Efficient In-Memory Computing Acceleration of Deep Neural Networks | 3 | 0.39 | 2021
Cryogenic Performance for Compute-in-Memory based Deep Neural Network Accelerator | 0 | 0.34 | 2021
A Runtime Reconfigurable Design of Compute-in-Memory-Based Hardware Accelerator for Deep Learning Inference | 0 | 0.34 | 2021
Compute-in-RRAM with Limited On-chip Resources | 0 | 0.34 | 2021
Secure XOR-CIM Engine: Compute-In-Memory SRAM Architecture With Embedded XOR Encryption | 0 | 0.34 | 2021
NeuroSim Validation with 40nm RRAM Compute-in-Memory Macro | 1 | 0.35 | 2021
MINT: Mixed-Precision RRAM-Based In-Memory Training Architecture | 0 | 0.34 | 2020
Benchmark of the Compute-in-Memory-Based DNN Accelerator With Area Constraint | 0 | 0.34 | 2020
Architectural Design of 3D NAND Flash based Compute-in-Memory for Inference Engine | 1 | 0.35 | 2020
A Two-Way SRAM Array Based Accelerator For Deep Neural Network On-Chip Training | 0 | 0.34 | 2020
CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays | 7 | 0.49 | 2020
XOR-CIM: Compute-In-Memory SRAM Architecture with Embedded XOR Encryption | 1 | 0.37 | 2020
Compute-in-Memory with Emerging Nonvolatile-Memories: Challenges and Prospects | 0 | 0.34 | 2020
A Variation Robust Inference Engine Based on STT-MRAM with Parallel Read-Out | 0 | 0.34 | 2020
Optimizing Weight Mapping and Data Flow for Convolutional Neural Networks on Processing-in-Memory Architectures | 7 | 0.50 | 2020
MAX2: An ReRAM-based Neural Network Accelerator that Maximizes Data Reuse and Area Utilization | 0 | 0.34 | 2019
Design Guidelines of RRAM based Neural-Processing-Unit: A Joint Device-Circuit-Algorithm Analysis | 2 | 0.50 | 2019
MLP+NeuroSimV3.0: Improving On-chip Learning Performance with Device to Algorithm Optimizations | 0 | 0.34 | 2019
Inference engine benchmarking across technological platforms from CMOS to RRAM | 0 | 0.34 | 2019
CIMAT: a transpose SRAM-based compute-in-memory architecture for deep neural network on-chip training | 0 | 0.34 | 2019
Fully parallel RRAM synaptic array for implementing binary neural network with (+1, -1) weights and (+1, 0) neurons | 4 | 0.45 | 2018
NeuroSim: A Circuit-Level Macro Model for Benchmarking Neuro-Inspired Architectures in Online Learning | 18 | 0.90 | 2018
XNOR-RRAM: A Scalable And Parallel Resistive Synaptic Architecture For Binary Neural Networks | 6 | 0.49 | 2018
X-Point PUF: Exploiting Sneak Paths for a Strong Physical Unclonable Function Design | 0 | 0.34 | 2018
Benchmark of RRAM based Architectures for Dot-Product Computation | 0 | 0.34 | 2018
A Versatile ReRAM-based Accelerator for Convolutional Neural Networks | 1 | 0.36 | 2018