Name: ZHEZHI HE
Affiliation: Univ Cent Florida, Dept Elect & Comp Engn, Orlando, FL 32816 USA
Papers: 66
Collaborators: 125
Citations: 136
PageRank: 25.37
Referers: 354
Referees: 1220
References: 499
Title | Citations | PageRank | Year
Non-Structured DNN Weight Pruning—Is It Beneficial in Any Platform? | 0 | 0.34 | 2022
DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture | 0 | 0.34 | 2022
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | 0 | 0.34 | 2022
T-BFA: Targeted Bit-Flip Adversarial Weight Attack | 2 | 0.41 | 2022
Self-Terminating Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing | 0 | 0.34 | 2022
N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores | 0 | 0.34 | 2022
PIM-DH: ReRAM-based processing-in-memory architecture for deep hashing acceleration | 0 | 0.34 | 2022
EBSP: evolving bit sparsity patterns for hardware-friendly inference of quantized deep neural networks | 0 | 0.34 | 2022
SATO: spiking neural network acceleration via temporal-oriented dataflow and architecture | 0 | 0.34 | 2022
Elf: accelerate high-resolution mobile deep vision with content-aware parallel offloading | 5 | 0.44 | 2021
Reram-Sharing: Fine-Grained Weight Sharing For Reram-Based Deep Neural Network Accelerator | 0 | 0.34 | 2021
Unary Coding and Variation-Aware Optimal Mapping Scheme for Reliable ReRAM-Based Neuromorphic Computing | 1 | 0.36 | 2021
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point | 0 | 0.34 | 2021
Energy-Efficient Hybrid-RAM with Hybrid Bit-Serial based VMM Support | 0 | 0.34 | 2021
MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning | 1 | 0.35 | 2021
Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator | 0 | 0.34 | 2021
PIMGCN: A ReRAM-Based PIM Design for Graph Convolutional Network Acceleration | 0 | 0.34 | 2021
RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery | 0 | 0.34 | 2021
AdaptiveGCN: Efficient GCN Through Adaptively Sparsifying Graphs | 1 | 0.36 | 2021
KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning | 0 | 0.34 | 2021
BISWSRBS: A Winograd-based CNN Accelerator with a Fine-grained Regular Sparsity Pattern and Mixed Precision Quantization | 1 | 0.35 | 2021
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network | 0 | 0.34 | 2021
Re2PIM: A Reconfigurable ReRAM-Based PIM Design for Variable-Sized Vector-Matrix Multiplication | 1 | 0.35 | 2021
Robust Sparse Regularization: Defending Adversarial Attacks Via Regularized Sparse Network | 0 | 0.34 | 2020
Defending Bit-Flip Attack Through Dnn Weight Reconstruction | 0 | 0.34 | 2020
Non-Uniform Dnn Structured Subnets Sampling For Dynamic Inference | 0 | 0.34 | 2020
Processing-in-Memory Accelerator for Dynamic Neural Network with Run-Time Tuning of Accuracy, Power and Latency | 0 | 0.34 | 2020
Network-based multi-task learning models for biomarker selection and cancer outcome prediction | 0 | 0.34 | 2020
Harmonious Coexistence Of Structured Weight Pruning And Ternarization For Deep Neural Networks | 1 | 0.35 | 2020
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack | 0 | 0.34 | 2020
MRIMA: An MRAM-Based In-Memory Accelerator | 7 | 0.47 | 2020
TBT: Targeted Neural Network Attack with Bit Trojan | 0 | 0.34 | 2020
Sparse BD-Net: A Multiplication-less DNN with Sparse Binarized Depth-wise Separable Convolution | 4 | 0.37 | 2020
Accelerating Deep Neural Networks in Processing-in-Memory Platforms: Analog or Digital Approach? | 1 | 0.37 | 2019
Bit-Flip Attack: Crushing Neural Network With Progressive Bit Search | 4 | 0.41 | 2019
Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search | 0 | 0.34 | 2019
Binarized Depthwise Separable Neural Network for Object Tracking in FPGA | 1 | 0.38 | 2019
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack | 4 | 0.41 | 2019
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy | 3 | 0.40 | 2019
ParaPIM: a parallel processing-in-memory accelerator for binary-weight deep neural networks | 6 | 0.39 | 2019
Noise Injection Adaption: End-to-End ReRAM Crossbar Non-ideal Effect Adaption for Neural Network Mapping | 13 | 0.88 | 2019
Artificial Neuron using Ag/2D-MoS₂/Au Threshold Switching Memristor | 0 | 0.34 | 2019
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness | 0 | 0.34 | 2019
Simultaneously Optimizing Weight And Quantizer Of Ternary Neural Network Using Truncated Gaussian Approximation | 7 | 0.42 | 2018
Accelerating Low Bit-Width Deep Convolution Neural Network in MRAM | 1 | 0.36 | 2018
Exploring a SOT-MRAM Based In-Memory Computing for Data Processing | 4 | 0.49 | 2018
Leveraging Spintronic Devices for Efficient Approximate Logic and Stochastic Neural Networks | 0 | 0.34 | 2018
PIM-TGAN: A Processing-in-Memory Accelerator for Ternary Generative Adversarial Networks | 0 | 0.34 | 2018
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack | 8 | 0.45 | 2018
A Fully Onchip Binarized Convolutional Neural Network FPGA Implementation with Accurate Inference | 0 | 0.34 | 2018