Non-Structured DNN Weight Pruning—Is It Beneficial in Any Platform? | 0 | 0.34 | 2022 |
DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture | 0 | 0.34 | 2022 |
ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning | 0 | 0.34 | 2022 |
T-BFA: Targeted Bit-Flip Adversarial Weight Attack | 2 | 0.41 | 2022 |
Self-Terminating Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing | 0 | 0.34 | 2022 |
N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores | 0 | 0.34 | 2022 |
PIM-DH: ReRAM-based processing-in-memory architecture for deep hashing acceleration | 0 | 0.34 | 2022 |
EBSP: evolving bit sparsity patterns for hardware-friendly inference of quantized deep neural networks | 0 | 0.34 | 2022 |
SATO: spiking neural network acceleration via temporal-oriented dataflow and architecture | 0 | 0.34 | 2022 |
Elf: accelerate high-resolution mobile deep vision with content-aware parallel offloading | 5 | 0.44 | 2021 |
ReRAM-Sharing: Fine-Grained Weight Sharing for ReRAM-Based Deep Neural Network Accelerator | 0 | 0.34 | 2021 |
Unary Coding and Variation-Aware Optimal Mapping Scheme for Reliable ReRAM-Based Neuromorphic Computing | 1 | 0.36 | 2021 |
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point | 0 | 0.34 | 2021 |
Energy-Efficient Hybrid-RAM with Hybrid Bit-Serial based VMM Support | 0 | 0.34 | 2021 |
MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning | 1 | 0.35 | 2021 |
Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator | 0 | 0.34 | 2021 |
PIMGCN: A ReRAM-Based PIM Design for Graph Convolutional Network Acceleration | 0 | 0.34 | 2021 |
RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery | 0 | 0.34 | 2021 |
AdaptiveGCN: Efficient GCN Through Adaptively Sparsifying Graphs | 1 | 0.36 | 2021 |
KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning | 0 | 0.34 | 2021 |
BISWSRBS: A Winograd-based CNN Accelerator with a Fine-grained Regular Sparsity Pattern and Mixed Precision Quantization | 1 | 0.35 | 2021 |
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network | 0 | 0.34 | 2021 |
Re2PIM: A Reconfigurable ReRAM-Based PIM Design for Variable-Sized Vector-Matrix Multiplication | 1 | 0.35 | 2021 |
Robust Sparse Regularization: Defending Adversarial Attacks via Regularized Sparse Network | 0 | 0.34 | 2020 |
Defending Bit-Flip Attack Through DNN Weight Reconstruction | 0 | 0.34 | 2020 |
Non-Uniform DNN Structured Subnets Sampling for Dynamic Inference | 0 | 0.34 | 2020 |
Processing-in-Memory Accelerator for Dynamic Neural Network with Run-Time Tuning of Accuracy, Power and Latency | 0 | 0.34 | 2020 |
Network-based multi-task learning models for biomarker selection and cancer outcome prediction | 0 | 0.34 | 2020 |
Harmonious Coexistence of Structured Weight Pruning and Ternarization for Deep Neural Networks | 1 | 0.35 | 2020 |
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack | 0 | 0.34 | 2020 |
MRIMA: An MRAM-Based In-Memory Accelerator | 7 | 0.47 | 2020 |
TBT: Targeted Neural Network Attack with Bit Trojan | 0 | 0.34 | 2020 |
Sparse BD-Net: A Multiplication-less DNN with Sparse Binarized Depth-wise Separable Convolution | 4 | 0.37 | 2020 |
Accelerating Deep Neural Networks in Processing-in-Memory Platforms: Analog or Digital Approach? | 1 | 0.37 | 2019 |
Bit-Flip Attack: Crushing Neural Network With Progressive Bit Search | 4 | 0.41 | 2019 |
Binarized Depthwise Separable Neural Network for Object Tracking in FPGA | 1 | 0.38 | 2019 |
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack | 4 | 0.41 | 2019 |
Optimize Deep Convolutional Neural Network with Ternarized Weights and High Accuracy | 3 | 0.40 | 2019 |
ParaPIM: A Parallel Processing-in-Memory Accelerator for Binary-Weight Deep Neural Networks | 6 | 0.39 | 2019 |
Noise Injection Adaption: End-to-End ReRAM Crossbar Non-ideal Effect Adaption for Neural Network Mapping | 13 | 0.88 | 2019 |
Artificial Neuron using Ag/2D-MoS₂/Au Threshold Switching Memristor | 0 | 0.34 | 2019 |
Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness | 0 | 0.34 | 2019 |
Simultaneously Optimizing Weight And Quantizer Of Ternary Neural Network Using Truncated Gaussian Approximation | 7 | 0.42 | 2018 |
Accelerating Low Bit-Width Deep Convolution Neural Network in MRAM | 1 | 0.36 | 2018 |
Exploring a SOT-MRAM Based In-Memory Computing for Data Processing | 4 | 0.49 | 2018 |
Leveraging Spintronic Devices for Efficient Approximate Logic and Stochastic Neural Networks | 0 | 0.34 | 2018 |
PIM-TGAN: A Processing-in-Memory Accelerator for Ternary Generative Adversarial Networks | 0 | 0.34 | 2018 |
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack | 8 | 0.45 | 2018 |
A Fully Onchip Binarized Convolutional Neural Network FPGA Implementation with Accurate Inference | 0 | 0.34 | 2018 |