Accelerating attention through gradient-based learned runtime pruning | 1 | 0.34 | 2022 |
Glimpse: mathematical embedding of hardware specification for neural compilation | 0 | 0.34 | 2022 |
FastStereoNet: A Fast Neural Architecture Search for Improving the Inference of Disparity Estimation on Resource-Limited Platforms | 0 | 0.34 | 2022 |
Yin-Yang: Programming Abstractions for Cross-Domain Multi-Acceleration | 0 | 0.34 | 2022 |
VeriGOOD-ML: An Open-Source Flow for Automated ML Hardware Synthesis | 0 | 0.34 | 2021 |
Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy | 0 | 0.34 | 2021 |
A Computational Stack for Cross-Domain Acceleration | 0 | 0.34 | 2021 |
Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation | 0 | 0.34 | 2020 |
Bit-Parallel Vector Composability For Neural Acceleration | 0 | 0.34 | 2020 |
ReLeQ: A Reinforcement Learning Approach for Automatic Deep Quantization of Neural Networks | 0 | 0.34 | 2020 |
Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks | 0 | 0.34 | 2020 |
Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices | 0 | 0.34 | 2020 |
Planaria: Dynamic Architecture Fission for Spatial Multi-Tenant Acceleration of Deep Neural Networks | 5 | 0.44 | 2020 |
Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic. | 2 | 0.35 | 2020 |
Shredder: Learning Noise Distributions to Protect Inference Privacy | 0 | 0.34 | 2020 |
Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks. | 0 | 0.34 | 2019 |
Machine Learning Acceleration | 0 | 0.34 | 2019 |
Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge. | 0 | 0.34 | 2019 |
SinReQ: Generalized Sinusoidal Regularization for Automatic Low-Bitwidth Deep Quantized Training. | 0 | 0.34 | 2019 |
AxMemo: hardware-compiler co-design for approximate code memoization | 0 | 0.34 | 2019 |
Reinforcement Learning and Adaptive Sampling for Optimized DNN Compilation. | 0 | 0.34 | 2019 |
ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks. | 0 | 0.34 | 2018 |
A Network-Centric Hardware/Algorithm Co-Design to Accelerate Distributed Training of Deep Neural Networks. | 10 | 0.58 | 2018 |
SnaPEA: Predictive Early Activation for Reducing Computation in Deep Convolutional Neural Networks. | 12 | 0.51 | 2018 |
In-RDBMS Hardware Acceleration of Advanced Analytics. | 0 | 0.34 | 2018 |
GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks. | 5 | 0.42 | 2018 |
In-DRAM near-data approximate acceleration for GPUs | 4 | 0.41 | 2018 |
SiMul: An Algorithm-Driven Approximate Multiplier Design for Machine Learning. | 1 | 0.35 | 2018 |
In-RDBMS hardware acceleration of advanced analytics | 3 | 0.35 | 2018 |
FlexiGAN: An End-to-End Solution for FPGA Acceleration of Generative Adversarial Networks | 6 | 0.47 | 2018 |
RoboX: An End-to-End Solution to Accelerate Autonomous Control in Robotics. | 1 | 0.35 | 2018 |
Proving Flow Security of Sequential Logic via Automatically-Synthesized Relational Invariants | 0 | 0.34 | 2017 |
Bit fusion: bit-level dynamically composable architecture for accelerating deep neural networks | 42 | 0.93 | 2017 |
Scale-out acceleration for machine learning. | 17 | 0.62 | 2017 |
AxBench: A Multiplatform Benchmark Suite for Approximate Computing. | 27 | 0.96 | 2017 |
Mitigating the Memory Bottleneck With Approximate Load Value Prediction. | 8 | 0.42 | 2016 |
RFVP: Rollback-Free Value Prediction with Safe-to-Approximate Loads. | 18 | 0.66 | 2016 |
AxGames: Towards Crowdsourcing Quality Target Determination in Approximate Computing. | 3 | 0.38 | 2016 |
Towards Statistical Guarantees in Controlling Quality Tradeoffs for Approximate Acceleration. | 13 | 0.56 | 2016 |
Grater: An approximation workflow for exploiting data-level parallelism in FPGA acceleration. | 5 | 0.45 | 2016 |
Error correction for approximate computing. | 0 | 0.34 | 2016 |
From high-level deep neural models to FPGAs. | 35 | 0.96 | 2016 |
FlexJava: language support for safe and modular approximate programming | 24 | 0.82 | 2015 |
Approximate acceleration: A path through the era of dark silicon and big data | 0 | 0.34 | 2015 |
SNNAP: Approximate computing on programmable SoCs via neural acceleration | 40 | 1.33 | 2015 |
Neural acceleration for GPU throughput processors | 25 | 0.76 | 2015 |
Axilog: language support for approximate hardware design | 20 | 0.83 | 2015 |
Axilog: Abstractions for Approximate Hardware Design and Reuse | 3 | 0.38 | 2015 |
A reconfigurable fabric for accelerating large-scale datacenter services | 109 | 4.72 | 2015 |
General-purpose code acceleration with limited-precision analog computation | 60 | 1.90 | 2014 |