Name: H. ESMAEILZADEH
Affiliation: Electr. & Comput. Eng. Dept., Tehran Univ., Iran
Papers: 69
Collaborators: 182
Citations: 1443
PageRank: 69.71
Referrers: 3567
Referees: 1842
References: 1067
| Title | Citations | PageRank | Year |
|---|---|---|---|
| Accelerating attention through gradient-based learned runtime pruning | 1 | 0.34 | 2022 |
| Glimpse: mathematical embedding of hardware specification for neural compilation | 0 | 0.34 | 2022 |
| FastStereoNet: A Fast Neural Architecture Search for Improving the Inference of Disparity Estimation on Resource-Limited Platforms | 0 | 0.34 | 2022 |
| Yin-Yang: Programming Abstractions for Cross-Domain Multi-Acceleration | 0 | 0.34 | 2022 |
| VeriGOOD-ML: An Open-Source Flow for Automated ML Hardware Synthesis | 0 | 0.34 | 2021 |
| Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy | 0 | 0.34 | 2021 |
| A Computational Stack for Cross-Domain Acceleration | 0 | 0.34 | 2021 |
| Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation | 0 | 0.34 | 2020 |
| Bit-Parallel Vector Composability for Neural Acceleration | 0 | 0.34 | 2020 |
| ReLeQ: A Reinforcement Learning Approach for Automatic Deep Quantization of Neural Networks | 0 | 0.34 | 2020 |
| Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks | 0 | 0.34 | 2020 |
| Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices | 0 | 0.34 | 2020 |
| Planaria: Dynamic Architecture Fission for Spatial Multi-Tenant Acceleration of Deep Neural Networks | 5 | 0.44 | 2020 |
| Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic | 2 | 0.35 | 2020 |
| Shredder: Learning Noise Distributions to Protect Inference Privacy | 0 | 0.34 | 2020 |
| Divide and Conquer: Leveraging Intermediate Feature Representations for Quantized Training of Neural Networks | 0 | 0.34 | 2019 |
| Machine Learning Acceleration | 0 | 0.34 | 2019 |
| Shredder: Learning Noise to Protect Privacy with Partial DNN Inference on the Edge | 0 | 0.34 | 2019 |
| SinReQ: Generalized Sinusoidal Regularization for Automatic Low-Bitwidth Deep Quantized Training | 0 | 0.34 | 2019 |
| AxMemo: Hardware-Compiler Co-Design for Approximate Code Memoization | 0 | 0.34 | 2019 |
| Reinforcement Learning and Adaptive Sampling for Optimized DNN Compilation | 0 | 0.34 | 2019 |
| ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks | 0 | 0.34 | 2018 |
| A Network-Centric Hardware/Algorithm Co-Design to Accelerate Distributed Training of Deep Neural Networks | 10 | 0.58 | 2018 |
| SnaPEA: Predictive Early Activation for Reducing Computation in Deep Convolutional Neural Networks | 12 | 0.51 | 2018 |
| In-RDBMS Hardware Acceleration of Advanced Analytics | 0 | 0.34 | 2018 |
| GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks | 5 | 0.42 | 2018 |
| In-DRAM Near-Data Approximate Acceleration for GPUs | 4 | 0.41 | 2018 |
| SiMul: An Algorithm-Driven Approximate Multiplier Design for Machine Learning | 1 | 0.35 | 2018 |
| In-RDBMS Hardware Acceleration of Advanced Analytics | 3 | 0.35 | 2018 |
| FlexiGAN: An End-to-End Solution for FPGA Acceleration of Generative Adversarial Networks | 6 | 0.47 | 2018 |
| RoboX: An End-to-End Solution to Accelerate Autonomous Control in Robotics | 1 | 0.35 | 2018 |
| Proving Flow Security of Sequential Logic via Automatically-Synthesized Relational Invariants | 0 | 0.34 | 2017 |
| Bit Fusion: Bit-Level Dynamically Composable Architecture for Accelerating Deep Neural Networks | 42 | 0.93 | 2017 |
| Scale-Out Acceleration for Machine Learning | 17 | 0.62 | 2017 |
| AxBench: A Multiplatform Benchmark Suite for Approximate Computing | 27 | 0.96 | 2017 |
| Mitigating the Memory Bottleneck with Approximate Load Value Prediction | 8 | 0.42 | 2016 |
| RFVP: Rollback-Free Value Prediction with Safe-to-Approximate Loads | 18 | 0.66 | 2016 |
| AxGames: Towards Crowdsourcing Quality Target Determination in Approximate Computing | 3 | 0.38 | 2016 |
| Towards Statistical Guarantees in Controlling Quality Tradeoffs for Approximate Acceleration | 13 | 0.56 | 2016 |
| Grater: An Approximation Workflow for Exploiting Data-Level Parallelism in FPGA Acceleration | 5 | 0.45 | 2016 |
| Error Correction for Approximate Computing | 0 | 0.34 | 2016 |
| From High-Level Deep Neural Models to FPGAs | 35 | 0.96 | 2016 |
| FlexJava: Language Support for Safe and Modular Approximate Programming | 24 | 0.82 | 2015 |
| Approximate Acceleration: A Path through the Era of Dark Silicon and Big Data | 0 | 0.34 | 2015 |
| SNNAP: Approximate Computing on Programmable SoCs via Neural Acceleration | 40 | 1.33 | 2015 |
| Neural Acceleration for GPU Throughput Processors | 25 | 0.76 | 2015 |
| Axilog: Language Support for Approximate Hardware Design | 20 | 0.83 | 2015 |
| Axilog: Abstractions for Approximate Hardware Design and Reuse | 3 | 0.38 | 2015 |
| A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services | 109 | 4.72 | 2015 |
| General-Purpose Code Acceleration with Limited-Precision Analog Computation | 60 | 1.90 | 2014 |