Name: KOTA ANDO
Affiliation: Hokkaido Univ, Sapporo, Hokkaido, Japan
Papers: 17
Collaborators: 43
Citations: 24
PageRank: 6.81
Referers: 113
Referees: 121
References: 24
Title | Citations | PageRank | Year
Multicoated Supermasks Enhance Hidden Networks. | 0 | 0.34 | 2022
Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet. | 0 | 0.34 | 2022
ProgressiveNN: Achieving Computational Scalability with Dynamic Bit-Precision Adjustment by MSB-first Accumulative Computation. | 0 | 0.34 | 2021
Edge Inference Engine for Deep & Random Sparse Neural Networks with 4-bit Cartesian-Product MAC Array and Pipelined Activation Aligner | 1 | 0.37 | 2021
STATICA: A 512-Spin 0.25M-Weight Annealing Processor With an All-Spin-Updates-at-Once Architecture for Combinatorial Optimization With Complete Spin–Spin Interactions | 2 | 0.52 | 2021
7.3 STATICA: A 512-Spin 0.25M-Weight Full-Digital Annealing Processor with a Near-Memory All-Spin-Updates-at-Once Architecture for Combinatorial Optimization with Complete Spin-Spin Interactions | 1 | 0.43 | 2020
ProgressiveNN: Achieving Computational Scalability without Network Alteration by MSB-first Accumulative Computation | 0 | 0.34 | 2020
A 3D-Stacked SRAM using Inductive Coupling with Low-Voltage Transmitter and 12:1 SerDes | 0 | 0.34 | 2020
QUEST: Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96-MB 3-D SRAM Using Inductive Coupling Technology in 40-nm CMOS | 3 | 0.44 | 2019
DeltaNet: Differential Binary Neural Network | 0 | 0.34 | 2019
Dither NN: Hardware/Algorithm Co-Design for Accurate Quantized Neural Networks | 0 | 0.34 | 2019
BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W. | 15 | 0.85 | 2018
Area and Energy Optimization for Bit-Serial Log-Quantized DNN Accelerator with Shared Accumulators | 0 | 0.34 | 2018
Dither NN: An Accurate Neural Network with Dithering for Low Bit-Precision Hardware | 1 | 0.43 | 2018
Accelerating deep learning by binarized hardware. | 0 | 0.34 | 2017
In-memory area-efficient signal streaming processor design for binary neural networks | 1 | 0.38 | 2017
Logarithmic Compression for Memory Footprint Reduction in Neural Network Training | 0 | 0.34 | 2017