KOTA ANDO
Author Info
Name: KOTA ANDO
Affiliation: Hokkaido Univ, Sapporo, Hokkaido, Japan
Papers: 17
Collaborators: 43
Citations: 24
PageRank: 6.81
Referrers: 113
Referees: 121
References: 24
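The per-author PageRank score above is typically derived by running the standard power-iteration PageRank over the citation graph. As an illustration only, here is a minimal sketch; the site's actual damping factor and normalization are assumptions, not documented here.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a citation graph.

    links: dict mapping each node to the list of nodes it cites.
    damping=0.85 is the conventional default, assumed here.
    """
    nodes = set(links) | {v for vs in links.values() for v in vs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for src, outs in links.items():
            if outs:
                # Distribute the source's rank evenly over its citations.
                share = damping * rank[src] / len(outs)
                for dst in outs:
                    new[dst] += share
            else:
                # Dangling node: spread its rank uniformly over all nodes.
                for dst in nodes:
                    new[dst] += damping * rank[src] / n
        rank = new
    return rank

# Toy example: paper C is cited by both A and B, so it ranks highest.
r = pagerank({"A": ["C"], "B": ["C"], "C": []})
```

The scores sum to 1 across the graph; a site-level score such as 6.81 implies some additional scaling or aggregation over an author's papers, which is not specified on this page.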
Publications (17 rows)
Title | Citations | PageRank | Year
Multicoated Supermasks Enhance Hidden Networks. | 0 | 0.34 | 2022
Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet. | 0 | 0.34 | 2022
ProgressiveNN: Achieving Computational Scalability with Dynamic Bit-Precision Adjustment by MSB-first Accumulative Computation. | 0 | 0.34 | 2021
Edge Inference Engine for Deep & Random Sparse Neural Networks with 4-bit Cartesian-Product MAC Array and Pipelined Activation Aligner | 1 | 0.37 | 2021
STATICA: A 512-Spin 0.25M-Weight Annealing Processor With an All-Spin-Updates-at-Once Architecture for Combinatorial Optimization With Complete Spin–Spin Interactions | 2 | 0.52 | 2021
7.3 STATICA: A 512-Spin 0.25M-Weight Full-Digital Annealing Processor with a Near-Memory All-Spin-Updates-at-Once Architecture for Combinatorial Optimization with Complete Spin-Spin Interactions | 1 | 0.43 | 2020
ProgressiveNN: Achieving Computational Scalability without Network Alteration by MSB-first Accumulative Computation | 0 | 0.34 | 2020
A 3D-Stacked SRAM using Inductive Coupling with Low-Voltage Transmitter and 12:1 SerDes | 0 | 0.34 | 2020
QUEST: Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96-MB 3-D SRAM Using Inductive Coupling Technology in 40-nm CMOS | 3 | 0.44 | 2019
DeltaNet: Differential Binary Neural Network | 0 | 0.34 | 2019
Dither NN: Hardware/Algorithm Co-Design for Accurate Quantized Neural Networks | 0 | 0.34 | 2019
BRein Memory: A Single-Chip Binary/Ternary Reconfigurable in-Memory Deep Neural Network Accelerator Achieving 1.4 TOPS at 0.6 W. | 15 | 0.85 | 2018
Area and Energy Optimization for Bit-Serial Log-Quantized DNN Accelerator with Shared Accumulators | 0 | 0.34 | 2018
Dither NN: An Accurate Neural Network with Dithering for Low Bit-Precision Hardware | 1 | 0.43 | 2018
Accelerating deep learning by binarized hardware. | 0 | 0.34 | 2017
In-memory area-efficient signal streaming processor design for binary neural networks | 1 | 0.38 | 2017
Logarithmic Compression for Memory Footprint Reduction in Neural Network Training | 0 | 0.34 | 2017