| A 28nm 27.5TOPS/W Approximate-Computing-Based Transformer Processor with Asymptotic Sparsity Speculating and Out-of-Order Computing | 0 | 0.34 | 2022 |
| PL-NPU: An Energy-Efficient Edge-Device DNN Training Processor With Posit-Based Logarithm-Domain Computing | 0 | 0.34 | 2022 |
| Trainer: An Energy-Efficient Edge-Device Training Processor Supporting Dynamic Weight Pruning | 0 | 0.34 | 2022 |
| A 28nm 276.55TFLOPS/W Sparse Deep-Neural-Network Training Processor with Implicit Redundancy Speculation and Batch Normalization Reformulation | 0 | 0.34 | 2021 |
| LPE: Logarithm Posit Processing Element for Energy-Efficient Edge-Device Training | 0 | 0.34 | 2021 |