Author Info
Name: WIN-SAN KHWA
Affiliation: National Tsing Hua University, Hsinchu
Papers: 13
Collaborators: 106
Citations: 29
PageRank: 5.14
Referrers: 118
Referees: 46
References: 10
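The PageRank score above is presumably computed by power iteration over the site's citation graph. As a rough illustration only (the site's actual graph data, damping factor, and normalization are not specified here), a minimal sketch might look like:

```python
# Minimal PageRank power-iteration sketch over a toy citation graph.
# The graph, damping factor, and iteration count are illustrative
# assumptions, not the site's actual implementation.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it cites."""
    nodes = set(links) | {m for targets in links.values() for m in targets}
    n_total = len(nodes)
    rank = {n: 1.0 / n_total for n in nodes}
    for _ in range(iters):
        # Base teleportation mass for every node.
        new = {n: (1.0 - damping) / n_total for n in nodes}
        for n, targets in links.items():
            if targets:
                share = damping * rank[n] / len(targets)
                for m in targets:
                    new[m] += share
        # Redistribute mass from dangling nodes (no outgoing citations).
        dangling = sum(rank[n] for n in nodes if not links.get(n))
        for n in nodes:
            new[n] += damping * dangling / n_total
        rank = new
    return rank

ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]})
```

With the dangling-node redistribution, the scores remain a probability distribution (they sum to 1), so they can be compared across authors or papers.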
Publications (13 rows)
- A 40-nm, 64-Kb, 56.67 TOPS/W Voltage-Sensing Computing-In-Memory/Digital RRAM Macro Supporting Iterative Write With Verification and Online Read-Disturb Detection (2022). Citations: 3, PageRank: 0.40
- An 8-Mb DC-Current-Free Binary-to-8b Precision ReRAM Nonvolatile Computing-in-Memory Macro using Time-Space-Readout with 1286.4-21.6TOPS/W for Edge-AI Devices (2022). Citations: 1, PageRank: 0.41
- A 40nm 60.64TOPS/W ECC-Capable Compute-in-Memory/Digital 2.25MB/768KB RRAM/SRAM System with Embedded Cortex M3 Microprocessor for Edge Recommendation Systems (2022). Citations: 0, PageRank: 0.34
- A 22nm 4Mb STT-MRAM Data-Encrypted Near-Memory Computation Macro with a 192GB/s Read-and-Decryption Bandwidth and 25.1-55.1TOPS/W 8b MAC for AI Operations (2022). Citations: 0, PageRank: 0.34
- A 40nm 64kb 26.56TOPS/W 2.37Mb/mm (2022). Citations: 0, PageRank: 0.34
- A 40nm 100Kb 118.44TOPS/W Ternary-weight Compute-in-Memory RRAM Macro with Voltage-sensing Read and Write Verification for reliable multi-bit RRAM operation (2021). Citations: 2, PageRank: 0.37
- A 40nm 64kb 56.67TOPS/W Read-Disturb-Tolerant Compute-In-Memory/Digital RRAM Macro With Active-Feedback-Based Read And In-Situ Write Verification (2021). Citations: 0, PageRank: 0.34
- A 7-nm Compute-in-Memory SRAM Macro Supporting Multi-Bit Input, Weight and Output and Achieving 351 TOPS/W and 372.4 GOPS (2021). Citations: 3, PageRank: 0.41
- A 22nm 4Mb 8b-Precision ReRAM Computing-In-Memory Macro With 11.91 to 195.7TOPS/W for Tiny AI Edge Devices (2021). Citations: 1, PageRank: 0.34
- CHIMERA: A 0.92 TOPS, 2.2 TOPS/W Edge AI Accelerator with 2 MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference (2021). Citations: 0, PageRank: 0.34
- 15.3 A 351TOPS/W and 372.4GOPS Compute-In-Memory SRAM Macro in 7nm FinFET CMOS for Machine-Learning Applications (2020). Citations: 2, PageRank: 0.40
- A 5.1pJ/Neuron 127.3us/Inference RNN-based Speech Recognition Processor using 16 Computing-in-Memory SRAM Macros in 65nm CMOS (2019). Citations: 4, PageRank: 0.44
- A Dual-Split 6T SRAM-Based Computing-in-Memory Unit-Macro With Fully Parallel Product-Sum Operation for Binarized DNN Edge Processors (2019). Citations: 13, PageRank: 0.67