Title
LVQ neural network SoC adaptable to different on-chip learning and recognition applications
Abstract
This paper presents an SoC, developed in 180 nm technology, that implements a Learning Vector Quantization (LVQ) neural network based on a hardware/software co-design concept for on-chip learning and recognition. Minimal-Euclidean-distance search, the most time-consuming operation in the competition layer of the LVQ algorithm, is solved by a pipeline with a parallel p-word input architecture. Very high flexibility is achieved because the input number, the neuron number in the competition layer, the weight values, and the output number are all scalable, satisfying the requirements of different applications without changing the designed hardware. For example, for a d-dimensional input vector, classification completes in ⌈d/p⌉ + R clock cycles, where R is the pipeline depth. An embedded 32-bit RISC CPU is mainly used for adjusting the values of the feature vectors, which is not a time-critical operation in the LVQ algorithm.
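As an illustration of the abstract's latency formula and of the competition-layer operation, the following sketch models both in software. The parameter values (d = 64, p = 8, R = 5) are hypothetical examples, not figures from the paper:

```python
import math

def classify_cycles(d, p, R):
    """Clock cycles for one classification: ceil(d/p) + R, where d is the
    input dimension, p the number of words fed in parallel, and R the
    pipeline depth (formula from the abstract)."""
    return math.ceil(d / p) + R

def nearest_prototype(x, weights):
    """Software model of the LVQ competition layer: return the index of the
    prototype (weight vector) with minimal squared Euclidean distance to x.
    Squared distance suffices, since the minimizer is the same."""
    def sqdist(w):
        return sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    return min(range(len(weights)), key=lambda i: sqdist(weights[i]))

# Hypothetical configuration: 64-dimensional input, 8-word parallel input,
# pipeline depth 5 -> ceil(64/8) + 5 = 13 cycles per classification.
print(classify_cycles(64, 8, 5))
print(nearest_prototype([0.0, 1.0], [[1.0, 1.0], [0.0, 0.9]]))
```

The hardware evaluates all neurons' distances in a parallel pipeline, so the cycle count depends only on d, p, and R, not on the number of neurons; the sequential loop above is only a functional model.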
Year
2014
DOI
10.1109/APCCAS.2014.7032858
Venue
APCCAS
Keywords
on-chip recognition application, learning vector quantization algorithm, pipeline depth, learning (artificial intelligence), dimensional input vector, reduced instruction set computing, minimal Euclidean distance search, LVQ neural network SoC, central processing unit, RISC CPU, vector quantisation, system-on-chip, neuron number, weight value, hardware-software codesign, competition layer, size 180 nm, on-chip learning, neural nets, pipelines, computer architecture, system on chip, registers, hardware, vectors
Field
Feature vector, Pipeline transport, System on a chip, Computer science, Euclidean distance, Learning vector quantization, Algorithm, Electronic engineering, Software, Artificial neural network, Computer engineering, Scalability
DocType
Conference
Citations
0
PageRank
0.34
References
9
Authors
5
Name                    Order  Citations  PageRank
Fengwei An              1      28         9.61
Toshinobu Akazawa       2      4          1.48
Shogo Yamazaki          3      0          0.34
Lei Chen                4      17         7.06
Hans Jürgen Mattausch   5      96         32.93