Title
Theories Of Neural Networks Leading To Unsupervised Learning
Abstract
To review theories of Artificial Neural Networks (ANNs), we begin with Wiener's auto-regression (AR), which leads naturally to supervised and unsupervised learning and generalizes Independent Component Analysis (ICA) with a new Brain Neural Network (BNN) model. The BNN model is based on two necessary and sufficient observations: (i) pairs of sensory inputs arrive as a vector time series X(t), and (ii) the brain is isothermal, held at a constant average temperature of 37°C in most of us, enabling effortless unsupervised learning. We derive Hebb's rule and the sigmoid rule from (i) the vector time series at (ii) the minimum of the Helmholtz free energy H = E - T0*S, without any other assumptions. Furthermore, the value of the BNN model is its ability to predict that the dendritic ion currents, denoted by the Lagrange parameters (in Amperes), enter the learning energy explicitly alongside the synaptic junction weights [W_ij] (in Volts). In other words, we have for the first time incorporated the house-keeping glial cells, the "missing half of Einstein's brain," into unsupervised learning as an interior teacher. We have successfully applied such a homeostatic BNN to the single-pixel, ill-posed inverse problem of blind source separation (BSS). The BNN is nonlinear, non-stationary, and capable of solving spatiotemporally variant BSS. This methodology has been demonstrated earlier in real-world applications, e.g., space-variant remote sensing and early detection of breast cancers. In this paper, we derive an exact single-pixel BSS solution for two components. Furthermore, we prove the solution for n components to be unique and stable by means of the augmented Lagrange, or Karush-Kuhn-Tucker, methodology [S 07]. Our constant-temperature free energy can estimate the neuronal population of the brain's grey matter responsible for the conscious activities identified by Crick & Koch as the Claustrum, which accomplishes binding among firing rates (similar to C-note tuning at the beginning of an orchestra performance).
Furthermore, the Mexican-hat response functions of retinal neurons could be explained by finite resource sharing for replenishment.
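The abstract's claim that the sigmoid rule follows from the minimum of the Helmholtz free energy H = E - T0*S can be illustrated with a minimal sketch (not code from the paper): for a binary neuron with mean input energy E = -u*p at firing probability p, and S the binary entropy, the p that minimizes F coincides with the sigmoid of the input u.

```python
import math

def free_energy(p, u, T=1.0):
    """Helmholtz free energy F = E - T*S for a binary neuron.

    E = -u*p is the mean input energy when the neuron fires with
    probability p; S is the binary (Shannon) entropy of p.
    """
    eps = 1e-12  # guard against log(0) at the boundaries
    E = -u * p
    S = -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))
    return E - T * S

def argmin_free_energy(u, T=1.0, steps=100000):
    # Brute-force grid search over the firing probability p in (0, 1).
    best_p, best_F = None, float("inf")
    for i in range(1, steps):
        p = i / steps
        F = free_energy(p, u, T)
        if F < best_F:
            best_p, best_F = p, F
    return best_p

def sigmoid(u, T=1.0):
    return 1.0 / (1.0 + math.exp(-u / T))

# Setting dF/dp = -u + T*ln(p/(1-p)) = 0 gives p = sigmoid(u/T),
# so the numerical minimizer matches the sigmoid firing rate:
for u in (-2.0, 0.0, 1.5):
    assert abs(argmin_free_energy(u) - sigmoid(u)) < 1e-3
```

The temperature T here plays the role of the constant 37°C bath in the paper's isothermal-brain observation: at fixed T, minimizing F over the firing probability reproduces the sigmoid without any further assumptions.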
Year: 2007
DOI: 10.1109/IJCNN.2007.4371458
Venue: 2007 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6
Keywords: neural network, blind source separation, free energy, learning artificial intelligence, artificial neural network, resource sharing, unsupervised learning, independent component analysis, neural nets, time series
Field: Competitive learning, Population, Pattern recognition, Computer science, Self-organizing map, Types of artificial neural networks, Unsupervised learning, Time delay neural network, Artificial intelligence, Deep learning, Artificial neural network, Machine learning
DocType: Conference
ISSN: 1098-7576
Citations: 1
PageRank: 0.43
References: 4
Authors: 1
Name: Harold Szu
Order: 1
Citations, PageRank: 14938.33