Title
Algebraic transformations of objective functions
Abstract
Many neural networks can be derived as optimization dynamics for suitable objective functions. We show that such networks can be designed by repeated transformations of one objective into another with the same fixpoints. We exhibit a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable. The transformations include simplification of products of expressions, functions of one or two expressions, and sparse matrix products (all of which may be interpreted as Legendre transformations); also the minimum and maximum of a set of expressions. These transformations introduce new interneurons which force the network to seek a saddle point rather than a minimum. Other transformations allow control of the network dynamics, by reconciling the Lagrangian formalism with the need for fixpoints. We apply the transformations to simplify a number of structured neural networks, beginning with the standard reduction of the winner-take-all network from O(N²) connections to O(N). Also susceptible are inexact graph-matching, random dot matching, convolutions and coordinate transformations, and sorting. Simulations show that fixpoint-preserving transformations may be applied repeatedly and elaborately, and the example networks still robustly converge.
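A minimal sketch of the kind of fixpoint-preserving rewrite described in the abstract, assuming a quadratic winner-take-all penalty (the symbols $v_i$, $\sigma$ and the exact form of the objective are illustrative, not taken from the paper):

\[
E(v) \;=\; \tfrac{1}{2}\Big(\sum_{i} v_i - 1\Big)^{2}
\quad\longrightarrow\quad
\hat{E}(v,\sigma) \;=\; \sigma\Big(\sum_{i} v_i - 1\Big) \;-\; \tfrac{1}{2}\sigma^{2},
\]

using the identity $\tfrac{1}{2}X^{2} = \max_{\sigma}\big(X\sigma - \tfrac{1}{2}\sigma^{2}\big)$. Minimizing $E$ over $v$ and seeking a saddle point of $\hat{E}$ (minimum in $v$, maximum in the single interneuron $\sigma$) give the same fixpoints, while the pairwise couplings among the $v_i$ (O(N²) connections) are replaced by N connections to $\sigma$.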
Year
1990
DOI
10.1016/0893-6080(90)90055-P
Venue
Neural Networks
Keywords
Objective function, Structured neural network, Analog circuit, Transformation of objective, Fixpoint-preserving transformation, Lagrangian dynamics, Graph-matching neural net, Winner-take-all neural net
Field
Saddle point, Network dynamics, Expression (mathematics), Convolution, Computer science, Legendre polynomials, Sorting, Artificial intelligence, Artificial neural network, Machine learning, Sparse matrix
DocType
Journal
Volume
3
Issue
6
ISSN
0893-6080
Citations
31
PageRank
5.90
References
4
Authors
2
Name             Order  Citations  PageRank
Eric Mjolsness   1      1058       140.00
Charles Garrett  2      31         5.90