| Abstract |
|---|
| Among the many possible optimization algorithms, Hopfield Neural Networks (HNN) offer interesting characteristics for on-line use. Indeed, this particular optimization algorithm can produce solutions within a short delay. These solutions are produced by the convergence of the HNN, which was originally defined for a sequential evaluation of the neurons. While this sequential evaluation leads to long convergence times, we assume that convergence can be accelerated through the parallel evaluation of neurons. However, the original constraints no longer ensure the convergence of an HNN evaluated in parallel. This article shows how the neurons can be evaluated in parallel in order to accelerate a hardware or multiprocessor implementation while still ensuring convergence. The parallelization method is illustrated on a simple task scheduling problem, where we obtain a significant acceleration that grows with the number of tasks. For instance, with 20 tasks the speedup factor is about 25. |
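To make the sequential-versus-parallel distinction concrete, here is a minimal sketch of the two classical Hopfield update schemes. This is illustrative only and is not the paper's specific parallelization method: the weight matrix `W`, bias `b`, and the ±1 state convention are generic assumptions. With symmetric weights and a zero diagonal, the sequential (asynchronous) rule is guaranteed to converge, while the parallel (synchronous) rule may oscillate on a 2-cycle, which is precisely why extra constraints are needed for parallel evaluation.

```python
import numpy as np

def sequential_update(W, b, s, max_sweeps=100):
    """Asynchronous evaluation: neurons are updated one at a time.
    Converges to a fixed point when W is symmetric with zero diagonal."""
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if W[i] @ s + b[i] >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:  # no neuron changed in a full sweep: fixed point
            return s
    return s

def parallel_update(W, b, s, max_steps=100):
    """Synchronous evaluation: all neurons are updated simultaneously.
    Each step is cheap to parallelize in hardware, but the dynamics may
    cycle between two states instead of converging."""
    s = s.copy()
    for _ in range(max_steps):
        new = np.where(W @ s + b >= 0, 1, -1)
        if np.array_equal(new, s):
            return s
        s = new
    return s
```

For example, with `W = [[0, 1], [1, 0]]` and zero bias, starting from state `[1, -1]`, the sequential rule settles on a fixed point, while the synchronous rule flips both neurons at once and oscillates between `[1, -1]` and `[-1, 1]`.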
| Year | Venue | Keywords |
|---|---|---|
| 2011 | NCTA 2011: Proceedings of the International Conference on Neural Computation Theory and Applications | Hopfield neural networks, Parallelization, Stability, Optimization problems |
| Field | DocType | Citations |
|---|---|---|
| Pattern recognition, Computer science, Types of artificial neural networks, Artificial intelligence, Artificial neural network, Hopfield network, Cellular neural network | Conference | 0 |
| PageRank | References | Authors |
|---|---|---|
| 0.34 | 0 | 4 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Antoine Eiche | 1 | 32 | 3.03 |
| Daniel Chillet | 2 | 193 | 26.12 |
| Sébastien Pillement | 3 | 100 | 17.33 |
| Olivier Sentieys | 4 | 597 | 73.35 |