Abstract |
---|
Particle Swarm Optimization (PSO) has typically been used with small swarms of about 50 particles. However, PSO is more efficiently parallelized with large swarms. We formally describe existing topologies and identify variations which are better suited to large swarms in both sequential and parallel computing environments. We examine the performance of PSO for benchmark functions with respect to swarm size and topology. We develop and demonstrate a new PSO variant which leverages the unique strengths of large swarms. "Hearsay PSO" allows information to flow quickly through the swarm, even with very loosely connected topologies. These loosely connected topologies are well suited to large-scale parallel computing environments because they require very little communication between particles. We consider the case where function evaluations are expensive with respect to communication, as well as the case where function evaluations are relatively inexpensive. We also consider a situation where local communication is inexpensive compared to external communication, such as multicore systems in a cluster. |
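As background to the abstract's discussion of loosely connected topologies, the sketch below shows a minimal lbest PSO in which each particle communicates only with its neighbors on a ring, so communication cost scales with the neighborhood size rather than the swarm size. This is a generic illustrative implementation, not the paper's Hearsay PSO variant; the function name `pso_ring` and all parameter values are assumptions chosen for the example.

```python
import random

def pso_ring(f, dim, n_particles=50, iters=200, neighborhood=1, seed=0):
    """Minimal lbest PSO on a ring topology (generic sketch, not Hearsay PSO).

    Each particle learns only from the best personal-best position among
    its 2*neighborhood ring neighbors, so the topology stays sparse.
    """
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # personal best positions
    pbest_val = [f(p) for p in pos]            # personal best values
    w, c1, c2 = 0.72, 1.49, 1.49               # common inertia/acceleration settings

    for _ in range(iters):
        for i in range(n_particles):
            # Sparse communication: best neighbor within the ring window.
            nbrs = [(i + d) % n_particles
                    for d in range(-neighborhood, neighborhood + 1)]
            lbest = min(nbrs, key=lambda j: pbest_val[j])
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (pbest[lbest][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]

    k = min(range(n_particles), key=lambda j: pbest_val[j])
    return pbest[k], pbest_val[k]

# Usage: minimize the sphere benchmark function.
best, val = pso_ring(lambda x: sum(v * v for v in x), dim=5)
```

Because each particle consults only its ring neighbors, a parallel implementation would need to exchange messages only along ring edges, which is the communication pattern the abstract argues suits large swarms.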
Year | DOI | Venue |
---|---|---|
2009 | 10.1109/CEC.2009.4983015 | IEEE Congress on Evolutionary Computation |
Keywords | Field | DocType
---|---|---|
topology,communication,clustering algorithms,convergence,multicore processing,parallel processing,benchmark testing,parallel computer,particle swarm optimization,particle swarm,evolutionary computation,data mining,bioinformatics | Convergence (routing),Particle swarm optimization,Mathematical optimization,Swarm behaviour,Computer science,Evolutionary computation,Network topology,Artificial intelligence,Machine learning,Benchmark (computing),Particle,Multicore systems | Conference
Citations | PageRank | References
---|---|---|
8 | 0.73 | 5
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Andrew McNabb | 1 | 13 | 2.27 |
Matthew Gardner | 2 | 704 | 38.49 |
Kevin D. Seppi | 3 | 335 | 41.46 |