Title
Serial PSO results are irrelevant in a multi-core parallel world
Abstract
From multi-core processors to parallel GPUs to computing clusters, computing resources are increasingly parallel. These parallel resources are being used to address increasingly challenging applications. This presents an opportunity to design optimization algorithms that use parallel processors efficiently. In spite of the intuitively parallel nature of Particle Swarm Optimization (PSO), most PSO variants are not evaluated from a parallel perspective and introduce extra communication and bottlenecks that are inefficient in a parallel environment. We argue that the standard practice of evaluating a PSO variant by reporting function values with respect to the number of function evaluations is inadequate for evaluating PSO in a parallel environment. Evaluating the parallel performance of a PSO variant instead requires reporting function values with respect to the number of iterations to show how the algorithm scales with the number of processors, along with an implementation-independent description of task interactions and communication. Furthermore, it is important to acknowledge the dependence of performance on specific properties of the objective function and computational resources. We discuss parallel evaluation of PSO, and we review approaches for increasing concurrency and for reducing communication which should be considered when discussing the scalability of a PSO variant. This discussion is essential both for designers who are defending the performance of an algorithm and for practitioners who are determining how to apply PSO for a given objective function and parallel environment.
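The abstract's central claim can be illustrated with a small cost-model sketch (hypothetical names and numbers, not from the paper): in a synchronous parallel PSO, every particle in an iteration can be evaluated concurrently, so with enough processors the wall-clock cost scales with the number of iterations rather than the total number of function evaluations.

```python
import math

def evaluation_cost(iterations, swarm_size, processors, t_eval=1.0):
    """Rough cost model (an illustration, not from the paper):
    each iteration evaluates every particle once; with `processors`
    workers, one iteration's evaluations run in parallel batches."""
    batches_per_iter = math.ceil(swarm_size / processors)
    wall_clock = iterations * batches_per_iter * t_eval
    total_evals = iterations * swarm_size
    return total_evals, wall_clock

# Same budget of function evaluations, very different wall-clock time:
serial = evaluation_cost(iterations=100, swarm_size=50, processors=1)
parallel = evaluation_cost(iterations=100, swarm_size=50, processors=50)
print(serial)    # (5000, 5000.0)
print(parallel)  # (5000, 100.0)
```

Under this toy model, reporting function values against iterations corresponds to wall-clock time when evaluations dominate run time, which is why the abstract argues that evaluation counts alone are inadequate for parallel comparisons.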
Year
2014
DOI
10.1109/CEC.2014.6900226
Venue
IEEE Congress on Evolutionary Computation
Keywords
graphics processing units, iterative methods, mathematics computing, multiprocessing systems, particle swarm optimisation, PSO variant, computational resources, computing clusters, computing resources, function evaluations, function values, implementation-independent description, multicore parallel world, multicore processors, objective function, optimization algorithms, parallel GPU, parallel processors, parallel resources, particle swarm optimization, task communication, task interactions
Field
Particle swarm optimization, Mathematical optimization, Concurrency, Computer science, Embarrassingly parallel, Parallel computing, Artificial intelligence, Optimization algorithm, Parallel universe, Multi-core processor, Machine learning, Scalability
DocType
Conference
Citations
0
PageRank
0.34
References
10
Authors
2
Name            Order  Citations  PageRank
Andrew McNabb   1      13         2.27
Kevin D. Seppi  2      335        41.46