Title
On Gradients and Hybrid Evolutionary Algorithms for Real-Valued Multiobjective Optimization
Abstract
Algorithms that make use of the gradient, i.e., the direction of maximum improvement, to search for the optimum of a single-objective function have been around for decades. They are commonly accepted to be important and have been applied to tackle single-objective optimization problems in many fields. For multiobjective optimization problems, much less is known about the gradient and its algorithmic use. In this paper, we aim to contribute to the understanding of gradients for numerical, i.e., real-valued, multiobjective optimization. Specifically, we provide an analytical parametric description of the set of all nondominated, i.e., most promising, directions in which a solution can be moved such that its objective values either improve or remain the same. This result completes previous work in which this set is described only for one particular case, namely when some of the nondominated directions have positive, i.e., nonimproving, components and the final set can be computed by taking the subset of directions whose components are all nonpositive. In addition, we use our result to assess the utility of gradient information for multiobjective optimization where the goal is to obtain a Pareto set of solutions that approximates the optimal Pareto set. To this end, we design and consider various multiobjective gradient-based optimization algorithms, one of which uses the description of the multiobjective gradient provided here. We also hybridize an existing multiobjective evolutionary algorithm (MOEA) with these gradient-based optimization algorithms. During optimization, the performance of the gradient-based algorithms is monitored and the available computational resources are redistributed so that the (currently) most effective algorithm receives the most resources. We perform an experimental analysis on a few well-known benchmark problems to compare the performance of the different optimization methods. The results underline that the use of a population of solutions, which is characteristic of MOEAs, is a powerful concept if the goal is to obtain a good Pareto set instead of only a single solution. This makes it hard to increase convergence speed in the initial phase by using gradient information to improve any single solution. In the longer run, however, the use of gradient information ultimately allows for better fine-tuning of the results and thus better overall convergence.
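The abstract's central object, the set of directions along which all objective values improve or stay the same, can be illustrated with a standard first-order criterion for minimization: a direction d is non-worsening when every objective gradient has a nonpositive inner product with d, and improving when at least one such inner product is strictly negative. The Python sketch below applies this criterion to a hypothetical two-objective example; it is a minimal illustration of the concept only, not the paper's analytical parametric description of the full set of nondominated directions and not its hybrid MOEA.

```python
import numpy as np

def is_nonworsening(gradients, d, tol=1e-12):
    """First-order test for minimization: a direction d is non-worsening when
    every objective's directional derivative grad_i . d is <= 0, and it is
    additionally improving when at least one of them is strictly negative."""
    slopes = np.asarray(gradients) @ np.asarray(d)  # directional derivatives, one per objective
    nonworsening = bool(np.all(slopes <= tol))
    improving = nonworsening and bool(np.any(slopes < -tol))
    return nonworsening, improving

# Hypothetical two-objective example over a 2-D decision vector:
# f1(x) = x1^2 + x2^2 and f2(x) = (x1 - 1)^2 + x2^2, evaluated at x = (0.25, 0.5).
x = np.array([0.25, 0.5])
grads = np.array([
    [2 * x[0], 2 * x[1]],        # gradient of f1 at x
    [2 * (x[0] - 1), 2 * x[1]],  # gradient of f2 at x
])
print(is_nonworsening(grads, np.array([0.0, -1.0])))  # (True, True): reduces both objectives
print(is_nonworsening(grads, np.array([1.0, 0.0])))   # (False, False): increases f1 while decreasing f2
```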
Year
2012
DOI
10.1109/TEVC.2010.2051445
Venue
IEEE Trans. Evolutionary Computation
Keywords
different optimization method, optimization algorithm, single-objective optimization problem, hybrid evolutionary algorithms, gradient-based optimization algorithm, multiobjective optimization problem, single solution, real-valued multiobjective optimization, gradient information, multiobjective optimization, existing multiobjective evolutionary algorithm, various multiobjective, objective function, evolutionary computing, experimental analysis, algorithm design and analysis, memetic algorithm, evolutionary computation, optimization, optimization problem, minimization, gradient method, evolutionary algorithm, approximation algorithms, memetic algorithms
Field
Memetic algorithm, Approximation algorithm, Mathematical optimization, Evolutionary algorithm, Evolutionary computation, Multi-objective optimization, Artificial intelligence, Random optimization, Optimization problem, Machine learning, Mathematics, Pareto principle
DocType
Journal
Volume
16
Issue
1
ISSN
1089-778X
Citations
27
PageRank
0.97
References
20
Authors
1
Name
Peter A. N. Bosman
Order
1
Citations
507
PageRank
49.04