Title
Can ONEMAX Help Optimizing LEADINGONES Using the EA+RL Method?
Abstract
There exist optimization problems with a target objective, which is to be optimized, and several extra objectives, which can be helpful in the optimization process. The EA+RL method is designed to control optimization algorithms that solve problems with extra objectives; it is based on using reinforcement learning for adaptive online selection of objectives. In this paper we investigate whether ONEMAX helps to optimize LEADINGONES when the EA+RL method is used. We consider the LEADINGONES+ONEMAX problem, where the target objective is LEADINGONES and the only extra objective is ONEMAX. The following expected running times are proven for optimization starting from a random bit vector in the case of randomized local search (RLS): n^2/2 for LEADINGONES alone; n^2/3 for LEADINGONES+ONEMAX when the reinforcement learning state equals the LEADINGONES fitness or when the objective is selected at random; and n^2/4 + o(n^2) when there is a single reinforcement learning state and the greedy exploration strategy is used. The case of starting with all bits set to zero is also considered. Thus ONEMAX helps, although not by much, to optimize LEADINGONES when RLS is used. However, this is not the case for the (1 + 1) evolutionary algorithm, as shown experimentally.
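The random-objective-selection variant mentioned in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper: `rls_ea_rl` and its interface are assumed names, and the full EA+RL machinery (learning states, rewards, exploration strategies) is deliberately omitted. Each RLS step flips one uniformly random bit and accepts the flip if an objective chosen uniformly at random (LEADINGONES or ONEMAX) does not decrease:

```python
import random

def leading_ones(x):
    """LEADINGONES: number of leading 1-bits."""
    c = 0
    for b in x:
        if b != 1:
            break
        c += 1
    return c

def one_max(x):
    """ONEMAX: total number of 1-bits."""
    return sum(x)

def rls_ea_rl(n, rng):
    """RLS with uniformly random per-step objective selection
    (illustrative sketch of the 'random objective selection' case).
    Returns the final bit vector and the number of steps taken."""
    x = [rng.randrange(2) for _ in range(n)]     # random initial vector
    steps = 0
    while leading_ones(x) < n:                   # stop at the target optimum
        f = rng.choice((leading_ones, one_max))  # objective for this step
        y = list(x)
        y[rng.randrange(n)] ^= 1                 # RLS: flip one random bit
        if f(y) >= f(x):                         # accept if the selected
            x = y                                # objective does not decrease
        steps += 1
    return x, steps
```

Note that accepting with respect to ONEMAX can temporarily preserve or gain 1-bits outside the leading prefix, which is the intuition behind the improved n^2/3 bound compared with n^2/2 for plain LEADINGONES.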
Year: 2015
Venue: 2015 IEEE Congress on Evolutionary Computation (CEC)
Keywords: silicon, algorithm design and analysis, learning (artificial intelligence), optimization, Markov processes, evolutionary computation, switches
Field: Mathematical optimization, Algorithm design, Markov process, Evolutionary algorithm, Computer science, Evolutionary computation, Artificial intelligence, Optimization algorithm, Optimization problem, Machine learning, Reinforcement learning, Metaheuristic
DocType: Conference
Citations: 1
PageRank: 0.38
References: 14
Authors: 2
Name             Order
Maxim Buzdalov   1
Arina Buzdalova  2