Abstract |
---|
The efficiency of evolutionary algorithms may be increased through multi-objectivization, which is performed by adding auxiliary objectives. We consider the selection of these objectives during a run of an evolutionary algorithm. One such selection method is based on reinforcement learning. Several types of rewards have previously been used in reinforcement learning for adjusting evolutionary algorithms; however, no single reward is superior. At the same time, reinforcement learning itself may be enhanced by multi-objectivization. We therefore propose a method for selecting auxiliary objectives based on multi-objective reinforcement learning, where the reward is composed of the previously used single rewards. Hence, we have double multi-objectivization: several rewards are involved in the selection of several auxiliary objectives. We run the proposed method on different benchmark problems and compare it with a conventional evolutionary algorithm and a method based on single-objective reinforcement learning. Multi-objective reinforcement learning shows competitive performance and is especially useful when we do not know in advance which of the single rewards is efficient. |
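The interplay the abstract describes, an RL agent choosing which objective guides selection in an evolutionary algorithm and being rewarded by progress on the target objective, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the OneMax target, the leading-ones auxiliary objective, the single-state epsilon-greedy Q-learning agent, and the fitness-improvement reward are all assumptions made for the example.

```python
import random

# Illustrative sketch only (not the paper's exact method): a (1+1)
# evolutionary algorithm on OneMax in which a stateless Q-learning agent
# chooses, at each iteration, which objective is used for selection.
# Objective 0 is the target (number of ones); objective 1 is a
# hypothetical auxiliary objective (number of leading ones).

def onemax(x):
    return sum(x)

def leading_ones(x):
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def ea_with_rl_selection(n=30, steps=3000, eps=0.1, alpha=0.5, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    objectives = [onemax, leading_ones]
    q = [0.0] * len(objectives)  # one Q-value per objective (single-state agent)
    for _ in range(steps):
        # epsilon-greedy choice of the objective used for selection
        if rng.random() < eps:
            a = rng.randrange(len(objectives))
        else:
            a = max(range(len(objectives)), key=lambda i: q[i])
        # standard bit-flip mutation of the (1+1) EA
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        before = onemax(x)
        if objectives[a](y) >= objectives[a](x):
            x = y
        # reward: improvement of the *target* objective (one common choice)
        q[a] += alpha * ((onemax(x) - before) - q[a])
        if onemax(x) == n:
            break
    return x

best = ea_with_rl_selection()
```

Replacing the scalar reward update with a vector of the previously used single rewards, and selecting actions that are good with respect to all of them at once, is the step toward the multi-objective reinforcement learning variant the abstract proposes.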
Year | DOI | Venue |
---|---|---|
2015 | 10.1145/2739482.2768473 | GECCO (Companion) |
Field | DocType | Citations |
---|---|---|
Mathematical optimization, Evolutionary algorithm, Computer science, Artificial intelligence, Error-driven learning, Reinforcement, Machine learning, Reinforcement learning, Learning classifier system | Conference | 1 |
PageRank | References | Authors |
---|---|---|
0.34 | 10 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Arina Buzdalova | 1 | 61 | 9.42 |
Anna Matveeva | 2 | 1 | 0.34 |
Georgiy Korneev | 3 | 1 | 0.34 |