Abstract
---
The Comparing Continuous Optimizers platform COCO has become a standard for effortlessly benchmarking numerical (single-objective) optimization algorithms. In 2016, COCO was extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension n, and instance of the test suite, 10^6 · n candidate solutions are sampled uniformly within the sampling box [-5, 5]^n.
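As a minimal sketch of the baseline described in the abstract, the snippet below samples points uniformly in [-5, 5]^n and keeps the non-dominated (Pareto) subset of their bi-objective values. The function `double_sphere` is a hypothetical stand-in, not an actual bbob-biobj problem, and the budget is reduced from the paper's 10^6 · n so the example runs quickly.

```python
import numpy as np

def pure_random_search(f, n, budget, rng):
    """Sample `budget` points uniformly in [-5, 5]^n and keep the
    non-dominated (Pareto) subset of their bi-objective values."""
    front = []  # list of (x, f(x)) pairs, mutually non-dominated
    for _ in range(budget):
        x = rng.uniform(-5.0, 5.0, size=n)
        fx = f(x)
        # skip fx if some archived point dominates it
        if any(np.all(g <= fx) and np.any(g < fx) for _, g in front):
            continue
        # drop archived points that fx dominates, then archive fx
        front = [(y, g) for y, g in front
                 if not (np.all(fx <= g) and np.any(fx < g))]
        front.append((x, fx))
    return front

# Hypothetical stand-in for a bbob-biobj problem: two shifted sphere functions.
def double_sphere(x):
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

n = 5
rng = np.random.default_rng(42)
# The paper's budget is 10**6 * n; reduced here for a quick demo.
front = pure_random_search(double_sphere, n, budget=10_000, rng=rng)
print(f"{len(front)} non-dominated points found")
```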
Year | DOI | Venue |
---|---|---|
2016 | 10.1145/2908961.2931704 | GECCO (Companion) |
Keywords | Field | DocType
---|---|---
Benchmarking, Black-box optimization, Bi-objective optimization | Test suite, Random search, Mathematical optimization, Computer science, Testbed, Optimization algorithm, Sampling (statistics), COCO, Benchmarking | Conference
Citations | PageRank | References
---|---|---
2 | 0.41 | 6
Authors (6)
---
Name | Order | Citations | PageRank |
---|---|---|---|
Anne Auger | 1 | 1198 | 77.81 |
Dimo Brockhoff | 2 | 948 | 53.97 |
Nikolaus Hansen | 3 | 723 | 51.44 |
Dejan Tušar | 4 | 10 | 2.34
Tea Tušar | 5 | 181 | 19.91
Tobias Wagner | 6 | 137 | 9.96 |