Abstract
---

Optimization methods, including evolutionary algorithms, require ever-greater efficiency to solve increasingly complex problems in the shortest possible time. Apart from tuning the structure of these algorithms and adapting them to a given problem, the best way to speed up computation is to introduce more parallelism, either at the hardware level (using accelerators) or at the architecture level (using multiple compute nodes). In this paper, we propose a method for expressing evolutionary computations based on the tensor computational model. The presented approach enables cross-platform hardware acceleration on CPUs, GPUs and TPUs. To validate the new method, we contribute an open, extensible evolutionary framework with support for distributed, accelerated execution in heterogeneous environments. Finally, we present results of the conducted tests, which confirm the efficiency of the proposed approach, also in comparison with other existing frameworks.
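The abstract does not spell out what "expressing evolutionary computations in the tensor computational model" looks like, so here is a minimal, hypothetical sketch (not the paper's actual framework or API): one generation of a simple evolutionary algorithm written entirely as whole-population tensor operations, which is the style of formulation that tensor runtimes can offload to CPUs, GPUs or TPUs. All names (`evolve_step`, `fitness_fn`, the operator choices) are illustrative assumptions, shown here with NumPy.

```python
import numpy as np

def evolve_step(population, fitness_fn, rng, mutation_scale=0.1):
    """One generation as pure tensor operations: tournament selection,
    uniform crossover and Gaussian mutation, vectorized over the
    whole population at once (no per-individual Python loop)."""
    n, d = population.shape
    fitness = fitness_fn(population)                  # shape (n,)
    # Tournament selection: compare random pairs, keep the fitter one.
    a = rng.integers(0, n, size=n)
    b = rng.integers(0, n, size=n)
    winners = np.where((fitness[a] >= fitness[b])[:, None],
                       population[a], population[b])
    # Uniform crossover between shuffled parent pairs.
    partners = winners[rng.permutation(n)]
    mask = rng.random((n, d)) < 0.5
    children = np.where(mask, winners, partners)
    # Gaussian mutation applied elementwise.
    children = children + rng.normal(0.0, mutation_scale, size=(n, d))
    return children

# Toy usage: maximize -sum(x^2), so the population should drift
# toward the origin over successive generations.
rng = np.random.default_rng(0)
pop = rng.normal(0.0, 1.0, size=(64, 8))
fit = lambda p: -np.sum(p * p, axis=1)
initial_best = fit(pop).max()
for _ in range(100):
    pop = evolve_step(pop, fit, rng)
```

Because every operator is an array-level primitive, the same formulation ports to accelerator-backed tensor libraries with little change, which is the cross-platform property the paper's approach relies on.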
Year | DOI | Venue
---|---|---
2022 | 10.1145/3512290.3528753 | Proceedings of the 2022 Genetic and Evolutionary Computation Conference (GECCO '22)
Keywords | DocType | Citations
---|---|---
evolutionary computing, computing framework, distributed computing, GPGPU | Conference | 0
PageRank | References | Authors
---|---|---
0.34 | 0 | 5
Name | Order | Citations | PageRank
---|---|---|---
Jonatan Klosko | 1 | 0 | 0.34 |
Mateusz Benecki | 2 | 0 | 0.34 |
Grzegorz Wcislo | 3 | 0 | 0.34 |
Jacek Dajda | 4 | 0 | 0.34 |
Wojciech Turek | 5 | 0 | 0.34 |