| Abstract |
|---|
| A detailed review of dynamic search trajectory methods for global optimization is given. In addition, a family of dynamic search trajectory methods constructed using numerical methods for solving autonomous ordinary differential equations is presented. Furthermore, a strategy for developing globally convergent methods, applicable to the proposed family, is given and the corresponding theorem is proved. Finally, theoretical results are given for obtaining nonmonotone convergent methods that exploit accumulated information about the most recent values of the objective function. |
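Trajectory methods of the kind the abstract describes follow the solution curve of an autonomous ODE driven by the objective's gradient and record the best point visited. The following is a minimal, hypothetical sketch of this idea only, not the paper's specific family of methods: it integrates the damped Newtonian system x'' = -∇f(x) - c·x' with the semi-implicit Euler method on an illustrative two-well objective (the function `f`, the damping constant, and the step size are all assumptions made for the example).

```python
def f(x):
    # Illustrative two-well objective: global minima at x = +/-1, barrier at x = 0.
    return (x**2 - 1.0)**2

def grad_f(x):
    # Analytic gradient of the objective above.
    return 4.0 * x * (x**2 - 1.0)

def trajectory_search(f, grad, x0, dt=0.05, damping=0.3, steps=2000):
    """Generic dynamic search trajectory sketch: follow the autonomous system
    x'' = -grad f(x) - damping * x' with the semi-implicit Euler method,
    keeping the lowest point visited along the trajectory."""
    x, v = float(x0), 0.0
    best_x, best_f = x, f(x)
    for _ in range(steps):
        v += dt * (-grad(x) - damping * v)  # Euler step for the velocity
        x += dt * v                         # advance along the trajectory
        fx = f(x)
        if fx < best_f:                     # record the best point seen so far
            best_x, best_f = x, fx
    return best_x, best_f

best_x, best_f = trajectory_search(f, grad_f, x0=2.0)
```

Because the trajectory carries momentum, it can pass over the barrier at x = 0 and visit both wells before the damping term makes it settle, which is what gives such methods their global character.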
| Year | DOI | Venue |
|---|---|---|
| 2020 | 10.1007/s10472-019-09661-7 | Annals of Mathematics and Artificial Intelligence |
| Keywords | Field | DocType |
|---|---|---|
| Dynamic search trajectories, Trajectory methods, Autonomous initial value problems, Globally convergent algorithms, Nonmonotone convergent strategies, Global optimization, Neural networks training, 65K05, 65K10, 65L05, 68T05, 68T20 | Mathematical optimization, Global optimization, Ordinary differential equation, Artificial intelligence, Numerical analysis, Machine learning, Mathematics | Journal |
| Volume | Issue | ISSN |
|---|---|---|
| 88 | 1 | 1012-2443 |

| Citations | PageRank | References |
|---|---|---|
| 0 | 0.34 | 0 |
| Authors |
|---|
| 3 |

| Name | Order | Citations | PageRank |
|---|---|---|---|
| Stamatios-Aggelos N. Alexandropoulos | 1 | 1 | 1.70 |
| Panos M. Pardalos | 2 | 141 | 19.60 |
| M.N. Vrahatis | 3 | 1740 | 151.65 |