Abstract
---
Several known heuristic functions can capture the input at different levels of precision, and support relaxation-refinement operations guaranteed to converge to exact information in a finite number of steps. A natural idea is to use such refinement online, during search, yet this has barely been addressed. We do so here for local search, where relaxation refinement is particularly appealing: escape local minima not by search, but by removing them from the search surface. Thanks to convergence, such an escape is always possible. We design a family of hill-climbing algorithms along these lines. We show that these are complete, even when using helpful-actions pruning. Using them with the partial delete relaxation heuristic h(CFF), the best-performing variant outclasses FF's enforced hill-climbing, outperforms FF, outperforms dual-queue greedy best-first search with h(FF), and in 6 IPC domains outperforms both LAMA and Mercury.
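The core idea of the abstract can be illustrated with a minimal sketch: plain hill-climbing that, upon reaching a local minimum (or plateau), refines the heuristic online so the minimum disappears from the search surface, then continues climbing. All names here (`hill_climb_with_refinement`, `refine`, the toy state space) are illustrative assumptions; the paper's actual algorithms and its h(CFF) refinement are more involved.

```python
def hill_climb_with_refinement(start, successors, heuristic, refine,
                               is_goal, max_refinements=100):
    """Hill-climbing that escapes local minima by refining the heuristic,
    not by search. Hypothetical sketch, not the paper's exact algorithm."""
    state = start
    while not is_goal(state):
        best = min(successors(state), key=heuristic, default=None)
        if best is None or heuristic(best) >= heuristic(state):
            # Stuck in a local minimum or on a plateau: refine the
            # heuristic so this minimum is removed from the surface.
            if max_refinements == 0:
                return None  # give up after too many refinements
            refine(state)
            max_refinements -= 1
            continue  # re-evaluate successors under the refined heuristic
        state = best
    return state

# Toy example: states are integers 0..5, the goal is 0, and the true
# goal distance is abs(n). The initial heuristic has an artificial
# local minimum at state 3 (both neighbors look worse).
h = {n: abs(n) for n in range(6)}
h[2] = 5  # corrupted estimate creating the local minimum

def successors(n):
    return [m for m in (n - 1, n + 1) if 0 <= m <= 5]

def heuristic(n):
    return h[n]

def refine(state):
    # Converge toward exact information (here: the true distance abs(n))
    # for the stuck state and its neighbors, as a stand-in for the
    # convergence guarantee described in the abstract.
    for n in [state] + successors(state):
        h[n] = abs(n)

def is_goal(n):
    return n == 0
```

Starting from state 3, the search first stalls (h(2)=5 and h(4)=4 both look no better than h(3)=3), one refinement restores the true distances, and the climb then proceeds 3, 2, 1, 0.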
Year | Venue | Field
---|---|---
2017 | Proceedings of the International Conference on Automated Planning and Scheduling | Hill climbing, Mathematical optimization, Computer science, Boosting (machine learning), Artificial intelligence, Local search (optimization), Machine learning
DocType | ISSN | Citations
---|---|---
Conference | 2334-0835 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 2
Name | Order | Citations | PageRank |
---|---|---|---
Maximilian Fickert | 1 | 0 | 2.70 |
Jörg Hoffmann | 2 | 2702 | 189.88 |