Abstract |
---|
This paper describes the results of an empirical evaluation comparing the performance of five algorithms in a pursuit and evasion game played between two robots, where the pursuer's task was to catch the other robot (the evader). The algorithms tested were a random player, the optimal player, a genetic algorithm learner, a k-nearest neighbor learner, and a reinforcement learner. The k-nearest neighbor learner performed best overall, but a closer analysis of the results showed that the genetic algorithm suffered from an exploration-exploitation problem. |
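The k-nearest neighbor learner highlighted in the abstract can be illustrated with a minimal sketch. Note that the state encoding (relative evader position), the discrete action set, and the reward-voting rule below are assumptions for illustration only, not details taken from the paper:

```python
import math

# Hypothetical action set for the pursuer robot (an assumption,
# not from the paper).
ACTIONS = ["forward", "left", "right"]

class KNNPursuer:
    """Sketch of a k-NN learner: store (state, action, reward) experiences
    and pick the action with the best average reward among the k stored
    states closest to the current state."""

    def __init__(self, k=3):
        self.k = k
        self.memory = []  # list of (state, action, reward) triples

    def remember(self, state, action, reward):
        self.memory.append((state, action, reward))

    def act(self, state):
        if not self.memory:
            return ACTIONS[0]  # no experience yet: default action
        # Select the k experiences whose states are nearest (Euclidean).
        nearest = sorted(self.memory,
                         key=lambda m: math.dist(m[0], state))[: self.k]
        # Average reward per action among the neighbors.
        scores = {a: 0.0 for a in ACTIONS}
        counts = {a: 0 for a in ACTIONS}
        for _, a, r in nearest:
            scores[a] += r
            counts[a] += 1
        return max(ACTIONS,
                   key=lambda a: scores[a] / counts[a]
                   if counts[a] else float("-inf"))

# Example: with two stored experiences, the pursuer imitates whichever
# past action succeeded in the most similar state.
p = KNNPursuer(k=1)
p.remember((1.0, 0.0), "forward", 1.0)  # evader ahead: going forward paid off
p.remember((0.0, 1.0), "left", 1.0)     # evader to the left: turning paid off
print(p.act((0.9, 0.1)))  # → forward
```

Unlike the genetic algorithm's population-level search, this instance-based approach has no separate exploration phase, which is consistent with the abstract's observation that the genetic algorithm, not the k-NN learner, suffered from an exploration-exploitation problem.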
Year | DOI | Venue |
---|---|---|
2001 | 10.1007/3-540-45603-1_29 | RoboCup 2001 |
Keywords | Field | DocType |
---|---|---|
different algorithm, evasion game, optimal player, pursuit-evasion games, genetic algorithm, genetic algorithm learner, closer analysis, k-nearest neighbor learner, reinforcement learner, random player, empirical evaluation, machine learning, k nearest neighbor | Nearest neighbour, Pursuer, Computer science, Simulation, Pursuit-evasion, Artificial intelligence, Robot, Genetic algorithm, Machine learning, Robotics, Reinforcement learning | Conference |
ISBN | Citations | PageRank |
---|---|---|
3-540-43912-9 | 2 | 0.38 |
References | Authors |
---|---|
4 | 2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jacky Baltes | 1 | 294 | 57.76 |
Yongjoo Park | 2 | 99 | 5.93 |