Abstract |
---|
Intelligent agents in games and simulators often operate in environments subject to symmetric transformations that produce new but equally legitimate environments, such as reflections or rotations of maps. That fact suggests two hypotheses of interest for machine-learning approaches to creating intelligent agents for use in such environments. First, that exploiting symmetric transformations can broaden the range of experience made available to the agents during training, and thus result in improved performance at the task for which they are trained. Second, that exploiting symmetric transformations during training can make the agents' response to environments not seen during training measurably more consistent. In this paper the two hypotheses are evaluated experimentally by exploiting sensor symmetries and potential symmetries of the environment while training intelligent agents for a strategy game. The experiments reveal that when a corpus of human-generated training examples is supplemented with artificial examples generated by means of reflections and rotations, improvement is obtained in both task performance and consistency of behavior. |
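The augmentation scheme the abstract describes — supplementing a corpus of human-generated examples with copies transformed by reflections and rotations — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid representation, the `dihedral_variants` generator, and the `augment` helper are all assumptions, and a real agent's direction-valued outputs would need to be transformed with the same symmetry as its inputs.

```python
# Sketch (assumption, not from the paper): expanding (grid, label) training
# examples with the 8 symmetries of the square (the dihedral group D4).

def rotate90(grid):
    """Rotate a square grid 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*grid)][::-1]

def reflect(grid):
    """Reflect a square grid left-to-right."""
    return [row[::-1] for row in grid]

def dihedral_variants(grid):
    """Yield all 8 symmetric variants of a square grid:
    four rotations, each with and without a reflection."""
    g = grid
    for _ in range(4):
        yield g
        yield reflect(g)
        g = rotate90(g)

def augment(examples):
    """Expand each (grid, label) example into its 8 symmetric copies.
    NOTE: labels that encode a direction (e.g. a move action) would
    have to be transformed consistently with the grid; a symmetry-
    invariant label is assumed here for simplicity."""
    out = []
    for grid, label in examples:
        for variant in dihedral_variants(grid):
            out.append((variant, label))
    return out
```

For an asymmetric map all eight variants are distinct, so the corpus grows eightfold while remaining equally legitimate under the environment's symmetries.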
Year | DOI | Venue |
---|---|---|
2006 | 10.1109/CIG.2006.311686 | CIG |

Keywords | Field | DocType |
---|---|---|
symmetries, sensors, games, legion ii, human-generated examples, multi-agent systems, agents, adaptive team of agents, simulators, computational modeling, reflection, solid modeling, intelligent agents, machine learning, intelligent sensors, computer simulation | Management training, Intelligent agent, Intelligent sensor, Computer science, Simulation, Multi-agent system, Learning by example, Solid modeling, Artificial intelligence, Machine learning, Homogeneous space | Conference |

Citations | PageRank | References |
---|---|---|
5 | 0.67 | 4 |
Authors |
---|
2 |

Name | Order | Citations | PageRank |
---|---|---|---|
Bobby D. Bryant | 1 | 58 | 6.70 |
Risto Miikkulainen | 2 | 2981 | 224.85 |