Abstract |
---|
In this paper we propose a novel strategy for converging the dynamic policies generated by adaptive agents, which receive and accumulate rewards for their actions. The goal of the proposed strategy is to speed up the convergence of such agents to a good policy in dynamic environments. Since the continuous changes in the environment make it difficult to obtain a good value estimate for a state, previous policies are kept in memory for reuse in future policies, avoiding delays or unexpected speedups in the agent's learning. Experimental results on dynamic environments with different policies show that the proposed strategy speeds up the agent's convergence while achieving good action policies. |
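The policy-reuse idea summarized in the abstract can be sketched as a Q-learning agent that archives its learned value tables and, when the environment changes, restarts from the best archived one rather than from scratch. The class, method names, and selection mechanics below are illustrative assumptions, not the authors' actual algorithm:

```python
import random


class PolicyReuseAgent:
    """Minimal sketch of policy reuse for dynamic environments.

    Past Q-tables (policies) are kept in an archive; when the
    environment changes, learning resumes from the archived policy
    that scores best under a caller-supplied evaluation function.
    This is an assumed design, not the paper's exact method.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.archive = []  # previously learned Q-tables

    def act(self, s):
        # epsilon-greedy action selection
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[s][a])

    def update(self, s, a, r, s_next):
        # standard Q-learning temporal-difference update
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (r + self.gamma * best_next - self.q[s][a])

    def on_environment_change(self, score_fn):
        # archive the current policy, then reuse the archived policy
        # that score_fn rates highest (e.g. a quick rollout in the
        # new environment) as the new starting point
        self.archive.append([row[:] for row in self.q])
        best = max(self.archive, key=score_fn)
        self.q = [row[:] for row in best]
```

Reusing a warm-started Q-table in this way is what lets the agent avoid relearning from zero after each environmental change, which is the speedup the abstract claims.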
Year | DOI | Venue |
---|---|---|
2009 | 10.1109/IA.2009.4927511 | IA 2009: IEEE SYMPOSIUM ON INTELLIGENT AGENTS |
Keywords | Field | DocType
---|---|---|
Adaptive Agents, Dynamic Environments, Reinforcement Learning | Convergence (routing), Mathematical optimization, Algorithm design, Markov process, Computer science, Reuse, Markov decision process, Multi-agent system, Artificial intelligence, Distributed computing, Speedup, Reinforcement learning | Conference
Citations | PageRank | References
---|---|---|
4 | 0.48 | 15
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Richardson Ribeiro | 1 | 43 | 11.12 |
Andre P. Borges | 2 | 4 | 0.48 |
Alessandro L. Koerich | 3 | 525 | 39.59 |
Edson Emílio Scalabrin | 4 | 36 | 14.52 |
Fabrício Enembreck | 5 | 274 | 38.42 |