Title |
---|
LC-Learning: Phased Method for Average Reward Reinforcement Learning - Analysis of Optimal Criteria |
Abstract |
---|
This paper presents an analysis of criteria that measure policy optimality for average reward reinforcement learning. In previous work on undiscounted tasks, two criteria, gain-optimality and bias-optimality, have been presented. The former measures the average reward, and the latter evaluates transient actions. However, a limiting factor in the definition of gain-optimality makes the real meaning of the criterion unclear, and, what is worse, the performance function for bias-optimality does not always converge. Thus, previous methods compute an optimal policy with approximation approaches; that is, they do not always acquire the optimal policy because of finite errors. In addition, a theoretical proof of convergence to the optimal policy is a difficult task. To eliminate the ambiguity over these criteria, we show a necessary and sufficient condition for gain-optimality: a policy is gain-optimal if and only if it includes an optimal cycle. In other words, to find a gain-optimal policy we only need to search for a stationary cycle that has the highest average reward. We also make the performance function for bias-optimality always converge by dividing it into two terms, the cycle-bias-value and the path-bias-value. Finally, we build the foundation of LC-learning, an algorithm for computing the bias-optimal policy in a cyclic domain. |
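The abstract's key claim, that finding a gain-optimal policy reduces to searching for the stationary cycle with the highest average reward, can be illustrated with a brute-force sketch over a small deterministic MDP. The function name and reward-dictionary layout below are illustrative assumptions for this sketch; this is not a reproduction of the paper's LC-learning algorithm.

```python
from itertools import permutations

def best_average_reward_cycle(states, reward):
    """Brute-force search for the simple cycle with the highest average
    reward in a small deterministic MDP.

    `reward[(s, t)]` is the reward for the transition s -> t; a missing
    key means the transition does not exist. (Hypothetical data layout,
    assumed for illustration only.)
    """
    best_cycle, best_avg = None, float("-inf")
    for length in range(1, len(states) + 1):
        for cycle in permutations(states, length):
            # Close the cycle: last state transitions back to the first.
            edges = list(zip(cycle, cycle[1:] + cycle[:1]))
            if all(e in reward for e in edges):
                avg = sum(reward[e] for e in edges) / length
                if avg > best_avg:
                    best_cycle, best_avg = cycle, avg
    return best_cycle, best_avg
```

Enumerating all permutations is exponential and only feasible for toy domains; it serves here to make the "optimal cycle" criterion concrete, not as a practical search procedure.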
Year | Venue | Keywords |
---|---|---|
2002 | PRICAI | optimal criteria,average reward,performance function,bias optimal policy,phased method,previous work,measure policy optimality,optimal policy,highest average reward,previous method,reinforcement learning,average reward reinforcement learning,gain optimal policy,artificial intelligence,markov decision process,machine learning,limiting factor
Field | DocType | ISBN
---|---|---|
Convergence (routing),Division (mathematics),Computer science,Q-learning,Markov decision process,Artificial intelligence,If and only if,Ambiguity,Reward-based selection,Machine learning,Reinforcement learning | Conference | 3-540-44038-0
Citations | PageRank | References
---|---|---|
0 | 0.34 | 10
Authors |
---|
2 |
Name | Order | Citations | PageRank |
---|---|---|---|
Taro Konda | 1 | 12 | 3.78 |
Tomohiro Yamaguchi | 2 | 34 | 12.21 |