Title
Action discovery for reinforcement learning
Abstract
The design of reinforcement learning solutions to many problems artificially constrains the action set available to an agent in order to limit exploration and sample complexity. If, while exploring, an agent can discover new actions that break through the constraints of its basic (atomic) action set, the quality of the learned decision policy can improve. On the flip side, considering all possible non-atomic actions could cause the exploration complexity to explode. We present a potential-based solution to this dilemma and evaluate it empirically in grid navigation tasks. In particular, we show that sample complexity improves significantly when basic reinforcement learning is coupled with action discovery. Our approach works by reducing the number of decision points, which makes it particularly suited to multi-agent coordination learning, since agents tend to learn more easily when they face fewer coordination problems (CPs). To demonstrate this, we extend action discovery to multi-agent reinforcement learning and show that Joint Action Learners (JALs) coupled with action discovery learn coordination policies of higher quality with lower sample complexity in a multi-agent box-pushing task.
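To make the coupling of basic reinforcement learning with action discovery concrete, the following is a minimal illustrative sketch, not the paper's algorithm: tabular Q-learning on a small grid whose action set can grow at run time. The maybe_discover hook is a hypothetical placeholder for the paper's potential-based action-discovery test; the grid size, reward values, and hyperparameters are assumptions made for illustration only.

# Illustrative sketch (assumptions throughout): tabular Q-learning whose
# action set can grow as new (macro) actions are discovered.
import random
from collections import defaultdict

GRID = 5
GOAL = (GRID - 1, GRID - 1)
# Each action is a sequence of atomic moves; atomic actions have length 1.
ATOMIC = [((0, 1),), ((0, -1),), ((1, 0),), ((-1, 0),)]

def step(state, action):
    """Apply a (possibly composite) action; return next state and reward."""
    x, y = state
    for dx, dy in action:
        x = min(max(x + dx, 0), GRID - 1)
        y = min(max(y + dy, 0), GRID - 1)
    return (x, y), (1.0 if (x, y) == GOAL else -0.01)

def maybe_discover(actions):
    """Hypothetical discovery hook: occasionally add a 2-step macro-action.

    This stands in for the paper's potential-based criterion; it simply
    composes two random atomic moves, giving the learner fewer decision
    points along paths covered by the macro.
    """
    if len(actions) < 12:
        macro = random.choice(ATOMIC) + random.choice(ATOMIC)
        if macro not in actions:
            actions.append(macro)

def q_learning(episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    actions = list(ATOMIC)          # grows as new actions are discovered
    Q = defaultdict(float)          # keyed by (state, action)
    for ep in range(episodes):
        state = (0, 0)
        for _ in range(100):
            if random.random() < epsilon:
                act = random.choice(actions)
            else:
                act = max(actions, key=lambda a: Q[(state, a)])
            nxt, reward = step(state, act)
            best_next = max(Q[(nxt, a)] for a in actions)
            Q[(state, act)] += alpha * (reward + gamma * best_next - Q[(state, act)])
            state = nxt
            if state == GOAL:
                break
        if ep % 100 == 0:
            maybe_discover(actions)
    return Q, actions

if __name__ == "__main__":
    Q, actions = q_learning()
    print(f"learned with {len(actions)} actions ({len(actions) - len(ATOMIC)} discovered)")

Running the sketch shows the action set expanding during learning; in the paper the expansion is driven by a potential-based criterion rather than the random composition used here.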
Year
2010
DOI
10.5555/1838206.1838493
Venue
AAMAS
Keywords
exploration complexity, sample complexity, possible non-atomic action, new action, atomic action set, action discovery, lower sample complexity, reinforcement learning, multiagent coordination learning, basic reinforcement learning
Field
Computer science, Q-learning, Action learning, Artificial intelligence, Dilemma, Error-driven learning, Atomic actions, Sample complexity, Machine learning, Grid, Reinforcement learning
DocType
Conference
Citations
1
PageRank
0.35
References
4
Authors
2
Name | Order | Citations | PageRank
Bikramjit Banerjee | 1 | 2843 | 2.63
Landon Kraemer | 2 | 891 | 0.03