Title
Network Distributed POMDP with Communication
Abstract
While Distributed POMDPs have become popular for modeling multiagent systems in uncertain domains, it is the Network Distributed POMDP (ND-POMDP) model that has begun to scale up the number of agents, since ND-POMDPs can exploit the locality of agents' interactions. However, prior work on ND-POMDPs has failed to address communication. Without communication, the size of each agent's local policy in an ND-POMDP grows exponentially with the time horizon. To overcome this problem, we extend existing algorithms so that agents periodically communicate their observation and action histories to each other. After communication, agents can start from a new synchronized belief state, which avoids the exponential growth in the size of the agents' local policies. Furthermore, we introduce an idea similar to the Point-Based Value Iteration (PBVI) algorithm to approximate the value function with a fixed number of representative belief points. Our experimental results show that we can obtain much longer policies than existing algorithms, as long as the interval between communications is small.
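The abstract refers to a PBVI-style approximation of the value function over a fixed set of representative belief points. As a rough illustration of that idea only (not the paper's ND-POMDP algorithm, which involves multiple agents and communication), below is a minimal sketch of one point-based backup for a hypothetical two-state, two-action, two-observation POMDP; all model parameters, the belief set B, and the function name pbvi_backup are assumed placeholders.

```python
import numpy as np

# Toy single-agent POMDP used only to illustrate a PBVI-style backup.
# All numbers here are made-up placeholders, not the paper's model.
gamma = 0.95                      # discount factor
n_states, n_actions, n_obs = 2, 2, 2

T = np.array([                    # T[a, s, s'] = P(s' | s, a)
    [[0.9, 0.1], [0.1, 0.9]],
    [[0.5, 0.5], [0.5, 0.5]],
])
O = np.array([                    # O[a, s', o] = P(o | s', a)
    [[0.8, 0.2], [0.2, 0.8]],
    [[0.5, 0.5], [0.5, 0.5]],
])
R = np.array([                    # R[a, s] = immediate reward
    [1.0, 0.0],
    [0.0, 1.0],
])

# Fixed set of representative belief points (the "fixed number of
# representative points" the abstract mentions).
B = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])

def pbvi_backup(Gamma):
    """One point-based backup: keep one alpha-vector per belief point."""
    new_Gamma = []
    for b in B:
        best_val, best_alpha = -np.inf, None
        for a in range(n_actions):
            alpha = R[a].copy()
            for o in range(n_obs):
                # Project each alpha' back through T and O for (a, o),
                # then keep the projection that scores best at belief b.
                G_ao = [gamma * (T[a] * O[a][:, o]) @ ap for ap in Gamma]
                alpha = alpha + max(G_ao, key=lambda g: g @ b)
            if alpha @ b > best_val:
                best_val, best_alpha = alpha @ b, alpha
        new_Gamma.append(best_alpha)
    return new_Gamma

Gamma = [np.zeros(n_states)]      # start from the zero value function
for _ in range(50):               # value iteration over the belief set
    Gamma = pbvi_backup(Gamma)

for b in B:
    print(b, max(alpha @ b for alpha in Gamma))
```

Restricting backups to the fixed belief set keeps the number of alpha-vectors bounded by |B| per iteration, which is the kind of approximation the abstract alludes to.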
Year
2008
DOI
10.1007/978-3-642-00609-8_4
Venue
JSAI
Keywords
exponential growth, longer policy, local policy, belief state, point-based value iteration algorithm, multiagent system, prior work, fixed number, action history, value function
Field
Heuristic function, Mathematical optimization, Locality, Time horizon, Partially observable Markov decision process, Markov decision process, Bellman equation, Multi-agent system, Mathematics, Exponential growth
DocType
Conference
Volume
5447
ISSN
0302-9743
Citations
0
PageRank
0.34
References
9
Authors
4
Name            Order  Citations  PageRank
Yuki Iwanari    1      0          0.68
Yuichi Yabu     2      28         2.09
Makoto Tasaki   3      0          1.01
Makoto Yokoo    4      3632       421.99