Title
Distributed value functions for the coordination of decentralized decision makers
Abstract
In this paper, we propose an approach based on an interaction-oriented resolution of decentralized Markov decision processes (Dec-MDPs), primarily motivated by a real-world application in which decentralized decision makers explore and map an unknown environment. This interaction-oriented resolution is based on distributed value function (DVF) techniques that decouple the multi-agent problem into a set of individual agent problems and treat possible interactions among agents as a separate layer. This leads to a significant reduction of the computational complexity, since Dec-MDPs are solved as a collection of MDPs. Using this model in multi-robot exploration scenarios, we show that each robot locally computes a strategy that minimizes the interactions between the robots and maximizes the space coverage of the team. Our technique has been implemented and evaluated in simulation and in real-world scenarios during a robotic challenge for the exploration and mapping of an unknown environment by mobile robots. Experimental results from real-world scenarios and from the challenge, in which our system finished as vice-champion, are reported.
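For intuition, the distributed value function idea (following the DVF literature that such approaches build on, e.g. Schneider et al., 1999) lets each agent i run value iteration on its own MDP while devaluing states that other agents are likely to visit, so that individually greedy policies spread the team apart. A minimal sketch in standard MDP notation is given below; the symbols T (transition function), R (reward), \gamma (discount factor), f_{ij} (interaction weight), and \Pr(s' \mid s_j) (estimated probability that agent j reaches s') are generic notation rather than quotes from the paper, and the exact update used by the authors may differ:

V_i(s) = \max_a \sum_{s'} T(s,a,s') \Big[ R(s,a,s') + \gamma \Big( V_i(s') - \sum_{j \neq i} f_{ij} \, \Pr(s' \mid s_j) \, V_j(s') \Big) \Big]

Under a form like this, a state expected to be covered by another robot contributes less value to robot i, which matches the abstract's claim that each local strategy minimizes inter-robot interactions while maximizing the team's space coverage.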
Year
2012
DOI
10.5555/2343896.2343925
Venue
AAMAS
Keywords
multi-robot exploration scenario, decentralized Markov decision process, unknown environment, interaction-oriented resolution, decentralized decision maker, real-world application, value function, real-world scenario, computational complexity, robotic challenge, planning
Field
Computer science, Markov decision process, Artificial intelligence, Robot, Mobile robot, Machine learning, Robot planning, Computational complexity theory, Distributed computing
DocType
Conference
ISBN
0-9817381-3-3
Citations
1
PageRank
0.36
References
8
Authors
3
Name, Order, Citations, PageRank
Laëtitia Matignon, 1, 88, 9.43
Laurent Jeanpierre, 2, 65, 9.16
Abdel-Illah Mouaddib, 3, 310, 44.84