Solving Large-Scale and Sparse-Reward DEC-POMDPs with Correlation-MDPs


Within a group of cooperating agents, the decision making of an individual agent depends on the actions of the other agents. Much effort has been devoted to solving this problem under additional assumptions about the agents' communication abilities. However, in some real-world applications communication is limited and these assumptions are rarely satisfied. A newly developed alternative is to employ a correlation device that correlates the agents' behavior without any exchange of information during execution. In this paper, we apply correlation devices to large-scale and sparse-reward domains. As a basis we use the framework of infinite-horizon DEC-POMDPs, which represents policies as joint stochastic finite-state controllers. To solve a problem of this kind, a correlation device is first computed by solving a Correlation Markov Decision Process (Correlation-MDP) and then used to improve the local controller of each agent. This method lets us trade off computational complexity against the quality of the approximation. In addition, we demonstrate that adversarial problems can be solved by encoding information about opponents' behavior in the correlation device. We have implemented the proposed method in our 2D simulated robot soccer team, and its performance at RoboCup-2006 was encouraging.
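
To make the controller structure concrete, the following is a minimal Python sketch, purely illustrative and not taken from the paper, of executing a joint stochastic finite-state controller alongside a correlation device: the device is a Markov chain whose state is visible to every agent, and each agent's action and controller-node transitions condition on that shared state, so behavior stays correlated without any message exchange. All names (CorrelationDevice, LocalController, and so on) are hypothetical.

import random

def sample(dist):
    """Draw a key from a {value: probability} dictionary."""
    r, acc = random.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r <= acc:
            return k
    return k  # guard against floating-point rounding

class CorrelationDevice:
    """Markov chain over correlation states; its transitions are
    independent of the agents' observations, so no communication
    is needed to keep the agents' randomness correlated."""
    def __init__(self, transitions, state):
        self.transitions = transitions  # {c: {c_next: prob}}
        self.state = state

    def step(self):
        self.state = sample(self.transitions[self.state])
        return self.state

class LocalController:
    """Stochastic finite-state controller for one agent, with action
    and node-transition distributions conditioned on the device state."""
    def __init__(self, action_dist, node_trans, node):
        self.action_dist = action_dist  # {(node, c): {action: prob}}
        self.node_trans = node_trans    # {(node, c, obs): {node_next: prob}}
        self.node = node

    def act(self, c):
        return sample(self.action_dist[(self.node, c)])

    def update(self, c, obs):
        self.node = sample(self.node_trans[(self.node, c, obs)])

# At every step each agent reads the same device state without talking
# to the others:  c = device.step(); a_i = ctrl_i.act(c); then, after
# receiving its private observation o_i:  ctrl_i.update(c, o_i).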

Feng Wu, Xiaoping Chen. Solving Large-Scale and Sparse-Reward DEC-POMDPs with Correlation-MDPs. In Proceedings of the Robot Soccer World Cup XI Symposium (RoboCup), pages 208-219, Atlanta, United States, July 2007.
@inproceedings{WCrobocup07,
 address = {Atlanta, United States},
 author = {Feng Wu and Xiaoping Chen},
 booktitle = {Proceedings of the Robot Soccer World Cup XI Symposium (RoboCup)},
 doi = {10.1007/978-3-540-68847-1_18},
 month = {July},
 pages = {208--219},
 title = {Solving Large-Scale and Sparse-Reward DEC-POMDPs with Correlation-MDPs},
 year = {2007}
}