Decision-Theoretic Planning for Multi-Agent Systems

Planning under uncertainty is one of the fundamental challenges in Artificial Intelligence, and decision theory offers a mathematical framework for optimizing decisions in such domains. In recent years, researchers have made rapid progress on decision making in single-agent settings, enabling relatively large problems to be solved by state-of-the-art MDP and POMDP algorithms. However, research on decentralized decision making is still in its infancy, and existing solutions can handle only very small "toy" problems. As a natural extension of Markov decision theory to multi-agent systems, the DEC-POMDP model has very high computational complexity: it is NEXP-hard. This is not surprising, given that agents in multi-agent settings must track not only the state transitions of the environment but also the potential behaviors of the other agents. As a result, the joint policy space can be huge, and searching it efficiently for the best joint policy is a key challenge in solving large DEC-POMDPs. Due to this complexity, exact algorithms can solve only tiny problems. Therefore, my thesis work focuses on developing effective algorithms for solving general DEC-POMDPs approximately. In general, planning in multi-agent systems can be carried out either online or offline. This thesis contributes to the literature by proposing both online and offline algorithms. Furthermore, a model-free algorithm is presented for planning when an exact model is not available.
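For context, the DEC-POMDP model referred to above is commonly formalized as the following tuple (this is the standard definition from the literature, sketched here for reference rather than reproduced from the thesis):

\langle I, S, \{A_i\}, P, R, \{\Omega_i\}, O, b^0 \rangle

where I = \{1, \dots, n\} is the set of agents, S the set of environment states, A_i the action set of agent i, P(s' \mid s, \vec{a}) the transition function over joint actions \vec{a} = (a_1, \dots, a_n), R(s, \vec{a}) the shared reward function, \Omega_i the observation set of agent i, O(\vec{o} \mid s', \vec{a}) the joint observation function, and b^0 the initial state distribution. Each agent must act on its own local observation history alone, and the objective is a joint policy maximizing the expected cumulative reward over a horizon T.

A quick count shows why the joint policy space is huge: a deterministic horizon-T policy for agent i is a tree with (|\Omega_i|^T - 1)/(|\Omega_i| - 1) nodes, each labeled with an action, so agent i alone has

|A_i|^{(|\Omega_i|^T - 1)/(|\Omega_i| - 1)}

candidate policies, and the joint space is the product of these counts over all agents. Even for two agents with two actions, two observations, and horizon T = 4, each agent has 2^{15} = 32768 policy trees, giving over 10^9 joint policies, which is why exhaustive search fails beyond tiny problems.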

Feng Wu. Decision-Theoretic Planning for Multi-Agent Systems. PhD thesis, University of Science and Technology of China (USTC), Hefei, China, July 2011.
BibTeX:
@phdthesis{Wustc11,
 address = {Hefei, China},
 author = {Feng Wu},
 doi = {10.7666/d.d141397},
 month = {July},
 school = {University of Science and Technology of China (USTC)},
 title = {Decision-Theoretic Planning for Multi-Agent Systems},
 year = {2011}
}