Thompson Sampling based Monte-Carlo Planning in POMDPs


Monte-Carlo tree search (MCTS) has attracted great interest in recent years for planning under uncertainty. One of the key challenges is the trade-off between exploration and exploitation. To address this, we introduce a novel online planning algorithm for large POMDPs using Thompson sampling based MCTS that balances between cumulative and simple regrets. The proposed algorithm, Dirichlet-Dirichlet-NormalGamma based Partially Observable Monte-Carlo Planning (D2NG-POMCP), treats the accumulated reward of performing an action from a belief state in the MCTS search tree as a random variable following an unknown distribution with hidden parameters. A Bayesian method is used to model and infer the posterior distribution of these parameters by choosing the conjugate prior in the form of a combination of two Dirichlet distributions and one NormalGamma distribution. Thompson sampling is then used to guide action selection in the search tree. Experimental results confirm that our algorithm outperforms state-of-the-art approaches on several common benchmark problems.
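
As a rough illustration of the core idea only (not the authors' implementation, and omitting the Dirichlet components of the full D2NG-POMCP model), the plain-Python sketch below keeps a NormalGamma posterior over the mean return of each action at a single belief node and picks actions by Thompson sampling, i.e. by drawing one plausible mean return per action and acting greedily on the draws. All names (NormalGammaArm, thompson_select, the toy actions) are hypothetical.

import math
import random

class NormalGammaArm:
    """NormalGamma posterior over the mean and precision of one action's return."""

    def __init__(self, mu0=0.0, lam=0.01, alpha=1.0, beta=1.0):
        # (mu0, lam, alpha, beta) are the standard NormalGamma hyperparameters.
        self.mu0, self.lam, self.alpha, self.beta = mu0, lam, alpha, beta

    def update(self, x):
        """Conjugate update after observing one Monte-Carlo return x."""
        mu_n = (self.lam * self.mu0 + x) / (self.lam + 1.0)
        self.alpha += 0.5
        self.beta += 0.5 * self.lam * (x - self.mu0) ** 2 / (self.lam + 1.0)
        self.mu0, self.lam = mu_n, self.lam + 1.0

    def sample_mean(self):
        """Draw one plausible mean return from the posterior."""
        tau = random.gammavariate(self.alpha, 1.0 / self.beta)  # precision
        return random.gauss(self.mu0, 1.0 / math.sqrt(self.lam * tau))


def thompson_select(arms):
    """Thompson sampling: act greedily with respect to one posterior draw per action."""
    return max(arms, key=lambda a: arms[a].sample_mean())


# Toy usage at a single belief node with two hypothetical actions.
random.seed(0)
arms = {"listen": NormalGammaArm(), "open": NormalGammaArm()}
for _ in range(200):
    a = thompson_select(arms)
    # In a real planner this return would come from a simulated rollout.
    r = random.gauss(1.0 if a == "open" else 0.5, 1.0)
    arms[a].update(r)
print("preferred action:", thompson_select(arms))

Acting on posterior draws rather than posterior means is what yields the exploration-exploitation balance described in the abstract: actions whose returns are still uncertain are occasionally sampled high and therefore get tried.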

Aijun Bai, Feng Wu, Zongzhang Zhang, Xiaoping Chen. Thompson Sampling based Monte-Carlo Planning in POMDPs. In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS), pages 29-37, Portsmouth, United States, June 2014.
@inproceedings{BWZicaps14,
 address = {Portsmouth, United States},
 author = {Aijun Bai and Feng Wu and Zongzhang Zhang and Xiaoping Chen},
 booktitle = {Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS)},
 month = {June},
 pages = {29--37},
 title = {Thompson Sampling based Monte-Carlo Planning in POMDPs},
 year = {2014}
}