Posterior Sampling for Monte Carlo Planning under Uncertainty


Monte Carlo tree search (MCTS) has recently attracted great interest in the domain of planning and learning under uncertainty. One of its fundamental challenges is the trade-off between exploration and exploitation. To address this problem, we propose to balance exploration and exploitation via posterior sampling in the contexts of Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Specifically, we treat the cumulative reward obtained by taking an action from a node in the MCTS search tree as a random variable following an unknown distribution. We parametrize this distribution by introducing the necessary hidden parameters and infer their posterior distribution in a Bayesian way. When expanding a node in the search tree, we use Thompson sampling to select an action according to its posterior probability of being optimal. Following this idea, we develop the Dirichlet-NormalGamma based Monte Carlo tree search (DNG-MCTS) and Dirichlet-Dirichlet-NormalGamma based partially observable Monte Carlo planning (D2NG-POMCP) algorithms for Monte Carlo planning in MDPs and POMDPs, respectively. Experimental results show that the proposed algorithms outperform state-of-the-art approaches on several benchmark problems.
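To make the action-selection step concrete, the following is a minimal Python sketch of Thompson sampling over NormalGamma posteriors in the spirit of DNG-MCTS. The class and function names, the prior hyperparameters, and the single-observation conjugate update are illustrative assumptions, not the paper's actual implementation.

import math
import random

class NormalGamma:
    # Conjugate NormalGamma posterior over the unknown mean and
    # precision of a Normally distributed return (hypothetical
    # prior hyperparameters; the paper's initialization may differ).
    def __init__(self, mu=0.0, lam=1.0, alpha=1.0, beta=1.0):
        self.mu, self.lam, self.alpha, self.beta = mu, lam, alpha, beta

    def update(self, x):
        # Standard single-observation conjugate update with return x;
        # beta and mu are updated before lam is incremented.
        self.beta += self.lam * (x - self.mu) ** 2 / (2.0 * (self.lam + 1.0))
        self.mu = (self.lam * self.mu + x) / (self.lam + 1.0)
        self.lam += 1.0
        self.alpha += 0.5

    def sample_mean(self):
        # Draw precision tau ~ Gamma(shape=alpha, rate=beta), then a
        # mean ~ Normal(mu, 1 / (lam * tau)); gammavariate takes a
        # scale parameter, hence 1.0 / beta.
        tau = random.gammavariate(self.alpha, 1.0 / self.beta)
        return random.gauss(self.mu, math.sqrt(1.0 / (self.lam * tau)))

def thompson_select(posteriors):
    # Pick the action whose sampled mean return is largest; each
    # action is chosen with exactly its posterior probability of
    # being optimal.
    return max(posteriors, key=lambda a: posteriors[a].sample_mean())

# Usage at a single search node (hypothetical action labels):
# posteriors = {a: NormalGamma() for a in ("left", "right")}
# a = thompson_select(posteriors)
# posteriors[a].update(simulated_return)  # back up the Monte Carlo return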

Aijun Bai, Feng Wu, Xiaoping Chen. Posterior Sampling for Monte Carlo Planning under Uncertainty. Applied Intelligence (Appl Intell), 48(12):4998-5018, 2018.
@article{BWCai18,
 author = {Aijun Bai and Feng Wu and Xiaoping Chen},
 doi = {10.1007/s10489-018-1248-5},
 journal = {Applied Intelligence (Appl Intell)},
 number = {12},
 pages = {4998--5018},
 title = {Posterior Sampling for Monte Carlo Planning under Uncertainty},
 volume = {48},
 year = {2018}
}