Abstract.
We present a novel simulation-based algorithm that extends the well-known policy iteration algorithm by combining multi-policy improvement with a distributed, simulation-based voting scheme for policy evaluation. The algorithm approximately solves Markov decision processes (MDPs) under the infinite-horizon discounted reward criterion, and we analyze its performance relative to the optimal value.
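The paper's precise algorithm is not reproduced on this page. As a rough orientation only, the following sketch combines the two ingredients the abstract names, under stated assumptions: policy evaluation by Monte Carlo simulation (here, plain averaging over independent rollouts, standing in for the distributed voting scheme) and a one-step improvement over the pointwise-best value of several base policies. The two-state MDP, its dynamics, and all parameter values are hypothetical, chosen only to make the sketch runnable.

```python
import random

# Illustrative sketch only -- NOT the paper's algorithm. The toy MDP below
# (two states, two actions, assumed dynamics) is a hypothetical example.
GAMMA = 0.9                 # discount factor (assumed)
STATES = [0, 1]
ACTIONS = [0, 1]

def step(state, action, rng):
    """Assumed toy dynamics: action 1 usually leads to state 1, which pays reward 1."""
    p_to_1 = 0.9 if action == 1 else 0.2
    nxt = 1 if rng.random() < p_to_1 else 0
    return nxt, (1.0 if nxt == 1 else 0.0)

def mc_value(policy, s0, rng, horizon=60, runs=60):
    """Monte Carlo estimate of the discounted return of `policy` from state s0."""
    total = 0.0
    for _ in range(runs):
        s, disc, ret = s0, 1.0, 0.0
        for _ in range(horizon):
            s, r = step(s, policy[s], rng)
            ret += disc * r
            disc *= GAMMA
        total += ret
    return total / runs

def multi_policy_improve(policies, rng, samples=60):
    """One greedy improvement step over the pointwise-best simulated value
    of several base policies: the result improves on every policy in the set."""
    best_v = {s: max(mc_value(p, s, rng) for p in policies) for s in STATES}
    improved = {}
    for s in STATES:
        def q(a):
            # one-step lookahead, again estimated by simulation
            est = 0.0
            for _ in range(samples):
                nxt, r = step(s, a, rng)
                est += r + GAMMA * best_v[nxt]
            return est / samples
        improved[s] = max(ACTIONS, key=q)
    return improved

rng = random.Random(0)
base_policies = [{0: 0, 1: 0}, {0: 1, 1: 0}]  # two arbitrary base policies
print(multi_policy_improve(base_policies, rng))
```

In this toy setup the improved policy picks action 1 in both states, since action 1 reliably reaches the rewarding state; averaging many independent rollouts is one simple stand-in for aggregating the votes of distributed simulators.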
Manuscript received: June 2003/Final version received: December 2003
Acknowledgement. The author is grateful to Peter Auer for his helpful discussions. This work was supported by the Sogang University Special Research Grants in 2003.
Cite this article
Chang, H. Multi-policy iteration with a distributed voting. Math Meth Oper Res 60, 299–310 (2004). https://doi.org/10.1007/s001860400362