
Multi-policy iteration with a distributed voting

Published in Mathematical Methods of Operations Research

Abstract

We present a novel simulation-based algorithm that extends the well-known policy iteration algorithm by combining multi-policy improvement with a distributed, simulation-based voting scheme for policy evaluation. The algorithm approximately solves Markov decision processes (MDPs) under the infinite-horizon discounted reward criterion, and we analyze its performance relative to the optimal value.
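The abstract describes the scheme only in one sentence, so the following is a minimal, hypothetical sketch of the kind of algorithm it suggests: policy iteration whose improvement step maximizes over a set of base policies, and whose evaluation step is replaced by independent simulation "voters" that cast majority votes for the best action. The toy MDP, the constants (GAMMA, HORIZON, N_VOTERS), and all function names below are illustrative assumptions, not the paper's actual algorithm or its analysis.

```python
# Sketch only: multi-policy improvement with voting-based, simulation-driven
# evaluation. Everything here (toy MDP, constants, names) is hypothetical.
import random
from collections import Counter

GAMMA = 0.9      # discount factor for the infinite-horizon criterion
HORIZON = 50     # rollout truncation; tail error is O(GAMMA**HORIZON)
N_VOTERS = 21    # independent simulation "voters" per state

STATES, ACTIONS = range(3), range(2)

def transition(s, a):
    """Stochastic toy dynamics: intended move succeeds with prob. 0.8."""
    ns = (s + a + 1) % 3 if random.random() < 0.8 else random.randrange(3)
    r = 1.0 if (s == 2 and a == 1) else 0.0
    return ns, r

def rollout_return(s, a, policy):
    """One sampled discounted return: take a in s, then follow policy."""
    ns, r = transition(s, a)
    total, disc = r, 1.0
    for _ in range(HORIZON):
        disc *= GAMMA
        ns, r = transition(ns, policy[ns])
        total += disc * r
    return total

def improved_policy(base_policies):
    """Multi-policy improvement with distributed voting: each voter picks
    the (action, base policy) pair with the best sampled return; the action
    winning the majority of votes becomes the new policy's choice."""
    new_policy = {}
    for s in STATES:
        votes = Counter()
        for _ in range(N_VOTERS):
            best_a, _ = max(
                ((a, pi) for a in ACTIONS for pi in base_policies),
                key=lambda ap: rollout_return(s, ap[0], ap[1]),
            )
            votes[best_a] += 1
        new_policy[s] = votes.most_common(1)[0][0]
    return new_policy

if __name__ == "__main__":
    random.seed(0)
    # Two base policies: always action 0, always action 1.
    pis = [{s: 0 for s in STATES}, {s: 1 for s in STATES}]
    pi = improved_policy(pis)
    for _ in range(5):  # iterate, adding the current policy to the candidates
        pi = improved_policy(pis + [pi])
    print("policy after voting-based iterations:", pi)
```

Majority voting over independent rollout estimates, rather than averaging them, is one plausible reading of "distributed voting": each voter needs only a single comparison per candidate, and the vote is robust to heavy-tailed return noise in any one simulation.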


Author information

Corresponding author

Correspondence to Hyeong Soo Chang.

Additional information

Manuscript received: June 2003 / Final version received: December 2003

Acknowledgement. The author is grateful to Peter Auer for his helpful discussions. This work was supported by the Sogang University Special Research Grants in 2003.


About this article

Cite this article

Chang, H. Multi-policy iteration with a distributed voting. Math Meth Oper Res 60, 299–310 (2004). https://doi.org/10.1007/s001860400362

