Enhancing Mario Gaming Using Optimized Reinforcement Learning

  • Conference paper
Distributed Computing and Intelligent Technology (ICDCIT 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14501)


Abstract

In the realm of classic gaming, Mario has held a special place in players' hearts for generations. This study ventures into machine learning to elevate the Mario gaming experience. Our research employs state-of-the-art techniques, including the Proximal Policy Optimization (PPO) algorithm and Convolutional Neural Networks (CNNs), to infuse intelligence into Mario gameplay. By optimizing reinforcement learning, we aim to create an immersive and engaging experience for players. Beyond the technical aspects, we also examine game appeal, a pivotal component in capturing player engagement. Our approach combines the strengths of PPO, CNNs, and reinforcement learning to derive insights and methodologies for enhancing Mario games, and the resulting analysis provides actionable guidance for selecting the most suitable techniques for distinct facets of Mario games. The culmination is an enriched, captivating, and optimized gaming experience.
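At the heart of the approach described above is PPO's clipped surrogate objective, which limits how far each policy update can move from the previous policy. The sketch below is an illustrative NumPy rendering of that objective, not the authors' implementation; the function name `ppo_clip_loss` and the default clip range `eps=0.2` are assumptions for the example.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped PPO surrogate loss.

    ratio     -- pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage -- advantage estimate for each sampled action
    eps       -- clip range; updates beyond [1-eps, 1+eps] get no extra credit
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Take the pessimistic (minimum) term, then negate so that
    # minimizing this loss maximizes the surrogate objective.
    return -np.mean(np.minimum(unclipped, clipped))

# A ratio of 1.5 is clipped to 1.2 when the advantage is positive,
# so the update gains nothing from moving the policy further.
print(ppo_clip_loss(np.array([1.5]), np.array([1.0])))  # -1.2
```

In a full Mario agent this loss would be computed over minibatches of frames encoded by a CNN policy, as the paper describes.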



Author information


Corresponding author

Correspondence to Sumit Kumar Sah.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sah, S.K., Fidele, H. (2024). Enhancing Mario Gaming Using Optimized Reinforcement Learning. In: Devismes, S., Mandal, P.S., Saradhi, V.V., Prasad, B., Molla, A.R., Sharma, G. (eds) Distributed Computing and Intelligent Technology. ICDCIT 2024. Lecture Notes in Computer Science, vol 14501. Springer, Cham. https://doi.org/10.1007/978-3-031-50583-6_14


  • DOI: https://doi.org/10.1007/978-3-031-50583-6_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50582-9

  • Online ISBN: 978-3-031-50583-6

  • eBook Packages: Computer Science (R0)
