GPU-Based Molecular Dynamics of Turbulent Liquid Flows with OpenMM

  • Conference paper
  • In: Parallel Processing and Applied Mathematics (PPAM 2022)

Abstract

In this paper, we describe a computational framework for GPU-based molecular dynamics simulations of turbulent liquid flows. The framework is built on the open-source molecular dynamics library OpenMM. We present the implementation of a special type of open boundary conditions, together with a generic case of a turbulent flow of a Lennard-Jones liquid. We also compare the computational efficiency of OpenMM with that of LAMMPS, another popular MD library, and of other legacy MD programs used for studying turbulence.
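To illustrate the kind of setup the abstract refers to, the sketch below builds a generic periodic Lennard-Jones liquid through the OpenMM Python API and integrates it on a GPU. It is not the authors' code and does not include their open-boundary-condition extension; the particle count, box size, LJ parameters, temperature, time step, and choice of the CUDA platform are illustrative, argon-like assumptions only.

```python
# Minimal sketch (not the authors' code): a periodic Lennard-Jones liquid
# set up through the OpenMM Python API and integrated on a GPU.
import numpy as np
import openmm
import openmm.unit as unit

n_side = 16                                   # 16^3 = 4096 particles (assumed)
n_particles = n_side ** 3
box = 5.8                                     # cubic box edge in nm (assumed)
sigma = 0.34 * unit.nanometer                 # LJ sigma (argon-like, assumed)
epsilon = 1.0 * unit.kilojoule_per_mole       # LJ epsilon (argon-like, assumed)
mass = 39.9 * unit.amu

system = openmm.System()
system.setDefaultPeriodicBoxVectors(
    openmm.Vec3(box, 0, 0), openmm.Vec3(0, box, 0), openmm.Vec3(0, 0, box))

nb = openmm.NonbondedForce()
nb.setNonbondedMethod(openmm.NonbondedForce.CutoffPeriodic)
nb.setCutoffDistance(3.0 * sigma)
for _ in range(n_particles):
    system.addParticle(mass)
    nb.addParticle(0.0, sigma, epsilon)       # charge, sigma, epsilon
system.addForce(nb)

integrator = openmm.VerletIntegrator(2.0 * unit.femtoseconds)
platform = openmm.Platform.getPlatformByName('CUDA')   # run on the GPU
context = openmm.Context(system, integrator, platform)

# Start from a simple cubic lattice to avoid particle overlaps.
lattice = np.array([[i, j, k] for i in range(n_side)
                    for j in range(n_side)
                    for k in range(n_side)]) * (box / n_side)
context.setPositions(lattice * unit.nanometer)
context.setVelocitiesToTemperature(120 * unit.kelvin)  # assumed temperature
integrator.step(1000)
```

The open boundary conditions studied in the paper are implemented in the authors' fork of OpenMM rather than in the stock periodic setup shown here.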

Acknowledgment

This research was supported in part through computational resources of the Supercomputer Centre of JIHT RAS and HPC facilities at HSE University. The study was supported by the Russian Science Foundation (project no. 20-71-10127).

Author information

Correspondence to Daniil Pavlov.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Pavlov, D., Kolotinskii, D., Stegailov, V. (2023). GPU-Based Molecular Dynamics of Turbulent Liquid Flows with OpenMM. In: Wyrzykowski, R., Dongarra, J., Deelman, E., Karczewski, K. (eds) Parallel Processing and Applied Mathematics. PPAM 2022. Lecture Notes in Computer Science, vol 13826. Springer, Cham. https://doi.org/10.1007/978-3-031-30442-2_26

  • DOI: https://doi.org/10.1007/978-3-031-30442-2_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30441-5

  • Online ISBN: 978-3-031-30442-2

  • eBook Packages: Computer Science, Computer Science (R0)
