HPX with Spack and Singularity Containers: Evaluating Overheads for HPX/Kokkos Using an Astrophysics Application

  • Conference paper
  • In: Asynchronous Many-Task Systems and Applications (WAMTA 2024)

Abstract

Cloud computing for high-performance computing (HPC) resources is an emerging topic. Such services are of interest to researchers who care about reproducible computing, to software packages with complex installations, and to companies or researchers who need compute resources only occasionally or do not want to run and maintain a supercomputer of their own. The growing overlap between HPC and the cloud is exemplified by the fact that Microsoft Azure's Eagle cloud machine ranked third on the November 2023 Top500 list. For cloud services, the HPC application and its dependencies are installed in containers, e.g. Docker or Singularity, and these containers are executed on the physical hardware. Although containerization leverages the existing Linux kernel and should not impose overheads on the computation, machine-specific optimizations might be lost, particularly machine-specific installs of commonly used packages. In this paper, we use an astrophysics application based on HPX and Kokkos and measure containerization overheads on homogeneous resources, e.g. Supercomputer Fugaku, using CPUs only, and on heterogeneous resources, e.g. LSU's hybrid CPU and GPU system. We report on challenges in compiling, running, and using the containers, as well as on performance differences.
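The container-based workflow described in the abstract can be sketched with Spack's built-in container support, which can emit a Singularity definition file from an ordinary Spack environment. The spec list and image choices below are illustrative assumptions, not the authors' exact configuration; the paper's actual build recipes live in the OctoTigerBuildChain and octotiger-spack repositories listed in the Notes.

```yaml
# spack.yaml -- a Spack environment that `spack containerize` can turn
# into a Singularity definition file.
spack:
  specs:
    - octotiger          # hypothetical top-level spec; pulls in HPX and Kokkos
  container:
    format: singularity  # emit a Singularity .def instead of a Dockerfile
    images:
      os: ubuntu:22.04   # base image for build and final stages
      spack: develop     # Spack version used inside the container
```

Running `spack containerize > octotiger.def` in the environment directory, followed by `singularity build octotiger.sif octotiger.def`, yields the container image. Note that hardware-specific tuning (e.g. A64FX flags on Fugaku) must be requested explicitly in the specs, which is exactly where the machine-specific optimizations discussed above can be lost.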


Notes

  1. https://github.com/STEllAR-GROUP/OctoTigerBuildChain
  2. https://github.com/G-071/octotiger-spack
  3. https://spack.readthedocs.io/en/latest/containers.html
  4. https://spack.readthedocs.io/en/latest/features.html


Acknowledgments

Funded partly by NSF #229751: POSE: Phase 1: Constellation: A Pathway to Establish the STE||AR Open-Source Organization. Computational resources of the Supercomputer Fugaku provided by the RIKEN Center for Computational Science were used.

Author information

Correspondence to Patrick Diehl.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Diehl, P., Brandt, S.R., Daiß, G., Kaiser, H. (2024). HPX with Spack and Singularity Containers: Evaluating Overheads for HPX/Kokkos Using an Astrophysics Application. In: Diehl, P., Schuchart, J., Valero-Lara, P., Bosilca, G. (eds) Asynchronous Many-Task Systems and Applications. WAMTA 2024. Lecture Notes in Computer Science, vol 14626. Springer, Cham. https://doi.org/10.1007/978-3-031-61763-8_17


  • DOI: https://doi.org/10.1007/978-3-031-61763-8_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-61762-1

  • Online ISBN: 978-3-031-61763-8

  • eBook Packages: Computer Science (R0)
