Hyperparameter autotuning of programs with HybridTuner

Annals of Mathematics and Artificial Intelligence


Abstract

Algorithms must often be tailored to a specific architecture and application in order to fully harness the capabilities of sophisticated computer architectures and computational implementations. However, the relationship between tuning parameters and performance is complicated and non-intuitive, with no explicit algebraic description. This is particularly true for programs such as GPU applications and compiler tuning, both of which involve discrete and often nonlinear interactions between parameters and performance. After assessing a few alternative algorithmic configurations, we present two hybrid derivative-free optimization (DFO) approaches for maximizing the performance of an algorithm. We demonstrate how our method solves problems with up to 50 hyperparameters. When compared to state-of-the-art autotuners, our autotuner (a) reduces the execution time of dense matrix multiplication by a factor of 1.4x, (b) identifies high-quality tuning parameters with only 5% of the computational effort required by other autotuners, and (c) can be applied to any computer architecture. Our implementations of Bandit DFO and Hybrid DFO are publicly available at https://github.com/bsauk/HybridTuner.
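
The tuning problem described above can be viewed as black-box optimization: each candidate configuration is evaluated by running the program and measuring its execution time, and a derivative-free search decides which configuration to try next. The sketch below is only an illustration of this setup, not the HybridTuner implementation; the parameter names and the toy runtime model are hypothetical stand-ins for timing a real executable.

```python
# Minimal sketch of black-box hyperparameter autotuning (illustration only).
# A derivative-free search treats the program as a black box: propose a
# configuration, measure the runtime, and keep the best configuration found.
# The "runtime" here is a toy stand-in so the example runs anywhere; in
# practice it would time an external program (e.g., via subprocess).
import random

# Hypothetical tuning parameters for a GPU kernel (names are illustrative).
SPACE = {
    "tile_size":   [8, 16, 32, 64],
    "unroll":      [1, 2, 4, 8],
    "use_texture": [0, 1],
}

def measure_runtime(cfg):
    """Toy stand-in for timing one run of the program with configuration cfg."""
    penalty = abs(cfg["tile_size"] - 32) / 32 + abs(cfg["unroll"] - 4) / 4
    return 1.0 + penalty + 0.5 * cfg["use_texture"] + random.uniform(0.0, 0.05)

def random_search(budget=50, seed=0):
    """Simplest derivative-free strategy: sample configurations, keep the best."""
    rng = random.Random(seed)
    best_cfg, best_time = None, float("inf")
    for _ in range(budget):
        cfg = {name: rng.choice(values) for name, values in SPACE.items()}
        t = measure_runtime(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

if __name__ == "__main__":
    cfg, t = random_search()
    print(f"best configuration: {cfg}  (simulated runtime {t:.3f}s)")
```

The paper's Bandit DFO and Hybrid DFO approaches use more sophisticated derivative-free strategies than this random sampling, but the interface to the program being tuned remains the same black-box evaluation.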

Data Availability

Data for the test problems were generated randomly.
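
As a concrete illustration (an assumption, since the exact generator is not described here), random inputs for a dense matrix-multiplication benchmark could be produced as follows:

```python
# Minimal sketch of generating random dense matrices for a GEMM benchmark
# (hypothetical; not necessarily the authors' exact data-generation procedure).
import numpy as np

def random_gemm_inputs(n, seed=0, dtype=np.float64):
    """Return two random n-by-n matrices with entries drawn uniformly from [0, 1)."""
    rng = np.random.default_rng(seed)
    return rng.random((n, n), dtype=dtype), rng.random((n, n), dtype=dtype)

if __name__ == "__main__":
    A, B = random_gemm_inputs(1024)
    C = A @ B  # the operation whose execution time an autotuner seeks to reduce
    print(C.shape)
```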

Code Availability

All code used for the computations in this paper can be obtained from https://github.com/bsauk/HybridTuner.

Funding

This work was conducted as part of the Institute for the Design of Advanced Energy Systems (IDAES) with support through the Simulation-Based Engineering, Crosscutting Research Program within the U.S. Department of Energy’s Office of Fossil Energy and Carbon Management. We also gratefully acknowledge NVIDIA Corporation’s donation of the NVIDIA Tesla K40 GPU used in this research.

Author information

Corresponding author

Correspondence to Nikolaos V. Sahinidis.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was conducted as part of the Institute for the Design of Advanced Energy Systems (IDAES) with funding from the Office of Fossil Energy, Cross-Cutting Research, U.S. Department of Energy. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC). We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of the NVIDIA Tesla K40 GPU used for this research.

About this article

Cite this article

Sauk, B., Sahinidis, N.V. Hyperparameter autotuning of programs with HybridTuner. Ann Math Artif Intell 91, 133–151 (2023). https://doi.org/10.1007/s10472-022-09793-3
