A Robust Statistical Framework for the Analysis of the Performances of Stochastic Optimization Algorithms Using the Principles of Severity

  • Conference paper
Applications of Evolutionary Computation (EvoApplications 2023)

Abstract

Meta-heuristic stochastic optimization algorithms are widely used to solve complex real-world problems, and numerous new nature-inspired meta-heuristics are proposed to address open challenges. Because these heuristics are stochastic, repeated runs on the same problem can yield different solutions. Hence, a stringent, in-depth statistical analysis of the performance of stochastic optimization algorithms is needed. The proposed severity framework enables researchers and practitioners to define an application-specific, meaningful performance evaluation metric that quantifies the magnitude of the achieved performance improvement, establishing not only statistical significance but also practical relevance.
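
The severity idea behind the framework can be made concrete with a small post-hoc computation in the sense of Mayo and Spanos. The sketch below is a minimal illustration under the simplifying normality assumption discussed in the notes; the function name `severity_of_improvement`, the margin `delta`, and the example data are our assumptions for illustration, not the paper's API or exact procedure.

```python
# Minimal sketch: severity of the claim "mean improvement > delta"
# for paired runs of two stochastic optimizers (illustrative only).
import numpy as np
from scipy import stats

def severity_of_improvement(diffs, delta):
    """Post-hoc severity for the claim 'true mean improvement > delta'.

    diffs : per-run performance differences (baseline minus candidate),
            so positive values mean the candidate improved.
    delta : practically relevant margin chosen by the practitioner.

    Assumes approximately normal differences (per the paper's note,
    normality is only for exposition; the concept generalizes).
    """
    n = len(diffs)
    xbar = np.mean(diffs)
    se = np.std(diffs, ddof=1) / np.sqrt(n)
    # Probability of a result less extreme than the observed mean
    # if the true improvement were exactly delta.
    return stats.norm.cdf((xbar - delta) / se)

# Example: 30 paired runs, claimed improvement of at least 0.5.
rng = np.random.default_rng(1)
diffs = rng.normal(loc=1.0, scale=1.5, size=30)
print(f"SEV(mu > 0.5) = {severity_of_improvement(diffs, 0.5):.3f}")
```

A severity close to 1 indicates the claimed margin of improvement has passed a severe test; a value near 0.5 or below warns that the data provide little support for the claim even if a significance test rejected the null.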

Notes

  1. Note that the proposed framework does not assume normality of the data. Normality is assumed here only to simplify the exposition of the concept; without loss of generality, the approach can be adapted to cases where the distribution is not known.
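
The note above states that the framework itself does not require normality. One way such an adaptation could look, when the distribution of run differences is unknown, is a recentered bootstrap estimate of the same severity quantity. The helper below is a hypothetical sketch of that idea, not the paper's prescribed procedure; it reuses `diffs` and `delta` from the earlier example.

```python
# Illustrative, distribution-free variant of the severity computation
# (hypothetical helper, not the paper's exact method).
import numpy as np

def bootstrap_severity(diffs, delta, n_boot=10_000, seed=0):
    """Estimate P(resampled mean <= observed mean | true mean = delta)
    by recentering the sample at the margin delta and resampling."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    xbar = diffs.mean()
    recentered = diffs - xbar + delta  # impose 'true improvement = delta'
    boot_means = rng.choice(recentered, size=(n_boot, diffs.size),
                            replace=True).mean(axis=1)
    return np.mean(boot_means <= xbar)
```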

Author information

Correspondence to Sowmya Chandrasekaran.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Chandrasekaran, S., Bartz-Beielstein, T. (2023). A Robust Statistical Framework for the Analysis of the Performances of Stochastic Optimization Algorithms Using the Principles of Severity. In: Correia, J., Smith, S., Qaddoura, R. (eds) Applications of Evolutionary Computation. EvoApplications 2023. Lecture Notes in Computer Science, vol 13989. Springer, Cham. https://doi.org/10.1007/978-3-031-30229-9_28

  • DOI: https://doi.org/10.1007/978-3-031-30229-9_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-30228-2

  • Online ISBN: 978-3-031-30229-9

  • eBook Packages: Computer Science, Computer Science (R0)
