Part of the book series: Genetic and Evolutionary Computation (GEVO)

Abstract

This chapter focuses on an important aspect of learning the preference structure of the objectives inherent in multi- and many-objective optimization problem formulations. This involves identifying the non-essential (redundant) objectives and determining the relative importance of the essential objectives. Such an approach to knowledge discovery is based on the following rationale. Modeling an optimization problem, analytically or through experiments, involves considerable time and physical resources, possibly drawn from multiple disciplines working in conjunction with or in isolation from each other. Often, it can be intriguing for analysts or decision makers (DMs) to know whether the developed model represents the underlying problem in a minimal form or is marked by redundancy. Any redundancy among objectives, if revealed, could shed insightful light on the physics of the underlying problem, in addition to reducing its complexity and promising greater search efficiency for evolutionary multi- and many-objective optimization algorithms (EMOAs). Furthermore, the revelation of the relative preferences among the essential objectives that are inherent in the problem models could also be significantly useful, as highlighted below.


Notes

  1. For a given \(0 \le \delta \le 1\), there may be multiple subsets of objectives which ensure that the error associated with the omission of the remaining objectives does not exceed \(\delta \). Each such subset is referred to as a \(\delta \)-minimal objective subset. However, the \(\delta \)-minimal objective subset having the smallest size is referred to as the \(\delta \)-minimum objective subset.

  2. Here, dimensionality refers to the number of objectives that are essential to characterize the complete \(P\!F\).

  3. This contribution can be given by \(sc_i = \sum _{j=1}^{N_v} e_jf_{ij}^2\) (Sect. 4.2.2). However, since \(e_j\) and \(f_{ij}\) are less than one, this may lead to indiscriminately low values of \(sc_i\); hence, the adapted form in Eq. 4.6 is used.

  4. This is also true for DTLZ7, but it is precluded here due to the inconsistency between its formulation and its \(P\!F\), as presented in [14].
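The two quantities in the notes above can be sketched in code. The following is a minimal illustration, not the book's implementation: `contribution_scores` realizes the raw score \(sc_i = \sum_{j=1}^{N_v} e_j f_{ij}^2\) of Note 3 (with eigenvalues \(e_j\) and eigenvector components \(f_{ij}\) taken from a PCA of the objectives' correlation matrix, and the top \(N_v\) components chosen by a hypothetical variance threshold `theta`), while `delta_error` sketches the δ-error idea behind Note 1's \(\delta\)-minimal subsets, assuming minimization and a finite sample of solutions. The function names, the threshold, and the toy data are all illustrative assumptions.

```python
import numpy as np

def contribution_scores(F, theta=0.95):
    """Sketch of Note 3's raw score sc_i = sum_j e_j * f_ij^2.

    F: (n_solutions, n_objectives) matrix of objective values.
    The sum runs over the top N_v principal components, where N_v is
    the smallest number of components whose eigenvalue shares add up
    to at least theta (an assumed significance threshold).
    """
    R = np.corrcoef(F, rowvar=False)       # objectives as variables
    e, V = np.linalg.eigh(R)               # ascending eigenvalues
    order = np.argsort(e)[::-1]            # sort descending
    e, V = e[order], V[:, order]
    e = e / e.sum()                        # eigenvalue shares
    n_v = int(np.searchsorted(np.cumsum(e), theta) + 1)
    # sc_i = sum over the N_v significant components of e_j * f_ij^2
    return (V[:, :n_v] ** 2 * e[:n_v]).sum(axis=1)

def delta_error(F, keep):
    """Sketch of Note 1's criterion: the smallest delta such that any
    weak dominance holding on the kept objectives is violated by at
    most delta on the omitted ones (minimization assumed)."""
    n, m = F.shape
    omit = [i for i in range(m) if i not in keep]
    delta = 0.0
    for a in range(n):
        for b in range(n):
            if a != b and all(F[a, i] <= F[b, i] for i in keep):
                # a weakly dominates b on the kept objectives; check
                # how much the omitted objectives disagree
                for i in omit:
                    delta = max(delta, F[a, i] - F[b, i])
    return delta

# Toy data: the third objective duplicates the first, so {f1, f2}
# is a 0-minimal objective subset of {f1, f2, f3}.
rng = np.random.default_rng(0)
f1, f2 = rng.random(50), rng.random(50)
F = np.column_stack([f1, f2, f1])
sc = contribution_scores(F)
```

Here the duplicated objectives receive identical scores, and omitting the duplicate incurs zero δ-error; on real data one would instead compare the δ-errors of candidate subsets against a tolerable \(\delta\).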

References

  1. Beer, S.: Platform for Change. John Wiley & Sons, New York, NY, USA (1975)

  2. Brockhoff, D., Zitzler, E.: Are all objectives necessary? On dimensionality reduction in evolutionary multiobjective optimization. In: Runarsson, T.P., Beyer, H.G., Burke, E., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds.) Parallel Problem Solving from Nature - PPSN IX, pp. 533–542. Springer, Berlin, Heidelberg (2006)

  3. Brockhoff, D., Zitzler, E.: Objective reduction in evolutionary multiobjective optimization: Theory and applications. Evol. Comput. 17(2), 135–166 (2009). https://doi.org/10.1162/evco.2009.17.2.135

  4. Cohen, J.: Statistical Power Analysis for the Behavioral Sciences, 2nd edn. Erlbaum, Hillsdale, NJ (1988)

  5. Deb, K., Agrawal, R.B.: Simulated binary crossover for continuous search space. Complex Systems 9(2), 115–148 (1995)

  6. Deb, K., Goyal, M.: A combined genetic adaptive search (GeneAS) for engineering design. Computer Science and Informatics 26(4), 30–45 (1996)

  7. Deb, K., Kumar, A.: Interactive evolutionary multi-objective optimization and decision-making using reference direction method. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, GECCO '07, pp. 781–788. Association for Computing Machinery, New York, NY, USA (2007). https://doi.org/10.1145/1276958.1277116

  8. Deb, K., Kumar, A.: Light beam search based multi-objective optimization using evolutionary algorithms. In: 2007 IEEE Congress on Evolutionary Computation, pp. 2125–2132 (2007). https://doi.org/10.1109/CEC.2007.4424735

  9. Deb, K., Mohan, M., Mishra, S.: Evaluating the \(\epsilon \)-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evol. Comput. 13(4), 501–525 (2005). https://doi.org/10.1162/106365605774666895

  10. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002). https://doi.org/10.1109/4235.996017

  11. Deb, K., Saxena, D.K.: Searching for Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. In: IEEE Congress on Evolutionary Computation, pp. 3353–3360 (2006)

  12. Deb, K., Sinha, A., Korhonen, P.J., Wallenius, J.: An interactive evolutionary multiobjective optimization method based on progressively approximated value functions. IEEE Trans. Evol. Comput. 14(5), 723–739 (2010). https://doi.org/10.1109/TEVC.2010.2064323

  13. Deb, K., Sundar, J.: Reference point based multi-objective optimization using evolutionary algorithms. International Journal of Computational Intelligence Research (IJCIR) 2(6), 273–286 (2006)

  14. Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable test problems for evolutionary multiobjective optimization. In: Evolutionary Multiobjective Optimization: Theoretical Advances and Applications, pp. 105–145. Springer, London (2005). https://doi.org/10.1007/1-84628-137-7_6

  15. Ding, R., Dong, H.B., Yin, G.S., Sun, J., Yu, X.D., Feng, X.B.: An objective reduction method based on advanced clustering for many-objective optimization problems and its human-computer interaction visualization of Pareto front. Computers & Electrical Engineering 93, 107266 (2021). https://doi.org/10.1016/j.compeleceng.2021.107266

  16. Duro, J.A., Saxena, D.K., Deb, K., Zhang, Q.: Machine learning based decision support for many-objective optimization problems. Neurocomputing 146, 30–47 (2014). https://doi.org/10.1016/j.neucom.2014.06.076

  17. Gupta, R., Nanda, S.J.: Objective reduction in many-objective optimization with social spider algorithm for cloud detection in satellite images. Soft. Comput. 26, 2935–2958 (2022). https://doi.org/10.1007/s00500-021-06655-8

  18. Heisenberg, W.: Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Z. Phys. 43, 172–198 (1927)

  19. Huband, S., Hingston, P., Barone, L., While, L.: A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10(5), 477–506 (2006)

  20. Hughes, E.: Many-objective radar design software. Online (2007). Available: http://code.evanhughes.org

  21. Hughes, E.J.: MSOPS-II: A general-purpose many-objective optimiser. In: IEEE Congress on Evolutionary Computation, pp. 3944–3951. IEEE Press (2007)

  22. Hughes, E.J.: Radar waveform optimization as a many-objective application benchmark. In: Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T. (eds.) Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science, vol. 4403, pp. 700–714. Springer, Berlin, Heidelberg (2007)

  23. Jaimes, A.L., Coello, C.A.C., Chakraborty, D.: Objective reduction using a feature selection technique. In: Genetic and Evolutionary Computation Conference (GECCO), pp. 673–680 (2008)

  24. Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11(1), 625–653 (1999)

  25. Li, K., Lai, G., Yao, X.: Interactive evolutionary multiobjective optimization via learning to rank. IEEE Trans. Evol. Comput. 27(4), 749–763 (2023). https://doi.org/10.1109/TEVC.2023.3234269

  26. Luo, N., Li, X., Lin, Q.: Objective reduction for many-objective optimization problems using objective subspace extraction. Soft. Comput. 22, 1159–1173 (2018). https://doi.org/10.1007/s00500-017-2498-6

  27. Miller, G.A.: The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 63(2), 81–97 (1956)

  28. Musselman, K., Talavage, J.: A tradeoff cut approach to multiple objective optimization. Oper. Res. 28(6), 1424–1435 (1980). https://doi.org/10.1287/opre.28.6.1424

  29. Nguyen, X.H., Bui, T.L., Tran, C.T.: An improvement of clustering-based objective reduction method for many-objective optimization problems. Journal of Science and Technique 8 (2019). https://doi.org/10.56651/lqdtu.jst.v8.n02.65.ict

  30. Nisbett, R.E., Wilson, T.D.: Telling more than we can know: Verbal reports on mental processes. Psychol. Rev. 84(3), 231–259 (1977)

  31. Purshouse, R.C., Fleming, P.J.: Evolutionary many-objective optimization: An exploratory analysis. In: IEEE Congress on Evolutionary Computation, pp. 2066–2073 (2003)

  32. Saul, L.K., Weinberger, K.Q., Ham, J.H., Sha, F., Lee, D.D.: Spectral methods for dimensionality reduction. In: Chapelle, O., Schölkopf, B., Zien, A. (eds.) Semi-Supervised Learning. MIT Press, Cambridge, MA (2006)

  33. Saxena, D., Deb, K.: Non-linear dimensionality reduction procedures for certain large-dimensional multi-objective optimization problems: Employing correntropy and a novel maximum variance unfolding. In: Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T. (eds.) Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science, vol. 4403, pp. 772–787. Springer, Berlin, Heidelberg (2007)

  34. Saxena, D.K., Duro, J.A., Tiwari, A., Deb, K., Zhang, Q.: Objective reduction in many-objective optimization: Linear and nonlinear algorithms. IEEE Trans. Evol. Comput. 17(1), 77–99 (2013)

  35. Schölkopf, B., Smola, A., Müller, K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10(5), 1299–1319 (1998)

  36. Shlens, J.: A tutorial on principal component analysis. Tech. rep., Center for Neural Science, New York University (2009). Available at: http://www.snl.salk.edu/~shlens/pca.pdf (accessed May 2011)

  37. Singh, H.K., Isaacs, A., Ray, T.: A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems. IEEE Trans. Evol. Comput. 15(4), 539–556 (2011)

  38. Sinha, A., Deb, K., Korhonen, P., Wallenius, J.: Progressively interactive evolutionary multi-objective optimization method using generalized polynomial value functions. In: IEEE Congress on Evolutionary Computation, pp. 1–8. IEEE Press (2010)

  39. Slovic, P., Lichtenstein, S.: Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organ. Behav. Hum. Perform. 6(6), 649–744 (1971). https://doi.org/10.1016/0030-5073(71)90033-X

  40. Thiele, L., Miettinen, K., Korhonen, P.J., Molina, J.: A preference-based evolutionary algorithm for multi-objective optimization. Evol. Comput. 17(3), 411–436 (2009). https://doi.org/10.1162/evco.2009.17.3.411

  41. Wang, H., Yao, X.: Objective reduction based on nonlinear correlation information entropy. Soft. Comput. 20, 2393–2407 (2016). https://doi.org/10.1007/s00500-015-1648-y

  42. Weinberger, K.Q., Saul, L.K.: Unsupervised learning of image manifolds by semidefinite programming. Int. J. Comput. Vision 70(1), 77–90 (2006). https://doi.org/10.1007/s11263-005-4939-z

  43. Yu, P.L.: Habitual domains. Oper. Res. 39(6), 869–876 (1991)

  44. Zhang, Q., Li, H.: MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11(6), 712–731 (2007). https://doi.org/10.1109/TEVC.2007.892759

Author information

Corresponding author

Correspondence to Kalyanmoy Deb.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Saxena, D.K., Mittal, S., Deb, K., Goodman, E.D. (2024). Learning to Understand the Problem Structure. In: Machine Learning Assisted Evolutionary Multi- and Many- Objective Optimization. Genetic and Evolutionary Computation. Springer, Singapore. https://doi.org/10.1007/978-981-99-2096-9_4

Download citation

  • DOI: https://doi.org/10.1007/978-981-99-2096-9_4

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-2095-2

  • Online ISBN: 978-981-99-2096-9

  • eBook Packages: Computer Science, Computer Science (R0)
