Part of the book series: Genetic and Evolutionary Computation (GEVO)

Abstract

Many efficient evolutionary multi- and many-objective optimization algorithms, jointly referred to as EMâOAs, have been proposed in the last three decades. However, while solving complex real-world problems, EMâOAs that rely only on natural variation and selection operators may not produce an efficient search [14, 33, 45]. Therefore, it may be desirable or essential to enhance the capabilities of EMâOAs by introducing synergistic concepts from probability, statistics, machine learning (ML), etc. This chapter highlights some of the key studies that have laid the foundations for ML-based enhancements for EMâOAs and inspired further research that has been shared in subsequent chapters.

References

  1. Bandaru, S., Deb, K.: Automated discovery of vital knowledge from Pareto-optimal solutions: first results from engineering design. In: IEEE Congress on Evolutionary Computation, pp. 1–8 (2010). https://doi.org/10.1109/CEC.2010.5586501

  2. Bandaru, S., Deb, K.: Towards automating the discovery of certain innovative design principles through a clustering based optimization technique. Eng. Optim. 43(9), 911–941 (2011)

  3. Bhattacharjee, K.S., Isaacs, A., Ray, T.: Multi-objective optimization using an evolutionary algorithm embedded with multiple spatially distributed surrogates. In: Multi-objective Optimization, pp. 135–155. World Scientific (2017). https://doi.org/10.1142/9789813148239_0005

  4. Bora, T.C., Mariani, V.C., dos Santos Coelho, L.: Multi-objective optimization of the environmental-economic dispatch with reinforcement learning based on non-dominated sorting genetic algorithm. Appl. Therm. Eng. 146, 688–700 (2019). https://doi.org/10.1016/j.applthermaleng.2018.10.020

  5. Chankong, V., Haimes, Y.Y.: Multiobjective Decision Making Theory and Methodology. North-Holland, New York (1983)

  6. Chen, Y., Zhang, Y., Abraham, A.: Estimation of distribution algorithm for optimization of neural networks for intrusion detection system. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M. (eds.), Artificial Intelligence and Soft Computing—ICAISC 2006. ICAISC 2006. Lecture Notes in Computer Science, vol. 4029. Springer, Berlin, Heidelberg (2006)

  7. Cheng, R., Jin, Y., Narukawa, K., Sendhoff, B.: A multiobjective evolutionary algorithm using Gaussian process-based inverse modeling. IEEE Trans. Evol. Comput. 19(6), 838–856 (2015). https://doi.org/10.1109/TEVC.2015.2395073

  8. Chugh, T., Jin, Y., Miettinen, K., Hakanen, J., Sindhya, K.: A surrogate-assisted reference vector guided evolutionary algorithm for computationally expensive many-objective optimization. IEEE Trans. Evol. Comput. 22(1), 129–142 (2018)

  9. Coello, C.A.C., Lamont, G.B., Veldhuizen, D.A.V.: Evolutionary Algorithms for Solving Multi-objective Problems. Springer, New York (2007)

  10. Dai, C., Wang, Y., Ye, M., Xue, X., Liu, H.: An orthogonal evolutionary algorithm with learning automata for multiobjective optimization. IEEE Trans. Cybern. 46(12), 3306–3319 (2016). https://doi.org/10.1109/TCYB.2015.2503433

  11. Deb, K.: Multi-objective Optimization using Evolutionary Algorithms. Wiley, Chichester, UK (2001)

  12. Deb, K., Datta, R.: Hybrid evolutionary multi-objective optimization and analysis of machining operations. Eng. Optim. 44(6), 685–706 (2012). https://doi.org/10.1080/0305215X.2011.604316

  13. Deb, K., Hussein, R., Roy, P.C., Toscano-Pulido, G.: A taxonomy for metamodeling frameworks for evolutionary multiobjective optimization. IEEE Trans. Evol. Comput. 23(1), 104–116 (2019). https://doi.org/10.1109/TEVC.2018.2828091

  14. Deb, K., Myburgh, C.: A population-based fast algorithm for a billion-dimensional resource allocation problem with integer variables. European J. Oper. Res. 261(2), 460–474 (2017). https://doi.org/10.1016/j.ejor.2017.02.015

  15. Deb, K., Srinivasan, A.: Innovization: innovating design principles through optimization. In: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 1629–1636. Association for Computing Machinery (ACM), New York, NY, USA (2006)

  16. Deb, K., Srinivasan, A.: Innovization: discovery of innovative design principles through multiobjective evolutionary optimization. In: Knowles, J., Corne, D., Deb, K. (eds.) Multiobjective Problem Solving from Nature: From Concepts to Applications, pp. 243–262. Springer, Berlin (2008)

  17. Du, Y., Li, J.Q., Luo, C., Meng, L.L.: A hybrid estimation of distribution algorithm for distributed flexible job shop scheduling with crane transportations. Swarm Evol. Comput. 62, 100861 (2021). https://doi.org/10.1016/j.swevo.2021.100861

  18. Dutta, S., Gandomi, A.H.: Surrogate model-driven evolutionary algorithms: theory and applications. In: Evolution in Action: Past, Present and Future: A Festschrift in Honor of Erik D. Goodman, pp. 435–451. Springer International Publishing, Cham (2020)

  19. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin, Heidelberg (2005)

  20. El-Beltagy, M.A., Nair, P.B., Keane, A.J.: Metamodelling techniques for evolutionary optimization of computationally expensive problems: promises and limitations. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-1999), pp. 196–203. Morgan Kaufmann, San Mateo, CA (1999)

  21. Emmerich, M., Giannakoglou, K.C., Naujoks, B.: Single and multiobjective evolutionary optimization assisted by Gaussian random field metamodels. IEEE Trans. Evol. Comput. 10(4), 421–439 (2006)

  22. Gaur, A., Deb, K.: Adaptive use of innovization principles for a faster convergence of evolutionary multi-objective optimization algorithms. In: Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion, GECCO ’16 Companion, pp. 75–76. Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2908961.2909019

  23. Gaur, A., Deb, K.: Effect of size and order of variables in rules for multi-objective repair-based innovization procedure. In: 2017 IEEE Congress on Evolutionary Computation (CEC), pp. 2177–2184 (2017). https://doi.org/10.1109/CEC.2017.7969568

  24. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, USA (1989)

  25. He, C., Huang, S., Cheng, R., Tan, K.C., Jin, Y.: Evolutionary multiobjective optimization driven by generative adversarial networks (GANs). IEEE Trans. Cybern. 51(6), 3129–3142 (2021). https://doi.org/10.1109/TCYB.2020.2985081

  26. Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI (1975)

  27. Hussein, R., Roy, P., Deb, K.: Switching between metamodeling frameworks for efficient multi-objective optimization. In: IEEE Symposium Series on Computational Intelligence (SSCI-2018), pp. 1–8. IEEE Press, Piscataway, NJ (2018)

  28. Inapakurthi, R.K., Mitra, K.: Optimal surrogate building using SVR for an industrial grinding process. Mater. Manuf. Proc. 37(15), 1701–1707 (2022). https://doi.org/10.1080/10426914.2022.2039699

  29. Jahn, J.: Vector Optimization. Springer, Berlin, Germany (2004)

  30. Jones, D.R.: A taxonomy of global optimization methods based on response surfaces. J. Glob. Optim. 21(4), 345–383 (2001)

  31. Koçer, H.G., Uymaz, S.A.: A novel local search method for LSGO with golden ratio and dynamic search step. Soft Comput. 25, 2115–2130 (2021). https://doi.org/10.1007/s00500-020-05284-x

  32. Li, F., Gao, L., Shen, W., Cai, X., Huang, S.: A surrogate-assisted offspring generation method for expensive multi-objective optimization problems. In: 2020 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8 (2020). https://doi.org/10.1109/CEC48606.2020.9185691

  33. Li, L., Chen, H., et al.: A robust hybrid approach based on estimation of distribution algorithm and support vector machine for hunting candidate disease genes. Sci. World J. 2013, Article ID 393570 (2013). https://doi.org/10.1155/2013/393570

  34. Lian, Y., Liou, M.S.: Multiobjective optimization using coupled response surface model and evolutionary algorithm. AIAA J. 43(6) (2005)

  35. Lima, C., Pelikan, M., Lobo, F., Goldberg, D.: Loopy substructural local search for the Bayesian optimization algorithm. In: Stützle, T., Birattari, M., Hoos, H.H. (eds.), Engineering Stochastic Local Search Algorithms. Designing, Implementing and Analyzing Effective Heuristics. SLS 2009. Lecture Notes in Computer Science, vol. 5752. Springer, Berlin, Heidelberg (2009)

  36. Lima, C., Pelikan, M., Sastry, K., Butz, M., Goldberg, D., Lobo, F.: Substructural neighborhoods for local search in the Bayesian optimization algorithm. In: Runarsson, T.P., Beyer, H.G., Burke, E., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds.), Parallel Problem Solving from Nature—PPSN IX. Lecture Notes in Computer Science, vol. 4193. Springer, Berlin, Heidelberg (2006)

  37. Mallipeddi, R., Lee, M.: Surrogate model assisted ensemble differential evolution algorithm. In: 2012 IEEE Congress on Evolutionary Computation, pp. 1–8 (2012). https://doi.org/10.1109/CEC.2012.6256479

  38. Martí, L., García, J., Berlanga, A., Molina, J.M.: Introducing MONEDA: scalable multiobjective optimization with a neural estimation of distribution algorithm. In: Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, GECCO ’08, pp. 689–696. Association for Computing Machinery, New York, NY, USA (2008). https://doi.org/10.1145/1389095.1389230

  39. Martins, M.S.R., Yafrani, M.E., Delgado, M., Lüders, R., Santana, R., Siqueira, H.V., Akcay, H.G., Ahiod, B.: Analysis of Bayesian network learning techniques for a hybrid multi-objective Bayesian estimation of distribution algorithm: a case study on MNK landscape. J. Heuristics 27, 549–573 (2021). https://doi.org/10.1007/s10732-021-09469-x

  40. Messac, A., Mattson, C.A.: Normal constraint method with guarantee of even representation of complete Pareto frontier. AIAA J. 42(10), 2101–2111 (2004). https://doi.org/10.2514/1.8977

  41. Miettinen, K.: Nonlinear Multiobjective Optimization. Kluwer, Boston (1999)

  42. Mittal, S., Saxena, D.K., Deb, K., Goodman, E.D.: A learning-based innovized progress operator for faster convergence in evolutionary multi-objective optimization. ACM Trans. Evol. Learn. Optim. 2(1) (2021). https://doi.org/10.1145/3474059

  43. Mühlenbein, H., Paaß, G.: From recombination of genes to the estimation of distributions I. Binary parameters. In: Proceedings of the 4th International Conference on Parallel Problem Solving from Nature, pp. 178–187. London, UK (1996)

  44. Mullur, A.A., Messac, A.: Metamodeling using extended radial basis functions: a comparative approach. Eng. Comput. 21, 203–217 (2006). https://doi.org/10.1007/s00366-005-0005-7

  45. Pelikan, M., Goldberg, D., Lobo, F.: A survey of optimization by building and using probabilistic models. Comput. Optim. Appl. 21(1), 5–20 (2002). https://doi.org/10.1023/A:1013500812258

  46. Pelikan, M., Goldberg, D.E., Cantú-Paz, E.: BOA: The Bayesian optimization algorithm. In: Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation—Volume 1, GECCO’99, pp. 525–532. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1999)

  47. Ren, Q., Luo, F., Ding, W., Lu, H.: An improved NSGAII algorithm based on site-directed mutagenesis method for multi-objective optimization. In: 2019 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 176–181 (2019). https://doi.org/10.1109/SSCI44817.2019.9002847

  48. Sinha, A., Bedi, S., Deb, K.: Bilevel optimization based on kriging approximations of lower level optimal value function. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8 (2018). https://doi.org/10.1109/CEC.2018.8477763

  49. Wang, H., Jin, Y.: A random forest-assisted evolutionary algorithm for data-driven constrained multiobjective combinatorial optimization of trauma systems. IEEE Trans. Cybern. 50(2), 536–549 (2020). https://doi.org/10.1109/TCYB.2018.2869674

  50. Wang, R., Dong, N.J., Gong, D.W., Zhou, Z.B., Cheng, S., Wu, G.H., Wang, L.: PCA-assisted reproduction for continuous multi-objective optimization with complicated Pareto optimal set. Swarm Evol. Comput. 60, 100795 (2021). https://doi.org/10.1016/j.swevo.2020.100795

  51. Xu, Q., Zhang, C., Zhang, L.: A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization. Sci. World J. 2014, Article ID 597278 (2014). https://doi.org/10.1155/2014/597278

  52. Zhang, Q., Zhou, A., Jin, Y.: RM-MEDA: a regularity model-based multiobjective estimation of distribution algorithm. IEEE Trans. Evol. Comput. 12(1), 41–63 (2008). https://doi.org/10.1109/TEVC.2007.894202

  53. Zhao, H., Zhang, C.: An online-learning-based evolutionary many-objective algorithm. Inf. Sci. 509, 1–21 (2020). https://doi.org/10.1016/j.ins.2019.08.069

  54. Zhou, A., Jin, Y., Zhang, Q., Sendhoff, B., Tsang, E.: Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion. In: 2006 IEEE International Conference on Evolutionary Computation, pp. 892–899 (2006). https://doi.org/10.1109/CEC.2006.1688406

  55. Zhou, A., Zhang, Q., Jin, Y., Tsang, E., Okabe, T.: A model-based evolutionary algorithm for bi-objective optimization. In: 2005 IEEE Congress on Evolutionary Computation, vol. 3, pp. 2568–2575 (2005). https://doi.org/10.1109/CEC.2005.1555016

  56. Zhou, Z., Wang, Z., Pang, T., Wei, J., Chen, Z.: A competition-cooperation evolutionary algorithm with bidirectional multi-population local search and local hypervolume-based strategy for multi-objective optimization. In: 2021 IEEE Congress on Evolutionary Computation (CEC), pp. 153–160 (2021). https://doi.org/10.1109/CEC45853.2021.9504689

Author information

Correspondence to Kalyanmoy Deb.

Appendices

The following sections discuss examples of the manual innovization task, the automated innovization task, and the innovized repair operator.

3.6 Examples of Manual Innovization Task

The basic working of the manual innovization procedure is illustrated here through a three-variable, two-objective truss design problem, which was originally studied using the \(\epsilon\)-constraint approach [5, 41] and later using an evolutionary approach [11]. In this problem, the truss (Fig. 3.3) must carry a specified load without incurring an elastic failure. The two conflicting design objectives are to (i) minimize the total volume of the truss members and (ii) minimize the maximum stress developed in the two members (AC and BC) due to the application of the 100 kN load. The three decision variables are the cross-sectional areas of AC and BC (\(x_1\) and \(x_2\), respectively), measured in square meters, and the vertical distance between A (or B) and C (y), measured in meters. The optimization problem is formulated as follows:

$$\begin{aligned} \begin{array}{rl} \text{Minimize} &{} f_1(\vec{x},y) = x_1\sqrt{16+y^2} + x_2\sqrt{1+y^2},\\ \text{Minimize} &{} f_2(\vec{x},y) = \max(\sigma_{AC},\sigma_{BC}),\\ \text{Subject to} &{} \max(\sigma_{AC},\sigma_{BC}) \le S_{\max},\\ &{} 0\le x_1, x_2 \le A_{\max},\\ &{} 1\le y \le 3. \end{array} \end{aligned}$$
(3.8)
Fig. 3.3 The design of the two-bar truss (taken from [16])

Using the dimensions and loading specified in Fig. 3.3, it can be observed that member AC carries a load of \(20\sqrt{16+y^2}/y\) kN and member BC carries a load of \(80\sqrt{1+y^2}/y\) kN. The stress values are calculated as follows:

$$\begin{aligned} \sigma _{AC} = \frac{20\sqrt{16+y^2}}{yx_1}, \quad \sigma _{BC} = \frac{80\sqrt{1+y^2}}{yx_2}. \end{aligned}$$
(3.9)

Here, the stress values and the cross-sectional areas are limited to \(S_{\max}=10^5\) kPa and \(A_{\max}=0.01\) m\(^2\), respectively. All three variables are treated as real-valued. Simulated binary crossover (SBX) with \(\eta_c=10\) and the polynomial mutation operator with \(\eta_m=50\) are used. All constraints are handled using the constraint-tournament approach [11]. Figure 3.4 shows the final set of non-dominated solutions obtained by a run of NSGA-II. Although the trade-off between the two objectives is evident in Fig. 3.4, these solutions are further analyzed, using two different studies, to gain more confidence in their Pareto-optimality. First, a single-objective real-parameter genetic algorithm (RGA) is used to find the optimum of each individual objective function, subject to the same constraints and variable bounds. Figure 3.4 marks these two solutions (one per objective) as 1-obj solutions. It is evident that the front obtained using NSGA-II extends to these two extreme solutions. Next, the normal constraint method (NCM) [40] is used with different starting points on the line joining the two extreme solutions. The solutions thus obtained, one at the end of each NCM run, are also shown in Fig. 3.4. Since these solutions lie on the front obtained using NSGA-II, it is confirmed that the non-dominated solutions obtained using NSGA-II are close to the true Pareto front (\(P\!F\)).

Fig. 3.4 NSGA-II solutions obtained for the two-bar truss problem (taken from [16])
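To make the above setup concrete, the following minimal sketch implements the truss problem of Eq. (3.8) and runs NSGA-II with the stated SBX (\(\eta_c=10\)) and polynomial mutation (\(\eta_m=50\)) settings, using the pymoo library. Note that pymoo was not used in the original study, and the population size and generation count below are illustrative assumptions.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.operators.crossover.sbx import SBX
from pymoo.operators.mutation.pm import PM
from pymoo.optimize import minimize

S_MAX, A_MAX = 1e5, 0.01  # stress limit (kPa) and area bound (m^2)

class TwoBarTruss(ElementwiseProblem):
    """Two-bar truss problem of Eq. (3.8): minimize (volume, max stress)."""

    def __init__(self):
        # lower area bounds nudged off zero (original bound is 0) to
        # avoid division by zero in the stress expressions
        super().__init__(n_var=3, n_obj=2, n_ieq_constr=1,
                         xl=np.array([1e-6, 1e-6, 1.0]),
                         xu=np.array([A_MAX, A_MAX, 3.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        x1, x2, y = x
        volume = x1 * np.sqrt(16 + y**2) + x2 * np.sqrt(1 + y**2)
        sigma_ac = 20 * np.sqrt(16 + y**2) / (y * x1)  # Eq. (3.9)
        sigma_bc = 80 * np.sqrt(1 + y**2) / (y * x2)
        stress = max(sigma_ac, sigma_bc)
        out["F"] = [volume, stress]
        out["G"] = [stress - S_MAX]  # feasible when <= 0

algorithm = NSGA2(pop_size=100, crossover=SBX(eta=10), mutation=PM(eta=50))
res = minimize(TwoBarTruss(), algorithm, ("n_gen", 250), seed=1, verbose=False)
print(res.F[:5])  # a few non-dominated (volume, stress) pairs
```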

3.6.1 Theoretical Innovized Principles and Manual Innovization Results

Before applying the manual innovization procedure to the solutions obtained using NSGA-II, an exact analysis of this problem is presented to identify the true \(P\!F\) and the underlying (theoretical) innovized principles, if any. The problem, although mathematically simple, is a typical optimization problem that has two resource terms in each objective, involving the variables \(x_1\) and \(x_2\), interlinked through the third variable y. For such problems, the optimum occurs when identical resource allocation is made between the two terms in both the objective and constraint functions, as shown below:

$$\begin{aligned} x_1\sqrt{16+y^2} = x_2\sqrt{1+y^2}, \quad \text{ and } \quad \frac{20\sqrt{16+y^2}}{yx_1} = \frac{80\sqrt{1+y^2}}{yx_2}. \end{aligned}$$
(3.10)
Fig. 3.5 Variation of \(x_1\) and \(x_2\) for the truss design problem (taken from [16])

Thus, every optimal solution is expected to satisfy both of the above equations, resulting in the following innovized rules:

$$\begin{aligned} \frac{x_1}{x_2} = 0.5, \quad \text{ and } \quad y = 2. \end{aligned}$$
(3.11)
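These rules can also be checked symbolically. The short sketch below (an illustrative verification, not part of the original study) solves the two conditions of Eq. (3.10) with sympy:

```python
import sympy as sp

# decision variables; positivity reflects the physical bounds
x1, x2, y = sp.symbols("x1 x2 y", positive=True)

# equal resource allocation between the two volume terms (Eq. 3.10, left)
eq_volume = sp.Eq(x1 * sp.sqrt(16 + y**2), x2 * sp.sqrt(1 + y**2))
# equal stresses in members AC and BC (Eq. 3.10, right)
eq_stress = sp.Eq(20 * sp.sqrt(16 + y**2) / (y * x1),
                  80 * sp.sqrt(1 + y**2) / (y * x2))

# expect x1 = x2/2 (i.e., x1/x2 = 0.5) and y = 2, matching Eq. (3.11)
print(sp.solve([eq_volume, eq_stress], [x1, y], dict=True))
```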

Substituting \(y=2\) and \(x_1=x_2/2\) into the expression for the first objective (volume) leads to \(x_2=V/(2\sqrt{5})\) m\(^2\), where V is the volume of the structure (in m\(^3\)). Similarly, substituting these values into the objective functions \(V=f_1\) and \(S=f_2\) leads to \(SV=400\), an inverse relationship between the objectives. Thus, the solutions on the true \(P\!F\) are given in terms of the volume V, as follows:

$$x_1^{*} = \frac{V^{*}}{4\sqrt{5}}\ \text{m}^2,\quad x_2^{*} = \frac{V^{*}}{2\sqrt{5}}\ \text{m}^2,\quad y^{*}=2\ \text{m},\quad S^{*}=\frac{400}{V^{*}}\ \text{kPa}.$$
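For completeness, the substitution steps behind these relationships are, with \(y=2\) and \(x_1=x_2/2\):

$$V = x_1\sqrt{16+y^2} + x_2\sqrt{1+y^2} = 2\sqrt{5}\,x_1 + \sqrt{5}\,x_2 = 2\sqrt{5}\,x_2, \quad \text{so} \quad x_2 = \frac{V}{2\sqrt{5}},\ \ x_1 = \frac{V}{4\sqrt{5}},$$

$$S = \sigma_{AC} = \frac{20\sqrt{16+4}}{2\,x_1} = \frac{20\sqrt{5}}{x_1} = \frac{20\sqrt{5}\cdot 4\sqrt{5}}{V} = \frac{400}{V}.$$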

When the variable \(x_2\) reaches its upper bound, that is, at the transition point T shown in Fig. 3.5, \(V_T=0.04472\) m\(^3\) and \(S_T=8944.26\) kPa, since \(x_2\) cannot be increased any further. The inset plot in Fig. 3.4 (drawn with logarithmic scales on both axes) shows this interesting aspect of the obtained front. There are two distinct behaviors around the transition point T marked in the figure: (i) one that stretches from the smallest-volume solution to a volume of about 0.04478 m\(^3\) (point T), and (ii) another that stretches from this transition point to the smallest-stress solution.

The extreme solutions and this intermediate solution, obtained by NSGA-II, are tabulated in Table 3.1.

An investigation of the values of the decision variables reveals the following:

  1. The inset plot in Fig. 3.4 reveals that, for optimal structures, the maximum stress (S) developed is inversely proportional to the volume (V) of the structure, that is, \(SV=\textrm{constant}\), as predicted above. When a straight line is fitted through the logarithms of the two objective values of the NSGA-II solutions, the relationship \(SV=402.2\) is found, which is close to the theoretical relationship computed above (from the true \(P\!F\)).

  2. The inset plot also reveals that the transition occurs at \(V=0.044779\) m\(^3\), which is also close to the exact theoretical value computed above.

  3. To achieve an optimal solution with a lower maximum stress (and larger volume), both cross-sectional areas (AC and BC) should increase linearly with volume, as shown in Fig. 3.5. The figure also plots the mathematical relationships (\(x_1\) and \(x_2\) versus V) obtained earlier with solid lines, which can barely be seen, as the solutions obtained using NSGA-II fall on top of these lines.

  4. A further investigation reveals that the ratio between these two cross-sectional areas is almost 1:2, and the vertical distance (y) takes a value close to 2 for all solutions.

  5. Figure 3.6 reveals that the stress values in the two members (AC and BC) are identical for any Pareto-optimal solution (see also Fig. 3.7).

Table 3.1 Two extreme solutions and an interesting intermediate solution (T) for the two-bar truss design problem (taken from [16])
Fig. 3.6 Variation of stresses in AC and BC of the two-bar truss problem (taken from [16])

Fig. 3.7 Variation of y for the two-bar truss design problem (taken from [16])

The innovized rules illustrated above are interesting properties of the original optimization problem that may not be intuitive to the designer, yet they can be explained from the mathematical formulation described above. Although these optimality conditions can be derived mathematically from the problem formulation in Eq. 3.8 for this simple problem, deriving such conditions exactly may often be tedious or impractical for large and complex problems. Applying a numerical optimization procedure and then investigating the obtained optimal solutions has the potential to reveal such important innovative design principles.

3.7 Examples of Automated Innovization Task

The same two-bar truss problem, discussed above, is chosen to illustrate the working of the AutoInn procedure. For the solutions obtained using NSGA-II, the AutoInn procedure finds four rules, each common to \(87\%\) to \(92\%\) of the non-dominated dataset:

$$\begin{aligned} SV = 400.770, \quad \frac{x_1}{V} = 0.111, \quad \frac{x_2}{V} = 0.224, \quad \frac{x_2}{x_1} = 1.984. \end{aligned}$$
(3.12)

Figure 3.8 shows the relevant non-dominated solutions obtained using NSGA-II, with a few unclustered solutions marked as red points. Figure 3.9 shows the distribution of \(c_k\) values for one of the obtained rules, \(V^{-0.997}x_1^{1.000}=c\). It is clear that the V and \(x_1\) values of the majority (\(87\%\)) of non-dominated solutions satisfy the rule. The clustering algorithm built into the AutoInn procedure found three clusters with slightly different \(c_k\) values, whereas the non-dominated solutions that do not satisfy the rule have very different \(c_k\) values. For ease of reading, the \(c_k\) values are sorted from low to high in the figure.

Fig. 3.8 Pareto front for the two-bar truss design problem; red points mark a few unclustered points for the \(V^{-0.997}x_1^{1.000}=c\) rule

Fig. 3.9 Distribution of \(c_k\) values for the rule \(V^{-0.997}x_1^{1.000}=c\), found common to 87% of the non-dominated dataset; unclustered non-dominated data are marked with an ‘X’ (taken from [2])

The respective distributions of the \(c_k\) values for two other rules are shown in Figs. 3.10 and 3.11.

Fig. 3.10 Distribution of c values for the rule \(S^{1.0000}V^{0.9999}=c\), found common to 92% of the non-dominated dataset

Fig. 3.11 Cluster plot for the rule \(V^{-0.9999}x_2^{1.0000}=c\), found common to 88% of the non-dominated dataset (taken from [2])

Although the AutoInn procedure finds multiple clusters, their respective \(c_k\) values are close to each other, whereas the c values of the unclustered points differ significantly.
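The flavor of such rule extraction can be conveyed with a simple log-linear fit. The sketch below is an illustrative simplification of the AutoInn idea (not its actual implementation): it recovers an exponent b and constant c such that \(x_1 \approx c\,V^{b}\) on a non-dominated set, and reports the fraction of solutions satisfying the rule within a tolerance; the 5% tolerance and the synthetic data are assumptions for illustration.

```python
import numpy as np

def fit_power_rule(u, v, rel_tol=0.05):
    """Fit v = c * u**b by least squares in log space, and report the
    fraction of points whose per-solution constant c_k = v / u**b lies
    within rel_tol of c (i.e., 'satisfies' the rule)."""
    b, log_c = np.polyfit(np.log(u), np.log(v), 1)
    c = np.exp(log_c)
    c_k = v / u**b
    share = np.mean(np.abs(c_k - c) / c < rel_tol)
    return b, c, share

# illustrative usage on synthetic near-Pareto data following x1 = V/(4*sqrt(5))
rng = np.random.default_rng(1)
V = np.linspace(0.005, 0.044, 100)
x1 = V / (4 * np.sqrt(5)) * (1 + 0.002 * rng.standard_normal(V.size))
b, c, share = fit_power_rule(V, x1)
print(f"x1 ~ {c:.4f} * V^{b:.3f}, satisfied by {share:.0%} of points")
```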

3.8 Examples of Innovized Repair Operator

Figure 3.12 shows the median generational distance (GD) and inverted generational distance (IGD) metrics [11] for the two-bar truss design problem. Notably, GD is an indicator of convergence, while IGD jointly indicates convergence and diversity. The plots in Fig. 3.12 reveal that NSGA-II-IR with repair preference given to short rules (the SN repair strategy) performs much better than the no-repair strategy (NI, i.e., the base NSGA-II) in terms of GD (the smaller, the better). However, in terms of the IGD metric, the NI strategy performs marginally better. This is expected, since NSGA-II with the SN repair strategy focuses more on improving convergence than on maintaining diversity.

Fig. 3.12 Median GD and IGD results for the two-bar truss design problem over 30 runs
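As a rough illustration of these two indicators (not the book's implementation, and omitting the objective normalization that is typically applied first), GD and IGD can be computed as average nearest-neighbor distances between an obtained set A and a reference front Z, taken in opposite directions:

```python
import numpy as np

def gd(A, Z):
    """Generational distance: average distance from each obtained point
    in A to its nearest point on the reference front Z (convergence)."""
    d = np.linalg.norm(A[:, None, :] - Z[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(A, Z):
    """Inverted generational distance: average distance from each reference
    point in Z to its nearest obtained point (convergence + diversity)."""
    return gd(Z, A)

# illustrative usage with the theoretical front S = 400/V as reference
V = np.linspace(0.005, 0.044, 200)
Z = np.column_stack([V, 400.0 / V])  # reference (volume, stress) pairs
A = Z[::4] * 1.01                    # a coarser, slightly shifted obtained set
print(f"GD = {gd(A, Z):.4f}, IGD = {igd(A, Z):.4f}")
```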

The rules extracted at the end of the NSGA-II run with the SN strategy, provided below, closely match the theoretical properties of the variables stated in Eq. 3.11 (here, \(x_3\) denotes the third variable, y):

$$\begin{aligned} x_1^{-1.008} x_2 = 2.0750, \quad x_3 = 1.9449 \pm 0.0674. \end{aligned}$$
(3.13)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Saxena, D.K., Mittal, S., Deb, K., Goodman, E.D. (2024). Foundational Studies on ML-Based Enhancements. In: Machine Learning Assisted Evolutionary Multi- and Many-Objective Optimization. Genetic and Evolutionary Computation. Springer, Singapore. https://doi.org/10.1007/978-981-99-2096-9_3

  • DOI: https://doi.org/10.1007/978-981-99-2096-9_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-2095-2

  • Online ISBN: 978-981-99-2096-9

  • eBook Packages: Computer Science (R0)
