In many real-world situations, we have to deal with multiple objectives simultaneously in order to make appropriate decisions. The presence of multiple objectives makes an optimization problem challenging because, most of the time, these objectives conflict with each other. For example, we may want to maximize the return on investment of a portfolio while minimizing the risk associated with the assets in it. We may want to minimize the cost of a product while maximizing its performance. Similarly, there are situations where we may want to maximize several objectives and minimize several others in the same optimization problem. For instance, consider a product manager in a mobile manufacturing company XYZ who is supervising the launch of a new smartphone in the market. Before the launch, he/she will have to consider many features and configurations of the smartphone, such as the screen resolution, screen size, thickness of the phone, camera resolution, battery life, operating system, and even the aesthetics of the product. On the other hand, he/she might also want to minimize the amount of labor, the production time, and the overall cost associated with the project. He/she knows that the objectives in this case are conflicting, and simultaneously achieving every objective is not possible. The solution to this dilemma is to look for trade-off solutions that serve the main motive of the problem.

Even if we consider the simpler mobile-buying decision-making problem of an individual buyer or consumer who wishes to buy a smartphone from an available set of options, he/she faces the same kind of dilemma as the product manager in company XYZ. The individual smartphone buyer may want to maximize quality and features, like screen size, camera quality, user interface, aesthetics, and reliability, while at the same time trying to minimize the cost. A graphical representation of the alternative solutions in the mobile-buying decision-making problem is illustrated in Fig. 3.1.

Fig. 3.1 Mobile-buying decision-making problem

From the above discussion, we may convince ourselves that single-objective optimization formulations are not sufficient to deal with the large class of decision-making problems in which multiple objectives are present. Unlike single-objective optimization problems, a single optimal solution may or may not exist in multi-objective optimization problems (MOOP). The objectives in a MOOP conflict with each other: the optimal value of one objective may not be optimal for the other objectives and may even be worse for some of them. For example, in the above mobile-buying decision-making problem, if a buyer wants to minimize the buying cost and chooses the cheapest option M1, then he/she has to give up on quality and features. Similarly, if he/she chooses the option M5 with maximum quality and features, he/she has to bear the maximum cost. In the next section, MOOP are discussed mathematically in detail.

3.1 Multi-objective Optimization Problems (MOOP)

Optimization problems involving more than one objective function are regarded as multi-objective optimization problems (MOOP). The underlying objective functions in a MOOP can be of minimization type, maximization type, or a combination of both (min–max). The procedure of finding single or multiple optimal solutions for a MOOP is called multi-objective optimization.

Mathematically, a MOOP can be written in the following general form:

$$\begin{aligned} \begin{aligned}&\text {Minimize}~F(\bar{X}) = \left[ f_{1}(\bar{X}), f_{2}(\bar{X}),\ldots , f_{M}(\bar{X})\right] ,\\&\text {s.t.} ~~~~~~ g_j(\bar{X}) \le 0 ~~~ j=1,2,\dots ,J \\&~~~~~~~~~~ h_k(\bar{X}) = 0 \ \ \ k=1,2,\dots ,K \\&~~~~~~~ x^{\text {l}}_{i} \le x_i \le x^{\text {u}}_{i} ~~~ \forall i = 1,2,\ldots D \end{aligned} \end{aligned}$$
(3.1)

where \(\bar{X}= (x_1,x_2,\ldots ,x_D)\) is the vector of decision variables \(x_i\) (\(i = 1,2,\ldots ,D\)), \(g_j(\bar{X})\) and \(h_k(\bar{X})\) are the J inequality and K equality constraints, respectively, and \(x^{\text {l}}_{i}\) and \(x^{\text {u}}_{i}\) are the lower and upper bounds of the decision variable \(x_i\) (\(i = 1,2,\ldots ,D\)).
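To make the general form of Eq. (3.1) concrete, the following Python sketch encodes an unconstrained instance with D = 1 and M = 2 (the classic Schaffer test problem); the function name and the sample points are our illustrative choices, not part of the original formulation.

```python
# Bi-objective MOOP in the form of Eq. (3.1): minimize F(x) = [f1(x), f2(x)]
# over a single decision variable x (Schaffer's test problem, D = 1, M = 2).

def F(x):
    """Objective vector: f1 = x^2 and f2 = (x - 2)^2, both to be minimized."""
    return [x ** 2, (x - 2.0) ** 2]

# The objectives conflict: f1 is optimal at x = 0, f2 at x = 2, and every
# x between 0 and 2 is a trade-off between them.
print(F(0.0))  # [0.0, 4.0]
print(F(2.0))  # [4.0, 0.0]
print(F(1.0))  # [1.0, 1.0]
```

Plotting \(f_1\) against \(f_2\) for x between 0 and 2 traces the trade-off curve of this problem in the objective space.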

The multi-objective optimization problems give rise to two different kinds of spaces. The space \(\mathcal {F}\subseteq \mathcal {R}^{D}\) spanned by the vectors of decision variables (\(\bar{X}\)) is called the decision space or search space. And, the space \(\mathcal {S}\subseteq \mathcal {R}^{M}\) formed by all the possible values of objective functions is called the objective space or solution space [1].

Similar to single-objective optimization problems, a multi-objective optimization problem can be classified as linear or nonlinear depending on the objective functions and constraints. If all the objective functions and constraints are linear, the MOOP is referred to as a linear multi-objective optimization problem. On the other hand, if any of the objectives or constraints is nonlinear, the MOOP is called a nonlinear multi-objective optimization problem. Further, a MOOP can also be classified as a convex or non-convex multi-objective optimization problem. For the detailed classification, an interested reader can refer to ‘Multi-Objective Optimization Using Evolutionary Algorithms’ by Deb [2].

3.2 Multi-objective Optimization Techniques (MOOT)

In the mobile-buying decision-making problem, we saw that our hypothetical smart buyer wants to maximize quality and features but also wants to minimize the cost. We cannot generalize this case to every buyer or consumer in the market: there might be some buyers who are not worried about the cost and whose primary preference is quality and features only. Similarly, there might be buyers in the market who do not think about quality and features but make their decision on the basis of cost alone. In both of these extreme cases, the problem is in fact not multi-objective but single-objective. In many situations, however, due to the scarcity of resources, taking decisions at the extremes is not a feasible option. One has to make some trade-off among the available choices based on his/her preferences. In that case, the problem becomes a multi-objective optimization problem.

The gist of the above discussion is that when multiple conflicting objectives are important in the decision-making process, finding a single optimum solution that optimizes all the objectives simultaneously is not possible, and it is not even prudent to look for one. We have to settle for some trade-off solutions or, in layman’s language, achieve a certain harmony between the conflicting objectives based on our preferences. If such harmony or balance between the conflicting objectives is not possible, we must try to establish a list of preferences as to which objective should be given the most importance and make a compromise.

Multi-objective optimization techniques are methods or procedures primarily focused on dealing with optimization problems in which conflicting objectives cannot be ignored. Classical and evolutionary techniques are available in the literature on multi-objective optimization, which we will discuss in later sections of this chapter; but before going further, we have to familiarize ourselves with some concepts and terminologies that are important for understanding multi-objective optimization procedures.

3.2.1 Some Concepts and Terminologies

For understanding the idea of optimality in multi-objective optimization, we first have to discuss Pareto optimality. The concept of Pareto optimality was first introduced by Francis Ysidro Edgeworth, and it is named after Vilfredo Pareto, who generalized the concept for multi-objective optimization [3].

Suppose we have a minimization problem as mentioned in Eq. (3.1). The multi-objective function is denoted by \(F(\bar{X})=[f_{1}(\bar{X}), f_{2}(\bar{X}),\ldots , f_{M}(\bar{X})]\), \(\mathcal {S}\) is the objective space, \(\mathcal {F}\) is the decision space, and \(\bar{X} \in \mathcal {F}\) is a decision vector or solution. A reader should not get confused by the terms decision vector and solution: in the literature of multi-objective optimization, these terms are used interchangeably and have the same meaning. Continuing this tradition, we also use the terms decision vector and solution interchangeably, depending on the requirement.

Definition 3.1

(Dominance) A solution, \(X_1 \in \mathcal {F}\) dominates another solution \(X_2 \in \mathcal {F}\), if the following two conditions are satisfied:

  1.

    \(f_{k}(X_1) \le f_{k}(X_2), \ \forall \ k = 1,2,\dots ,M\)

  2.

    There exists some \(j \in \{ 1,\ldots ,M\}\) such that \(f_{j}(X_1) < f_{j}(X_2)\).

In the above definition, condition (1) says that solution \(X_1\) is no worse than solution \(X_2\) in all the objectives, while condition (2) says that there exists at least one objective (say \(f_j\)) for which \(X_1\) is strictly better than \(X_2\). If either of the above conditions is violated, the solution \(X_1\) does not dominate the solution \(X_2\). It is also worth mentioning here again that we are considering a minimization-type MOOP in our discussion; if the underlying MOOP is of maximization type, the inequalities (\(\le ,<\)) are replaced by (\(\ge ,>\)) in the above definition of dominance.

If solution \(X_1\) dominates solution \(X_2\), we denote this situation mathematically as \(X_1 \prec X_2\). Apart from saying that solution \(X_1\) dominates solution \(X_2\), one can also say that solution \(X_2\) is dominated by solution \(X_1\), that \(X_1\) is non-dominated by \(X_2\), or that solution \(X_1\) is non-inferior to solution \(X_2\). The concept of dominance is graphically shown in Fig. 3.2.
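For a minimization-type MOOP, Definition 3.1 translates directly into code. The following Python sketch (the function and variable names are ours, not from the text) checks both dominance conditions on objective vectors:

```python
def dominates(fx1, fx2):
    """Check Definition 3.1 for minimization: objective vector fx1 dominates
    fx2 iff it is (1) no worse in every objective and (2) strictly better
    in at least one objective."""
    no_worse = all(a <= b for a, b in zip(fx1, fx2))
    strictly_better = any(a < b for a, b in zip(fx1, fx2))
    return no_worse and strictly_better

print(dominates([1.0, 2.0], [2.0, 3.0]))  # True: better in both objectives
print(dominates([1.0, 3.0], [2.0, 2.0]))  # False: worse in the second objective
print(dominates([1.0, 2.0], [1.0, 2.0]))  # False: equal vectors, condition (2) fails
```

For a maximization-type MOOP, the comparisons `<=` and `<` are simply replaced by `>=` and `>`.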

Fig. 3.2 Graphical illustration of dominance

Definition 3.2

(Pareto optimality) A decision vector or solution \(X^* \in \mathcal {F}\) is called a Pareto optimal solution or non-dominated solution if there does not exist any other solution \(X \in \mathcal {F}\) such that \(f_{k}(X) \le f_{k}(X^*)\) for all \(k\in \{ 1,\ldots ,M\}\) and \(f_{j}(X) < f_{j}(X^*)\) for at least one \(j \in \{ 1,\ldots ,M\}\); i.e., no feasible solution dominates \(X^*\).

The Pareto optimality of solutions implies that there does not exist any feasible solution in the decision space which would decrease some objectives without simultaneously causing an increase in at least one objective. That is, any improvement in one objective results in the worsening of at least one other objective.

Definition 3.3

(Pareto optimal set) The set containing all the Pareto optimal solutions is called the Pareto optimal set, \(P^*\). It is given by,

$$\begin{aligned} P^* = \{X^* \in \mathcal {F} : \not \exists ~ X \in \mathcal {F} ~\text {such that}~ X \prec X^*\} \end{aligned}$$
(3.2)
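Definition 3.3 can be operationalized for a finite sample of solutions: a solution belongs to the (sampled) Pareto optimal set exactly when no other solution in the sample dominates it. A small Python sketch with illustrative objective vectors:

```python
def dominates(a, b):
    """Minimization dominance: no worse everywhere, strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(objs):
    """Return the indices of the non-dominated vectors in a finite sample,
    i.e., those not dominated by any other member of the sample."""
    return [i for i, a in enumerate(objs)
            if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

# Illustrative objective vectors (both objectives minimized):
points = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]]
print(pareto_set(points))  # [0, 1, 2] -- [3.0, 3.0] is dominated by [2.0, 2.0]
```

The pairwise check costs \(O(n^2 M)\) for n solutions and M objectives, which is acceptable for the modest population sizes used by most meta-heuristics.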

Definition 3.4

(Pareto optimal front) In the objective space \(\mathcal {S}\), the objective values corresponding to the Pareto optimal solutions form a curve called the Pareto optimal frontier, or simply the Pareto optimal front.

Fig. 3.3 Graphical illustration of mapping of a decision space onto an objective space, where both the conflicting objectives are to be minimized [4]

The graphical illustration of the Pareto front and Pareto optimal set is shown in Fig. 3.3. For every solution in the decision space, there is a corresponding objective value in the objective space. The objectives of the optimization problem are to be minimized, and these objectives are conflicting in nature. Furthermore, a vector is called an ideal vector or Utopian objective vector if it contains the decision variables that correspond to the optima of the objective functions when each objective is considered separately [1]. It is also interesting to mention that when the objective functions in a MOOP are not conflicting, the cardinality of the Pareto optimal set is one [5]. In the next section, different approaches to handling multi-objective optimization problems are discussed in detail.

3.2.2 Different Approaches of Solving MOOP

The conflicting objectives in multi-objective optimization (MOO) problems lead to multiple trade-off solutions, or Pareto optimal solutions. Many different approaches to solving MOO problems have been proposed and classified in the literature, ranging from conventional (classical) methods to modern meta-heuristic methods. In conventional methods, a reformulation of the MOO problem is required before the optimization process can proceed, and different reformulations have been proposed. For instance, one approach is to reformulate the MOO problem into a single-objective optimization problem using a weighted sum of the objective functions, in which the weights are assigned on the basis of the preferences or utility of the decision-maker (DM). Another approach is to optimize the objective function the DM prefers most and treat the other objectives as constraints with some predefined limits. In both of these methods, some preference information from the decision-maker is required before the optimization process begins. However, some classical methods do not need any a priori information about the relative importance of the objective functions; these are called ‘no-preference methods’. Discussing these classical methods in detail is beyond the scope of this book. For detailed information regarding classical methods, an interested reader can refer to the book “Multi-Objective Optimization Using Evolutionary Algorithms” by Deb [2]. However, for the convenience of the readers, a brief classification inspired by Miettinen [6], and Hwang and Masud [7] is presented here (Fig. 3.4).

Fig. 3.4 Classification of multi-objective optimization methods

  (1)

    A priori methods: In a priori methods, the preference information (e.g., weights of the objective functions) is specified before applying the optimization algorithm. These preferences are used to quantify the relative importance of the different objective functions in the MOO problem. These methods convert a MOO problem into a single-objective optimization problem for the subsequent optimization process. A priori approaches can be described as “decide first and then search” approaches, where the decision is taken before searching. The major limitation of these methods is that they are applicable only when the decision-maker knows the problem very well, and it is very challenging for the decision-maker to accurately express his/her preferences through goals or weights. Moreover, every time the relative importance of the objectives changes, the weights and preferences have to be revisited. Some examples of a priori methods are the bounded objective method, lexicographic method, compromise programming method, goal programming, utility function method, and multi-attribute utility analysis (MAUA) [7].

  (2)

    Progressive (or interactive) methods: In progressive or interactive methods, the objective functions and constraints are redefined and incorporated multiple times during the execution of the algorithm, based on the decision-maker’s preferences, to guide the search process. A subset of non-dominated (Pareto optimal) solutions is found in each iteration, and the resultant Pareto set is then presented to the decision-maker. If the decision-maker is satisfied with the solutions, the algorithm terminates the optimization process. If not, he/she is required to modify the preferences, and new Pareto optimal solutions are found using the modified preferences. This process continues until the decision-maker is satisfied or no further improvement is possible. The method of displaced ideal, method of Steuer, method of Geoffrion, interactive goal programming (IGP), and the surrogate worth trade-off method are some of the methods that fall under this category [7].

  (3)

    A posteriori methods: These approaches are mainly ‘first search and then decide’ strategies, where the search is executed before decision-making. The non-dominated solutions are first generated using some optimization method. Once the method terminates, the most satisfactory solutions are selected from the obtained non-dominated solutions based on the decision-maker’s requirements. In other words, the decision-maker expresses his/her preferences only after all the non-dominated solutions have been generated. Since decision-making takes place after the solutions are generated, new decisions are possible under changing preferences without repeating the optimization process. The main criticism of a posteriori approaches is that they usually generate many non-dominated solutions, making it very difficult for the decision-maker to choose the most satisfactory one. Moreover, the process of approximating the Pareto optimal set is often time-consuming. Some examples are the \(\epsilon \)-constraint method, physical programming method, normal boundary intersection (NBI) method, and normal constraint (NC) method [7].

  (4)

    No-articulation methods: In these methods, personal preference information from the decision-maker is not needed once the problem is formulated, i.e., once the constraints and objectives are defined. These approaches are advantageous for problems where the decision-maker cannot precisely define his/her preferences, and they are used when the decision-maker is not available or cannot articulate what he/she prefers. These methods are known for their fast convergence. Some examples of these methods are the global criterion method and the min–max method [7].
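As an illustration of the a priori philosophy, the following Python sketch scalarizes a hypothetical bi-objective problem with a weighted sum; the weights, the test function, and the brute-force grid search (standing in for a proper single-objective optimizer) are all assumptions made for the example:

```python
# A priori (weighted-sum) scalarization of a hypothetical bi-objective problem:
# the decision-maker fixes the weights before the search begins.

def weighted_sum(objs, weights):
    """Collapse an objective vector to a single scalar using DM weights."""
    return sum(w * f for w, f in zip(weights, objs))

def F(x):
    """Illustrative problem: minimize f1 = x^2 and f2 = (x - 2)^2."""
    return [x ** 2, (x - 2.0) ** 2]

def solve(weights):
    """Brute-force grid search over [0, 4], standing in for a real
    single-objective optimizer applied to the scalarized problem."""
    candidates = [i / 100.0 for i in range(401)]
    return min(candidates, key=lambda x: weighted_sum(F(x), weights))

# Each choice of weights selects a different trade-off point, so changing the
# relative importance of the objectives requires re-running the optimization.
for w1 in (0.2, 0.5, 0.8):
    print((w1, round(1.0 - w1, 1)), solve((w1, 1.0 - w1)))
```

This also makes the main limitation visible: one run yields one trade-off point, and a new set of weights means a fresh optimization run.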

The approaches discussed above often lead to a solution that may not be globally optimal: the obtained Pareto front might be only locally non-dominated. Consider, for example, the approaches in which a multi-objective optimization problem is reformulated as a single-objective optimization problem. The reformulation itself is sometimes challenging, and converting the objectives into constraints may not be feasible due to the conflicting nature of the multiple objectives. Similarly, in the weighted sum approaches, the major challenge is to determine appropriate weights based on the preferences of the user; many complex real-world problems do not provide sufficient information about the problem, and hence it is not an easy task to obtain good values for the weights. Moreover, most of the methods mentioned above require additional parameter settings. The decision-maker is supposed to supply the values of these parameters, and the preferences of the decision-maker are subjective in many cases. These methods are not only difficult to implement, but they also suffer from many drawbacks. Some are mentioned below:

  (1)

    Most of these methods fail to perform if the shape of the Pareto front is concave or disconnected.

  (2)

    These methods are able to produce only a single solution in every run of the optimization process. For obtaining different trade-off solutions, one has to run the algorithm multiple times, which increases the computational cost of these methods.

  (3)

    The different objectives might take values of different orders of magnitude (or different units). A normalization of the objective functions is then required, which demands knowledge of the extreme values of each objective in the objective space.

The methods for multi-objective optimization presented above utilize single-objective optimization techniques for the optimization process. Single-objective optimization techniques are incapable of producing multiple solutions in a single run, which is the most important requirement of MOO problems.

The challenge of producing multiple solutions for a MOO problem, however, can be handled in a more sophisticated manner. There are other promising methods available, which are non-conventional, more advanced, and intelligent: methods that require very little (or no) information about the optimization problem and are capable of producing multiple solutions in a single run of the optimization process. Moreover, they give the user the privilege of deciding the number of solutions, as many or as few as he/she wants. These methods are meta-heuristic methods. The population-based approach and the capability of handling black-box problems make evolutionary and swarm-based techniques suitable candidates for MOO problems. Meta-heuristic techniques for single-objective optimization can be extended to handle MOO problems with some modifications, owing to their basic structure, which is different from that of classical single-objective techniques.

The first hint regarding the possibility of using population-based stochastic optimization algorithms to solve multi-objective optimization problems appeared in the Ph.D. thesis of Rosenberg [8], in which a multi-objective problem was restated as a single-objective problem and solved with the genetic algorithm (GA). However, David Schaffer was the first to introduce the revolutionary idea of applying stochastic techniques directly to multi-objective optimization problems, proposing a multi-objective evolutionary optimization approach based on the GA known as the vector evaluated genetic algorithm (VEGA) [9]. The expansion of research on meta-heuristic techniques and the advancements in the computing power of modern computers paved the way for researchers to design more capable multi-objective meta-heuristic techniques. For example, some of the well-known multi-objective stochastic optimization techniques are the non-dominated sorting genetic algorithm (NSGA) [10], non-dominated sorting genetic algorithm version 2 (NSGA-II) [11], multi-objective particle swarm optimization (MOPSO) [12], Pareto archived evolution strategy (PAES) [13], Pareto-frontier differential evolution (PDE) [14], multi-objective ant colony optimization [15], multi-objective dragonfly algorithm (MODA) [16], and the multi-objective sine cosine algorithm [17].

The population-based approach of meta-heuristic algorithms provides the liberty to obtain multiple Pareto optimal solutions in a single run of the algorithm. Instead of finding a single Pareto optimal solution reflecting specific preferences, these methods explore the search space extensively to provide multiple Pareto optimal solutions corresponding to different regions of the front. In the next section, we will discuss the particular case of the multi-objective sine cosine algorithm, which is the main focus of this chapter.

3.3 Multi-objective SCA

The basic structure of multi-objective optimization is different from that of single-objective optimization, which compels us to incorporate some modifications into the original sine cosine algorithm (SCA) proposed for single-objective optimization. Before coming to the proposed modifications in the SCA, let us discuss some problems that have to be taken into consideration.

  1.

    How to choose \(\boldsymbol{P_g}\) (i.e., the destination point) in each iteration?

    Ans: SCA is required to favor non-dominated solutions over dominated solutions and to drive the population toward different parts of the Pareto front, or set of non-dominated solutions, and not only in the direction of a single destination point.

  2.

    How to identify the non-dominated solutions in SCA, and how to retain these solutions during the search process?

    Ans: One strategy is to combine all the solutions obtained during the optimization process and then extract the non-dominated solutions from the combined population. Of course, other approaches also exist.

  3.

    How to maintain diversity in the population, so that a set of well-distributed non-dominated solutions can be found along the Pareto front?

    Ans: Some classical niching methods (e.g., crowding or sharing) are available and can be adopted for maintaining diversity.
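For the diversity question above, one classical niching measure that can be adopted is the crowding distance popularized by NSGA-II [11]: solutions in sparse regions of the front receive larger values and are preferred when the population is truncated. A Python sketch over an illustrative front:

```python
def crowding_distance(front):
    """Crowding distance in the NSGA-II style: for each objective, sort the
    front and accumulate the normalized gap between each solution's neighbors.
    Boundary solutions get an infinite distance so they are always preserved."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue  # degenerate objective: all values equal
        for a in range(1, n - 1):
            gap = front[order[a + 1]][k] - front[order[a - 1]][k]
            dist[order[a]] += gap / (fmax - fmin)
    return dist

# Illustrative non-dominated front (both objectives minimized):
front = [[1.0, 5.0], [2.0, 3.0], [4.0, 2.0], [5.0, 1.0]]
print(crowding_distance(front))  # [inf, 1.5, 1.25, inf]
```

Keeping the solutions with the largest crowding distance when the archive overflows yields a well-spread non-dominated set.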

The problem of finding an accurate approximation of the true Pareto optimal front is challenging, and sometimes even impossible, for a given multi-objective optimization problem. However, the approximated Pareto front obtained using multi-objective meta-heuristic algorithms should possess certain characteristics. For instance, the resultant non-dominated set of solutions should lie at a minimum distance from the true Pareto front, and the solutions in the resultant front should be uniformly distributed so as to cover a wide range of non-dominated solutions [18]. These points were taken into consideration in the various attempts that have been made to design multi-objective SCA. The structure of a multi-objective SCA is different because of the presence of Pareto optimal solutions and the concept of dominance; however, the search mechanism is almost the same as in the single-objective SCA. We will study the multi-objective versions of SCA based on two approaches, namely the aggregation-based approaches and the non-dominance diversity-based approaches, which are discussed in the subsequent sections. A list of the multi-objective SCA variants proposed in the literature is presented in Table 3.1.

Table 3.1 Multi-objective sine cosine algorithms

3.3.1 Aggregation-Based Multi-objective Sine Cosine Algorithm and Their Applications

In aggregation-based approaches, the multiple objectives of a MOO problem are combined using an aggregation operator to form a single-objective function. Aggregation operators merge multiple objectives using techniques like random weights, price penalty functions, fuzzy membership functions, utility functions, etc. [25]. The resulting single-objective problem is then solved using standard single-objective optimization algorithms. However, in principle, aggregation-based approaches for handling MOO problems fail to find solutions when the Pareto optimal region is non-convex. Fortunately, not many real-world multi-objective optimization problems have been found to have a non-convex Pareto optimal region. This is the reason why aggregation-based approaches are still popular and used in practice for multi-objective optimization problems [2].

The single-objective sine cosine algorithm [28] is a robust optimizer and can be utilized with aggregation-based approaches for solving MOO problems. Some significant applications of aggregation-based MOO-SCA are discussed here in the subsequent sections.

3.3.1.1 Multi-objective Improved Sine Cosine Algorithm for Optimal Allocation of STATCOM

In power systems, a STATCOM, or static synchronous compensator, is a power electronic device used to regulate various system parameters either by injecting or by absorbing reactive power. Optimal placement of STATCOMs is needed to enhance the performance of the power system and simultaneously reduce the cost. A multi-objective improved SCA was proposed by Singh and Tiwari [25] to handle the problem of optimal allocation of STATCOMs in the holomorphic embedded load-flow (HELF) model with six objective functions. In the proposed improved SCA (ISCA), some modifications were incorporated into the SCA to boost its exploration and exploitation capabilities. The control parameter \(r_1\) is modified to change the range of the sine and cosine functions in an adaptive manner:

$$\begin{aligned} r_1 = \gamma \times \cos \left( 90^{\circ } - 90^{\circ } \left( \frac{t-T}{T}\right) \right) \times \cos \left( 60^{\circ } - 60^{\circ } \left( \frac{t-T}{T}\right) \right) \end{aligned}$$
(3.3)

where \(\gamma \) is a constant whose value is taken equal to 2, t denotes the current iteration, and T is the maximum number of iterations.
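Transcribing Eq. (3.3) literally into Python, with the degree arguments converted to radians (reading t as the current iteration and T as the iteration budget is our assumption, following the usual SCA convention):

```python
import math

def r1(t, T, gamma=2.0):
    """Control parameter r1 of Eq. (3.3), transcribed literally; the degree
    arguments are converted to radians before calling cos."""
    ratio = (t - T) / T
    return (gamma
            * math.cos(math.radians(90.0 - 90.0 * ratio))
            * math.cos(math.radians(60.0 - 60.0 * ratio)))

# r1 modulates the amplitude of the sine/cosine terms over the iterations
# and vanishes at the final iteration t = T (the 90-degree term becomes cos 90).
T = 100
for t in (0, 25, 50, 75, 100):
    print(t, round(r1(t, T), 4))
```

At t = 0 the product of the two cosine terms gives \(r_1 = 1\), and at t = T the first cosine term forces \(r_1 = 0\), shutting down the position update at the end of the search.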

Singh and Tiwari [25] formulated the STATCOM multi-objective problem as a single-objective problem using an aggregation-based approach, with an aggregation operator based on fuzzy membership functions. In this scheme, each objective function is assigned a membership value, and these membership values represent the weights of the objectives in the aggregated fuzzy membership function. The fuzzy membership values lie in the interval [0, 1]: a membership value of 0 indicates the incompatibility of an objective function with the aggregated function, while a membership value of 1 represents complete compatibility [29].

The underlying six objectives of the holomorphic embedded load-flow (HELF) model for the STATCOM problem can be considered as the important factors to examine before planning and operating the STATCOM allocation. All six objectives, say \(f_1,f_2,f_3,f_4,f_5,f_6\), are of minimization type and share relative importance in the STATCOM allocation problem. Specifically, \(f_1\) represents the active power loss, \(f_2\) the reactive power loss, \(f_3\) the node voltage deviation, \(f_4\) the cost of the STATCOM, \(f_5\) the node severity to voltage collapse, and \(f_6\) the apparent power flow through the transmission lines. For the mathematical definitions of these objectives, readers can refer to Singh and Tiwari [25].

The objective function \(f_3\), the node voltage deviation, is an important metric for the STATCOM allocation problem. The authors used the exponential membership function in Eq. (3.4) to compute the membership value for \(f_3\). The exponential membership function helps in distinguishing good and bad node voltage profiles by assigning higher membership values to better solutions and lower membership values to the others. The membership values of the remaining objective functions \(f_i\) (\(i=1,2,4,5,6\)) were calculated using the quarter cosine membership function \(\mu f_i\) given by Eq. (3.5). The quarter cosine membership function helps to retain solutions of moderate quality as well, along with solutions of high quality.

$$\begin{aligned} \mu f_3 = {\left\{ \begin{array}{ll} 1 \ \ &{} \text {if} \ f_{3,\min } \le f_3 \le f_{3,\max }\\ \\ \text {e}^{m\times |1- V_k|} \ \ &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(3.4)

where \(V_k\) is the kth bus voltage, and \(m=-10\) is used to vary the time constant of an exponential curve.

$$\begin{aligned} \mu f_i = {\left\{ \begin{array}{ll} 1 \ \ {} &{} \text {if} \ f_i \le f_{i,\min }\\ \\ \cos {\left[ \frac{\pi }{2} \times {\left( \frac{f_i - f_{i,\min }}{f_{i,\max } - f_{i,\min }}\right) }\right] } \ \ {} &{} \text {if} \ f_{i,\min }< f_i < f_{i,\max }\\ \\ 0 \ \ {} &{} \text {if} \ f_i \ge f_{i,\max }\\ \end{array}\right. } \end{aligned}$$
(3.5)

where \(\mu f_i\) is the value of the membership function for the objective \(f_i\), while \(f_{i,\min }\) and \(f_{i,\max }\) are lower and upper bounds of the ith objective.
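Equation (3.5) can be sketched directly in Python; the objective bounds used below are illustrative, not values from [25]:

```python
import math

def quarter_cosine_membership(f, f_min, f_max):
    """Quarter cosine membership of Eq. (3.5): full membership (1) at or
    below f_min, zero membership at or above f_max, and a smooth cosine
    roll-off for values in between (the objective f is minimized)."""
    if f <= f_min:
        return 1.0
    if f >= f_max:
        return 0.0
    return math.cos((math.pi / 2.0) * (f - f_min) / (f_max - f_min))

# Illustrative bounds: a mid-range objective value keeps a moderate membership,
# which is how moderate-quality solutions are retained.
print(quarter_cosine_membership(0.0, 0.0, 10.0))   # 1.0
print(quarter_cosine_membership(5.0, 0.0, 10.0))   # ~0.7071
print(quarter_cosine_membership(10.0, 0.0, 10.0))  # 0.0
```

Because the cosine decays slowly near \(f_{i,\min }\), values slightly worse than the best still earn high membership, unlike a sharp linear cutoff.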

The fuzzy membership functions \(\mu f_i\) of the objective functions \(f_i\) were aggregated to produce trade-off solutions. To aggregate these fuzzy membership functions, the ‘max-geometric mean’ operator is used [30]. The max-geometric mean operator first calculates the geometric mean of the fuzzy membership functions of the underlying objective functions, as mentioned in Eq. (3.6) below,

$$\begin{aligned} \mu f = (\mu f_1 * \mu f_2 * \mu f_3 * \mu f_4 * \mu f_5 * \mu f_6)^{(\frac{1}{6})} \end{aligned}$$
(3.6)

The geometric mean of the fuzzy membership functions, denoted by \(\mu f\), represents the degree of overall fuzzy satisfaction; i.e., \(\mu f\) reflects how well a solution simultaneously satisfies all the fuzzy membership functions. In the second step of the max-geometric mean operator, the solution with the maximum \(\mu f\) is taken as the best trade-off solution [25]. The given multi-objective optimization problem was thus reformulated as the minimization problem mentioned below in Eq. (3.7).

$$\begin{aligned} {\min f} = \frac{1}{1 +\mu f} \end{aligned}$$
(3.7)

The function f is selected as the fitness function in the proposed ISCA [25], and minimizing it provides the optimal solution without violating any of the constraints of the given multi-objective optimization problem.
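As a concrete illustration, the membership and aggregation steps of Eqs. (3.4)–(3.7) can be sketched in Python; the voltage band [0.95, 1.05] and the equal treatment of the six objectives below are illustrative assumptions, not values taken from [25].

```python
import math

def mu_exponential(v_k, v_lo=0.95, v_hi=1.05, m=-10.0):
    """Exponential membership for the voltage objective, Eq. (3.4).
    The band [v_lo, v_hi] is an assumed acceptable voltage range."""
    if v_lo <= v_k <= v_hi:
        return 1.0
    return math.exp(m * abs(1.0 - v_k))

def mu_quarter_cosine(f, f_min, f_max):
    """Quarter cosine membership for the remaining objectives, Eq. (3.5)."""
    if f <= f_min:
        return 1.0
    if f >= f_max:
        return 0.0
    return math.cos(0.5 * math.pi * (f - f_min) / (f_max - f_min))

def fitness(memberships):
    """Geometric-mean aggregation, Eq. (3.6), and fitness value, Eq. (3.7)."""
    mu_f = math.prod(memberships) ** (1.0 / len(memberships))
    return 1.0 / (1.0 + mu_f)
```

Minimizing the fitness is equivalent to maximizing the aggregated satisfaction \(\mu f\): six fully satisfied objectives give \(\mu f = 1\) and a fitness of 0.5, the smallest attainable value.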

3.3.1.2 Multi-objective Sine Cosine Algorithm for Combined Economic Emission Dispatch Problem

Combined economic and emission dispatch (CEED) is the process of determining the outputs of the generating units in a power system so as to minimize the fuel cost and the pollutant emissions at the same time. Gonidakis and Vlachos [24] solved the CEED problem using the sine cosine algorithm (SCA). The objective of the CEED problem is to minimize four conflicting objective functions, namely fuel cost, nitrogen oxides (\(\text {NO}_\text {x}\)) emission, sulfur dioxide (\(\text {SO}_2\)) emission, and carbon dioxide (\(\text {CO}_2\)) emission, under certain constraints. This multi-objective problem is converted into a single-objective one by introducing penalty factors for the objectives representing pollutants [31]. Moreover, to deal with the constraints, the penalty function method is used. The authors used the max–max price penalty factor, which is the ratio between the maximum fuel cost and the maximum emission of the corresponding generator [31]. It is expressed in Eq. (3.8).

$$\begin{aligned} h_i = \frac{F(P_{i,\max })}{E(P_{i,\max })}, \quad i=1,2,\dots n \end{aligned}$$
(3.8)

where \(F(P_{i,\max })\) is the maximum fuel cost, \(E(P_{i,\max })\) is the maximum emission, n is the number of generating units, and \(P_i\) is the active power generated by the ith generating unit.

In real-time economic emission dispatch, generator fuel cost curves are approximated using polynomials. This is a standard industry practice, and the approximation greatly affects the accuracy of the economic dispatch solutions. Fuel cost and emission are usually formulated as second-order (quadratic) polynomial functions. However, by introducing higher-order polynomials, economic emission dispatch solutions can be improved, since higher-order polynomial models better replicate the actual fuel and emission cost characteristics of thermal generators. Gonidakis and Vlachos [24] used cubic polynomials to express the economic and emission costs. The CEED problem is mathematically formulated in Eq. (3.9) below.

$$\begin{aligned} \begin{aligned} \text {Min}~C&= \sum _{i=1}^{n}{[F(P_i) + h_{\text {SO}_{2},i}E_{\text {SO}_{2}} +h_{\text {CO}_{2},i}E_{\text {CO}_{2}}+ h_{\text {NO}_{\text {x}},i}E_{\text {NO}_{\text {x}}}}] \\&\text {subject to} \quad \quad \sum _{i=1}^{n}{P_i}-P_\text {D}-P_\text {L}=0 \end{aligned} \end{aligned}$$
(3.9)

where \(h_{\text {SO}_{2},i}\), \(h_{\text {NO}_{\text {x}},i}\), and \(h_{\text {CO}_{2},i}\) are the penalty factors of the \(\text {SO}_2\), \(\text {NO}_\text {x}\), and \(\text {CO}_2\) emissions, respectively. \(E_{\text {SO}_{2}}\), \(E_{\text {CO}_{2}}\), and \(E_{\text {NO}_{\text {x}}}\) are the total \(\text {SO}_2\), \(\text {CO}_2\), and \(\text {NO}_\text {x}\) emissions, respectively. \(\sum _{i=1}^{n}{P_i}\) is the total output of all generating units, \(P_\text {D}\) is the power system load demand, and \(P_\text {L}\) is the transmission loss. The constraint in Eq. (3.9) is known as the power balance constraint.

To satisfy the equality constraint, the objective function in the CEED problem is modified as follows:

$$\begin{aligned} \text {Min}~G= C+ k \left| \sum _{i=1}^{n}{P_i}-P_\text {D}-P_\text {L}\right| \end{aligned}$$
(3.10)

where k is a constant penalty parameter.
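A minimal sketch of the aggregated CEED objective of Eqs. (3.8)–(3.10); the per-unit cost and emission arguments and the value of k below are illustrative assumptions, not data from [24].

```python
def price_penalty_factor(fuel_cost_at_pmax, emission_at_pmax):
    """Max-max price penalty factor h_i, Eq. (3.8)."""
    return fuel_cost_at_pmax / emission_at_pmax

def penalized_cost(F, E_so2, E_co2, E_nox, h_so2, h_co2, h_nox,
                   P, P_D, P_L, k=1000.0):
    """Aggregated objective G of Eqs. (3.9)-(3.10), in a per-unit form.
    F, E_*  : fuel cost and emissions of each generating unit
    h_*     : price penalty factors of each unit
    P       : active power outputs; P_D, P_L : demand and loss
    k       : penalty parameter for the power balance constraint"""
    n = len(P)
    C = sum(F[i] + h_so2[i] * E_so2[i] + h_co2[i] * E_co2[i]
            + h_nox[i] * E_nox[i] for i in range(n))
    balance = abs(sum(P) - P_D - P_L)   # power balance violation
    return C + k * balance
```

When the power balance constraint is met exactly, the penalty term vanishes and G reduces to the aggregated cost C.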

3.3.2 Non-dominance Diversity-Based Multi-objective SCA and Its Applications

The non-dominance diversity-based approaches do not reformulate a multi-objective optimization problem into a single-objective one. All the objectives are considered simultaneously during the optimization process, and no preferences or weights are required. These methods produce a set of non-dominated solutions distributed uniformly along the Pareto optimal front. In the non-dominance diversity-based approaches, the very first task of the algorithm is to find the non-dominated set of solutions from a given set of solutions. Different methods are available in the literature for this purpose, for example, the ‘naive and slow’ approach, the ‘continuously updated’ method, and ‘non-dominated sorting’ [5]. For a detailed discussion of these methods, the interested reader can refer to the book ‘Multi-Objective Optimization Using Evolutionary Algorithms’ by Deb [5].

The other important task in non-dominance diversity-based approaches is to maintain the distribution of the non-dominated solutions throughout the Pareto region, and this is an important assessment metric for such algorithms. There are several methods for maintaining diversity, such as the adaptive grid mechanism [2] and the crowding distance mechanism [11]. These mechanisms divide the objective space in a recursive manner. Next, we discuss the first multi-objective version of SCA based on the non-dominance diversity approach.

3.3.2.1 Multi-objective Sine Cosine Algorithm (MOSCA)

Tawhid and Savsani [17] proposed the first multi-objective version of SCA using the elitism-based non-dominated sorting and crowding distance (CD) methods of NSGA-II [11]. In MOSCA, elitist non-dominated sorting is adopted to introduce a selection bias among the solutions (or agents) in the population, enabling the model to select solutions from the fronts closer to the true Pareto optimal front (denoted by \(\text {PF}^*\)). To maintain diversity in the population, the crowded-comparison approach of NSGA-II was adopted. The working of MOSCA can be divided into two phases:

  1. Elitist non-dominated sorting.

  2. Crowding distance assignment and comparison.

Elitist non-dominated sorting

In the elitist non-dominated sorting approach, two attributes are defined for each solution:

  (i) the domination count (\(n_i\)): the number of solutions dominating the solution \(X_i\);

  (ii) the dominated set (\(S_i\)): the set of solutions dominated by the solution \(X_i\).

Both attributes are calculated using Procedure 1.

All the solutions \(X_i\) with a domination count \(n_i=0\) are put in the first non-dominated level (or first Pareto front) (\(\text {PF}_1\)), and their non-domination rank (\(\text {NDR}_i\)) is set equal to 1 (see Procedure 1). Then, to obtain the second non-domination level, for each solution \(X_i\) with \(n_i=0\), each member \(X_j\) of the set \(S_i\) is visited, and its domination count \(n_j\) is reduced by one. If, while being reduced, a domination count falls to ‘0’, the corresponding solution \(X_j\) is put in the second non-domination level (\(\text {PF}_2\)), and its rank (\(\text {NDR}_j\)) is set equal to 2. The above procedure is repeated for each member of the second non-domination level to identify the third non-domination level. This process continues until the whole population is classified into different non-domination levels (see Procedure 2).

Procedure 1: Determining the optimal non-dominated set

Step 1 For each \(X_i\in P\) (population), \(i\in \{1,2,\dots N\}\), set \(n_i = 0\) and \(S_i = \phi \). Then set the solution counter \(i=1\).

Step 2 For all \(j \in \{1,2,\dots N\}\) with \(j\ne i\): if \(X_i \prec X_j\), update \(S_i = S_i \cup \{X_j\}\); otherwise, if \({X_j}\prec {X_i}\), set \(n_i = n_i +1\)

Step 3 Replace i by \(i+1\). If \(i \le N\), go to Step 2. Otherwise, go to Step 4.

Step 4 Keep every \(X_i\) with \(n_i = 0\) in \(P_1\) (the first non-dominated front) and set \(\text {NDR}_i=1\)

Procedure 2: Non-dominated sorting

Step 1 Determine the best non-dominated set or front (\(P_1\)) using Procedure 1

Step 2 Set the front counter \(k = 1\)

Step 3 While \(P_k\ne \phi \), perform the following steps

Step 3(a) Initialize \(Q = \phi \) for storing the next non-dominated solutions

Step 3(b) For each \(X_i\in P_k\) and for each \(X_j \in S_i\), update \(n_j = n_j-1\)

Step 3(c) If \(n_j = 0\), keep \(X_j\) in Q (i.e., \(Q = Q\cup \{X_j\}\)) and set \(\text {NDR}_j=k+1\)

Step 3(d) Set \(k = k+1\) and \(P_k = Q\)
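Procedures 1 and 2 together constitute fast non-dominated sorting; a compact sketch, assuming minimization with objective vectors given as tuples, is:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Fast non-dominated sorting (Procedures 1-2): returns fronts of indices."""
    N = len(objs)
    S = [[] for _ in range(N)]   # S[i]: indices of solutions dominated by i
    n = [0] * N                  # n[i]: domination count of solution i
    fronts = [[]]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                n[i] += 1
        if n[i] == 0:            # no one dominates i: first front
            fronts[0].append(i)
    k = 0
    while fronts[k]:             # peel off the next front, Procedure 2
        Q = []
        for i in fronts[k]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    Q.append(j)
        k += 1
        fronts.append(Q)
    return fronts[:-1]           # drop the trailing empty front
```

For the points (1, 1), (1, 2), (2, 1), (2, 2), the sorting yields the fronts [[0], [1, 2], [3]].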

Crowding distance estimation

For measuring the distribution of the solutions in the neighborhood of a solution, MOSCA adopted the crowding distance metric as used in NSGA-II [11]. The crowding distance metric estimates the normalized search space around a solution \(X_i\) which is not occupied by any other solution in the population. The crowding distance value of a particular solution is computed from the distance between its two neighboring solutions along each objective. The crowding distance is calculated by sorting all the solutions of a particular non-dominated set in ascending order for each objective function \(f_l\) (\(l= 1,2\dots M\)). The individuals with the lowest and the highest objective function values are assigned an infinite crowding distance so that they are always selected, while the other solutions are assigned a per-objective crowding distance (\(\text {cd}_{l}^{i}\)) using the following equation:

$$\begin{aligned} \text {cd}_{l}^{i} =\frac{f_{l}^{i+1}-f_{l}^{i-1}}{f_{l}^{\max }-f_{l}^{\min }} \ \ \forall \ l= 1,2\dots M, \ i= 2,3\dots (L-1) \end{aligned}$$
(3.11)

The final crowding distance value (\(\text {CD}_i\)) for each solution (\(X_i, i=1\dots N\)) is computed by adding the solution’s crowding distance values (\(\text {cd}_{l}^{i}\)) over all the objective functions.

$$\begin{aligned} \text {CD}_i = \sum _{l=1}^{M} \text {cd}_{l}^{i} \end{aligned}$$
(3.12)

For \(M=2\), the crowding distances of a set of mutually non-dominated points are illustrated in Fig. 3.5.

Fig. 3.5
figure 5

Non-dominance ranking and crowding distance
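A minimal sketch of the crowding distance computation, Eqs. (3.11) and (3.12), for a single front stored as a list of objective tuples:

```python
def crowding_distance(objs):
    """Crowding distance, Eqs. (3.11)-(3.12), for one non-dominated front."""
    N = len(objs)
    M = len(objs[0])
    cd = [0.0] * N
    for l in range(M):
        # sort solution indices by the l-th objective
        order = sorted(range(N), key=lambda i: objs[i][l])
        f_min, f_max = objs[order[0]][l], objs[order[-1]][l]
        # boundary solutions are always preserved
        cd[order[0]] = cd[order[-1]] = float('inf')
        if f_max == f_min:
            continue
        for r in range(1, N - 1):
            i = order[r]
            cd[i] += (objs[order[r + 1]][l]
                      - objs[order[r - 1]][l]) / (f_max - f_min)
    return cd
```

For the front (0, 2), (1, 1), (2, 0), the two extreme points receive an infinite distance and the middle point a finite one.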

Crowded tournament selection

After calculating the crowding distance (CD) for each of the solutions (see Procedure 3), the SCA is operated to generate a new population. The new and the old population are then merged to form a population \(P_{\text {new}}\) of size greater than N. In order to maintain a constant population size N, a crowded tournament selection operator (Definition 3.5), based on the non-domination rank (NDR) and the crowding distance (CD), is used to select N solutions from \(P_{\text {new}}\) to form the updated population.

Definition 3.5

(Crowded tournament selection operator) A solution \(X_i\) is selected over a solution \(X_j\) if it satisfies either of the following conditions: 1. the solution \(X_i\) has a lower (i.e., better) NDR than \(X_j\); 2. both solutions have the same NDR, but the solution \(X_i\) has a larger crowding distance (CD) than the solution \(X_j\).

That is, between solutions with different NDRs, we prefer the solution with the lower rank. If two solutions have the same NDR (i.e., they belong to the same front), then, in order to maintain diversity, the solution located in the less crowded region of the front is preferred. If the crowding distance is also the same for the two solutions, then one of them is assigned the higher ranking at random. The crowding distance measure thus acts as a tiebreaker in this selection technique, called the crowded tournament selection operator. In simpler terms, if the solutions are in the same non-dominated front, the solution with the higher crowding distance is the winner.
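Definition 3.5 reduces to a few comparisons; a sketch (the function name is hypothetical):

```python
import random

def crowded_tournament(i, j, ndr, cd):
    """Crowded tournament selection (Definition 3.5): return the winner index.
    ndr: non-domination ranks; cd: crowding distances."""
    if ndr[i] != ndr[j]:
        return i if ndr[i] < ndr[j] else j   # lower rank wins
    if cd[i] != cd[j]:
        return i if cd[i] > cd[j] else j     # larger crowding distance wins
    return random.choice((i, j))             # full tie: pick at random
```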

Procedure 3: Crowding distance assignment

Step 1 Set the front counter \(k=1\)

Step 2 For each solution \(X_i\) in the set \(P_k\), first assign \(\text {cd}_i\) = 0

Step 3 For each objective function \(f_l\), \(l = 1, 2, \dots , M\), sort the set \(P_k\) in ascending order of its objective function values

Step 4 Assign \(\text {cd}_l^1 = \text {cd}_l^L =\infty \) for each objective l, where \(L=|P_k|\)

Step 5 For all the other solutions \(X_i \in P_k\), \(i = 2,3,\dots , L-1\), assign the crowding distance using Eq. (3.11)

Step 6 Calculate the final crowding distance value (\(\text {CD}_i\)) for each solution (\(X_i, i=1\dots N\)) using Eq. (3.12).

The pseudo-code of the discussed MOSCA algorithm is shown in Algorithm 1.

figure a

3.3.2.2 Multi-objective Sine Cosine Algorithm for Optimal DG Allocation Problem

Raut and Mishra [19] developed another Pareto-based multi-objective sine cosine algorithm (MOSCA) to address the optimal distributed generator (DG) allocation problem. This approach applies the fast non-dominated sorting approach and the crowding distance operator. In addition, to enhance the performance, the parameter \(r_1\) of SCA is defined as an exponentially decreasing parameter, and a self-adapting levy mutation, defined in Eqs. (3.13) and (3.14), is adopted.

$$\begin{aligned} r_1 = b\times \text {e}^{(-t/T)} \end{aligned}$$
(3.13)
$$\begin{aligned} P^{t+1}_{g,j} = P^{t}_{g,j} + \ \text {levy} \times A(j) \times P^{t}_{g,j} \end{aligned}$$
(3.14)

where \(P^{t}_{g,j}\) is the value of the best agent in the jth dimension, the levy step length is calculated from Eq. (3.15), and the self-adapting control coefficient A is calculated using Eqs. (3.18), (3.19), and (3.20).

$$\begin{aligned} \text {levy}=0.01\times {\left( \frac{S\times \sigma }{T^{(\frac{1}{\alpha })}}\right) } \end{aligned}$$
(3.15)

where S and T are random numbers in the range [0, 1]. \(\sigma \) is defined as:

$$\begin{aligned} \sigma = \left( \frac{\Gamma (1+\alpha ) \times \sin (\frac{\pi \alpha }{2})}{{\Gamma (\frac{1+\alpha }{2}) \times \alpha \times 2^{(\frac{\alpha -1}{2})}}}\right) ^{\frac{1}{\alpha }} \end{aligned}$$
(3.16)

where \(\Gamma \) is the gamma function, which for a positive integer k satisfies

$$\begin{aligned} \Gamma (k)= (k-1)! \end{aligned}$$
(3.17)

The large value of A in the early iteration enhances the exploration, while the gradual decrease in A with increasing iteration numbers facilitates the exploitation.

$$\begin{aligned} A(j)=\text {e}^{(\frac{-\epsilon \times t}{T})(1-\frac{w(j)}{w_{\max }(j)})} \end{aligned}$$
(3.18)
$$\begin{aligned} w(j)= \left| {P^{t}_{\text {best},j} - \left( \frac{1}{N}\sum _{i=1}^{N}X_{i,j}^t\right) }\right| \end{aligned}$$
(3.19)
$$\begin{aligned} w_{\max }(j)= \max (P^t_j)- \min (P^t_j ) \end{aligned}$$
(3.20)

where \(\epsilon \) and \(\alpha \) are constants, and w(j) is the difference between the jth dimension value of the current best solution and the average value of the population in the jth dimension. \(w_{\max }(j)\), per Eq. (3.20), is the range of the jth dimension, i.e., the distance between its maximum and minimum values.
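The update rules of Eqs. (3.13)–(3.20) can be sketched as follows; here T_max denotes the iteration budget (the symbol T is reused in Eq. (3.15) for a random number), the values of b, \(\epsilon \), and \(\alpha \) are illustrative assumptions, and Eq. (3.20) is read as the population range in dimension j.

```python
import math
import random

def r1_decay(t, T_max, b=2.0):
    """Exponentially decreasing r1, Eq. (3.13); b is an assumed constant."""
    return b * math.exp(-t / T_max)

def levy_step(alpha=1.5):
    """Levy step length, Eqs. (3.15)-(3.17), with S, T drawn from U(0, 1)."""
    sigma = (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2)
             / (math.gamma((1 + alpha) / 2) * alpha
                * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)   # Eq. (3.16)
    S = random.random()
    T = random.random() or 1e-12      # guard against a zero draw
    return 0.01 * S * sigma / (T ** (1 / alpha))

def control_coefficient(t, T_max, best, population, j, eps=2.0):
    """Self-adapting coefficient A(j), Eqs. (3.18)-(3.20); eps assumed."""
    col = [x[j] for x in population]
    w = abs(best[j] - sum(col) / len(col))   # Eq. (3.19)
    w_max = max(col) - min(col)              # Eq. (3.20), read as the range
    if w_max == 0:
        return 0.0
    return math.exp((-eps * t / T_max) * (1 - w / w_max))

def levy_mutate(best, population, t, T_max, alpha=1.5):
    """Self-adapting levy mutation of the best agent, Eq. (3.14)."""
    return [best[j] + levy_step(alpha)
            * control_coefficient(t, T_max, best, population, j) * best[j]
            for j in range(len(best))]
```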

Once the Pareto optimal set of non-dominated solutions is obtained, a fuzzy-based mechanism is employed to extract the best trade-off solution from the obtained Pareto set and assist the decision-making process. Due to the imprecise nature of the decision-maker’s judgment, each objective function is represented by a membership function. A simple linear membership function \(\mu _l^ k\) is defined for each objective, and the membership value of the kth solution in the lth objective is given as

$$\begin{aligned} \mu _l^ k = \frac{F^{\max }_l - F^k_l}{F_l^{\max } - F_l^{\min }} \end{aligned}$$
(3.21)

where \(\mu \) is the fuzzy membership function, and \(F_l^{\max }\) and \(F_l^{\min }\) are the maximum and minimum values of the lth objective function. For each member of the non-dominated set, the normalized membership value (\(\mu ^k\)) is calculated using the following equation:

$$\begin{aligned} \mu ^k = \frac{\sum _{l=1}^{m} \mu ^k_l}{\sum _{k=1}^{K}\sum _{l=1}^{m}\mu ^k_l} \end{aligned}$$
(3.22)

where K is the total number of Pareto solutions. The solution with the maximum value of \(\mu ^k\) is selected as the best compromise solution.
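A sketch of the fuzzy decision-making step, Eqs. (3.21) and (3.22), assuming minimization and the Pareto set given as objective tuples:

```python
def best_compromise(pareto_objs):
    """Return the index of the best trade-off solution, Eqs. (3.21)-(3.22)."""
    M = len(pareto_objs[0])
    f_min = [min(p[l] for p in pareto_objs) for l in range(M)]
    f_max = [max(p[l] for p in pareto_objs) for l in range(M)]

    def mu(p, l):
        # linear membership, Eq. (3.21): 1 at the best value, 0 at the worst
        if f_max[l] == f_min[l]:
            return 1.0
        return (f_max[l] - p[l]) / (f_max[l] - f_min[l])

    row = [sum(mu(p, l) for l in range(M)) for p in pareto_objs]
    total = sum(row)
    mu_k = [r / total for r in row]          # normalization, Eq. (3.22)
    return max(range(len(pareto_objs)), key=lambda k: mu_k[k])
```

For the front (0, 3), (1, 1), (3, 0), the middle point is selected as the best compromise, since it is close to the best value in both objectives.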

3.3.2.3 Multi-objective Sine Cosine Algorithm for Spatial-Spectral Clustering Problem

Wan et al. [20] developed a multi-objective SCA for remote sensing image spatial-spectral clustering (\(\text {MOSCA}\_\text {SSC}\)) that uses a knee-point-based selection approach [32], the concept of Pareto dominance, and elitism. ‘Knees’ are the solutions of the Pareto front for which any modification to improve one objective would significantly deteriorate at least one other objective. The technique of Pareto dominance combined with elitism ensures that the non-dominated solutions survive into the succeeding generations of the algorithm. A multi-objective model consisting of multiple clustering objectives is utilized for the clustering task on remote sensing image data. In \(\text {MOSCA}\_\text {SSC}\), two widely used metrics for remote sensing data, namely the Xie–Beni (XB) index and the Jeffries–Matusita (Jm) distance combined with spatial information, are used as the objective functions [33] (see Eqs. 3.23 and 3.24)

$$\begin{aligned} (\text {XB})_{\text {ind}} =\frac{\sum _{i=1}^{K}{\sum _{j=1}^{N}{\mu _{ij}^m}||x_j - U_i||^2}}{N {\min }_{i\ne k}||U_i - U_k||^2} \end{aligned}$$
(3.23)
$$\begin{aligned} (\text {SJm})_{\text {ind}}= \sum _{i=1}^{K}{\sum _{j=1}^{N}{\mu _{ij}^m}||x_j - U_i||^2} + \phi \sum _{i=1}^{K}{\sum _{j=1}^{N}{\mu _{ij}^m}||\overline{x_j} - U_i||^2} \end{aligned}$$
(3.24)

where K is the number of cluster centers, N is the total number of pixels in the remote sensing image, and m is the fuzzy weighting exponent, which determines the degree of sharing of samples between classes. \(x_j\) is a vector denoting the jth pixel of the image, and \(\mu _{ij}\) denotes the fuzzy membership. \(U_i\) and \(U_k\) are the ith and the kth cluster centers, and \(\phi \) is the control parameter. \(\overline{x_j}\) represents the average gray value [33].
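For illustration, a scalar (single gray band) sketch of Eqs. (3.23) and (3.24); the original formulation uses pixel vectors, and the value of \(\phi \) below is an assumption.

```python
def xb_index(X, U, mu, m=2.0):
    """Xie-Beni index, Eq. (3.23), for scalar gray-value pixels.
    X: pixel values, U: cluster centers, mu[i][j]: fuzzy memberships."""
    K, N = len(U), len(X)
    num = sum(mu[i][j] ** m * (X[j] - U[i]) ** 2
              for i in range(K) for j in range(N))       # compactness
    sep = min((U[i] - U[k]) ** 2
              for i in range(K) for k in range(K) if i != k)  # separation
    return num / (N * sep)

def sjm_index(X, X_bar, U, mu, m=2.0, phi=0.5):
    """Spatial Jm objective, Eq. (3.24); X_bar: local-average gray values."""
    K, N = len(U), len(X)
    jm = sum(mu[i][j] ** m * (X[j] - U[i]) ** 2
             for i in range(K) for j in range(N))
    spatial = sum(mu[i][j] ** m * (X_bar[j] - U[i]) ** 2
                  for i in range(K) for j in range(N))
    return jm + phi * spatial
```

Lower values of both indices indicate compact, well-separated clusters, so both objectives are minimized.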

The procedure for \(\text {MOSCA}\_\text {SSC}\) is described as follows:

Main Steps of the MOSCA_SSC

Step 1 Initialize a set of parent search agents (population) of size NP.

Step 2 Select the initial destination point using the Fuzzy C-Means (FCM) method.

Step 3 Generate new offspring using SCA to obtain a new population, and merge the new population with the old population to get 2\(\times \) NP solutions.

Step 4 For each search agent, calculate the values of the two clustering objective functions using Eqs. (3.23) and (3.24).

Step 5 Rank the parent and the offspring search agents using the non-dominance sorting and crowding distance approach and select the NP best solutions from 2\(\times \) NP solutions.

Step 6 Select the destination point using the knee-point-based selection approach.

Step 7 Repeat steps 3 to 6 until the stopping criterion is reached

The Fuzzy C-Means (FCM) method [34], mentioned in step 2 of the MOSCA_SSC, is used to obtain the initial destination point, as SCA requires an initial destination point to begin the optimization procedure. The knee-point-based selection approach is utilized for automatically updating the destination points in the SCA algorithm [32]. In non-dominance diversity-based approaches, there are two challenging aspects to handle. The first is to produce multiple non-dominated solutions that form a near-optimal Pareto front, while the second is to maintain diversity among these non-dominated solutions. Researchers have proposed different methods and techniques to tackle these challenges. The use of an external archive to store the non-dominated solutions and of a grid mechanism to improve the diversity of the non-dominated solutions are two major methods for enhancing the capabilities of non-dominance diversity-based approaches.

Archive: The archive is a storage memory where the non-dominated solutions of previous iterations are stored. The non-dominated solutions stored in the archive can be utilized for generating new solutions, and based on the dominance status of these newly generated solutions, the solutions stored in the archive are updated.

Grid Mechanism: It manages the diversity of the non-dominated solutions by locating the crowded regions in which they lie. Different grid mechanism techniques are available in the literature for this purpose. However, the basic idea behind the grid mechanism is to divide the objective space into smaller regions, or grids, in order to observe the distribution of the non-dominated solutions. If the distance between the non-dominated solutions in a particular grid is small and their number is large, that grid is considered crowded.

Selim et al. [21] proposed a multi-objective sine cosine algorithm with an external archive and adaptive grid mechanism to handle the DSTATCOM allocation problem as mentioned below.

3.3.2.4 Multi-objective Sine Cosine Algorithm for DSTATCOM Problem

In distribution systems, Distribution STATic COMpensators (DSTATCOMs) are used to improve the voltage profile and the overall reliability. Selim et al. [21] proposed a multi-objective SCA (MOSCA) and used fuzzy logic decision-making to optimally install multiple DSTATCOMs. The optimization procedure determines the optimum sizes and locations of the DSTATCOMs that minimize the power loss and the voltage deviation (VD), and maximize the voltage stability index (VSI), of the radial distribution system. MOSCA is a Pareto-based algorithm that utilizes the Pareto ranking scheme in the sine cosine algorithm to handle this multi-objective optimization problem. MOSCA incorporates an external archive of solutions to keep a historical record of the non-dominated solutions, and the adaptive grid mechanism [12] to maintain the diversity of the non-dominated solutions in the external archive. The motivation for the external archive and the grid mechanism is the fact that a solution that is non-dominated with respect to its current population might not be non-dominated with respect to the solutions stored in the archive from the previous iterations of the evolutionary process. In MOSCA [21], an archive controller and an adaptive grid mechanism are employed to store the non-dominated solutions and maintain their diversity.

Archive Controller

The archive controller is responsible for deciding whether a solution should be included in the archive or not. The non-dominated solutions generated at each iteration of the MOSCA are compared with the solutions inside the archive [21]. The archive is initially empty, and over the iterations, non-dominated solutions are added and updated. However, a fixed archive size is maintained because of memory limitations. If the archive is empty, the candidate solution is accepted. If the archive is not empty, there are three possibilities: if solutions in the archive dominate the new candidate solution, it is not added to the archive; if there are solutions in the archive that are dominated by the new solution, those solutions are eliminated; and if the new candidate solution is neither dominated by any solution in the archive nor dominates any solution, it is added to the archive, depending on the availability of a slot. Finally, the adaptive grid mechanism is triggered if the external population has exceeded its permitted capacity [12]. The archiving behavior is summarized in Algorithm 2. A graphical illustration of the archive update mechanism is depicted in Fig. 3.6.
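The archive controller logic described above can be sketched as follows; in the full method, a full archive hands over to the adaptive grid, which evicts a member from the most crowded hypercube.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate, max_size):
    """Archive controller sketch: decide whether an objective vector enters
    the archive; returns True if it was added."""
    if any(dominates(a, candidate) for a in archive):
        return False                          # dominated by the archive: reject
    # drop archived solutions that the candidate dominates
    archive[:] = [a for a in archive if not dominates(candidate, a)]
    if len(archive) < max_size:
        archive.append(candidate)             # free slot: accept
        return True
    return False  # archive full: the adaptive grid would evict a member here
```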

figure b
Fig. 3.6
figure 6

Archive update mechanism

Adaptive Grid Mechanism

The adaptive grid mechanism maintains the diversity of the non-dominated solutions lying in the archive. It is used to delete solutions from the external archive when the external population has reached its maximum size. The MOSCA proposed in [21] utilizes the adaptive grid mechanism proposed in [12] to generate well-distributed Pareto fronts. This mechanism measures the degree of crowding in the different regions of the objective space. The objective space is divided into equal-sized M-dimensional hypercubes, with each objective axis divided into d segments, where d is a user-defined parameter (see Fig. 3.7). The archived solutions are placed in these hypercubes according to their locations in the objective space. A map of the grid is maintained to count the number of non-dominated solutions lying in each hypercube. If the archive is already full, then a new solution cannot be included without making space in the archive. In this case, the hypercube with the highest number of solutions is identified, and if the new non-dominated solution does not belong to this hypercube, it is included in the archive, while one of the solutions from the most crowded hypercube is eliminated. If the new solution inserted into the archive lies outside the current bounds of the grid, then the grid is recalculated, and each solution inside it is relocated (see Fig. 3.8).
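The hypercube bookkeeping can be sketched as follows, assuming each objective axis is divided into d equal segments within known bounds (the function names are hypothetical):

```python
from collections import Counter

def grid_index(point, lows, highs, d):
    """Map an objective vector to its hypercube coordinates: each objective
    axis is split into d equal segments within [lows, highs]."""
    idx = []
    for p, lo, hi in zip(point, lows, highs):
        if hi == lo:
            idx.append(0)
            continue
        c = int((p - lo) / (hi - lo) * d)
        idx.append(min(max(c, 0), d - 1))   # clamp boundary points
    return tuple(idx)

def most_crowded_cell(archive, lows, highs, d):
    """Return the hypercube coordinates holding the most archive members,
    i.e., the cell from which a solution would be evicted."""
    counts = Counter(grid_index(p, lows, highs, d) for p in archive)
    return max(counts, key=counts.get)
```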

Fig. 3.7
figure 7

Graphical representation of the insertion of a new element in the adaptive grid when the individual lies within the current boundaries of the grid [12]

figure c
Fig. 3.8
figure 8

Graphical representation of the insertion of a new element in the adaptive grid when it lies outside the previous boundaries of the grid [12]

When the non-dominated solutions in the archive are assessed on the basis of crowding, the solutions with the least crowding, i.e., those located in the least congested region of the objective space, are given preference over the solutions lying in the more crowded regions. The pseudo-code of the MOSCA [21] is given in Algorithm 4.

figure d

3.4 Conclusion

Optimization problems involving multiple objectives are common. In this context, meta-heuristics turn out to be a valuable tool, in particular if the complexity of the problem prevents exact methods from being applicable and flexibility is required with respect to the problem formulation. Most real-world engineering problems involve simultaneously optimizing multiple objectives, where the consideration of trade-offs is important. The multi-objective sine cosine algorithm has shown its applicability to a variety of such problems. Apart from the basic MOO concepts, this chapter has covered various multi-objective sine cosine algorithms and their applications.

Practice Exercises

  1. Prove that the dominance relation is a partial order. (Hint: if a relation is reflexive, anti-symmetric, and transitive, it is called a partial order.)

  2. Given a set of points and a multi-objective optimization problem, analyze the statement that one point always dominates the others.

  3. Given four points and their objective function values for multi-objective minimization:

     \(f_1(x_1) = 1, f_2(x_1) = 1, f_1(x_2) = 1, f_2(x_2) = 2, f_1(x_3) = 2, f_2(x_3) = 1\), \(f_1(x_4) = 2, f_2(x_4) = 2\)

     (1) Which point dominates all the others?

     (2) Which point is non-dominated?

     (3) Which point is Pareto optimal?

  4. Discuss the challenges involved in multi-objective optimization.

  5. Comment on the dependence of the optimal solution on the weighting coefficients in the weighted sum approach.

  6. For multi-objective optimization, the understanding of the Pareto front is very important. Explain.