Abstract
This paper presents a multi-strategy improved grasshopper optimization algorithm (MSIGOA), which aims to address the shortcomings of the grasshopper optimization algorithm (GOA), including its slow convergence, vulnerability to trapping into local optima, and low accuracy. Firstly, to improve the uniformity of the population distribution in the search space, the MSIGOA uses circle mapping for the population initialization. A nonlinear decreasing coefficient is utilized instead of the original linear decreasing coefficient to improve the local exploitation and global exploration capabilities. Then, the modified golden sine mechanism is added during the position update stage to change the single position update mode of GOA and enhance the local exploitation capability. The greedy strategy is added to select between the new and old positions of each individual, retaining the better position and increasing the speed of convergence. Finally, the quasi-reflection-based learning mechanism is utilized to construct new populations to improve population diversity and the capability to escape from local optima. This paper verifies the efficacy of MSIGOA by comparing it with other advanced algorithms on six engineering design problems, the CEC2017 test functions, and 12 classical benchmark functions. The experimental results show that MSIGOA performs better than the original GOA and the other compared algorithms and has stronger comprehensive optimization capabilities.
1 Introduction
In practical application domains, a multitude of constrained and unconstrained optimization problems necessitate resolution. As these problems become increasingly complex, traditional gradient-based optimization methods can no longer meet actual needs. Meta-heuristic algorithms, by contrast, have the following characteristics: simple coding, strong applicability, few assumptions that must be satisfied, and no need for derivative information. Therefore, they are regarded as a new approach to solving optimization problems and are widely employed in numerous fields, for instance, engineering, computer science, mathematics, energy, medicine, and neuroscience [1]. Meta-heuristic algorithms can be divided into four categories according to their source of inspiration: evolutionary algorithms (inspired by the law of survival of the fittest), physics-based algorithms (inspired by chemical or physical laws), human-based algorithms (inspired by various human behaviors), and swarm intelligence algorithms (inspired by the swarm behavior of creatures). The most representative algorithms among evolutionary, human-based, and physics-based algorithms are the genetic algorithm (GA) [2,3,4], teaching–learning-based optimization (TLBO) [5,6,7], and the gravitational search algorithm (GSA) [8,9,10], respectively. The classic swarm intelligence (SI) algorithms include particle swarm optimization (PSO) [11,12,13], ant colony optimization (ACO) [14,15,16], and the artificial bee colony algorithm (ABC) [17,18,19]. The grasshopper optimization algorithm (GOA) discussed in this paper is a novel SI algorithm proposed by Saremi [20] in 2017, inspired by the migration and foraging behaviors of grasshopper swarms. The advantages of the GOA over other SI algorithms include its straightforward structure, limited number of parameters, and easy implementation.
Experiments on benchmark functions demonstrate that the GOA outperforms previously proposed SI algorithms, for example PSO, in terms of convergence speed and accuracy. It has already been used successfully in a variety of fields. For example, Xu et al. [21] introduced the bare-bones Gaussian strategy and elite opposition-based learning into GOA to improve algorithm performance and applied its binary version to feature selection with good effects. The authors of [22] proposed an enhanced GOA in which logistic mapping was utilized for population initialization to enhance population diversity, the Levy flight strategy was added, and a velocity perturbation mechanism was employed to perturb individual positions and strengthen the ability to break away from local optima; the algorithm was applied to three engineering design problems. Jalali et al. [23] put forward an enhanced grasshopper optimization algorithm (EGOA) employing tent mapping and the Levy flight strategy, and then, combined with the mutual information (MI) feature selection algorithm, optimized a long short-term memory (LSTM) neural network architecture for wind speed prediction. Wu et al. [24] proposed an improved grasshopper optimization algorithm (IGOA), which adopted logistic mapping for initialization and added differential evolution and linear optimization strategies in the position update stage; it was then employed to identify the parameters of polycrystalline silicon solar cells. Liu et al. [25] proposed a comprehensive strategy combining the original GOA with a linear weighted sum to address energy management issues. Alhejji et al. [26] added the spiral path strategy and the Levy flight mechanism to the original GOA, proposed an adaptive grasshopper optimization algorithm, and applied it to solve the optimal power flow problem. Bhukya et al. [27] utilized GOA to optimize the membership functions of a fuzzy logic controller (FLC) to deal with uncertainties caused by changing temperatures and irradiances, thereby enhancing the performance of maximum power point tracking.
However, like other SI algorithms, the GOA is prone to local optima and exhibits relatively slow convergence when dealing with multimodal or high-dimensional optimization problems. In response to these issues, Dong et al. [28] suggested a modified grasshopper algorithm (CC–GOA), which employs logistic mapping for initialization to enhance population diversity and adds the Cauchy mutation strategy to enhance the algorithm's capability to escape from local optima. Zhao et al. [29] proposed a grasshopper algorithm that incorporates a nonlinear decreasing coefficient, the Levy flight, and a random jumping strategy; the nonlinear decreasing coefficient accelerates convergence, while the Levy flight and random jumping strategy enhance population diversity and help the algorithm escape from local optima. Bekana et al. [30] proffered a modified grasshopper algorithm (Crazy–GOA) by adding a crazy factor to the position update expression of the GOA, which helps to explore the entire search space and enhances population diversity. Yıldız et al. [31] proposed a hybrid grasshopper algorithm, which combined the original GOA with the simplex method to enhance local exploitation ability. Zhou et al. [32] proposed a modified grasshopper algorithm, which employs an orthogonal learning mechanism to enhance convergence speed and introduces the genetic mutation and Cauchy mutation strategies to enhance solution accuracy. Huang et al. [33] suggested a grasshopper algorithm that first divides the population into two subpopulations, then introduces a social interaction mechanism to balance exploitation and exploration, and finally incorporates a learning strategy and a nonlinear coefficient to enhance the global exploration capability.
Although the aforementioned improvement methods have somewhat enhanced the performance of the original GOA, most studies have not comprehensively considered its shortcomings, and there is still room for improvement. Meanwhile, the no-free-lunch (NFL) theorem points out that no single algorithm can effectively and efficiently solve every optimization problem. Accordingly, this paper presents a multi-strategy improved grasshopper optimization algorithm (MSIGOA), which aims to address the shortcomings of the original GOA, including its slow convergence, vulnerability to trapping into local optima, and low accuracy. Firstly, circle mapping is used to initialize the population, making the population distribution more uniform and more diverse. Secondly, a nonlinear decreasing coefficient is employed instead of the original linear decreasing coefficient to meet the needs of the algorithm at different stages and improve both local exploitation and global exploration capabilities. Thirdly, the modified golden sine mechanism is added during the position update stage to change the single position update mode of GOA and enhance the local exploitation capability. Fourthly, the greedy strategy is added to select between the new and old positions of each individual, retaining the better position and increasing the speed of convergence. Finally, the quasi-reflection-based learning mechanism is utilized to construct new populations to improve population diversity and the capability to escape from local optima. In the experimental simulation, the performance of the proposed MSIGOA was evaluated and compared with other advanced algorithms on 12 classical test functions and the CEC2017 test functions. The experimental results indicate that the MSIGOA outperforms the original GOA and the other compared algorithms, with faster convergence, better stability, and stronger searching ability.
In addition, six engineering design problems are solved using the MSIGOA. The results reveal that the proposed MSIGOA is more competitive than other algorithms.
The remainder of this paper is structured as follows: Sect. 2 describes the principles of the original GOA and the golden sine algorithm (Gold-SA); Sect. 3 details the proposed MSIGOA; Sect. 4 conducts comparative experiments to validate the performance of MSIGOA and applies it to six engineering design problems; Sect. 5 concludes the paper and outlines future work.
2 Background
2.1 Grasshopper Optimization Algorithm (GOA)
Grasshoppers are swarming insects whose life cycle consists mainly of two phases, larval and adult, with completely different movement characteristics in each. The larval phase is characterized by small steps and slow movements, while the adult phase is characterized by long-range and abrupt movements [34]. When modeling the behavior of grasshopper swarms, their motions are often considered to be influenced by gravity, social interaction, and wind advection. Therefore, the following equation can be used to mimic the behavior of grasshopper swarms in nature [20, 35]:
where \(X_{i} = (x_{i,1} ,x_{i,2} ,x_{i,3} , \ldots ,x_{i,d} )\) defines the position of the ith grasshopper, \(r_{1}\), \(r_{2}\), and \(r_{3}\) are random numbers in [0, 1], G represents the force of gravity acting on the grasshopper, A depicts wind advection, and S represents social interaction. The specific calculation equation for S is as follows:
where N denotes the quantity of grasshoppers in the swarm, and \(d_{ij}\) and \(\widehat{{d_{ij} }}\) represent the distance and unit vector between the ith and jth grasshoppers, which are computed using \(d_{ij} = |x_{j} - x_{i} |\) and \(\widehat{{d_{ij} }} = \frac{{x_{j} - x_{i} }}{{d_{ij} }}\), respectively. The social forces between the grasshoppers are shown in Fig. 1, and s is the function that defines the social forces.
Social forces exist between grasshoppers within a certain distance and manifest as either attraction or repulsion depending on that distance. Specifically, when the distance between grasshoppers is relatively small, the social force appears as repulsion; when the distance is relatively large, it appears as attraction. The distance at which neither attraction nor repulsion occurs is commonly referred to as the comfort distance, and the corresponding region as the comfort zone. As the distance between grasshoppers exceeds the comfort distance, attraction first increases until a certain threshold is reached and then weakens gradually with increasing distance until it disappears altogether. The social forces between grasshoppers are defined as follows [20]:
where \(l\) is the attractive length scale and f is the attraction intensity. The values assigned to these parameters directly determine the extent of the repulsion region, attraction region, and comfort zone; usually, f is set to 0.5 and l to 1.5. The variable r represents the distance between grasshoppers, which is usually mapped into the interval [1, 4] to avoid situations where the attraction is effectively zero when the distance between grasshoppers is large. When \(s(r)\) is greater than 0, the social force appears as attraction; otherwise, it appears as repulsion. Furthermore, the specific expressions of \(G_{i}\) and \(A_{i}\) in Eq. (1) are as follows:
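As a concrete illustration, the social force function above can be evaluated numerically. The sketch below assumes the standard GOA form \(s(r) = f\,e^{-r/l} - e^{-r}\) with the parameter values f = 0.5 and l = 1.5 given in the text.

```python
import math

def social_force(r, f=0.5, l=1.5):
    """Social force between two grasshoppers a distance r apart:
    s(r) = f * exp(-r / l) - exp(-r).
    f is the attraction intensity, l the attractive length scale."""
    return f * math.exp(-r / l) - math.exp(-r)
```

With these parameter values, s(r) is negative (repulsion) at short distances and positive (attraction) at longer ones, with the sign change at the comfort distance of roughly 2.079.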
where \(\widehat{{e_{w} }}\) and \(\widehat{{e_{g} }}\) stand for the unit vectors in the direction of the wind and the center of the earth, and u and g stand for the drift constant and gravitational constant.
To sum up, Eq. (1) can be expanded as follows by substituting \(G_{i}\), \(A_{i}\), and \(S_{i}\):
However, Eq. (6) cannot be directly used to solve the optimization problems because the grasshoppers quickly reach the comfort zone and the population does not converge to a specified position. Assuming that the wind direction is consistent with the movement direction of the target individual, ignoring the influence of gravity, a modified version of this equation is as follows [20]:
where \(x_{i,d} (t)\) denotes the position of the ith grasshopper in the d dimension of the search space at the t iteration. The search space’s lower and upper bounds are represented by \(lb_{d}\) and \(ub_{d}\), respectively, and \(\widehat{{D_{d} }}(t)\) denotes the position of the target grasshopper (or current best grasshopper) in the d dimension at the t iteration. The variable c is a coefficient that linearly declines with the quantity of iterations, defined as follows:
where the values of \(c_{\min }\) and \(c_{\max }\) are 0.00001 and 1, respectively, and stand for the lowest and highest values of c, and T and t stand for the largest and current iterations, respectively.
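Given the values above, the linear schedule of Eq. (8) can be written directly; this is a minimal sketch with illustrative variable names.

```python
def linear_c(t, T, c_max=1.0, c_min=1e-5):
    """Linearly decreasing coefficient of Eq. (8):
    c(t) = c_max - t * (c_max - c_min) / T."""
    return c_max - t * (c_max - c_min) / T
```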
2.2 Golden Sine Algorithm (Gold-SA)
As a basic and important tool, the sine function has a wide range of applications in many fields, especially in time series analysis methods and change detection methods [36]. Inspired by the sine function, Tanyildizi et al. [37] proposed a meta-heuristic algorithm called the golden sine algorithm (Gold-SA) in 2017, which has the characteristics of fast convergence speed and good robustness. Gold-SA combines the golden section coefficient and sine function in the iterative optimization process. Among them, the sine function endows the algorithm with good global exploration ability, while the golden section coefficient continuously refines the search space, endowing the algorithm with strong local exploitation ability.
The core of Gold-SA is the individual position update process, where each individual position corresponds to a potential solution to the problem. Assume the number of individuals and the dimension of the problem (or search space) are N and Dim, respectively. Gold-SA first randomly generates N individuals in the Dim-dimensional search space, then updates the individual positions according to the corresponding formula, and iterates until the stop condition is met. Assuming that at the tth iteration the position of individual i \((i = 1,2,3, \ldots ,N)\) in the search space is \(V_{i} (t)\), the position at the t + 1 iteration is updated according to the following formula:
where Pi(t) represents the optimal position of individual i at the tth iteration, r1 is a random number in [0, 2π] that determines how far the individual moves at each iteration, r2 is a random number in [0, π] that determines the movement direction of the individual at each iteration, and k1 and k2 are coefficients obtained from the golden number \(\tau\). These coefficients effectively narrow the search space and direct the other individuals to move closer to the current best individual.
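A sketch of one Gold-SA update step is given below. It assumes the commonly published form \(V_i(t+1) = V_i(t)\,|\sin r_1| - r_2 \sin r_1\,|k_1 P_i(t) - k_2 V_i(t)|\), with the golden-section coefficients initialized over \([-\pi, \pi]\); since the excerpt does not reproduce the paper's equations, treat this as illustrative.

```python
import math
import random

TAU = (math.sqrt(5) - 1) / 2  # golden ratio conjugate, ~0.618

# Golden-section coefficients over the initial interval [-pi, pi].
a, b = -math.pi, math.pi
k1 = a * TAU + b * (1 - TAU)
k2 = a * (1 - TAU) + b * TAU

def gold_sa_step(v, p):
    """One Gold-SA position update for an individual at v with best-known
    position p (both lists of equal length)."""
    r1 = random.uniform(0, 2 * math.pi)  # controls the step length
    r2 = random.uniform(0, math.pi)      # controls the step direction
    return [vi * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(k1 * pj - k2 * vi)
            for vi, pj in zip(v, p)]
```

In the full algorithm, k1 and k2 are contracted by golden-section search as the best solution improves; that bookkeeping is omitted here.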
3 Proposed MSIGOA
For the sake of enhancing the optimization performance of the original GOA, this paper combines multiple strategies to improve it and names the improved algorithm MSIGOA. The following sections will specifically introduce the improvement strategies for MSIGOA.
3.1 Circle Mapping
Since the original grasshopper optimization algorithm cannot incorporate prior information during population initialization, it generates the initial population through random initialization. However, random initialization may result in an uneven distribution of individuals within the search space, thereby degrading the algorithm's solution precision and convergence speed.
The chaotic sequence has the characteristics of non-periodicity, ergodicity, and regularity [38, 39]. Compared with random initialization, the initial grasshopper population produced by chaotic mapping displays higher diversity and a more uniform distribution. The fundamental idea of chaotic mapping is to generate a chaotic sequence according to a mapping relationship on the interval [0, 1] and then transform the chaotic sequence into the search space. There are various types of chaotic mappings, including commonly used ones such as the logistic mapping, tent mapping, and circle mapping. Among these, circle mapping has been found to exhibit superior performance [40]. Therefore, this paper utilizes circle mapping to produce the grasshopper population with the aim of enhancing its diversity. Circle mapping is formulated as follows:
where \(y_{i,j}\) denotes the \(j\)th variable of chaotic sequence i, and \(\bmod (b,a)\) represents the remainder of b divided by a.
After all chaotic sequences are obtained via Eq. (13), they are inversely mapped into the search space according to Eq. (14), yielding the initial positions of the individuals.
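The two-step initialization (chaotic sequence, then inverse mapping into the search bounds) might be sketched as follows. The widely used circle-map form \(y_{k+1} = \mathrm{mod}\big(y_k + 0.2 - \tfrac{0.5}{2\pi}\sin(2\pi y_k),\,1\big)\) is assumed here, since the excerpt omits the equation bodies.

```python
import math
import random

def circle_map_init(n, dim, lb, ub):
    """Generate an n-by-dim initial population: iterate the circle map to get
    chaotic values in [0, 1), then scale each value into [lb, ub]."""
    population = []
    for _ in range(n):
        y = random.random()  # random seed value for this individual's sequence
        row = []
        for _ in range(dim):
            y = math.fmod(y + 0.2 - (0.5 / (2 * math.pi)) * math.sin(2 * math.pi * y), 1.0)
            row.append(lb + y * (ub - lb))  # inverse mapping into the bounds
        population.append(row)
    return population
```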
3.2 Nonlinear Decreasing Coefficient
As per the principle of the original GOA, the decreasing coefficient c plays a vital role in the local exploitation and global exploration of the algorithm [41, 42]. From Eq. (8), the coefficient c decreases linearly as the number of iterations increases. However, this linear variation cannot meet the actual needs of the algorithm at different stages, leading to low convergence accuracy. As a result, this paper proposes a nonlinear decreasing coefficient to replace the linear one, defined as follows:
where T and t stand for the largest and current iteration numbers, respectively.
The rapid decrease of the new coefficient c in the early stage of iteration is beneficial for the algorithm to quickly converge to the vicinity of the optimal value, while its slow decrease in the later stage of iteration allows the algorithm for more detailed exploitation. Therefore, this coefficient can effectively improve the exploration and exploitation capabilities of the algorithm.
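Since the excerpt does not reproduce the paper's exact nonlinear formula, the sketch below uses a hypothetical quadratic schedule that exhibits the stated behavior: a rapid decrease early in the run and a slow decrease later.

```python
def nonlinear_c(t, T, c_max=1.0, c_min=1e-5):
    """Illustrative nonlinear decreasing coefficient (not the paper's exact
    equation): steep near t = 0, flat near t = T.
    c(t) = c_min + (c_max - c_min) * (1 - t / T) ** 2"""
    return c_min + (c_max - c_min) * (1 - t / T) ** 2
```

Under this schedule, the first half of the run removes three quarters of the coefficient's range, while the second half removes only the remaining quarter.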
3.3 Modified Golden Sine Mechanism
The original GOA is prone to local optima in the middle and later iterations owing to its limited local exploitation capability, which leads to poor solution accuracy. Compared with GOA, Gold-SA has an excellent convergence rate and exploitation ability. Therefore, the idea of Gold-SA is incorporated into GOA to change its single position update mode and enhance the local exploitation capability. Specifically, the golden sine mechanism is added during the position update stage to make ordinary grasshopper individuals move toward the target individual in a golden sine manner, reducing the blindness of individual search. In addition, this mechanism promotes information exchange between ordinary individuals and the target individual so that ordinary individuals can sufficiently absorb the position information of the target individual, thereby improving the local exploitation capability of the algorithm.
In MSIGOA, the choice of two position update methods is determined by switching probability \(P_{v}\), where \(P_{v} = 0.5\). When \(P_{v}\) is smaller than the random number \(r\) in [0, 1], the golden sine mechanism is used to update the grasshopper positions. Otherwise, the position is updated in the original way by GOA. Besides, in an effort to further enhance the local exploitation capability, Eq. (15) is added to the current individual position of Eq. (9) as an adaptive weight coefficient. When the position is updated, the adaptive weight coefficient w adjusts the influence weight of the current individual position on the new position according to the number of iterations, thus fully utilizing the current individual position information [43,44,45]. Therefore, the updated formula of the modified golden sine mechanism is as follows:
where \(X_{i} (t)\) and \(\hat{D}(t)\) are the positions of the ith grasshopper and the current optimal grasshopper at the t iteration, respectively, w is the dynamic weight coefficient, k1 and k2 are the golden section coefficients to reduce the search area, and r1 and r2 are random numbers to control the moving distance and direction of grasshopper individuals (specific definitions in Sect. 2.2). Equation (16) employs these coefficients to control the impact of the current individual position on the new position and gradually guide the current individual to approach the best individual.
3.4 Greedy Strategy
As the effect of a position update is uncertain, the quality of the newly generated position might not be as good as the individual's original position. Therefore, greedy selection between the positions before and after the update is carried out to maintain population quality and improve convergence speed. The main idea of the greedy strategy is to compare the fitness value of each grasshopper's new position with that of its original position: if the new position has a better fitness value, the grasshopper's position is updated; otherwise, it remains unchanged. The greedy strategy is described as follows [46]:
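For a minimization problem, the selection rule reduces to a one-line comparison (a sketch, where "better fitness" is taken to mean a lower objective value):

```python
def greedy_select(old_pos, new_pos, objective):
    """Keep the new position only if it improves the objective (minimization);
    otherwise retain the old position."""
    return new_pos if objective(new_pos) < objective(old_pos) else old_pos
```

With the sphere function as the objective, for example, `greedy_select([2, 2], [1, 1], f)` keeps the new point, whereas `greedy_select([1, 1], [2, 2], f)` keeps the old one.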
3.5 Quasi-reflection-Based Learning
In 2005, the opposition-based learning (OBL) mechanism was first suggested by Tizhoosh et al. [47]. Studies show that opposite solutions have a greater chance of approaching the global optimal solution than random ones, and this mechanism can also significantly increase population diversity and quality [48,49,50]. At present, OBL is widely utilized in modifications of SI algorithms to enhance their solution accuracy and convergence speed. Rahnamayan et al. suggested a variation of OBL in 2007, namely the quasi-opposition-based learning (QOBL) mechanism [51]. Research has confirmed that searching for the global optimal solution with quasi-opposite solutions is more efficient than with opposite solutions [52,53,54]. Later, based on the principles of OBL and QOBL, a new variant called the quasi-reflection-based learning (QRBL) mechanism was proposed [55]. Its fundamental principle is to construct a quasi-reflective population by computing the quasi-reflective solution of each individual in the current population, combine the quasi-reflective population with the current population, sort the combined population by fitness, and finally pick the top N individuals with the best fitness values to construct a new population. Fan et al. [56] combined OBL, QRBL, and QOBL with HHO in comparison experiments, and the outcomes reveal that the HHO variant combining the QRBL mechanism performs better in terms of solution accuracy and convergence speed. To improve the diversity and quality of the population and the capability of the algorithm to escape from local optima, this paper adds the QRBL mechanism after the position update phase.
Assuming \(X_{i} = (x_{i,1} ,x_{i,2} ,x_{i,3} , \ldots ,x_{i,d} )\) is an individual in the d-dimensional search space, the definition of its quasi-reflective solution \(X_{i}^{qr} = (x_{i,1}^{qr} ,x_{i,2}^{qr} ,x_{i,3}^{qr} , \ldots ,x_{i,d}^{qr} )\) is as follows:
where xi,j denotes the position of individual i in the jth dimension of the search space, \(u_{j} (t)\) and \(l_{j} (t)\) represent the upper and lower bounds of the population dynamic boundary at the t iteration, and \({\text{rand}}\) is a random number in [0, 1].
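Component-wise, the quasi-reflected solution is a uniform draw between the centre of the dynamic bounds and the individual's current position. A sketch, assuming that standard QRBL definition:

```python
import random

def quasi_reflect(x, lb, ub):
    """Quasi-reflected solution of x: each component is drawn uniformly
    between the bound centre c_j = (l_j + u_j) / 2 and x_j."""
    reflected = []
    for xj, lj, uj in zip(x, lb, ub):
        cj = (lj + uj) / 2.0
        lo, hi = min(cj, xj), max(cj, xj)
        reflected.append(random.uniform(lo, hi))
    return reflected
```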
3.6 Specific Steps of MSIGOA
To sum up, the specific steps of MSIGOA are as follows:
Step 1 Set key parameters like the population number N, question dimension Dim, and maximum iteration number T.
Step 2 Perform the chaotic initialization for the grasshopper population according to the circle map** of Eqs. (13) and (14).
Step 3 Calculate the fitness values of all grasshopper individuals and update the position of target individuals.
Step 4 Update the nonlinear decreasing coefficient c.
Step 5 Generate a random number r in the interval [0, 1].
Step 6 If r is smaller than Pv, update the position according to Eqs. (7) and (17).
Step 7 If r is greater than Pv, update the position according to Eqs. (16) and (17);
Step 8 Perform the quasi-reflection learning mechanism according to Eq. (18) to construct a new population.
Step 9 Update the position of the target individuals after calculating the fitness values of all grasshopper individuals.
Step 10 Check whether the stop conditions are met. If the conditions are met, the search is stopped, and the global optimal solution and fitness value are displayed. Otherwise, go to Step 4.
The specific flowchart of MSIGOA is shown in Fig. 2.
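The steps above can be condensed into a runnable skeleton. Everything below is a simplified stand-in for the paper's equations (the GOA pairwise interaction term is replaced by a move toward the target, the nonlinear coefficient uses a hypothetical quadratic schedule, the adaptive weight is illustrative, and QRBL is applied per individual rather than by merging and sorting whole populations), so it shows the control flow of Steps 1–10 rather than the exact MSIGOA.

```python
import math
import random

def msigoa(fobj, dim, lb, ub, n=30, T=200, pv=0.5):
    """Control-flow sketch of MSIGOA (minimization). Every strategy below is
    a simplified stand-in; see the accompanying text."""
    tau = (math.sqrt(5) - 1) / 2
    k1 = -math.pi * tau + math.pi * (1 - tau)   # golden-section coefficients
    k2 = -math.pi * (1 - tau) + math.pi * tau

    # Step 2: circle-map chaotic initialization (common circle-map form assumed).
    pop, y = [], random.random()
    for _ in range(n):
        row = []
        for _ in range(dim):
            y = math.fmod(y + 0.2 - (0.5 / (2 * math.pi)) * math.sin(2 * math.pi * y), 1.0)
            row.append(lb + y * (ub - lb))
        pop.append(row)
    fit = [fobj(x) for x in pop]
    i_best = min(range(n), key=lambda i: fit[i])
    best_x, best_f = pop[i_best][:], fit[i_best]

    for t in range(1, T + 1):
        # Step 4: nonlinear decreasing coefficient (illustrative quadratic form).
        c = 1e-5 + (1 - 1e-5) * (1 - t / T) ** 2
        for i in range(n):
            # Steps 5-7: pick one of the two position-update modes.
            if random.random() < pv:
                # GOA-like move toward the target (pairwise sum omitted).
                cand = [min(max(xj + c * random.uniform(-1, 1) * (bj - xj), lb), ub)
                        for xj, bj in zip(pop[i], best_x)]
            else:
                # Modified golden sine update with an adaptive weight w.
                r1, r2 = random.uniform(0, 2 * math.pi), random.uniform(0, math.pi)
                w = 1 - t / T
                cand = [min(max(w * xj * abs(math.sin(r1))
                                - r2 * math.sin(r1) * abs(k1 * bj - k2 * xj), lb), ub)
                        for xj, bj in zip(pop[i], best_x)]
            # Greedy strategy (Sect. 3.4): keep the better of old and new.
            fc = fobj(cand)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
            # Step 8: quasi-reflection of the (possibly updated) position.
            centre = (lb + ub) / 2.0
            qr = [random.uniform(min(centre, xj), max(centre, xj)) for xj in pop[i]]
            fq = fobj(qr)
            if fq < fit[i]:
                pop[i], fit[i] = qr, fq
            # Step 9: refresh the target individual.
            if fit[i] < best_f:
                best_x, best_f = pop[i][:], fit[i]
    return best_x, best_f
```

On a sphere function the skeleton converges quickly, though note that the bound centre coincides with the sphere optimum here, which flatters the quasi-reflection step.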
4 Experimental Simulation and Analysis
The experiment described in this paper has two parts: (1) comparing MSIGOA with several novel SI algorithms; (2) comparing MSIGOA with other modified GOAs. The experiment uses 12 classical benchmark functions [57], of which F1–F6 are unimodal and F7–F12 are multimodal; different types of test functions allow the optimization capacity of the algorithms to be examined more effectively. Table 1 displays the names, expressions, optimal values, domains, and dimensions of these test functions. Additionally, a comparison on the CEC2017 test functions was added to the first part of the experiment to make it more challenging.
4.1 Experimental Environment
All algorithms were run on the same hardware and software platform to ensure the fairness and impartiality of the experiments. The operating system is Windows 10 (64-bit), the hardware is an Intel(R) Core(TM) i5-8250U CPU at 1.60 GHz (boosting to 1.80 GHz), and the software is MATLAB R2018b.
4.2 Compare MSIGOA with Other SI Algorithms
Harris hawks optimization (HHO) [58], the dung beetle optimizer (DBO) [59], the butterfly optimization algorithm (BOA) [60], the whale optimization algorithm (WOA) [61], and the grasshopper optimization algorithm (GOA) are the five novel SI algorithms chosen for the first part of the experiment. The population size N and maximum number of iterations T for each algorithm are set to 30 and 500, respectively.
4.2.1 Compare MSIGOA with Other SI Algorithms on Classical Benchmark Functions
In this section, DBO, BOA, HHO, WOA, and GOA are chosen to perform comparative experiments with MSIGOA on the 12 benchmark test functions listed in Table 1 to verify the viability and efficacy of MSIGOA. Under the same conditions, the experimental results were evaluated through four performance indicators: maximum value, mean value, minimum value, and standard deviation. The maximum and minimum values represent the worst and best accuracy to which the algorithm converges, respectively. The mean value denotes the algorithm’s mean convergence accuracy, while the standard deviation signifies its stability and robustness. The statistical results of each algorithm after 30 independent runs on 12 test functions are displayed in Table 2. The convergence curves for each algorithm on 12 test functions are illustrated in Fig. 3.
It is evident from Table 2’s comparison findings between the MSIGOA and the other five algorithms that the MSIGOA is capable of reaching the theoretical best value on the unimodal test functions F1–F4, multimodal test functions F7–F9, and F11, and the standard deviation and mean accuracy are zero. Moreover, on the test function F10, the result of MSIGOA is quite near the theoretical best value, and the standard deviation is also zero. On most test functions, MSIGOA can reach or approach the theoretical best value, indicating that it has powerful local exploitation and global exploration capabilities and can promptly jump out and find the global optimal solution (theoretical best value) when falling into the local optima. At the same time, the standard deviation of MSIGOA on most test functions is zero, which indicates that the algorithm has strong stability and that the optimization results are not accidental.
Compared with these five algorithms, MSIGOA has better mean accuracy, standard deviation, worst accuracy, and best accuracy on the seven functions F1–F5 and F7–F8, and the optimization effect is obviously better than theirs. On the function F10, the mean accuracy, standard deviation, worst accuracy, and best accuracy obtained by MSIGOA are exactly the same as that of HHO and DBO and basically the same as WOA, but still far superior to BOA and GOA. On the functions F9 and F11, MSIGOA, DBO, HHO, and WOA can all reach the theoretical optimal value. Among them, the indicators of HHO and MSIGOA are identical. Although both WOA and DBO can achieve the theoretical best value, WOA sometimes sinks into local optima on F11, while DBO sometimes sinks into the local optima on F9. Therefore, their mean accuracy and other indicators are different from MSIGOA. For the function F6, HHO is superior to MSIGOA in mean accuracy, worst accuracy, and best accuracy, and the standard deviation is basically the same. MSIGOA is basically the same as WOA, DBO, and BOA in mean accuracy, standard deviation, worst accuracy, and best accuracy, but still better than GOA. For the function F12, HHO and DBO are superior to MSIGOA in the mean accuracy, worst accuracy, standard deviation, and optimal accuracy. MSIGOA is superior to WOA, BOA, and GOA.
The convergence curves of MSIGOA and the other five SI algorithms on the 12 benchmark functions are illustrated in Fig. 3 to more intuitively demonstrate the convergence accuracy and speed of each algorithm. The horizontal and vertical axes of the convergence graphs represent the number of iterations and the fitness value, respectively. Figure 3 demonstrates that MSIGOA has the highest convergence accuracy and the fastest convergence speed on the test functions F1–F5 and F7–F8. Notably, for the functions F1 and F3, MSIGOA reaches the theoretical optimal value within 300 iterations. For the test functions F9–F11, setting aside the runs in which DBO and WOA sink into local optima, DBO, WOA, and HHO achieve almost the same convergence accuracy as MSIGOA; however, the convergence curve of MSIGOA drops faster, and its fitness value reaches or approaches the theoretical best value sooner. In addition, even though the optimization effect of MSIGOA on the test functions F6 and F12 is slightly worse than that of DBO and HHO, it is still better than the original GOA in terms of convergence accuracy and speed. In summary, MSIGOA reaches the theoretical optimal value on most test functions, which demonstrates that the algorithm has strong global exploration and local exploitation capabilities and can effectively escape from local optima. Moreover, the convergence curves and standard deviations show that the algorithm has excellent convergence speed and stability. All of this fully demonstrates the effectiveness and feasibility of MSIGOA.
4.2.2 Comparison of MSIGOA with Other SI Algorithms on the CEC2017 Test Functions
This section compares MSIGOA with five algorithms, including WOA and HHO, on the CEC2017 test functions to further validate the effectiveness and feasibility of MSIGOA in resolving intricate problems. The experimental outcomes for MSIGOA and the five compared algorithms are displayed in Table 4, and the convergence curves are illustrated in Fig. 4. CEC2017 comprises 30 test functions (as shown in Table 3), divided into four categories: unimodal F1–F3, multimodal F4–F10, hybrid F11–F20, and composite F21–F30. F2, however, has since been officially removed from the CEC2017 suite. The structure of the CEC2017 test functions is more intricate than that of the classical test functions, and it is difficult to discover the optimal solution [62]. For this reason, all algorithms are run for 1000 iterations, and the dimension of all test functions is set to 10 in consideration of computer performance and time costs.
After running MSIGOA and each of the five compared algorithms 30 times on the 29 test functions, Table 4 displays their standard deviation and mean accuracy. The data in Table 4 show that MSIGOA achieves the second-best accuracy on the 8 functions F5, F9–F10, F16, F19–F20, F23, and F25 and the best accuracy on the 16 functions F1, F3–F4, F7–F8, F11–F15, F17–F18, F22, and F28–F30. Additionally, MSIGOA has better standard deviations than the five compared algorithms on the majority of test functions. This shows that MSIGOA obtains outstanding optimization outcomes on roughly 80% of the CEC2017 test functions and has good stability. The convergence curves of the six algorithms on all CEC2017 test functions are displayed in Fig. 4. The curves show that MSIGOA outperforms the other algorithms in convergence speed on over half of the test functions. For example, on the functions F3, F4, F11, F14, and F22, although the accuracy of the five compared algorithms is close to or approximately equal to that of MSIGOA, the convergence speed of MSIGOA is faster. In summary, MSIGOA achieves better optimization results than the other five algorithms on more than half of the CEC2017 test functions, which sufficiently demonstrates its validity and feasibility in resolving intricate problems.
4.3 Comparison with Other Modified Grasshopper Optimization Algorithms
To validate that MSIGOA is more competitive than other improved GOAs, the following algorithms were chosen for comparison: EGOA [23], IGOA [24], CC–GOA [28], Crazy–GOA [30], and the original GOA. To ensure a fair comparison, the maximum number of iterations Tmax, the population size N, and the problem dimension Dim are set to 500, 30, and 30, respectively. The six algorithms were independently run 30 times on the 12 benchmark functions in Table 1, and the corresponding worst and best convergence accuracy, standard deviation, and mean convergence accuracy were recorded. Table 5 shows the experimental data for each algorithm, and Fig. 5 displays their convergence curves.
Table 5 shows that CC–GOA, IGOA, and Crazy–GOA are superior to GOA in terms of mean accuracy, standard deviation, worst accuracy, and best accuracy on the unimodal test functions F1–F6 and the multimodal test functions F7–F12. EGOA performs slightly worse than GOA on the functions F8 and F9 and slightly better than GOA on the other functions. In terms of improvement, although CC–GOA, IGOA, and Crazy–GOA enhance the mean accuracy and other performance indicators, their gains fall well short of those achieved by MSIGOA. Taking CC–GOA, the best-performing of the three, as an example: its best convergence accuracy on the functions F1–F4 and F7–F8 is far from the theoretical optimum, whereas the best accuracy of MSIGOA on these six functions reaches the theoretical optimum, with a standard deviation and mean accuracy of zero. On the functions F5 and F6, the mean accuracy, standard deviation, worst accuracy, and best accuracy obtained by MSIGOA are essentially the same as those of CC–GOA, IGOA, and Crazy–GOA, but better than those of EGOA and GOA.
The convergence curves of each algorithm on the different test functions are displayed in Fig. 5 to facilitate a more visual analysis. Figure 5 shows that the optimization performance of MSIGOA on all test functions is better than that of the other modified grasshopper algorithms. On the functions F1–F4 and F7–F12, MSIGOA clearly has the highest convergence accuracy and the fastest convergence speed; on the functions F9 and F11 in particular, it reaches the theoretical optimum within 50 iterations. Finally, although the convergence accuracy of MSIGOA on the functions F5 and F6 is essentially the same as that of CC–GOA, IGOA, and Crazy–GOA, its convergence speed is better than that of these three algorithms. In summary, the convergence accuracy and speed of MSIGOA, CC–GOA, IGOA, and Crazy–GOA on the 12 test functions are better than those of GOA, but the performance of MSIGOA is much better than that of the other improved GOAs. This fully demonstrates that MSIGOA is more competitive than other improved GOAs.
4.4 The Engineering Application of MSIGOA
To validate the effectiveness of MSIGOA in resolving practical problems, WOA, HHO, BOA, DBO, and the Aquila optimizer (AO) [63] are selected for comparison with MSIGOA on six engineering problems, including compression spring design, gear train design, and three-bar truss design. The population size and number of iterations of each algorithm are set to 30 and 500, respectively, and the optimal solutions obtained by each algorithm after 30 independent runs are compared.
4.4.1 Compression Spring Design
A common problem in mechanical engineering is the design of compression springs [64], where the objective is to minimize the spring's weight while meeting the requirements of minimal deflection, shear stress, flutter frequency, etc. Figure 6 depicts the general construction of the spring. Three design variables are involved in this problem: the diameter w of the spring wire, the number N of effective coils in the spring, and the average coil diameter W of the spring. The objective function and constraint conditions are as follows:
subject to
where
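The paper's equations for this problem are not reproduced in this extract. As a hedged illustration, the sketch below implements the standard tension/compression spring formulation from the benchmark literature; the variable names `d`, `D`, `N` and the constants (71785, 12566, 5108, 140.45) come from that standard formulation and may not match the paper's notation exactly:

```python
import numpy as np

def spring_weight(x):
    """Objective of the classical compression spring benchmark:
    minimize the spring mass f = (N + 2) * D * d^2, where
    d = wire diameter, D = mean coil diameter, N = number of active coils."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """Standard inequality constraints g_i(x) <= 0 (minimum deflection,
    shear stress, surge frequency, and outer-diameter limit)."""
    d, D, N = x
    g1 = 1 - (D ** 3 * N) / (71785 * d ** 4)
    g2 = (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4)) \
         + 1 / (5108 * d ** 2) - 1
    g3 = 1 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1
    return np.array([g1, g2, g3, g4])
```

A candidate solution is feasible when every entry of `spring_constraints` is non-positive; a metaheuristic such as MSIGOA typically handles violations with a penalty added to `spring_weight`.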
Table 6 presents the compression spring design optimization outcomes for MSIGOA and the five compared algorithms. Comparing the results of each algorithm, it is evident that the design scheme obtained by MSIGOA is the best because it minimizes the mass of the spring. This means MSIGOA outperforms the other algorithms in compression spring design in terms of optimization performance.
4.4.2 Gear Train Design
The gear train design problem is an unconstrained discrete design problem in mechanical engineering whose objective is to determine the number of teeth of four gears so that the transmission ratio is as close as possible to a required value [65]. The transmission ratio is the ratio between the angular velocities of the output and input shafts. Figure 7 depicts the general construction of the gear train. Each gear in the train has a different number of teeth, corresponding to the four design variables \((T_{a} ,T_{b} ,T_{c} ,T_{d} )\). The problem can be described as:
where
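The objective function itself is omitted from this extract. In the formulation commonly used in the literature (a hedged sketch, not necessarily the paper's exact notation), the cost is the squared deviation of the achieved ratio \(T_b T_d / (T_a T_c)\) from the required ratio 1/6.931, with each tooth count an integer in [12, 60]:

```python
def gear_cost(Ta, Tb, Tc, Td):
    """Squared deviation of the achieved gear ratio Tb*Td/(Ta*Tc)
    from the required ratio 1/6.931 (standard benchmark formulation)."""
    return (1.0 / 6.931 - (Tb * Td) / (Ta * Tc)) ** 2
```

The tooth combination (49, 19, 43, 16), often reported as the best known solution, drives this cost down to about 2.7e-12.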
Table 7 presents the gear train design optimization outcomes for MSIGOA and the five compared algorithms. The statistical findings in Table 7 indicate that MSIGOA, WOA, HHO, and AO outperform the other algorithms: the gear trains designed with the tooth combinations they obtain achieve the transmission ratio closest to the required value. It follows that the majority of advanced algorithms can effectively handle unconstrained engineering design problems.
4.4.3 Three-Bar Truss Design
A characteristic nonlinear structural design problem in civil engineering is the design of a three-bar truss structure [66]. The objective is to minimize the volume of the truss by determining the ideal combination of cross-sectional areas of its members while satisfying constraints on buckling, stress, and deflection. Figure 8 depicts the general construction of the three-bar truss. Since the structure is symmetrical, the problem has only two design variables: the cross-sectional areas A1 of bar 1 and A2 of bar 2. The mathematical description of the problem is as follows:
subject to
where
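The equations are omitted from this extract; the sketch below uses the standard three-bar truss formulation from the benchmark literature, assuming its usual parameter values (load P = 2 kN/cm², stress limit σ = 2 kN/cm², bar length L = 100 cm), which may differ from the paper's exact setup:

```python
import math

P, SIGMA, L = 2.0, 2.0, 100.0  # standard benchmark values (assumed)

def truss_volume(A1, A2):
    """Objective: material volume f = (2*sqrt(2)*A1 + A2) * L."""
    return (2 * math.sqrt(2) * A1 + A2) * L

def truss_constraints(A1, A2):
    """Stress constraints g_i <= 0 for the three members."""
    denom = math.sqrt(2) * A1 ** 2 + 2 * A1 * A2
    g1 = (math.sqrt(2) * A1 + A2) / denom * P - SIGMA
    g2 = A2 / denom * P - SIGMA
    g3 = P / (A1 + math.sqrt(2) * A2) - SIGMA
    return [g1, g2, g3]
```

Both areas are typically bounded in (0, 1]; the widely reported best design is approximately A1 ≈ 0.7887, A2 ≈ 0.4082 with a volume near 263.9.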
Table 8 presents the three-bar truss design optimization outcomes for MSIGOA and the five compared algorithms. Comparing the results of each algorithm, it is evident that the design scheme obtained by MSIGOA is the best because it minimizes the volume of the three-bar truss. This means MSIGOA outperforms the compared algorithms in three-bar truss design.
4.4.4 Welded Beam Design
The welded beam design problem aims to minimize the fabrication cost subject to constraints on the buckling load, shear stress, bending stress in the beam, and end deflection of the beam [67]. Figure 9 depicts the general construction of the welded beam. The design variables are the clamped beam length \({\text{l}}\), the weld thickness h, the beam thickness b, and the beam height t. The objective function and constraint conditions are as follows:
subject to
where
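The equations are not reproduced in this extract. As a hedged sketch, the cost function below follows the standard welded beam formulation from the literature (weld cost plus bar material cost); the shear, bending, deflection, and buckling constraint expressions are lengthy and are deliberately omitted here:

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the welded beam in the standard benchmark
    formulation: weld cost 1.10471*h^2*l plus bar cost
    0.04811*t*b*(14.0 + l)."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```

The best designs reported in the literature cluster around (h, l, t, b) ≈ (0.2057, 3.4705, 9.0366, 0.2057) with a cost near 1.725.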
Table 9 presents the welded beam design optimization outcomes for MSIGOA and the five compared algorithms. Comparing the optimization results of all algorithms, it is apparent that the design scheme obtained by MSIGOA is the best one, as its cost is the lowest. Consequently, MSIGOA can handle the welded beam design problem efficiently.
4.4.5 Corrugated Bulkhead Design
The objective of this problem is to minimize the weight of the corrugated bulkhead of the tanker while the corresponding constraints are met [68]. The plate thickness t, depth d, width w, and length l are the design variables for this problem. The problem can be described as:
subject to
where
Table 10 presents the corrugated bulkhead design optimization outcomes for MSIGOA and the five compared algorithms. Comparing the optimization results of each algorithm shows that the design scheme attained by MSIGOA is the best one, since it minimizes the weight of the corrugated bulkhead. This means MSIGOA outperforms the compared algorithms in corrugated bulkhead design.
4.4.6 Tubular Column Design
This engineering problem aims to produce tubular columns with homogeneous sections employing particular materials [69]. In the meantime, it is required that the tubular column can sustain a certain compressive load, and the production cost must be as low as possible. Figure 10 depicts the general construction of the tubular column. The thickness t of the tube and the average diameter d of the column are the design variables for this problem. The specific mathematical expressions are as follows:
subject to
where
Table 11 presents the tubular column design optimization outcomes for MSIGOA and the five compared algorithms. Comparing the optimized outcomes of each algorithm shows that the design scheme produced by MSIGOA is the best one, since it minimizes the production cost of the tubular column. This indicates that MSIGOA performs better than the other algorithms in tubular column design.
5 Conclusions
This paper presents a multi-strategy improved grasshopper optimization algorithm named MSIGOA to overcome the drawbacks of the original grasshopper optimization algorithm, such as slow convergence, vulnerability to trapping in local optima, and low accuracy. Firstly, circle mapping is used to initialize the population, making the population distribution more uniform and more diverse. Secondly, a nonlinear decreasing coefficient replaces the original linear decreasing coefficient to meet the needs of the algorithm at different stages and improve both local exploitation and global exploration capabilities. Thirdly, a modified golden sine mechanism is added during the position update stage to change the single position update mode of GOA and enhance the local exploitation capability. Fourthly, a greedy strategy selects between the new and old positions of each individual to retain the better one and increase the speed of convergence. Finally, a quasi-reflection-based learning mechanism is utilized to construct new populations, improving population diversity and the capability to escape from local optima.
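The quasi-reflection-based learning step summarized above can be sketched as follows, assuming the QRBL definition common in the literature (a quasi-reflected point is sampled uniformly between each individual and the centre of the search interval); the function name `quasi_reflect` and the calling convention are illustrative, not the paper's implementation:

```python
import numpy as np

def quasi_reflect(pop, lb, ub, rng=None):
    """Quasi-reflection-based learning (QRBL): for each individual x,
    sample a point uniformly between x and the interval centre
    c = (lb + ub) / 2. The caller then merges the original and
    quasi-reflected populations and greedily keeps the fitter half."""
    rng = np.random.default_rng() if rng is None else rng
    c = (lb + ub) / 2.0
    return c + rng.random(pop.shape) * (pop - c)
```

In MSIGOA-style usage, the quasi-reflected population is evaluated alongside the current one and the best N individuals survive, which is what helps the swarm jump out of a local optimum.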
To examine the performance of the proposed MSIGOA, comprehensive comparative experiments were conducted with the original GOA and other advanced algorithms on 12 classical test functions and the CEC2017 test functions. The results reveal that MSIGOA has stronger comprehensive optimization capabilities and is superior to the compared algorithms in terms of search ability, convergence speed, and stability. In addition, MSIGOA was applied to six engineering optimization problems: compression spring design, gear train design, three-bar truss design, welded beam design, tubular column design, and corrugated bulkhead design. The experimental outcomes show that MSIGOA achieves the best results on all design problems and can provide better design solutions than the compared algorithms. It is worth noting that MSIGOA does not achieve the best results on every test function; on some functions its results are equal to or slightly worse than those of the compared algorithms. Therefore, MSIGOA still leaves room for further improvement.
In future work, the proposed MSIGOA will be applied to practical problems such as wind power forecasting and UAV path planning. For example, one of our ongoing projects is to apply MSIGOA to ultra-short-term wind power forecasting, and some progress has been made. In addition, another research direction worth exploring is applying MSIGOA to the maximum power point tracking of photovoltaic power generation or further enhancing the performance of MSIGOA by introducing new improvement strategies.
Data Availability
All data generated or analyzed during this study are included in this article.
Abbreviations
- MSIGOA: Multi-strategy improved grasshopper optimization algorithm
- GOA: Grasshopper optimization algorithm
- GA: Genetic algorithm
- TLBO: Teaching–learning-based optimization
- GSA: Gravitational search algorithm
- SI: Swarm intelligence
- PSO: Particle swarm optimization
- ACO: Ant colony optimization
- ABC: Artificial bee colony
- EGOA: Enhanced grasshopper optimization algorithm
- IGOA: Improved grasshopper optimization algorithm
- CC–GOA: Improved grasshopper optimization algorithm combining chaos and Cauchy mutation
- Crazy–GOA: Improved grasshopper optimization algorithm using a crazy factor
- Gold-SA: Golden sine algorithm
- OBL: Opposition-based learning
- QOBL: Quasi-opposition-based learning
- QRBL: Quasi-reflection-based learning
- HHO: Harris hawks optimization
- DBO: Dung beetle optimizer
- BOA: Butterfly optimization algorithm
- WOA: Whale optimization algorithm
- AO: Aquila optimizer
- CSD: Compression spring design
- GTD: Gear train design
- TBTD: Three-bar truss design
- WBD: Welded beam design
- CBD: Corrugated bulkhead design
- TCD: Tubular column design
References
Khalid, O.W., Isa, N.A.M., Sakim, H.A.M.: Emperor penguin optimizer: a comprehensive review based on state-of-the-art meta-heuristic algorithms. Alex. Eng. J. 63, 487–526 (2023)
Holland, J.H.: Genetic algorithms. Sci. Am. 267, 66–72 (1992)
Arqub, O.A., Abo-Hammour, Z., et al.: Solving singular two-point boundary value problems using continuous genetic algorithm. Abstr. Appl. Anal. 2012, 205391 (2012)
Arqub, O.A., Abo-Hammour, Z., et al.: Numerical solution of systems of second-order boundary value problems using continuous genetic algorithm. Inf. Sci. 279, 396–415 (2014)
Rao, R.V., Savsani, V.J., Vakharia, D.P.: Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315 (2011)
Yu, K., Wang, X., Wang, Z.: An improved teaching-learning based optimization algorithm for numerical and engineering optimization problems. J. Intell. Manuf. 27(4), 831–843 (2016)
Ge, F., Hong, L., Shi, L.: An autonomous teaching-learning based optimization algorithm for single objective global optimization. Int. J. Comput. Intell. Syst. 9(3), 506–524 (2016)
Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
Rashedi, E., Nezamabadi-pour, H., Saryazdi, S.: BGSA: binary gravitational search algorithm. Nat. Comput. 9, 727–745 (2010)
Mittal, H., Tripathi, A., Pandey, A.C., Pal, R.: Gravitational search algorithm: a comprehensive analysis of recent variants. Multimedia Tools Appl. 80, 7581–7608 (2021)
Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95-International Conference on Neural Networks, Vol. 4. pp. 1942–1948 (1995)
Shen, Y., Wang, G., Tao, C.: Particle swarm optimization with novel processing strategy and its application. Int. J. Comput. Intell. Syst. 4, 100–111 (2011)
Fan, S.K.S., Zahara, E.: A hybrid simplex search and particle swarm optimization for unconstrained optimization. Eur. J. Oper. Res. 181(2), 527–548 (2007)
Dorigo, M., Birattari, M., Stutzle, T.: Ant colony optimization. IEEE Comput. Intell. Mag. 1(4), 28–39 (2006)
Chandra, B.M., Baskaran, R.: Survey on recent research and implementation of ant colony optimization in various engineering applications. Int. J. Comput. Intell. Syst. 4, 566–582 (2011)
Socha, K., Dorigo, M.: Ant colony optimization for continuous domains. Eur. J. Oper. Res. 185(3), 1155–1173 (2008)
Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39, 459–471 (2007)
Buyukozkan, K., Sarucan, A.: Applicability of artificial bee colony algorithm for nurse scheduling problems. Int. J. Comput. Intell. Syst. 7, 121–136 (2014)
Karaboga, D., Akay, B.: A modified artificial bee colony (ABC) algorithm for constrained optimization problems. Appl. Soft Comput. 11(3), 3021–3031 (2011)
Saremi, S., Mirjalili, S., Lewis, A.: Grasshopper optimisation algorithm: theory and application. Adv. Eng. Softw. 105, 30–47 (2017)
Xu, Z., Heidari, A.A., Kuang, F., et al.: Enhanced Gaussian bare-bones grasshopper optimization: mitigating the performance concerns for feature selection. Expert Syst. Appl. 212, 118642 (2023)
Ye, Y., **ong, S., Dong, C., Chen, Z.: The structural weight design method based on the modified grasshopper optimization algorithm. Multimedia Tools Appl. 81(21), 29977–30005 (2022)
Jalali, S.M.J., Ahmadian, S., Khodayar, M., Khosravi, A., et al.: Towards novel deep neuroevolution models: chaotic levy grasshopper optimization for short-term wind speed forecasting. Eng. Comput. 38, 1787–1811 (2021)
Wu, Z., Shen, D.: Parameter identification of photovoltaic cell model based on improved grasshopper optimization algorithm. Optik 247, 167979 (2021)
Liu, J., Wang, A., Qu, Y., Wang, W.: Coordinated operation of multi-integrated energy system based on linear weighted sum and grasshopper optimization algorithm. IEEE Access. 6, 42186–42195 (2018)
Alhejji, A., Hussein, M.E., Kamel, S., Alyami, S.: Optimal power flow solution with an embedded center-node unified power flow controller using an adaptive grasshopper optimization algorithm. IEEE Access. 8, 119020–119037 (2020)
Bhukya, L., Nandiraju, S.: A novel photovoltaic maximum power point tracking technique based on grasshopper optimized fuzzy logic approach. Int. J. Hydrog. Energy 45(16), 9416–9427 (2020)
Dong, C., Ye, Y., Liu, X., Yang, Y., Guo, W.: The sensitivity design of piezoresistive acceleration sensor in industrial IoT. IEEE Access. 7, 16952–16963 (2019)
Zhao, R., Ni, H., Feng, H., et al.: An improved grasshopper optimization algorithm for task scheduling problems. Int. J. Innov. Comput. Inform. Control. 15(5), 1967–1987 (2019)
Bekana, P., Sarangi, A., Mishra, D., Sarangi, S.K.: Improved grasshopper optimization algorithm using crazy factor. In: Intelligent and Cloud Computing: Proceedings of ICICC 2021, pp. 187–197 (2022)
Yildiz, B.S., Pholdee, N., Bureerat, S., et al.: Robust design of a robot gripper mechanism using new hybrid grasshopper optimization algorithm. Expert. Syst. 38(3), e12666 (2021)
Zhou, H., Ding, Z., Peng, H., Tang, Z., Liang, G., et al.: An improved grasshopper optimizer for global tasks. Complexity 2020, 1–23 (2020)
Huang, J., Li, C., Cui, Z., Zhang, L., Dai, W.: An improved grasshopper optimization algorithm for optimizing hybrid active power filters’ parameters. IEEE Access. 8, 137004–137018 (2020)
Meraihi, Y., Gabis, A.B., Mirjalili, S., Ramdane-Cherif, A.: Grasshopper optimization algorithm: theory, variants, and applications. IEEE Access. 9, 50001–50024 (2021)
Topaz, C.M., Bernoff, A.J., Logan, S., Toolson, W.: A model for rolling swarms of locusts. Eur. Phys. J. Spec. Top. 157, 93–109 (2008)
Ghaderpour, E., Pagiatakis, S.D., Hassan, Q.K.: A survey on change detection and time series analysis with applications. Appl. Sci. 11(13), 6141 (2021)
Tanyildizi, E., Demir, G.: Golden sine algorithm: a novel math-inspired algorithm. Adv. Electr. Comput. Eng. 17(2), 71–79 (2017)
Wu, Z., Yu, D., Kang, X.: Application of improved chicken swarm optimization for MPPT in photovoltaic system. Optim. Control. Appl. Meth. 39(2), 1029–1042 (2018)
Alatas, B.: Chaotic bee colony algorithms for global numerical optimization. Expert Syst. Appl. 37(8), 5682–5687 (2010)
Arora, S., Anand, P.: Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 31, 4385–4405 (2019)
Yue, X., Zhang, H., Yu, H.: A hybrid grasshopper optimization algorithm with invasive weed for global optimization. IEEE Access 8, 5928–5960 (2020)
Zhang, H., Gao, Z., Ma, X., Zhang, J., Zhang, J.: Hybridizing teaching-learning-based optimization with adaptive grasshopper optimization algorithm for abrupt motion tracking. IEEE Access 7, 168575–168592 (2019)
Fan, Q., Huang, H., Chen, Q., Yao, L., Yang, K., Huang, D.: A modified self-adaptive marine predators algorithm: framework and engineering applications. Eng. Comput. 7, 168575–168592 (2022)
Ozbay, F.A., Alatas, B.: Adaptive Salp swarm optimization algorithms with inertia weights for novel fake news detection model in online social media. Multimedia Tools Appl. 80(26), 34333–34357 (2021)
Cao, D., Xu, Y., Yang, Z., Dong, H., Li, X.: An enhanced whale optimization algorithm with improved dynamic opposite learning and adaptive inertia weight strategy. Complex Intell. Syst. 9(1), 767–795 (2023)
Gupta, S., Deep, K.: Improved sine cosine algorithm with crossover scheme for global optimization. Knowl. Based Syst. 165, 374–406 (2019)
Tizhoosh, H.R.: Opposition-based learning: a new scheme for machine intelligence. In: International conference on computational intelligence for modelling, control and automation and international conference on intelligent agents, web technologies and internet commerce (CIMCA-IAWTIC'06), pp. 695–701 (2005)
Abd Elaziz, M., Oliva, D., et al.: An improved opposition-based sine cosine algorithm for global optimization. Expert Syst. Appl. 90, 484–500 (2017)
Ewees, A.A., Abd Elaziz, M., Houssein, E.H.: Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 112, 156–172 (2018)
Abd Elaziz, M., Oliva, D.: Parameter estimation of solar cells diode models by an improved opposition-based whale optimization algorithm. Energy Convers. Manag. 171, 1843–1859 (2018)
Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.: Quasi-oppositional differential evolution. In: 2007 IEEE Congress on Evolutionary Computation, pp. 2229–2236 (2007)
Guha, D., Roy, P.K., Banerjee, S.: Load frequency control of large scale power system using quasi-oppositional grey wolf optimization algorithm. Eng. Sci. Technol. Int. J. 19(4), 1693–1713 (2016)
Sharma, S., Bhattacharjee, S., Bhattacharya, A.: Quasi-oppositional swine influenza model based optimization with quarantine for optimal allocation of DG in radial distribution network. Int. J. Electr. Power Energy Syst. 74, 348–373 (2016)
Shiva, C.K., Mukherjee, V.: A novel quasi-oppositional harmony search algorithm for automatic generation control of power system. Appl. Soft Comput. 35, 749–765 (2015)
Ergezer, M., Simon, D., Du, D.: Oppositional biogeography-based optimization. In: 2009 IEEE International Conference on Systems, Man and Cybernetics, pp. 1009–1014 (2009)
Fan, Q., Chen, Z., **a, Z.: A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems. Soft. Comput. 24, 14825–14843 (2020)
Momin, J., Yang, X.S.: A literature survey of benchmark functions for global optimization problems. J. Math. Model. Numer. Optim. 4(2), 150–194 (2013)
Heidari, A.A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., Chen, H.: Harris hawks optimization: algorithm and applications. Future Gener. Comput. Syst. 97, 849–872 (2019)
Xue, J., Shen, B.: Dung beetle optimizer: a new meta-heuristic algorithm for global optimization. J. Supercomput. 79(7), 7305–7336 (2023)
Arora, S., Singh, S.: Butterfly optimization algorithm: a novel approach for global optimization. Soft. Comput. 23, 715–734 (2019)
Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016)
Wu, G., Mallipeddi, R., Suganthan, P.N.: Problem definitions and evaluation criteria for the CEC 2017 competition on constrained real-parameter optimization. Technical report, National University of Defense Technology, Changsha, PR China; Kyungpook National University, Daegu, South Korea; Nanyang Technological University, Singapore (2017)
Abualigah, L., Yousri, D., Abd Elaziz, M., et al.: Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250 (2021)
Houssein, E.H., Saad, M.R., Ali, A.A., et al.: Multiple strategies boosted ORCA predation algorithm for engineering optimization problems. Int. J. Comput. Intell. Syst. 16(1), 67 (2023)
Pham, V.H.S., Nguyen Dang, N.T., Nguyen, V.N.: Hybrid sine cosine algorithm with integrated roulette wheel selection and opposition-based learning for engineering optimization problems. Int. J. Comput. Intell. Syst. 16(1), 171 (2023)
Hou, P., Liu, J., Ni, F., et al.: Hybrid strategies based seagull optimization algorithm for solving engineering design problems. Int. J. Comput. Intell. Syst. 17, 62 (2024)
Mohapatra, S., Mohapatra, P.: An improved golden jackal optimization algorithm using opposition-based learning for global optimization and engineering problems. Int. J. Comput. Intell. Syst. 16, 147 (2023)
Bayzidi, H., Talatahari, S., Saraee, M., et al.: Social network search for solving engineering optimization problems. Comput. Intell. Neurosci. 2021, 1–32 (2021)
Gandomi, A.H., Yang, X.S., Alavi, A.H.: Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng. Comput. 29, 17–35 (2013)
Acknowledgements
The authors thank the anonymous reviewers for their thoughtful suggestions and comments.
Funding
This research was funded by the Short-term Power Load Forecasting based on Feature Selection and optimized LSTM with DBO which is the fundamental scientific research project of Liaoning Provincial Department of Education (JYTMS20230189), and the Application of hybrid Gray Wolf Algorithm in job shop scheduling problem of the Research Support Plan for Introducing High-Level Talents to Shenyang Ligong University (No. 1010147001131).
Author information
Authors and Affiliations
Contributions
WL was responsible for methodology, writing, reviewing, and supervising. WY participated in data statistics, data analysis, writing, and software. TL, GH, and TR took part in the data analysis, writing, and plotting of the figures.
Corresponding author
Ethics declarations
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Consent to Participate
Informed consent was obtained from all individual participants included in the study.
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Liu, W., Yan, W., Li, T. et al. A Multi-strategy Improved Grasshopper Optimization Algorithm for Solving Global Optimization and Engineering Problems. Int J Comput Intell Syst 17, 182 (2024). https://doi.org/10.1007/s44196-024-00578-6