1 Introduction

Many scientific and engineering fields involve complex optimization problems, such as designing industrial systems (Ma et al. 2011), simulating biological systems (Gábor and Banga 2015), estimating chemical reaction parameters (Fernandes 2005), and predicting protein structures (Aksenov et al. 2005). Traditional optimization methods, such as linear and nonlinear programming, incur high computational costs and rely on strong assumptions that may not hold in practical problems. Researchers have therefore turned to metaheuristic optimization algorithms as an alternative. These algorithms use heuristic information extracted from natural or social phenomena to search for optimal solutions.

Examples of metaheuristic algorithms include differential evolution (DE) (Storn and Price 1997), particle swarm optimization (PSO) (Shi and Eberhart 1998), artificial bee colony (ABC) algorithm (Karaboga and Basturk 2007), harmony search (HS) (Geem et al. 2001), bald eagle strategy (BES) (Alsattar et al. 2020), krill herd (KH) algorithm (Wang et al. 2014), whale optimization algorithm (WOA) (Mirjalili and Lewis 2016), and gravitational search algorithm (GSA) (Rashedi et al. 2009). These algorithms have lower computational costs and can handle uncertainties and complexities that exist in practical problems. They have found applications in diverse areas, such as machine learning, deep learning, fuzzy logic system design, and image enhancement (Mirjalili et al. 2012; Aljarah et al. 2018; Junior and Yen 2019; Wang and Kumbasar 2019; Yang and Chen 2019; Sun et al. 2015; Tsai et al. 2012; Rakhshani et al. 2016). Due to their superior performance and flexibility, metaheuristic optimization algorithms have become important tools for solving practical problems today. Therefore, further research and development of these algorithms can lead to more efficient, flexible, and practical numerical optimization methods.

The harmony search (HS) algorithm (Geem et al. 2001), proposed by Geem in 2001, is a heuristic algorithm inspired by the process of musical improvisation. HS is simple and easy to implement: it simulates the process of creating music, where the objective function to be minimized is viewed as a piece of music composed of decision variables, each representing a note. During the search, these pieces of music are continuously adjusted under the guidance of the harmony memory to obtain better solutions (the best music). HS has been successfully applied to various real-world optimization problems, including power system scheduling (Vasebi et al. 2007; Karthigeyan et al. 2015; Xia and Wang 2013), signal processing (Guo et al. 2012; Wang et al. 2012; Mohdiwale et al. 2020), neural networks (Gao et al. 2012; Kattan et al. 2010; Lai et al. 2015; Özçalıcı et al. 2022; Zainuddin et al. 2013), and image processing (Ceylan and Taşkın 2016; Li et al. 2023; Shivali et al. 2018). Despite these successes, HS has limitations that affect its effectiveness, including slower convergence and weaker exploitation ability than other algorithms, and its performance depends heavily on the quality of the harmony memory during the search (El-Abd 2013; Ouyang et al. 2015). To address these issues, researchers have proposed modifications to HS that fall into three categories: modifying parameter settings, improving search strategies, and hybridizing HS with other metaheuristic algorithms. Following these three categories, Table 1 summarizes the related work on the HS algorithm over the past 10 years.

Table 1 The previous work and contributions already made

Table 1 shows that parameter-tuning techniques have matured in previous research, whereas improving search strategies and enhancing one algorithm by integrating the strengths of others remain active research topics. This approach of augmenting a particular algorithm by incorporating the strengths of other algorithms into its search strategy is not limited to HS and has proven effective for various other algorithms (Abed-alguni et al. 2021, 2022). However, most existing improvements to HS do not fully exploit the information and experience stored in harmony memory; they typically use only the best or worst harmony, or randomly select a harmony for random fine-tuning to generate new harmonies (Ouyang et al. 2015). It is therefore worth exploring how the harmonies eliminated from harmony memory can be used to guide the search process. Moreover, although guiding other harmonies toward the global best harmony can speed up convergence and improve computational efficiency, it can also cause premature convergence to local optima. It is thus worthwhile to study how to adjust the search domain sensibly to avoid premature convergence and improve computational accuracy. Some researchers have proposed the concept of a dynamic domain, which continuously narrows the search domain as the search proceeds to avoid searching invalid areas (Khalili et al. 2014; Zhu et al. 2020; Zhu and Tang 2021). However, how to adjust the search domain to balance convergence speed and computational accuracy requires further exploration.

To address these limitations of HS and improve its performance, we make enhancements in three areas. First, we modify the parameters HMCR and BW to make them more effective. Second, we design a nonlinear dynamic domain. Finally, we combine HS with the Equilibrium Optimizer (EO) algorithm (Faramarzi et al. 2020a) to propose an improved algorithm called the Equilibrium Optimization-based Harmony Search Algorithm with Nonlinear Dynamic Domains (EO-HS-NDD). Many scholars have found that the particle update strategy of EO has good optimization performance and have combined it with other algorithms, such as the slime mould algorithm, to solve single-objective engineering design problems, multi-objective optimization problems, and the inverse kinematics of robotic arms (Yin et al. 2022a, 2022b; Luo et al. 2023). EO-HS-NDD therefore adopts the concept of the equilibrium pool from EO, a pool containing several of the best solutions found so far. The algorithm randomly selects a solution from this pool as the basis for updating the position of each particle, a mechanism that enhances global search capability while ensuring convergence. Incorporating the equilibrium pool and the EO particle update strategy thus strengthens the global search capability of HS. The main contributions of the EO-HS-NDD algorithm are as follows:

  • (1) We introduce a historical harmony memory to store the harmonies eliminated from harmony memory and the useful information hidden in them. An opposition-based learning strategy is used to generate the initial harmonies for both the historical harmony memory and the harmony memory.

  • (2) The search strategy of the Equilibrium Optimizer is incorporated into the search process to improve computational performance. In each iteration, the four harmonies with the best fitness values in the harmony memory are selected as candidates to construct a superior harmonies pool. With a certain probability, the selected harmonies are updated according to the update rule of the improved EO algorithm, which improves convergence while maintaining global search capability.

  • (3) A new nonlinear dynamic domain that converges using harmony memory information is designed. The size of the search domain is dynamically adjusted according to the current iteration number and the range of harmonies in the harmony memory, which speeds up convergence and reduces wasted search effort in invalid regions.

  • (4) The algorithm parameters are further modified to balance exploration and exploitation. HMCR, PAR, and BW are set to globally adaptive forms, which enhances search ability and avoids premature convergence to local optima.

  • (5) The proposed EO-HS-NDD algorithm, despite its various modifications, does not significantly alter the computational complexity and the basic framework of the classical HS algorithm. As a result, it remains relatively simple to implement.

To validate the performance of the proposed algorithm, it was evaluated on the CEC2017 international benchmark suite for computational intelligence (Wu et al. 2017). CEC test suites have been used in numerous studies to assess optimization algorithms (Zamani et al. 2022, 2021, 2019; Nadimi-Shahraki et al. 2022; Fatahi et al. 2024), and CEC2017 is among the most widely used (Zamani et al. 2022, 2021), so results on this suite give a reliable picture of algorithm performance. Each modified component was tested and evaluated, and comparisons were made with HS, nine recent HS variants, and seven well-known algorithms. The results demonstrate that the proposed algorithm outperforms the others in search ability, convergence speed, accuracy, and robustness. Furthermore, the algorithm's capability to solve real-world optimization problems was verified on 15 CEC2020 real-world optimization problems, where it achieved promising results compared with top-performing algorithms.

The remainder of this paper is organized as follows. Section 2 provides an overview of the standard HS algorithm described in Section 2.1. Then, the EO algorithm is introduced in Section 2.2. Subsequently, the proposed algorithm and its innovative aspects are detailed in Section 3. Following that, Section 4 presents numerical optimization experiments on CEC 2017 and real-world optimization problems, followed by comparisons with several well-known, state-of-the-art algorithms, and an analysis and summary of the experimental results. Finally, Section 5 concludes the entire paper.

2 Preliminaries

This section briefly introduces the two algorithms hybridized in this paper: the HS algorithm and the EO algorithm. Section 2.1 presents the basic principles and framework of the HS algorithm, along with its pseudocode process; Section 2.2 discusses the computational principles and optimization strategies of the EO algorithm, providing the corresponding pseudocode process.

2.1 Harmony Search Algorithm

The Harmony Search (HS) algorithm draws inspiration from the improvisation process of music composition during ensemble playing (Geem et al. 2001). This process involves musicians constantly adjusting the pitch of a note in a musical piece to eventually obtain the best composition. Similarly, the HS algorithm continually adjusts the value of a variable in a solution vector to obtain an optimal solution. The algorithm generates new harmonies and finds better ones by applying three rules, which are adjusted using three parameters: Harmony Memory Considering Rate (HMCR), Pitch Adjustment Rate (PAR), and Bandwidth (BW).

The first rule of the Harmony Search (HS) algorithm involves selecting a decision variable from the harmony memory (HM) and assigning it to the corresponding position in the new harmony. The second rule adjusts the selected decision variable within a maximum bandwidth distance, denoted as BW. The third rule generates decision variables randomly within a specified maximum range. During the generation of new harmonies, the HMCR parameter determines whether to apply the first and second rules or the third rule. Furthermore, if the first two rules are selected, an additional parameter called PAR is used to determine whether to execute the first rule or the second rule. The algorithm is shown in Algorithm 1.
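
To make the three rules concrete, the following minimal NumPy sketch performs one improvisation step of the standard HS algorithm; the sphere objective, the parameter values, and the replace-the-worst memory update are illustrative placeholders rather than the exact settings used later in this paper (whose implementation is in MATLAB).

```python
import numpy as np

def improvise(HM, lb, ub, hmcr=0.9, par=0.3, bw=0.01, rng=np.random.default_rng()):
    """One improvisation step of standard HS: returns a single new harmony."""
    hms, dim = HM.shape
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                      # rule 1: memory consideration (HMCR)
            new[j] = HM[rng.integers(hms), j]
            if rng.random() < par:                   # rule 2: pitch adjustment (PAR, BW)
                new[j] += (2 * rng.random() - 1) * bw
        else:                                        # rule 3: random selection in [lb, ub]
            new[j] = lb + rng.random() * (ub - lb)
    return np.clip(new, lb, ub)

# minimal usage: replace the worst harmony if the new one is better (sphere objective)
f = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
HM = rng.uniform(-100, 100, size=(5, 10))
x_new = improvise(HM, -100, 100, rng=rng)
worst = int(np.argmax([f(x) for x in HM]))
if f(x_new) < f(HM[worst]):
    HM[worst] = x_new
```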

Algorithm 1 Harmony search (HS)

2.2 Equilibrium optimizer algorithm

The EO algorithm, first proposed in 2020, is a method inspired by mass balance equations. It uses the coefficients of the mass balance equation and combines the usual elements of metaheuristic algorithms, such as initial population generation, best-solution tracking, solution updating, and fitness evaluation, to obtain a simple and effective optimizer. In EO, solutions are represented as particles, and a particle's concentration plays the role that position plays in the PSO algorithm. Each solution represents the concentration of a substance in a control volume, and its concentration components correspond to the decision variables. The fitness of each solution is computed and used as the main criterion for comparing solutions.

The EO algorithm updates each solution's concentration through three terms. The first term is a candidate solution randomly selected from a pool of the best solutions found so far, called the equilibrium pool. The second term depends on the concentration difference between the solution and the candidate; it acts directly as a search mechanism and encourages each solution in the population to explore globally. The third term is related to the generation rate and mainly acts as an exploiter or solution refiner, making small adjustments, although it occasionally also contributes to exploration. The EO algorithm is shown in Algorithm 2.
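
For readers unfamiliar with EO, the sketch below shows a plausible single-particle concentration update built from the three terms described above. The parameter values (a1 = 2, a2 = 1, GP = 0.5) and the composition of the equilibrium pool follow the original EO paper, and the code is an illustrative Python sketch under those assumptions, not a reference implementation.

```python
import numpy as np

def eo_update(C, C_pool, it, max_it, a1=2.0, a2=1.0, GP=0.5, rng=np.random.default_rng()):
    """One EO concentration update for a single particle C (1-D array)."""
    dim = C.size
    C_eq = C_pool[rng.integers(len(C_pool))]            # term 1: candidate from the equilibrium pool
    lam = rng.random(dim) + 1e-12                        # turnover rate in (0, 1]; offset avoids /0
    r = rng.random(dim)
    t = (1 - it / max_it) ** (a2 * it / max_it)
    F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)   # exponential term
    r1, r2 = rng.random(), rng.random()
    GCP = 0.5 * r1 if r2 >= GP else 0.0                  # generation-rate control parameter
    G = GCP * (C_eq - lam * C) * F                       # generation rate
    # term 2 explores around C_eq; term 3 refines the solution (control volume V = 1)
    return C_eq + (C - C_eq) * F + (G / lam) * (1 - F)

# usage with a toy equilibrium pool of best-so-far solutions plus their mean
rng = np.random.default_rng(1)
pool = [rng.uniform(-1, 1, 5) for _ in range(4)]
pool.append(np.mean(pool, axis=0))
C_new = eo_update(rng.uniform(-1, 1, 5), pool, it=10, max_it=100, rng=rng)
```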

Algorithm 2 Equilibrium optimizer (EO)

3 The proposed algorithm

The Harmony Search (HS) algorithm is a metaheuristic optimization algorithm inspired by musical improvisation. Although it is effective on many optimization problems, the balance between its exploration and exploitation capabilities is not always ideal. The standard HS algorithm relies on three harmony generation rules, memory consideration, pitch adjustment, and random selection, governed by the Harmony Memory Considering Rate (HMCR) and the Pitch Adjustment Rate (PAR). These rules give the algorithm good exploration capabilities in the early stages but may reduce search efficiency in the later stages. In particular, although the first two rules use the resources in Harmony Memory (HM), they do not fully exploit the directional information hidden in the dynamics of HM during the update process and therefore lack guidance. The third rule can avoid local optima but may lead to ineffective search in the later stages of local convergence.

On the other hand, the Equilibrium Optimizer (EO) algorithm is inspired by the mass balance equation, adjusts the balance between exploration and exploitation through two parameters, and has shown good performance on numerical and simple engineering optimization problems (Faramarzi et al. 2020a). EO uses directional information in the population to guide the search, which can compensate for the shortcomings of HS in exploiting Harmony Memory (HM). Therefore, incorporating the equilibrium pool and particle update strategy from EO into HS can significantly strengthen the global search capability of HS. Specifically, high-quality solutions in the equilibrium pool serve as reference points to guide the HS search, increasing the algorithm's ability to escape local optima and explore new regions of the solution space. Such an update mechanism promotes solution diversity and introduces EO's use of population dynamics, thereby enhancing search efficiency and solution quality. Moreover, introducing the EO particle update strategy gives the harmony memory update of HS more directional guidance. In the standard HS algorithm, harmony memory is updated mainly by random selection or by fine-tuning existing harmonies. With the EO particle update mechanism, HS can consider directional information in the current search space when updating harmony memory, improving efficiency in the local search stage and accelerating convergence. The EO-HS-NDD algorithm also introduces a new nonlinear dynamic convergence domain mechanism, increasing the effectiveness of the random search strategy of the third HS rule.

The following five subsections describe these improvements in detail. The resulting pseudocode for the EO-HS-NDD algorithm is presented in Algorithm 3, and its computational complexity is analyzed at the end of this section.

Algorithm 3 EO-HS-NDD

3.1 The historical harmony memory

Many researchers have observed that better use of the information in HM leads to better harmony updates. However, few have considered that the harmonies eliminated from HM also contain valuable information that can guide the search. To address this, we propose a Historical Harmony Memory (HHM) to store the harmonies eliminated from HM and use it in the update rules to guide the search. In this study, we construct an HHM of the same size as HM and use an opposition-based learning strategy (OLS) to initialize both the Harmony Memory and the Historical Harmony Memory. The specific formulas are shown below:

$$x^{2k}=LB+\left(UB-x^{k}\right)$$
(1)
$$HM^{initial}=\left\{x^{1},x^{2},x^{3},\dots ,x^{HMS}\right\}$$
(2)
$$HHM^{initial}=\left\{x^{HMS+1},x^{HMS+2},x^{HMS+3},\dots ,x^{2HMS}\right\}$$
(3)

Here, \(k=1,2,\dots ,HMS\), and LB and UB are the minimum and maximum values of the variables. After generating the opposite harmonies, we calculate the fitness of all initial harmonies and rank them from best to worst, with the best denoted \({x}_{f}^{1}\) and the worst \({x}_{f}^{2HMS}\).

The first HMS harmonies are stored in the Harmony Memory, and the last HMS harmonies are stored in the Historical Harmony Memory. Unlike the Harmony Memory in standard HS, the harmonies in our Harmony Memory are sorted by fitness, but this does not affect the randomness of selecting harmonies from HM. HM and HHM are matrices of size \(HMS\times D\), where D is the dimension of a harmony. During the computation, HHM stores the harmonies eliminated from HM and keeps them in order, as shown in Fig. 1, which illustrates the update mechanism of HM and HHM. The updates occur after HMS new harmonies have been generated: the current HM, the current HHM, and the new HM (the HM composed of the HMS new harmonies) are combined to form \(HH{M}_{total}\), whose harmonies are sorted in ascending order of fitness. After sorting, the top HMS harmonies become the next generation of HM, denoted \(H{M}_{iter+1}\), and the harmonies from position (HMS + 1) to position 2·HMS become the next generation of HHM, denoted \(HH{M}_{iter+1}\).
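
The following NumPy sketch illustrates the opposition-based initialization of Eqs. (1)-(3) and the HM/HHM update shown in Fig. 1. The function names and the sphere objective are illustrative assumptions, and boundary-handling details of the full algorithm are omitted.

```python
import numpy as np

def obl_initialize(f, lb, ub, hms, dim, rng=np.random.default_rng()):
    """Opposition-based initialization of HM and HHM (sketch of Eqs. 1-3)."""
    X = rng.uniform(lb, ub, size=(hms, dim))
    X_opp = lb + (ub - X)                     # opposite harmonies, Eq. (1)
    pool = np.vstack([X, X_opp])              # 2*HMS candidates
    pool = pool[np.argsort([f(x) for x in pool])]   # best (smallest) to worst
    return pool[:hms].copy(), pool[hms:].copy()     # HM, HHM

def update_memories(f, HM, HHM, HM_new):
    """Merge current HM, HHM, and the newly generated harmonies; keep the best
    HMS as the next HM and the next HMS as the next HHM (Fig. 1)."""
    hms = HM.shape[0]
    total = np.vstack([HM, HHM, HM_new])
    total = total[np.argsort([f(x) for x in total])]
    return total[:hms].copy(), total[hms:2 * hms].copy()

# usage on a sphere objective
f = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
HM, HHM = obl_initialize(f, -100, 100, hms=10, dim=5, rng=rng)
HM_new = rng.uniform(-100, 100, size=(10, 5))
HM, HHM = update_memories(f, HM, HHM, HM_new)
```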

Fig. 1 The updating process of HHM

3.2 The improved Harmony Memory approach

The standard Harmony Search (HS) algorithm initially selects a pitch from the Harmony Memory (HM) at random, and fine-tuning is executed only under the second rule. The fine-tuning probability is usually set very low, so the pitch search is limited by the small number of adjustments. In the improved harmony memory method, the algorithm first decides, with probability \(\beta\), whether the new harmony \({x}^{new}\) is generated from HM or from the Historical Harmony Memory (HHM), where each dimension of \({x}^{new}\) is randomly picked from HM (or HHM) and \(\beta\) decreases with the iteration count; see Eq. (4). The random selection of \({x}^{new}\) is consistent with the first rule of the standard HS algorithm, except that a whole group of pitches is selected at once, which lays the groundwork for integrating the particle update strategy of the Equilibrium Optimizer (EO) algorithm. In a MATLAB implementation, this random selection can be performed with matrix operations instead of loops to speed up the calculation; see Eq. (5). Subsequently, \({x}^{new}\) is adaptively fine-tuned using Gaussian randomness, where the fine-tuning formula is given in Eq. (6) and the adaptive BW is computed by Eq. (7).

$$\beta =0.9-\left(0.9-0.1\right)\times \frac{iter}{{T}_{max}}$$
(4)
$${x}^{new}=\left\{\begin{array}{ll}HM\left(R+\left(linspace\left(1,D,D\right)-1\right)\times HMS\right), & \text{if } rand\le \beta \\ HHM\left(R+\left(linspace\left(1,D,D\right)-1\right)\times HMS\right), & \text{if } rand>\beta \end{array}\right.$$
(5)

In Eq. (4), \(iter\) denotes the current iteration number and \({T}_{max}\) the maximum number of iterations. In Eq. (5), R is a vector of random integers of size \(1\times D\) with values in [1, HMS], \(linspace\left(1,D,D\right)\) is the vector of integers from 1 to D in steps of 1, and D is the dimensionality of the solution. The linear-index construction is valid because MATLAB stores the elements of a two-dimensional matrix in column-major order, from top to bottom and then from left to right.
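
A small NumPy sketch of the dimension-wise selection of Eqs. (4)-(5); the fancy indexing `source[rows, np.arange(dim)]` plays the same role as the MATLAB linear-index expression, and all names here are illustrative.

```python
import numpy as np

def select_new_harmony(HM, HHM, it, max_it, rng=np.random.default_rng()):
    """Dimension-wise random selection from HM or HHM (sketch of Eqs. 4-5)."""
    hms, dim = HM.shape
    beta = 0.9 - (0.9 - 0.1) * it / max_it           # Eq. (4): beta decreases with iteration
    source = HM if rng.random() <= beta else HHM     # choose the memory with probability beta
    rows = rng.integers(hms, size=dim)               # one random row index per dimension
    return source[rows, np.arange(dim)]              # same effect as MATLAB linear indexing

# usage
rng = np.random.default_rng(0)
HM = rng.uniform(-100, 100, (10, 5))
HHM = rng.uniform(-100, 100, (10, 5))
x_new = select_new_harmony(HM, HHM, it=50, max_it=1000, rng=rng)
```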

$${x}^{new}={x}^{new}+Gaussian\left(0,1\right)\times BW$$
(6)
$$BW=B{W}_{max}\times \exp \left(c\times \frac{iter}{{T}_{max}}\right);\quad c=\ln \left(\frac{B{W}_{min}}{B{W}_{max}}\right)$$
(7)

where \({BW}_{min}\) and \({BW}_{max}\) are the minimum and maximum bandwidth settings, and \(Gaussian\left(0,1\right)\) denotes a Gaussian distribution with mean 0 and standard deviation 1. Compared with the original uniform perturbation, Gaussian random numbers are more concentrated around zero yet occasionally produce larger deviations. BW is set as a nonlinear adaptive adjustment, as shown in Fig. 2: it is large in the early stages to allow a wide pitch search, and small in the later stages to improve solution precision and convergence.
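
The sketch below computes the exponentially decaying bandwidth of Eq. (7) and applies the Gaussian fine-tuning of Eq. (6); clipping the adjusted harmony to the variable bounds is an added assumption, since boundary handling is not specified here, and the numeric values are only examples.

```python
import numpy as np

def adaptive_bw(it, max_it, bw_max, bw_min):
    """Exponentially decaying bandwidth, Eq. (7)."""
    c = np.log(bw_min / bw_max)
    return bw_max * np.exp(c * it / max_it)

def gaussian_adjust(x_new, it, max_it, lb, ub, rng=np.random.default_rng()):
    """Gaussian fine-tuning of the selected harmony, Eq. (6); bounds are clipped."""
    bw = adaptive_bw(it, max_it, bw_max=(ub - lb) / 20, bw_min=1e-4)
    return np.clip(x_new + rng.standard_normal(x_new.size) * bw, lb, ub)

# the bandwidth shrinks from BW_max down to BW_min over the run
for it in (0, 500, 1000):
    print(it, adaptive_bw(it, 1000, bw_max=10.0, bw_min=1e-4))
```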

Fig. 2 Adaptive change of BW

3.3 EO-based Superior Harmony Guided Search Strategy

The EO-HS-NDD algorithm improves upon the standard Harmony Search (HS) algorithm by unifying the first two update rules into a new first update rule and making further enhancements, as can be seen from Eqs. (4) to (7). In the second update rule of EO-HS-NDD, we combine the equilibrium pool and search mechanism of the EO algorithm: the best four harmonies in HM and their mean harmony form a superior harmonies pool, constructed as follows:

$$x^{SH(1)}={x}_{1st};\quad x^{SH(2)}={x}_{2nd};\quad x^{SH(3)}={x}_{3rd};\quad x^{SH(4)}={x}_{4th}$$
(8)
$$x^{SH(ave)}=\left(x^{SH(1)}+x^{SH(2)}+x^{SH(3)}+x^{SH(4)}\right)/4$$
(9)
$$SH=\left\{x^{SH(1)},x^{SH(2)},x^{SH(3)},x^{SH(4)},x^{SH(ave)}\right\}$$
(10)

At the same time, a harmony \({x}^{SH}\) is randomly selected from the superior harmonies pool to guide the search of the \({x}^{new}\) produced by the first rule, according to the mass balance strategy. The specific formulas are as follows:

$${x}^{new}={x}^{SH}+\left({x}^{new}-{x}^{SH}\right)\times F+\frac{G}{\lambda }\left(1-F\right)$$
(11)
$$\left\{\begin{array}{l}F={a}_{1}\times sign\left(r-0.5\right)\times \left[{e}^{-\lambda t}-1\right]\\ t={\left(1-\frac{iter}{{T}_{max}}\right)}^{\left({a}_{2}\times \frac{iter}{{T}_{max}}\right)}\end{array}\right.$$
(12)
$$\left\{\begin{array}{l}G={G}_{0}F\\ GCP=\left\{\begin{array}{ll}0.5{r}_{2}, & {r}_{3}\ge GP\\ 0, & {r}_{3}<GP\end{array}\right.\\ {G}_{0}=GCP\left({x}^{SH}-\lambda {x}^{new}\right)\end{array}\right.$$
(13)

Equation (11) represents the harmony generation strategy combined with the EO particle update strategy, and Eqs. (12) and (13) define F and G in Eq. (11). Here, the turnover rate λ is a vector of uniformly distributed random numbers in (0,1]; \(r\), \({r}_{2}\), and \({r}_{3}\) are likewise uniformly distributed random vectors in (0,1]. GP is set to 1 and \({a}_{1}\) to 2. For the setting of \({a}_{2}\), Fig. 3 shows (with fixed values \({x}^{new}=5\) and \({x}^{SH}=0\)) that when \({a}_{2}=0\) the guided search range does not shrink with the number of iterations, and that as \({a}_{2}\) increases the search range shrinks more rapidly with iteration. Setting \({a}_{2}\) too large may therefore cause the entire HM to converge prematurely toward the superior harmonies pool and fall into a local optimum. To reduce this risk, \({a}_{2}\) is set to 0 here, so Eq. (12) for F simplifies to:

Fig. 3 An iterative variation graph of the range of regions searched for different a2 values

$$F={a}_{1}*sign(r-0.5)*\left[{e}^{-\lambda }-1\right]$$
(14)

In Eq. (11), the first term is the randomly selected superior harmony, and the second term is the change applied to the new harmony. The second term is mainly responsible for exploring the global search space to locate promising regions; once such a region is found, the third term helps improve the accuracy of the solution. The key element is the direct difference between the superior pitch and the new pitch, whose large variation benefits the global search. The F computed with Eq. (14) balances the exploration and exploitation abilities of the algorithm: when F is small (e.g., F = 0.05), the new pitch is moved close to the superior pitch, which favors local search; when F is large (e.g., F = 0.9), the new pitch is moved far from the superior pitch, which favors global search. Figure 4 visualizes this operation in one dimension, where \({x}_{j}^{new1}-{x}_{j}^{SH}\) represents the main part of the second term.
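
The following sketch assembles the superior harmonies pool of Eqs. (8)-(10) and applies the EO-guided update of Eqs. (11), (13), and (14) with a2 = 0 and GP = 1 as stated above; the small offset added to λ to avoid division by zero is an implementation assumption, and the sphere objective is only for illustration.

```python
import numpy as np

def superior_pool(HM, fitness):
    """Best four harmonies in HM plus their mean (Eqs. 8-10)."""
    best4 = HM[np.argsort(fitness)[:4]]
    return np.vstack([best4, best4.mean(axis=0)])

def eo_guided_update(x_new, SH, a1=2.0, GP=1.0, rng=np.random.default_rng()):
    """EO-style guidance of a new harmony by a randomly chosen superior harmony,
    following Eq. (11), Eq. (13), and the simplified F of Eq. (14)."""
    dim = x_new.size
    x_sh = SH[rng.integers(len(SH))]
    lam = rng.random(dim) + 1e-12                       # turnover rate in (0, 1]
    r = rng.random(dim)
    F = a1 * np.sign(r - 0.5) * (np.exp(-lam) - 1)      # Eq. (14), a2 = 0
    r2, r3 = rng.random(), rng.random()
    GCP = 0.5 * r2 if r3 >= GP else 0.0                 # with GP = 1 the third term is usually zero
    G = GCP * (x_sh - lam * x_new) * F                  # Eq. (13)
    return x_sh + (x_new - x_sh) * F + (G / lam) * (1 - F)   # Eq. (11)

# usage
rng = np.random.default_rng(0)
HM = rng.uniform(-100, 100, (10, 5))
fit = np.array([float(np.sum(x ** 2)) for x in HM])
SH = superior_pool(HM, fit)
x_guided = eo_guided_update(rng.uniform(-100, 100, 5), SH, rng=rng)
```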

Fig. 4 Pitch update illustrating \(\lambda\)'s ability to balance exploration and exploitation

3.4 Nonlinear Dynamic Convergence Domains

In the standard HS algorithm, the third rule randomly generates a pitch within the global search domain. As the iterations progress, however, it becomes harder to find an optimal solution through global search: the region covered by the harmony memory keeps shrinking, and the optimal solution most likely lies at the current location and its local surroundings, so full-range search becomes less efficient and effective. Several HS variants, such as GDHS (Chauhan and Yadav 2023), address this issue with dynamic search domain methods, but experiments show that such algorithms tend to shrink the search range quickly and fall into local optima. To address this problem, a new nonlinear dynamically changing search domain strategy is proposed. The strategy works as follows:

Initial:

$${x}^{ub}=UB;{x}^{lb}=LB; {B}^{max}=UB;{B}^{min}=LB$$
(15)

Nonlinear dynamic search domains:

$${B}^{max}=max\left(HM\right)$$
(16)
$${B}^{min}=min\left(HM\right)$$
(17)
$${x}_{j}^{ub}={x}_{j}^{ub}+\left({B}_{j}^{max}-{x}_{j}^{ub}\right)\times {\left(iter/{T}_{max}\right)}^{2}+0.5\times \left({B}_{j}^{max}-{B}_{j}^{min}\right)\times {\left(iter/{T}_{max}\right)}^{2}$$
(18)
$${x}_{j}^{lb}={x}_{j}^{lb}+\left({B}_{j}^{min}-{x}_{j}^{lb}\right)\times {\left(iter/{T}_{max}\right)}^{2}-0.5\times \left({B}_{j}^{max}-{B}_{j}^{min}\right)\times {\left(iter/{T}_{max}\right)}^{2}$$
(19)

Equation (15) gives the initialization of the convergence domain, and Eqs. (18) and (19) give the calculation of its boundaries. Here \(j=1,2,\dots ,D\); \({x}^{ub}\) and \({x}^{lb}\) are the upper and lower boundaries of the convergence domain in each generation, and \({B}^{max}\) and \({B}^{min}\) are the vectors of per-dimension maximum and minimum values in HM. All of them are vectors of size \(1\times D\). \(UB\) and \(LB\) are the initial range values. Equation (18) can also be written as follows:

$${x}_{j}^{ub}=\left(1-{\left(iter/{T}_{max}\right)}^{2}\right)\times {x}_{j}^{ub}+{\left(iter/{T}_{max}\right)}^{2}\times \left(1.5\times {B}_{j}^{max}-0.5\times {B}_{j}^{min}\right)$$
(20)

In Eq. (20), it can be seen that during the early iterations the factor \(1-{\left(iter/{T}_{max}\right)}^{2}\) is much larger than \({\left(iter/{T}_{max}\right)}^{2}\), so the first term dominates, i.e., \({x}_{j}^{ub}\approx \left(1-{\left(iter/{T}_{max}\right)}^{2}\right)\times {x}_{j}^{ub}\). The upper limit therefore decreases gradually from the initial range in a quadratic nonlinear form, declining slowly in the early iterations to avoid premature convergence. In the later iterations the term \(\left(1.5\times {B}_{j}^{max}-0.5\times {B}_{j}^{min}\right)\), built from \({B}_{j}^{max}\) and \({B}_{j}^{min}\) in HM, dominates, so the search range adaptively maintains a certain width based on the spread of the harmonies, providing opportunities to escape from local optima. The same reasoning explains the changes of the lower limit. Figure 5 shows an example of the nonlinear dynamically changing domain on a basic multimodal function with a maximum of 100,000 iterations, clearly demonstrating how the upper and lower limits of the search range evolve.
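
A minimal sketch of the nonlinear dynamic convergence domain of Eqs. (15)-(19); keeping a symmetric 0.5(B^max − B^min) margin on both sides of the harmonies in HM follows the explanation above, and the fixed HM used in the usage example is purely illustrative.

```python
import numpy as np

def update_domain(x_ub, x_lb, HM, it, max_it):
    """Nonlinear dynamic convergence domain update (sketch of Eqs. 16-19)."""
    B_max = HM.max(axis=0)                       # per-dimension maximum in HM, Eq. (16)
    B_min = HM.min(axis=0)                       # per-dimension minimum in HM, Eq. (17)
    w = (it / max_it) ** 2                       # quadratic nonlinear weight
    margin = 0.5 * (B_max - B_min)
    x_ub = x_ub + (B_max - x_ub) * w + margin * w    # Eq. (18)
    x_lb = x_lb + (B_min - x_lb) * w - margin * w    # Eq. (19)
    return x_ub, x_lb

# usage: the domain tightens around the harmonies currently stored in HM
rng = np.random.default_rng(0)
UB, LB, dim = 100.0, -100.0, 5
x_ub, x_lb = np.full(dim, UB), np.full(dim, LB)      # Eq. (15)
HM = rng.uniform(-10, 10, (10, dim))
for it in range(1, 1001):
    x_ub, x_lb = update_domain(x_ub, x_lb, HM, it, 1000)
print(x_ub, x_lb)
```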

Fig. 5 Example of nonlinear dynamic changing domain

3.5 Adaptive dynamic parameter settings

In order to balance the exploration–exploitation trade-off in the algorithm, the values of HMCR and PAR should be well-adjusted. The three improved pitch updating rules mentioned earlier, combined with the dynamic search domain strategy, suggest that small values of HMCR and PAR should be set in the early stages of iteration. This helps the algorithm to focus on using the third rule to increase the diversity of solution vectors and avoid getting trapped in local optima.

As the iteration progresses, HMCR and PAR should gradually increase, shifting the focus to using the first two rules to make Gaussian adjustments and receive superior harmony instruction from the harmony memory. Since both the dynamic search domain and the advanced harmony guidance can accelerate the convergence speed, it is essential to balance the exploration–exploitation trade-off. Therefore, we propose modifying HMCR to the S-type function form and PAR to the nonlinear quadratic function. Figure 6 shows the adaptive changes in HMCR and PAR. Equation (21) is the calculation formula of HMCR, Eq. (22) is the calculation formula of \(\gamma\) in Eq. (21). Equation (23) is the calculation formula of PAR.

Fig. 6 The adaptive changes of HMCR and PAR

$$HMCR=HMC{R}_{min}+\left(HMC{R}_{max}-HMC{R}_{min}\right)\times \frac{1}{1+{e}^{\gamma }}$$
(21)
$$\gamma =\frac{-3}{0.125\times {T}_{max}}\times \left(iter-0.3\times {T}_{max}\right)$$
(22)
$$PAR=PA{R}_{min}+\left(PA{R}_{max}-PA{R}_{min}\right)\times {\left(iter/{T}_{max}\right)}^{2}$$
(23)
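
The two schedules of Eqs. (21)-(23) can be written compactly as below. The γ expression follows the S-shaped form given above, and the default HMCR/PAR bounds are the values chosen in Section 4.1.3; this is an illustrative Python sketch rather than the MATLAB implementation used in the experiments.

```python
import numpy as np

def adaptive_hmcr(it, max_it, hmcr_min=0.5, hmcr_max=0.95):
    """S-shaped HMCR schedule, Eqs. (21)-(22)."""
    gamma = -3.0 / (0.125 * max_it) * (it - 0.3 * max_it)
    return hmcr_min + (hmcr_max - hmcr_min) / (1.0 + np.exp(gamma))

def adaptive_par(it, max_it, par_min=0.3, par_max=0.99):
    """Quadratic PAR schedule, Eq. (23)."""
    return par_min + (par_max - par_min) * (it / max_it) ** 2

# both parameters grow from their minimum toward their maximum over the run
for it in (0, 300, 600, 1000):
    print(it, round(adaptive_hmcr(it, 1000), 3), round(adaptive_par(it, 1000), 3))
```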

3.6 Procedure for calculating EO-HS-NDD and its Computational complexity

The computation process of EO-HS-NDD is similar to that of HS but with improvements. First, the parameters are initialized, and HM and HHM are initialized using OLS. After each iteration, HM, HHM, SH, HMCR, PAR, BW, \(\beta\), \({x}^{ub}\), and \({x}^{lb}\) are updated. Each new harmony is generated according to the three new rules proposed above. Finally, the algorithm checks whether the computation is complete and outputs the optimal solution \({x}_{1st}\). To help readers understand and reproduce the EO-HS-NDD algorithm, this section provides its pseudocode and analyzes its time complexity.

The EO-HS-NDD algorithm integrates the Harmony Search (HS) algorithm and the Equilibrium Optimizer (EO) algorithm, and incorporates historical harmony memory (HHM), an improved harmony memory method, an EO-based superior harmony guidance search strategy, a nonlinear dynamic convergence domain, and an adaptive dynamic parameter setting mechanism to enhance the search efficiency and solution quality of the algorithm. The main computational steps of the algorithm and their time complexity analyses are as follows:

  1. Initialization: At the beginning of the algorithm, the harmony memory (HM) and the historical harmony memory (HHM) are initialized, which includes randomly generating initial harmonies and calculating their fitness values. The computational complexity of this step is \(O\left(HMS\times D\right)\), where HMS is the size of the harmony memory and D is the number of decision variables of the optimization problem.

  2. Harmony sorting: To select superior harmonies from HM and build the superior harmony pool, HM must be sorted. With a common comparison-based sorting algorithm (such as quick sort or merge sort), the complexity of this step is \(O\left(HMS\,{\text{log}}\,HMS\right)\).

  3. Harmony update: The harmony update combines the random selection and adjustment mechanism of HS, the directional guidance search strategy of EO, and the search range adjustment based on the nonlinear dynamic convergence domain. These operations update each harmony in the harmony memory, including adjustments in the D-dimensional space for each harmony, so the complexity of this part is \(O\left(HMS\times D\right)\).

  4. Nonlinear dynamic convergence domain: This mechanism dynamically adjusts the search range based on the maximum and minimum values of the harmonies in the current HM. Since HM must be traversed to determine the extreme values of each dimension, the complexity of this part is \(O\left(HMS\times D\right)\).

  5. Parameter update: This includes parameter initialization, computation of the adaptive parameters, and the updates of HM and HHM, with a computational complexity of \(O\left(1\right)\); the cost of these operations does not grow with the problem size.

In summary, the overall computational complexity of the EO-HS-NDD algorithm is the complexity of the most complex operation in each iteration multiplied by the maximum number of iterations \(({T}_{max})\), and it is \((O\left({T}_{max}\times \left(HMS\times D+HMS{\text{log}}HMS\right)\right))\). This indicates that the computational complexity of the EO-HS-NDD algorithm is mainly determined by the initialization step and the improvisation update step in each iteration. Due to the integration of the features of the EO algorithm and the nonlinear dynamic convergence domain mechanism, the complexity of EO-HS-NDD is slightly higher than that of the standard HS algorithm, but it provides better search capability and diversity of solutions.
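
To tie the steps together, the following highly simplified, self-contained Python skeleton mirrors the per-iteration structure costed above (initialization, sorting, improvisation with EO guidance, memory update, dynamic domain, and parameter schedules). It compresses several details of the full algorithm, for example it updates the memories one harmony at a time rather than in batches of HMS, so it should be read as an illustration of the loop structure, not a faithful reimplementation.

```python
import numpy as np

def eo_hs_ndd_skeleton(f, lb, ub, dim, hms=10, max_it=2000, seed=0):
    """Simplified EO-HS-NDD main loop; illustrative only."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (hms, dim))                      # initialization: O(HMS * D)
    pool = np.vstack([X, lb + (ub - X)])                     # opposition-based candidates
    order = np.argsort([f(x) for x in pool])
    HM, HHM = pool[order][:hms].copy(), pool[order][hms:2 * hms].copy()
    x_ub, x_lb = np.full(dim, float(ub)), np.full(dim, float(lb))
    for it in range(1, max_it + 1):
        fit = np.array([f(x) for x in HM])
        idx = np.argsort(fit)                                # sorting: O(HMS log HMS)
        SH = np.vstack([HM[idx[:4]], HM[idx[:4]].mean(axis=0)])   # superior harmony pool
        hmcr = 0.5 + 0.45 / (1 + np.exp(-3 / (0.125 * max_it) * (it - 0.3 * max_it)))
        par = 0.3 + 0.69 * (it / max_it) ** 2
        beta = 0.9 - 0.8 * it / max_it
        new = np.empty(dim)
        for j in range(dim):                                 # improvisation: O(D)
            if rng.random() < hmcr:
                src = HM if rng.random() <= beta else HHM
                new[j] = src[rng.integers(hms), j]
                if rng.random() < par:                       # EO-guided adjustment (simplified)
                    sh = SH[rng.integers(5), j]
                    F = 2 * np.sign(rng.random() - 0.5) * (np.exp(-rng.random()) - 1)
                    new[j] = sh + (new[j] - sh) * F
            else:                                            # random pitch in the dynamic domain
                new[j] = x_lb[j] + rng.random() * (x_ub[j] - x_lb[j])
        new = np.clip(new, lb, ub)
        worst = idx[-1]
        if f(new) < fit[worst]:                              # memory update (simplified)
            HHM[rng.integers(hms)] = HM[worst]
            HM[worst] = new
        w = (it / max_it) ** 2                               # dynamic domain: O(HMS * D)
        B_max, B_min = HM.max(axis=0), HM.min(axis=0)
        x_ub = x_ub + (B_max - x_ub) * w + 0.5 * (B_max - B_min) * w
        x_lb = x_lb + (B_min - x_lb) * w - 0.5 * (B_max - B_min) * w
    fit = [f(x) for x in HM]
    return HM[int(np.argmin(fit))], float(min(fit))

best_x, best_f = eo_hs_ndd_skeleton(lambda x: float(np.sum(x ** 2)), -100, 100, dim=10)
print(best_f)
```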


4 Experimental results and analysis

In this section, we give a comprehensive evaluation of the EO-HS-NDD algorithm. First, in Section 4.1, we conduct a series of experiments with the EO-HS-NDD algorithm on the well-known CEC2017 international benchmark set and analyze the results. Second, in Section 4.2, we test, compare, and analyze the EO-HS-NDD algorithm and other algorithms on 15 CEC2020 real-world problems. The EO-HS-NDD algorithm was implemented in MATLAB 2015b on a computer running Windows 10 with a 2.76 GHz Intel(R) Xeon(R) processor (36 logical processors) and 160 GB of RAM. All experiments were run on this platform.

4.1 Experiment on numerical optimization problems

In the following sections, we will evaluate the EO-HS-NDD algorithm. In Section 4.1.1, we will introduce the CEC2017 test function set, which will serve as the numerical optimization problems for our experiments. In Section 4.1.2, we will specify the requirements for the dimensions of the optimization functions, the maximum number of iterations, and the desired solution accuracy. In Section 4.1.3, we conduct a parameter sensitivity analysis for EO-HS-NDD to find the optimal parameter settings. Then, in Section 4.1.4, we will test the impact of HMS on the performance of the proposed EO-HS-NDD algorithm. In Section 4.1.5, the population diversity, exploration, and exploitation capabilities of our proposed EO-HS-NDD algorithm are analyzed. In Section 4.1.6, we will investigate the effects of two main strategies (the EO-based superior harmony guided search strategy and the nonlinear dynamic convergence domain) on the performance of the EO-HS-NDD algorithm. Subsequently, in Section 4.1.7, we will conduct a performance comparison analysis of several well-known and advanced variants of HS with the proposed EO-HS-NDD. In Section 4.1.8, we will experimentally compare the proposed EO-HS-NDD with other renowned optimization algorithms at a high dimensionality of 100D to analyze the performance of EO-HS-NDD on high-dimensional problems.

4.1.1 Benchmark functions of CEC 2017

The CEC2017 test set comprises 29 unconstrained numerical optimization test functions, classified into four groups by function type. The first group contains two unimodal functions, the second group contains seven simple basic multimodal functions, the third group contains ten hybrid functions formed as linearly weighted combinations of the first ten basic functions, and the fourth group consists of ten composite functions. All 29 functions are rotated and shifted, and their optimal values are summarized in Table 2. For further information on these functions, please see (Wu et al. 2017).

Table 2 Benchmark functions of CEC2017

4.1.2 Experimental setup and the nonparametric test

In Sections 4.1.3 to 4.1.7, each algorithm was run 51 times independently on the CEC2017 test function set. Each test function was tested in three dimensions, D = 10, 30, and 50, with a search range of [-100, 100] for all test functions. The maximum number of function evaluations was set to \(MaxFE=10000\times D\) according to the benchmark rules, and an error value below \({10}^{-8}\) was treated as 0. In Section 4.1.8 we set \(MaxFE=200000\) and D = 100. The Wilcoxon signed rank test was used for the statistical analysis of the experimental data. In this nonparametric test, the "+" symbol indicates that the overall results of EO-HS-NDD are better than those of the compared algorithm, "-" indicates that they are worse, and "=" indicates that they are similar. Two algorithms are considered significantly different on a test function if the P-value over the 51 runs is less than 0.05, and their relative performance is judged by comparing the mean results. The best, worst, mean, and standard deviation of each group of experimental results are reported, with the best results highlighted in bold.
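
The per-function "+/−/=" decisions described above can be reproduced with a standard statistics library; the sketch below uses SciPy's Wilcoxon signed-rank test and judges the direction by comparing mean errors, which is one reasonable reading of the procedure rather than the authors' exact script, and the run data are synthetic.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_runs(errors_a, errors_b, alpha=0.05):
    """Wilcoxon signed-rank comparison of two algorithms on one function.
    Returns '+', '-', or '=' from the viewpoint of algorithm A (smaller error is better)."""
    stat, p = wilcoxon(errors_a, errors_b)
    if p >= alpha:
        return "="
    return "+" if np.mean(errors_a) < np.mean(errors_b) else "-"

# usage with two hypothetical sets of 51 run errors
rng = np.random.default_rng(0)
a = rng.normal(1.0, 0.2, 51)     # algorithm A errors
b = rng.normal(1.3, 0.2, 51)     # algorithm B errors
print(compare_runs(a, b))
```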

4.1.3 Parameter sensitivity analysis

In EO-HS-NDD, HMCR and PAR are important parameters that affect the harmony generation operation. Because the algorithm needs to accelerate convergence and improve accuracy in the later stages of the computation, the combined EO particle generation strategy should be used as much as possible, so \(HMC{R}_{max}\) and \(PA{R}_{max}\) are both set to large values, 0.95 and 0.99 respectively. The use of the first and third rules in the early stages affects the global search capability, so the settings of \(HMC{R}_{min}\) and \(PA{R}_{min}\) are key, and we perform a sensitivity analysis on these two parameters. One function is selected from each of the unimodal, multimodal, hybrid, and composite groups, namely F1, F5, F16, and F25. On these four functions, the algorithm is run 20 times for each candidate value of the two parameters, and the results are shown as box plots in Fig. 7. The first row shows the results for \(HMC{R}_{max}=0.99, HMC{R}_{min}=[0.1,0.3,0.5,0.7], PA{R}_{max}=0.99, PA{R}_{min}=0.3\); the second row shows the results for \(HMC{R}_{max}=0.99, HMC{R}_{min}=0.5, PA{R}_{max}=0.99, PA{R}_{min}=[0.1,0.3,0.5,0.7]\).

Fig. 7 Parameter analysis box diagram

In the \(HMC{R}_{min}\) sensitivity experiment, it can be observed that when \(HMC{R}_{min}\) is set to 0.5 the algorithm achieves better overall performance on the four types of test functions, together with higher stability and fewer outliers. In the box plots for F5 and F25, the median is the lowest of all the boxes when \(HMC{R}_{min}\) is 0.5, and it also remains small on F1 and F16; in addition, across these four functions, the interquartile range (IQR) at \(HMC{R}_{min}=0.5\) is smaller and there are fewer outliers. Although the performance at \(HMC{R}_{min}=0.5\) is not the best on every test function, it is relatively good and stable when all function types are considered together. As for \(PA{R}_{min}\), the plots indicate that at \(PA{R}_{min}=0.3\) the medians for F1, F5, and F16 are relatively low, and although not the lowest for F25, the IQR is comparatively small. Moreover, on the other three function types, the IQR at \(PA{R}_{min}=0.3\) is smaller than at other values. Over the four test functions, although no single \(PA{R}_{min}\) value performs best in every case, 0.3 typically yields a lower median and a smaller IQR, indicating better performance and stability in most tests. Therefore, through experimentation, we set \(HMC{R}_{min}=0.5\) and \(PA{R}_{min}=0.3\) in EO-HS-NDD.

4.1.4 Evaluating the effect of HMS

In the HS algorithm, the size of HMS plays a crucial role in convergence speed and global search capability. A larger HMS often leads to slower convergence but stronger global search. However, for complex practical problems, convergence speed is a critical consideration when designing a new heuristic algorithm. Hence, selecting an appropriate HMS for the proposed EO-HS-NDD algorithm is crucial, and its impact on performance must be studied. In this experiment, the remaining parameters of EO-HS-NDD are set to \({BW}_{max}=\frac{UB-LB}{20},{BW}_{min}=0.0001,{PAR}_{min}=0.3,{PAR}_{max}=0.99,{HMCR}_{min}=0.5,{HMCR}_{max}=0.95\). Experiments with different HMS values (5, 10, 20, 40, 80, 150, 300) were conducted on the CEC2017 benchmark with D = 30, following the experimental settings of Section 4.1.2, and the results were compared and analyzed. The mean and standard deviation over 51 independent runs are presented in Table 3, with the best results highlighted in bold for better visibility.

The last row of Table 3 summarizes the number of best results achieved by each HMS value over the 29 functions. When HMS = 10, the algorithm achieves the best average values on 14 functions (F3, F5, F6, F7, F8, F9, F10, F11, F14, F16, F21, F22, F27, F28), the largest count among all settings. This indicates that EO-HS-NDD performs relatively well with HMS = 10, which differs from the best HMS value of 5 for HS and most HS variants. The reason is that EO-HS-NDD makes more intensive use of the information in harmony memory and of guidance directions: a smaller HMS may not provide sufficient information, whereas an excessively large HMS slows down convergence. Therefore, HMS = 10 is a suitable value for EO-HS-NDD. The bold values in the table highlight the best results among the different HMS settings.

4.1.5 Population diversity, Exploration and exploitation analysis

The solving capability of EO-HS-NDD largely depends on its ability to maintain population diversity and balance exploration and exploitation during the search process. Through in-depth analysis of these aspects, we can better understand the performance of EO-HS-NDD and its applicability in various optimization tasks.

Population diversity is a key indicator of an algorithm's ability to explore different regions of the solution space. By maintaining diversity among individuals in the population, the EO-HS-NDD algorithm avoids premature convergence to local optima and increases the probability of finding the global optimum. We evaluated population diversity using the population diversity index (Chauhan and Yadav 2023): one function was selected from each of the four function types (F1, F5, F16, F25), EO-HS-NDD was run 20 times independently on each, and the average iteration curves of the population diversity index were plotted (Fig. 8). The experimental parameters were set to \(HMS=10,{BW}_{max}=\frac{UB-LB}{20},{BW}_{min}=0.0001,{PAR}_{min}=0.3,{PAR}_{max}=0.99,{HMCR}_{min}=0.5,{HMCR}_{max}=0.95\). The population diversity index is calculated as in Eq. (24), where the population diversity \({Pop}_{div}\) of each generation is the average Euclidean distance between each harmony in the harmony memory and the mean harmony. Figure 8 shows that for lower-dimensional problems the population diversity decreases rapidly in the early iterations, indicating that the algorithm quickly focuses on promising areas for in-depth search. For high-dimensional problems the decrease is slower, reflecting the greater exploration challenge in high-dimensional spaces and the larger number of iterations needed to focus on promising areas. The slight fluctuations of population diversity in high-dimensional problems indicate that balancing exploration and exploitation is a dynamic process; these fluctuations may be caused by the randomness and diversity-maintenance mechanisms introduced into the algorithm, helping to prevent premature convergence and to explore new potential areas.

Fig. 8 Population diversity analysis map

$${Pop}_{div}(iter)=\frac{1}{n}\sum \nolimits_{i=1}^{n}\sqrt{\sum \nolimits_{j=1}^{D}{(\overline{{pos }_{j}}-{pos}_{i,j}(iter))}^{2}}$$
(24)

The exploration and exploitation capabilities are crucial reflections of an algorithm’s ability to adapt to different optimization tasks. Mathematically, the criteria for exploration–exploitation analysis are provided in Eq. (25) and (26) (Xue and Shen 2023). Figure 9 offers an in-depth illustration of the changes in exploration and exploitation capabilities of the EO-HS-NDD algorithm across different dimensions. For lower-dimensional problems, the algorithm demonstrates strong exploration capabilities in the early stages of iteration, followed by a swift transition to the exploitation phase to refine the potential optimal solutions discovered. In contrast, for higher-dimensional problems, the algorithm maintains a longer exploration phase, due to the high-dimensional space requiring the algorithm to search in a broader area to ensure no potential optimal regions are missed. Notably, even in higher-dimensional problems, the algorithm eventually enhances its exploitation capabilities, indicating that EO-HS-NDD can adapt to problems of various dimensions through its design mechanisms, ultimately focusing on the most promising areas for in-depth search.

Fig. 9 Exploration and exploitation analysis map

$$xpl(iter)=(\frac{{Pop}_{div}(iter)}{max({Pop}_{div}(iter))})\times 100$$
(25)
$$xpt(iter)=(\frac{|{Pop}_{div}(iter)-max({Pop}_{div}(iter))|}{max({Pop}_{div}(iter))})\times 100$$
(26)
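
Equations (24)-(26) are straightforward to compute from the harmony memory. The sketch below evaluates the diversity of a population and converts a diversity history into the exploration (xpl) and exploitation (xpt) percentages; the shrinking random populations serve only as toy data, and the function names are illustrative.

```python
import numpy as np

def pop_diversity(HM):
    """Average Euclidean distance to the mean harmony, Eq. (24)."""
    centre = HM.mean(axis=0)
    return float(np.mean(np.linalg.norm(HM - centre, axis=1)))

def exploration_exploitation(div_history):
    """Percentage measures of exploration (xpl) and exploitation (xpt), Eqs. (25)-(26)."""
    div = np.asarray(div_history, dtype=float)
    d_max = div.max()
    xpl = div / d_max * 100
    xpt = np.abs(div - d_max) / d_max * 100
    return xpl, xpt

# usage: diversity of a shrinking population over a few "iterations"
rng = np.random.default_rng(0)
history = [pop_diversity(rng.uniform(-s, s, (10, 30))) for s in (100, 50, 10, 1)]
xpl, xpt = exploration_exploitation(history)
print(np.round(xpl, 1), np.round(xpt, 1))
```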

Through the analysis of population diversity, along with the exploration and exploitation capabilities of the EO-HS-NDD algorithm, we can observe how EO-HS-NDD dynamically adjusts its search strategies across different problem dimensions. In lower-dimensional problems, the algorithm can quickly transition from broad exploration to in-depth exploitation, while in higher-dimensional problems, more iterations are needed to achieve this transition. This adaptability allows the EO-HS-NDD algorithm to exhibit good adaptability and efficiency in handling a variety of problems. Therefore, by effectively managing population diversity and balancing exploration and exploitation capabilities, the EO-HS-NDD algorithm can effectively address optimization problems of various dimensions.

4.1.6 The effect of EO and NDD strategy

To demonstrate the effectiveness of the EO and NDD strategies proposed in the EO-HS-NDD algorithm, we conducted experiments comparing EO-HS-NDD with HS, EO-HS, and HS-NDD.

The parameters of HS were set to their optimal values from the literature: HMS = 5, HMCR = 0.9, PAR = 0.3, and BW = 0.001. In addition, based on the experiments in Section 4.1.4, the optimal value of HMS for EO-HS-NDD is 10. Therefore, the parameters for EO-HS, HS-NDD, and EO-HS-NDD were set to \({HMS=10,BW}_{max}=\frac{UB-LB}{20},{BW}_{min}=0.0001,{PAR}_{min}=0.3,{PAR}_{max}=0.99,{HMCR}_{min}=0.5,{HMCR}_{max}=0.95\).

Although the HMS value of the standard HS algorithm differs from that of the other algorithms, using the same maximum number of iterations as the termination condition is still reasonable, because HMS only determines the size of the harmony memory at initialization and does not affect the number of algorithm executions. Moreover, the generation rule of HS selects only one harmony from the harmony memory for optimization in each iteration, so HMS only changes the probability of selecting a given harmony. It is therefore fair to use the same maximum number of iterations as the termination condition for comparison, as in other literature. Nevertheless, to further ensure fairness, we also included a standard HS algorithm with HMS = 10. The test function settings and statistical analysis method are the same as in Section 4.1.2.

Table 4 displays the test results for the case where D = 10. From the comparison of symbols in the table, it can be concluded that HS performs worse than EO-HS-NDD in the experimental results of all 29 functions. When HMS = 10, HS still performs worse than EO-HS-NDD in the experimental results of all 29 functions. EO-HS performs similarly to EO-HS-NDD on the unimodal function F3 and the basic multimodal function F9, and outperforms EO-HS-NDD on the basic multimodal function F10 and the composite function F22, while performing worse than EO-HS-NDD on the other 25 functions. HS-NDD performs similarly to EO-HS-NDD on the unimodal function F3 and the basic multimodal function F9, outperforms EO-HS-NDD on the basic multimodal functions F6 and F9, the hybrid function F16, and the composite functions F26 and F28, while performing worse than EO-HS-NDD on the other 23 functions.

Table 5 displays the test results for D = 30. The comparison of symbols in the table shows that HS outperforms EO-HS-NDD on the basic multimodal function F10, the hybrid function F18, and the composite function F24, but performs worse than EO-HS-NDD on the other 26 functions. When HMS = 10, HS performs better than EO-HS-NDD on the basic multimodal function F10 and the composite function F24, while performing worse than EO-HS-NDD on the other 27 functions. EO-HS performs better than EO-HS-NDD on the hybrid function F15, while performing worse than EO-HS-NDD on the other 28 functions. HS-NDD performs better than EO-HS-NDD on the unimodal function F3, the basic multimodal functions F4, F6, F9, and F10, the hybrid function F18, and the composite functions F22, F24, and F28, while performing worse than EO-HS-NDD on the other 20 functions.

Table 6 presents the test results for D = 50. Based on the comparison of symbols in the table, it can be concluded that HS outperforms EO-HS-NDD on the basic multimodal functions F5, 7, 8, 10, the hybrid functions F16, 20, and the composite functions F21, 22, and 24, but performs worse than EO-HS-NDD on the other 20 functions. When HMS = 10, HS performs better than EO-HS-NDD on the basic multimodal functions F5, 7, 8, 10, the hybrid functions F16, 20, and the composite functions F21, 22, but performs worse than EO-HS-NDD on the other 21 functions. EO-HS performs better than EO-HS-NDD on the basic multimodal functions F5, 7, 8, the hybrid functions F13, and the composite function F21, but performs worse than EO-HS-NDD on the other 24 functions. HS-NDD performs better than EO-HS-NDD on the basic multimodal functions F5, 6, 7, 8, 10, the hybrid functions F11, 16, 17, 20, and the composite functions F21, 22, 24, but performs worse than EO-HS-NDD on the other 17 functions.

According to the results of the three groups of experiments, EO-HS-NDD performs better than EO-HS, suggesting that the NDD strategy effectively improves the algorithm's performance. EO-HS-NDD also performs better than HS-NDD, although this improvement is smaller than that over EO-HS, suggesting that incorporating the EO strategy is likewise an effective way to improve the algorithm's performance.

To further illustrate the specific performance improvement of the two strategies, we selected 7 classic functions (F5, 7, 15, 17, 19, 21, 29) from CEC2017 and plotted the convergence diagrams. The values shown in the diagram represent the error values of independent experiments for each function. From Fig. 10, we can conclude that EO-HS-NDD generally outperforms EO-HS and HS-NDD, but EO-HS is better than HS-NDD for the basic multimodal functions F5 and F7. This is because basic multimodal functions have multiple local optima, and HS-NDD is prone to falling into local optima due to the lack of guidance. EO-HS and EO-HS-NDD can better escape from local optima through the guidance of superior harmony. In the hybrid functions F15, 17, 19, and composite functions, EO-HS-NDD achieves the best result in most cases, and EO-HS and HS-NDD also outperform HS and HS_10 in most cases. This indicates that the computational accuracy and search speed of EO-HS and HS-NDD are better than HS and HS_10 in most cases, and the two improved strategies have a positive effect on the performance of the HS algorithm.

Fig. 10 The error value iteration graph of the self-variants

From the above data analysis and diagram comparison, it is evident that the performance improvement is significant. Moreover, the EO-HS-NDD algorithm combining the two strategies can achieve better results and exhibit good robustness and stability. Therefore, the two innovative strategies proposed in this paper (EO and NDD) are effective in improving the performance of the HS algorithm.

4.1.7 Comparison of EO-HS-NDD with HS variants

In this section, we compare the optimized performance of EO-HS-NDD with HS and nine other well-known and advanced variants of the HS algorithm. These nine variants are IHS (Mahdavi et al. 2007), GHS (Omran and Mahdavi 2008), GDHS (Khalili et al. 2014), SGHS (Pan et al. 2010), NGHS (Zou et al. 2010), IGHS (El-Abd 2013), LHS (Ouyang et al. 2017), IMGHS (Gholami et al. 2021), and ID-HS-LDD (Zhu et al. 2020). The specific modifications of each variant can be found in the corresponding literature.

The parameter settings used for EO-HS-NDD are \(HMS=10,{BW}_{max}=\frac{UB-LB}{20},{BW}_{min}=0.0001,{{\text{PAR}}}_{{\text{min}}}=0.3,{{\text{PAR}}}_{{\text{max}}}=0.99,{{\text{HMCR}}}_{{\text{min}}}=0.5,{{\text{HMCR}}}_{{\text{max}}}=0.95\). The parameter settings for the standard HS algorithm and the remaining nine variants were taken from the corresponding literature, as follows:

HS: HMS = 5, HMCR = 0.9, PAR = 0.3, BW = 0.01;

IHS: HMS = 5, HMCR = 0.9, \({{\text{PAR}}}_{{\text{min}}}=0.01\), \({{\text{PAR}}}_{{\text{max}}}=0.99\), \({{\text{BW}}}_{{\text{min}}}=0.0001\), \({{\text{BW}}}_{{\text{max}}}=({\text{UB}}-{\text{LB}})/20\);

GHS: HMS = 5, HMCR = 0.9, \({{\text{PAR}}}_{{\text{min}}}=0.01\), \({{\text{PAR}}}_{{\text{max}}}=0.9\);

GDHS: HMS = 5;

SGHS: HMS = 5, HMCR = 0.98, PAR = 0.9, \({{\text{BW}}}_{{\text{min}}}=0.0005\), \({{\text{BW}}}_{{\text{max}}}=({\text{UB}}-{\text{LB}})/10\), LP = 100;

NGHS: HMS = 5, \({{\text{P}}}_{{\text{m}}}=0.005\);

IMGHS: HMS = 5, HMCR = 0.9, PAR = 0.3, BW = 0.01, \({{\text{P}}}_{{\text{m}}}=0.005\), \(\upmu 1=0.7\), \(\upmu 2=0.3\);

IGHS: HMS = 5, HMCR = 0.9950, PAR = 0.4;

LHS: HMS = 5, HMCR = 0.99;

ID-HS-LDD: HMS = 30, \({{\text{HMCR}}}_{{\text{min}}}=0.3\), \({{\text{HMCR}}}_{{\text{max}}}=0.99\), \({{\text{PAR}}}_{{\text{min}}}=0.3\), \({{\text{PAR}}}_{{\text{max}}}=0.99\).
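For reference, the EO-HS-NDD settings listed above can be collected into a single configuration. The sketch below is merely a transcription of those values into Python; the function name and dictionary layout are illustrative and are not part of the original implementation.

```python
def eo_hs_ndd_params(UB, LB):
    """Transcription of the EO-HS-NDD settings used in the comparison.

    UB and LB are the upper and lower bounds of the search space; all
    other values are copied from the parameter list given above.
    """
    return {
        "HMS": 10,
        "BW_max": (UB - LB) / 20,
        "BW_min": 0.0001,
        "PAR_min": 0.3,
        "PAR_max": 0.99,
        "HMCR_min": 0.5,
        "HMCR_max": 0.95,
    }
```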

To demonstrate the superior optimization performance of EO-HS-NDD, we conducted experiments comparing it with HS and nine other well-known variants of HS on the CEC2017 benchmark function set.

Table 7 presents the experimental results of the 11 algorithms when D = 10. Based on the symbol comparison results obtained by the Mann–Whitney U test in the table, it can be concluded that EO-HS-NDD outperforms HS on all 29 functions. IHS performs similarly to EO-HS-NDD on the unimodal function F3 and the basic multimodal function F9, but is inferior on the other 27 functions. GHS outperforms EO-HS-NDD on the hybrid function F18, but is inferior on the other 28 functions. GDHS performs similarly to EO-HS-NDD on the unimodal function F3 and the basic multimodal function F9, is superior on the basic multimodal functions F4 and F6, the hybrid function F12, and the composite functions F25 and F26, but is inferior on the other 22 functions. SGHS is superior to EO-HS-NDD on the composite function F24, but inferior on the other 28 functions. NGHS is inferior to EO-HS-NDD on all 29 functions. IGHS performs similarly to EO-HS-NDD on the unimodal function F3 and the basic multimodal function F9, is superior on the basic multimodal function F4, the hybrid function F12, and the composite functions F25 and F26, but inferior on the other 23 functions. LHS is superior to EO-HS-NDD on the basic multimodal function F6 and the hybrid function F18, but inferior on the other 27 functions. IMGHS is inferior to EO-HS-NDD on all 29 functions. ID-HS-LDD performs similarly to EO-HS-NDD on the unimodal function F3 and the basic multimodal function F9, is superior on the basic multimodal function F4, the hybrid functions F12, F14, F16, and F18, and the composite functions F21, F24, F25, F26, F28, and F30, but inferior on the other 16 functions.

For D = 30, Table 8 shows the experimental results of the 11 algorithms, from which a few observations can be made. HS performs better than EO-HS-NDD on the basic multimodal function F10, the hybrid function F18, and the composite function F24, but performs worse on the other 26 functions. IHS performs better than EO-HS-NDD on the basic multimodal function F10, the hybrid function F18, and the composite function F24, but performs worse on the other 26 functions. GHS performs better than EO-HS-NDD on the basic multimodal function F10 and the composite function F24, but performs worse on the other 27 functions. GDHS performs better than EO-HS-NDD on the basic multimodal functions F4, F9, and F10, the hybrid functions F13, F14, and F18, and the composite functions F24, F25, F27, and F28, but performs worse on the other 19 functions. SGHS performs worse than EO-HS-NDD on all 29 functions. NGHS performs better than EO-HS-NDD on the basic multimodal function F10, but performs worse on the other 28 functions. IGHS performs better than EO-HS-NDD on the unimodal function F3, the basic multimodal functions F4, F9, and F10, the hybrid functions F14 and F18, and the composite functions F24, F25, F27, and F28, but performs worse on the other 19 functions. LHS performs better than EO-HS-NDD on the basic multimodal functions F4, F6, and F10 and the composite functions F22 and F30, but performs worse on the other 24 functions. IMGHS performs better than EO-HS-NDD on the basic multimodal functions F4 and F10 and the hybrid functions F12 and F18, but performs worse on the other 25 functions. ID-HS-LDD performs better than EO-HS-NDD on the basic multimodal function F4, the hybrid functions F13, F15, and F19, and the composite functions F22, F26, F28, and F30, but performs worse on the other 21 functions.

Table 9 displays the experimental results of the 11 algorithms when D = 50. HS performs better than EO-HS-NDD on the basic multimodal function F10, the hybrid functions F16, F17, and F20, and the composite functions F22 and F24, but worse than EO-HS-NDD on the other 23 functions. IHS performs better than EO-HS-NDD on the basic multimodal function F10, the hybrid functions F16, F17, and F20, and the composite functions F22 and F24, but worse than EO-HS-NDD on the other 23 functions. GHS performs better than EO-HS-NDD on the basic multimodal function F10, the hybrid functions F16 and F20, and the composite function F22, but worse than EO-HS-NDD on the other 25 functions. GDHS performs better than EO-HS-NDD on the unimodal function F3, the basic multimodal function F10, the hybrid functions F16, F17, F18, and F20, and the composite functions F22, F24, and F25, but worse than EO-HS-NDD on the other 20 functions. SGHS performs worse than EO-HS-NDD on all 29 functions. NGHS performs better than EO-HS-NDD on the basic multimodal function F10, the hybrid functions F16 and F20, and the composite function F22, but worse than EO-HS-NDD on the other 25 functions. IGHS performs better than EO-HS-NDD on the unimodal function F3, the basic multimodal function F10, the hybrid functions F14, F16, F17, F18, and F20, and the composite functions F22, F24, and F25, but worse than EO-HS-NDD on the other 19 functions. LHS performs better than EO-HS-NDD on the unimodal function F1, the basic multimodal function F10, the hybrid functions F13, F16, and F20, and the composite function F22, but worse than EO-HS-NDD on the other 23 functions. IMGHS performs better than EO-HS-NDD on the unimodal function F3, the basic multimodal functions F4 and F10, the hybrid functions F12, F16, F18, and F20, and the composite function F22, but worse than EO-HS-NDD on the other 21 functions. ID-HS-LDD performs worse than EO-HS-NDD on all 29 functions.

Unimodal functions have only one global optimum and no local optima, so they are often used to test the exploitation capability of optimization algorithms. In the experiments on the unimodal functions F1 and F3 in dimensions 10, 30, and 50, EO-HS-NDD outperformed HS and the other nine HS variants in terms of overall results, suggesting that EO-HS-NDD has relatively good exploitation capability. EO-HS-NDD uses a dynamic convergence domain strategy that adaptively narrows the search range, accelerates convergence, and improves convergence accuracy. Although a similar dynamic convergence domain is also used in ID-HS-LDD, the superior harmony guidance strategy in EO-HS-NDD can improve the algorithm's exploitation capability even when F is small.
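To make the convergence-domain idea above concrete, the following sketch shows one plausible way a nonlinear dynamic convergence domain could be derived from the current harmony memory; the interpolation form and the exponent p are assumptions made for illustration, not the authors' exact formula.

```python
import numpy as np

def dynamic_domain(HM, t, T, LB, UB, p=2.0):
    """Illustrative nonlinear dynamic convergence domain.

    HM    : (HMS, D) array holding the current harmony memory
    t, T  : current iteration and maximum number of iterations
    LB, UB: lower/upper bounds of the original search space
    p     : assumed nonlinearity exponent (not taken from the paper)

    Returns per-dimension bounds that shrink from the full search range
    toward the span currently covered by the harmony memory.
    """
    hm_low, hm_high = HM.min(axis=0), HM.max(axis=0)
    w = (1.0 - t / T) ** p              # nonlinear weight: 1 at the start, 0 at the end
    low = w * LB + (1.0 - w) * hm_low
    high = w * UB + (1.0 - w) * hm_high
    return low, high
```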

Basic multimodal functions are commonly used to assess the global search capability of optimization algorithms because of their abundance of local optima. From the summary of the experimental results above, EO-HS-NDD did not perform well on F4 and F10, but outperformed the other 10 algorithms on the other 5 basic multimodal functions in the experiments for dimensions 10, 30, and 50. Overall, therefore, EO-HS-NDD has relatively good optimization capability on basic multimodal functions, indicating better global search ability. EO-HS-NDD has a smaller HMCR value in the early stage of the algorithm, giving more opportunities to generate harmonies randomly in the search domain through the third improvisation rule. In addition, in the early stage harmonies are also selected from the HHM, so convergence slows down and the algorithm is less prone to premature convergence.

Hybrid functions contain numerous dispersed local optima within the search space, so they are commonly employed to test an algorithm's ability to steer clear of local optima. From the summary of the experimental results above, EO-HS-NDD did not perform as well as some of the other 10 algorithms on F16, F18, and F20, and was inferior to GDHS and IGHS on F13 and F17. However, EO-HS-NDD outperformed each of the other 10 algorithms on at least 5 hybrid functions, so overall it is still the best-performing algorithm for solving the hybrid functions. This is because EO-HS-NDD always retains the opportunity to make large adjustments to a harmony at every stage. In the early stage of the computation, BW has a large value, which widens the generation range of new harmonies. In the later stage, although BW decreases to a small value, the EO-based superior harmony guidance strategy still allows the harmony to be adjusted over a larger range when F takes a large value. These features help EO-HS-NDD take larger steps out of local optima when trapped and find better solutions.
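To make the role of the factor F concrete, the sketch below shows how an EO-style exponential term could steer a single pitch toward a harmony drawn from the superior-harmony pool. The constants a1 and a2 follow the values commonly used in the equilibrium optimizer, and the update form is a simplified assumption that omits EO's generation-rate term; it is not the paper's exact rule.

```python
import numpy as np

def eo_guided_pitch(x_j, x_sup_j, t, T, a1=2.0, a2=1.0):
    """Illustrative EO-style guidance for one pitch (dimension j).

    x_j     : pitch value of the harmony being adjusted
    x_sup_j : corresponding pitch of a harmony from the superior-harmony pool
    t, T    : current iteration and maximum number of iterations
    a1, a2  : exponential-term constants as commonly used in the equilibrium optimizer
    """
    lam, r = np.random.rand(), np.random.rand()
    time_term = (1.0 - t / T) ** (a2 * t / T)                     # EO's decreasing time term
    F = a1 * np.sign(r - 0.5) * (np.exp(-lam * time_term) - 1.0)  # large |F| -> large step
    return x_sup_j + (x_j - x_sup_j) * F                          # pulled toward the superior pitch
```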

Composite functions have different shapes in different regions of the search space, which tests an algorithm's ability to establish a proper balance between exploitation and exploration. The results show that, on average, EO-HS-NDD performs best across the composite functions compared with the other 10 algorithms, and it has a clear advantage in balancing exploration and exploitation. This is because EO-HS-NDD uses adaptive adjustment strategies for the parameters HMCR, PAR, and BW, emphasizing the third improvisation rule in the early stage and the first two rules in the later stage. The parameter β is introduced for adaptive adjustment, so that pitches may come from either the HM or the HHM. Furthermore, the magnitude of F in the EO-based superior harmony guidance strategy provides a further means of adjusting the exploration and exploitation abilities of the algorithm. Specifically, for D = 10 the mean value obtained by EO-HS-NDD is the best on F24, F27, F28, and F29; for D = 30 it is the best on F21, F22, F23, F26, F28, and F29; and for D = 50 it is the best on F23, F25, F26, F27, F28, F29, and F30.
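The following sketch illustrates one possible set of parameter schedules consistent with the qualitative description above: HMCR and PAR grow toward their maxima while BW decays toward its minimum. The linear and exponential forms are assumptions for illustration (the paper's exact formulas are not reproduced here), and cfg is a dictionary of min/max values such as the one sketched earlier.

```python
import numpy as np

def adaptive_parameters(t, T, cfg):
    """Illustrative schedules for HMCR, PAR and BW over a run of T iterations.

    cfg holds HMCR_min/max, PAR_min/max and BW_min/max. HMCR and PAR grow
    linearly (shifting weight to the first two improvisation rules later on),
    while BW decays exponentially from BW_max to BW_min.
    """
    frac = t / T
    hmcr = cfg["HMCR_min"] + (cfg["HMCR_max"] - cfg["HMCR_min"]) * frac
    par = cfg["PAR_min"] + (cfg["PAR_max"] - cfg["PAR_min"]) * frac
    bw = cfg["BW_max"] * np.exp(np.log(cfg["BW_min"] / cfg["BW_max"]) * frac)
    return hmcr, par, bw
```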

Moreover, for some other functions, including F4, 5, 7, 8, 10, 11, 12, 15, 16, 17, 19, 21, 24, 25, 26, 28, 30 on 10D; F4, 6, 7, 8, 9, 12, 13, 14, 15, 16, 20, 21, 22, 23, 25, 28, 29, 30 on 30D; F4, 5, 6, 7, 11, 12, 13, 23, 25, 26, 27, 28, 29, 30 on 50D, EO-HS-NDD has good global search capabilities and can jump out of local optima in the early stage of computation, demonstrating good performance. These functions mostly consist of basic multimodal functions, hybrid functions, and composite functions with multiple local optima. At the same time, it also shows good exploitation ability in the later stage of computation, and the final result is mostly the best among all algorithms.

The proposed EO-HS-NDD algorithm in this paper has demonstrated superior performance compared to other recent variants of HS algorithms. The algorithm’s ability to balance exploration and exploitation during computation, with a focus on global exploration in the early stages to avoid local optima and guided evolution based on superior harmony in the later stages for accelerated convergence, has led to better results. The EO-based superior harmony guidance strategy also provides a larger adjustment range for each pitch, even when F has a large value, to prevent being trapped in local optima. Overall, the experimental results presented in Tables 7, 8, 9 and Fig. 11 support the effectiveness of the EO-HS-NDD algorithm for solving a wide range of benchmark functions.

Fig. 11 The error value iteration graph

4.1.8 Comparison of EO-HS-NDD with other well-known optimization algorithms

To investigate the performance of EO-HS-NDD on high-dimensional problems and its performance relative to other renowned optimization algorithms, we compared EO-HS-NDD on the 100D CEC2017 functions with eight algorithms: APGSK-IMODE (Mohamed et al. 2021), AGSK (Mohamed et al. 2020), MadDE (Biswas et al. 2021), DBO (Xue and Shen 2023), MPA (Faramarzi et al. 2020b), SSA (Xue and Shen 2020), EO (Faramarzi et al. 2020a), and EOSMA (Yin et al. 2022a). The parameter settings for these algorithms were the optimal settings reported in their respective papers. Each function was run independently 30 times, with the maximum number of function evaluations set to \(MaxFE=200000\). The experimental results were statistically analyzed, and the eight algorithms were compared with EO-HS-NDD using the Wilcoxon signed-rank test. Table 10 presents the mean and standard deviation of the experimental results, Table 11 presents the Wilcoxon signed-rank test results of the algorithms against EO-HS-NDD, Table 12 shows the average running time and ranking of each algorithm on every function, Fig. 11 presents the iteration graphs of the algorithms, and Fig. 12 shows the corresponding box plots.
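The statistical comparison just described can be reproduced with a standard Wilcoxon signed-rank test. The sketch below compares the final error values of two algorithms over the 30 independent runs of one function and returns a '+', '−', or '=' symbol; the tie handling is a simplification, not necessarily the exact convention used to build the tables.

```python
import numpy as np
from scipy.stats import wilcoxon

def compare_runs(errors_a, errors_b, alpha=0.05):
    """Pairwise Wilcoxon signed-rank test over the independent runs of one function.

    errors_a, errors_b : final error values of algorithms A and B (equal length)
    Returns '+' if A is significantly better (lower mean error), '-' if worse,
    and '=' if the difference is not significant at level alpha.
    """
    errors_a, errors_b = np.asarray(errors_a, float), np.asarray(errors_b, float)
    if np.allclose(errors_a, errors_b):       # identical samples: no test needed
        return "="
    _, p = wilcoxon(errors_a, errors_b)
    if p >= alpha:
        return "="
    return "+" if errors_a.mean() < errors_b.mean() else "-"
```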

Fig. 12 Box plots of the error values

From Table 10, it can be observed that the mean and standard deviation of the EO-HS-NDD results on the high-dimensional 100D CEC2017 functions are superior on most functions to those of the other eight algorithms. More specifically, when analyzing the four types of functions, EO-HS-NDD does not perform optimally on the unimodal functions F1 and F3, falling short of EO, but it outperforms the three DE variants and the DBO algorithm. For the simple multimodal functions F4-F10, apart from performing worst on F10, it achieves the best results on the remaining six functions. Simple multimodal functions have multiple local optima and are often used to test global search capability, which verifies that EO-HS-NDD maintains commendable global search capability even in high dimensions.

However, for the hybrid functions F11-F20, the test results of EO-HS-NDD are not very ideal. Although better than MadDE and DBO, it is slightly inferior to APGSK-IMODE, AGSK, EO, and EOSMA. Hybrid functions are linear or non-linear combinations of several basic test functions under specific rules; they are designed to simulate the complexity of real problems and contain multiple local-optimum regions throughout the search space, testing an algorithm's global search ability and its balance between exploration and exploitation. This indicates that the balance between exploration and exploitation of EO-HS-NDD in high dimensions is not as good as that of the EO algorithm, possibly because the complexity of the search space grows with the dimension, where the simple nonlinear convergence domain strategy can disrupt the balance between exploration and exploitation. From Fig. 11, it can also be seen that the convergence curves of EO-HS-NDD on the hybrid functions are relatively slow and stagnant in early exploration, with slower convergence rates in the later stages.

Composition functions consist of multiple basic functions combined through complex rules and have different properties in different regions of the search space. They are usually designed to be more complex and variable, simulating the diversity and unpredictability of real-world problems, and their internal complexity increases significantly with the dimension, testing an algorithm's high-dimensional optimization capability and its maintenance of population diversity. For the composition functions F21-F30, EO-HS-NDD generally outperforms the other eight algorithms, although it performs poorly on F22. This is because EO-HS-NDD retains the distinctive mechanism of HS, assembling a new harmony from pitches taken, dimension by dimension, from different harmonies; this dimension-wise synthesis of new individuals effectively maintains population diversity.

Regarding the robustness of EO-HS-NDD, the box plots in Fig. 12 show that on most functions the median of EO-HS-NDD is small and located at the center of the box. At the same time, the IQR of EO-HS-NDD is among the smallest of all the algorithms, although some outliers appear on individual functions, likely due to falling into local optima in a few runs. Overall, EO-HS-NDD demonstrates commendable stability and robustness.

Regarding the running speed of EO-HS-NDD, the running time statistics in Table 12 indicate that EO-HS-NDD has a longer solution time. Ranking the average running time on each function and then averaging these rankings over the 29 functions, EO-HS-NDD is ranked 6th, faster than EOSMA, APGSK-IMODE, and MadDE but slower than EO, which is consistent with the earlier computational complexity analysis. This is because EO-HS-NDD performs many more dimension-level operations than EO, while also incorporating the nonlinear convergence domain strategy. According to the No Free Lunch theorem, it is difficult to excel in both solution quality and running speed simultaneously, but we will pursue further research in this direction in future studies.

4.2 Application of the proposed algorithm to real-world optimization problems

In this section, we selected a total of 15 real-world optimization problems from IEEE CEC2020, including 2 from Industrial Chemical Processes, 2 from Process Synthesis and Design Problems, 5 from Mechanical Engineering Problems, 2 from Power System Problems, 2 from Power Electronic Problems, and 2 from Livestock Feed Ration Optimization, to investigate the effectiveness of the EO-HS-NDD algorithm (Kumar et al. 2020a). We compare EO-HS-NDD with ten algorithms: SASS (Kumar et al. 2020b), sCMAgES (Kumar et al. 2020c), COLSHADE (Gurrola-Ramos et al. 2020), EnMODE (Sallam et al. 2020), ID-HS-LDD, HS, APGSK-IMODE, AGSK, FCHA (Akhmedova and Stanovov 2020), and DEQL (Kizilay et al. 2020). It is important to note that these evaluations are conducted following the guidelines provided for CEC2020. Detailed descriptions of the real-world optimization problems can be found in the CEC2020 literature; a general description of the problems used here is given in Table 13. The results of these evaluations are listed in Table 14.

4.2.1 The constraint-handling techniques

In this context, the cumulative constraint-handling method (Elsayed et al. 2018) is applied to EO-HS-NDD, HS, ID-HS-LDD, AGSK, and APGSK-IMODE. This technique first sorts the constraints according to their average degree of constraint violation, and the algorithm then starts with a subset of the constraints instead of considering all constraints from the beginning. As new subsets of constraints are added, the algorithm attempts to reach the feasible region of both the previously considered and the newly added subsets. This process continues until all constraints are considered and the final feasible region is reached.
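As a minimal illustration of the ordering step of this cumulative scheme, the sketch below sorts constraints by their average violation over an initial population and activates them in cumulative subsets; the sort direction, the number of stages, and how long each stage runs are illustrative choices rather than details specified by the description above.

```python
import numpy as np

def constraint_schedule(violations, n_stages):
    """Build cumulative subsets of constraint indices.

    violations : (pop_size, n_constraints) array of constraint-violation
                 degrees measured on an initial population
    n_stages   : number of cumulative stages to create
    Returns a list of index lists; each stage adds the next group of
    constraints (sorted here by increasing average violation) to the
    constraints already being handled.
    """
    order = np.argsort(violations.mean(axis=0))   # assumed: easiest constraints first
    chunks = np.array_split(order, n_stages)
    subsets, active = [], []
    for chunk in chunks:
        active = active + list(chunk)             # cumulatively activate constraints
        subsets.append(list(active))
    return subsets
```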

The selection between any offspring and its parent follows one of the three cases below (a minimal selection sketch is given after the list):

  1. Among two feasible candidate solutions, the one with the better (lower) objective value is preferred, so the fitter feasible solution survives.

  2. A feasible solution is preferred over an infeasible one, ensuring that constraint satisfaction takes priority.

  3. Among two infeasible candidate solutions, the one with the lower constraint violation is chosen, so the degree of constraint violation is minimized.
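These three rules amount to a standard feasibility-based selection. Below is a minimal sketch; the Candidate class and its fields f (objective value, minimized) and cv (total constraint violation, 0 when feasible) are illustrative names, not taken from the original code.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    f: float   # objective value (minimized)
    cv: float  # total constraint violation (0 means feasible)

def select(parent: Candidate, offspring: Candidate) -> Candidate:
    """Apply the three selection rules listed above."""
    p_feas, o_feas = parent.cv == 0, offspring.cv == 0
    if p_feas and o_feas:                    # rule 1: lower objective wins
        return offspring if offspring.f < parent.f else parent
    if p_feas != o_feas:                     # rule 2: feasible beats infeasible
        return parent if p_feas else offspring
    return offspring if offspring.cv < parent.cv else parent  # rule 3: lower violation wins
```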

4.2.2 Results and analysis of real-world problems

In Table 14, "Mean F" denotes the average objective function value over 25 independent runs, and "Mean V" denotes the average constraint violation over 25 independent runs. Each algorithm's performance is evaluated on the basis of its 25 independent runs on the 15 problems, and the algorithms are scored accordingly: Score1 is based on the best value, Score2 on the mean value, and Score3 on the standard deviation. These scores are then used to rank the algorithms (Suganthan n.d.). Based on the scoring data obtained, we can examine the performance of the EO-HS-NDD algorithm relative to the others in more detail.

  1. SASS: In the CEC2020 competition, the SASS algorithm performed excellently, securing a top position and showing remarkable results in Score1, Score2, and Score3, as well as outstanding mean objective function values (Mean F) and mean constraint violation values (Mean V). EO-HS-NDD could not surpass SASS here; nevertheless, judging from the per-problem results, EO-HS-NDD also performed well and even outperformed SASS on problem RC22, although it was slightly inferior to SASS overall. The exceptional performance of SASS stems from its integration of the popular θ-constraint selection scheme and a gradient-based repair method, which together handle nonlinear and complex constraints effectively. In particular, the gradient-based repair method can turn infeasible solutions into feasible ones, especially when the solutions are close to the boundary of the feasible region. Furthermore, SASS dynamically adjusts the θ-level in each generation to select solutions adaptively, which helps maintain a good balance between exploration and exploitation. These features allow SASS to remain efficient and robust on real-world optimization problems with complex constraints.

  2. sCMAgES: Similar to SASS, sCMAgES performed at a high level across the scoring metrics, making it the second-ranked algorithm in the competition. Although EO-HS-NDD ranks lower than sCMAgES, EO-HS-NDD is superior to it on RC25 and only slightly worse on the other problems.

  3. COLSHADE: COLSHADE also excelled in the CEC2020 competition, ranking third, and achieved excellent scores across the different evaluation criteria, showing strong capability in solving optimization problems. EO-HS-NDD ranked immediately behind COLSHADE and outperformed it on RC10, RC11, and RC22.

  4. EnMODE: While EnMODE secured fourth place in the CEC2020 competition, its scores here trail slightly behind those of EO-HS-NDD. EnMODE performs well under the Score1 criterion but may lag behind EO-HS-NDD in Score2 and Score3.

  5. HS and ID-HS-LDD: These two algorithms are closely related variants of EO-HS-NDD, and neither performs as well as EO-HS-NDD on the CEC2020 real-world problems. They perform worse than EO-HS-NDD on all three scores and rank below it.

  6. AGSK and APGSK-IMODE: Both algorithms were slightly ahead of EO-HS-NDD on CEC2017, but they do not perform as well as EO-HS-NDD on CEC2020. They lag behind EO-HS-NDD on multiple problems and obtain worse overall scores. Although EO-HS-NDD is slightly worse than these two algorithms on single-objective unconstrained problems, it is better than both when the same constraint-handling method is used on constrained real-world problems, which demonstrates the practical value and significance of our research.

  7. FCHA and DEQL: These two algorithms did not rank highly in the CEC2020 competition and also perform poorly here, ranking 11th and 9th, respectively; they fail to obtain high-quality feasible solutions on the 15 selected problems and compare unfavourably with EO-HS-NDD.

In summary, EO-HS-NDD showed strong performance across multiple scoring criteria. Although it fell short of the top three algorithms (SASS, sCMAgES, and COLSHADE) in certain respects, it surpassed the other seven algorithms in the comparison. These results suggest that EO-HS-NDD holds significant potential for solving real-world optimization problems, obtaining feasible solutions on RC10, RC11, RC18, RC20, RC22, and RC25, which provides solid support for its prospects in practical applications.

5 Conclusions

This article introduces EO-HS-NDD, an improved version of the harmony search (HS) algorithm that incorporates the EO algorithm and a nonlinear dynamic convergence domain. EO-HS-NDD makes fuller use of the harmony memory (HM) by introducing the historical harmony memory (HHM) to store eliminated harmonies and by using the superior harmonies pool to exploit the valuable directional information hidden in the HM. To improve search efficiency, EO-HS-NDD employs a nonlinear dynamic convergence domain that adaptively changes the size of the search region according to the dynamic changes of the HM. The algorithm also balances exploration and exploitation by making appropriate dynamic adjustments to the harmony memory consideration rate (HMCR) and the pitch adjustment rate (PAR). Although EO-HS-NDD makes significant changes to the standard HS algorithm, the multiple experiments in this study demonstrate its superior performance in terms of robustness, stability, convergence speed, and accuracy. EO-HS-NDD outperforms other algorithm variants and state-of-the-art HS algorithms, and it outperforms EnMODE, HS, ID-HS-LDD, FCHA, and DEQL on several of the real-world problems.

Through experimental analysis, we not only discovered the advantages of the proposed EO-HS-NDD algorithm but also identified some of its shortcomings. How to reduce the computational complexity and running time of EO-HS-NDD, how to better balance the exploration and exploitation capabilities of the algorithm on high-dimensional hybrid function problems, and how to design an appropriate constraint handling method for EO-HS-NDD to better solve real-world problems are issues we should seriously investigate in the future.

This research can be extended in several directions. Although in this study the number of clusters was treated as an input and the length of the algorithm's decision vector was fixed, future research could treat the number of clusters as an output of the problem. In doing so, the algorithm could be designed to adjust the length of the solution adaptively during the computation, further improving its efficiency and effectiveness on real-world optimization problems. Furthermore, the algorithm could be extended to handle more complex problems involving multiple objectives or constraints. Finally, the computational complexity of the algorithm could be further reduced through the development of parallel or distributed versions.