Abstract
Intelligent optimization algorithms have the advantage of producing feasible solutions to complex real-world problems in polynomial time. Their performance depends on strategy design and parameter configuration, and parameter configuration is the key to achieving computational intelligence and self-organization. In practice, however, parameter configuration relies on the designer's experience, and the tuning process consumes vast resources. To address this issue, this paper proposes a multi-granularity competition-cooperation optimization algorithm with adaptive parameter configuration (MGAP). First, particles are partitioned into groups according to the fitness of their solutions. Second, an inter-group competition-cooperation relationship network is established; guided by this network, each particle performs adaptive learning within and between groups and updates accordingly. Then, reinforcement learning is introduced to train the rules and parameters of this intra- and inter-group adaptive learning, promoting autonomous particle evolution. We conduct comparative experiments against six algorithms on test functions of 10 to 100 dimensions and then present application cases. The comparisons on benchmark functions show that the proposed algorithm is computationally effective and efficient on large-scale optimization problems. Further experiments on a real-world scenario of 270,000 campus consumption records verify the collaborative optimization and collaborative learning performance of our algorithm.
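The pipeline described above (fitness-based grouping, competition-cooperation learning, and adaptive parameter control) can be sketched roughly as follows. This is an illustrative toy, not the paper's actual update equations: the objective function, the learning coefficient `phi`, the leader-following update, and the greedy parent/child replacement are all assumptions, and a simple success-based rule stands in for the reinforcement-learning component.

```python
import random

def sphere(x):
    # Toy objective to minimize: sum of squares (optimum at the origin).
    return sum(v * v for v in x)

def mgap_sketch(obj, dim=10, n_particles=30, n_groups=3, iters=200, seed=0):
    rng = random.Random(seed)
    swarm = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    phi = 0.5  # hypothetical intra-group learning coefficient, adapted below
    for _ in range(iters):
        # Step 1: sort by fitness and split into granularity groups.
        swarm.sort(key=obj)
        size = n_particles // n_groups
        groups = [swarm[g * size:(g + 1) * size] for g in range(n_groups)]
        best_before = obj(swarm[0])
        new_swarm = []
        for group in groups:
            leader = group[0]  # best particle of the group (cooperation target)
            for p in group:
                # Step 2: learn from the group leader (intra-group) and from
                # the global best (inter-group cooperation).
                child = [xi + phi * rng.random() * (li - xi)
                         + 0.1 * rng.random() * (bi - xi)
                         for xi, li, bi in zip(p, leader, swarm[0])]
                # Competition: keep whichever of parent/child is fitter.
                new_swarm.append(min(p, child, key=obj))
        swarm = new_swarm
        # Step 3 (stand-in for reinforcement learning): reward-driven parameter
        # adaptation -- grow phi if the global best improved, shrink it otherwise.
        improved = obj(min(swarm, key=obj)) < best_before
        phi = min(0.9, phi * 1.05) if improved else max(0.1, phi * 0.95)
    return min(swarm, key=obj)

best = mgap_sketch(sphere)
print(sphere(best))
```

Because each particle is replaced only when the child is fitter, the best fitness is monotonically non-increasing, which makes the success-based adaptation of `phi` well defined even in this minimal form.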
Acknowledgements
This work was supported in part by the Key Program of the National Natural Science Foundation of China under Grant No. 62136003, the National Natural Science Foundation of China under Grant Nos. 61772200 and 61772201, Shanghai Pujiang Talent Program under Grant No. 17PJ1401900, Shanghai Economic and Information Commission “Special Fund for Information Development” under Grant No. XX-XXFZ-02-20-2463.
Cite this article
Gao, M., Feng, X., Yu, H. et al. Multi-granularity competition-cooperation optimization algorithm with adaptive parameter configuration. Appl Intell 52, 13132–13161 (2022). https://doi.org/10.1007/s10489-021-02952-9