Abstract
Fireworks algorithm (FWA) is an emerging swarm intelligence algorithm inspired by the phenomenon of fireworks explosion. The number of sparks generated by each firework has a great impact on the algorithm's performance. It is widely accepted that promising fireworks should generate more sparks. However, in many studies, the quality of a firework is judged only by its current fitness value. This work proposes a Learning Automata-based Fireworks Algorithm (LAFWA), which introduces learning automata (LA) to assign sparks for better algorithm performance. Sparks are assigned to fireworks according to a state probability vector, which is updated constantly based on feedback from the environment, so that it accumulates historical information. The probability vector converges as the search proceeds, so the local search ability of the LAFWA becomes strong in the late search stage. Experimental results on the CEC2013 benchmark functions show that the LAFWA outperforms several pioneering FWA variants.
1 Introduction
Fireworks algorithm (FWA) is inspired by the phenomenon of fireworks explosion and proposed by Tan [1, 2]. Fireworks are initialized in solution space randomly in FWA and sparks are generated by the explosion process of fireworks. All fireworks and sparks are regarded as candidate solutions, and the explosion process is considered to be a stochastic search around the fireworks. The original FWA works as follows: N fireworks are initialized randomly in a search space, and their quality (the fitness value is used to represent the quality of fireworks in the original FWA) is evaluated to determine the number of sparks and explosion amplitude for all fireworks. Afterwards, the fireworks explode and generate sparks within their local space. Finally, N candidate fireworks are selected from all the fireworks and sparks as new fireworks of the next generation. The workflow continues until the termination criterion is reached.
Since FWA was proposed in [1], it has attracted a great deal of interest from researchers. FWA has been applied to many real-world optimization problems, including optimizing an anti-spam model [3], solving network reconfiguration [4], solving the path problem of vehicle congestion [5], swarm robotics [6, 7], modern web information retrieval [8], the single-row facility layout problem [9], etc.
At the same time, many studies have attempted to improve the performance of FWA. Zheng proposes the Enhanced Fireworks Algorithm (EFWA) [10], which introduces five modifications to the conventional FWA that eliminate some disadvantages of the original algorithm. Li proposes GFWA [17], which puts forward a simple and efficient mutation operator called the guiding vector. The Adaptive Fireworks Algorithm (AFWA) [11] replaces the amplitude operator of EFWA with a new adaptive amplitude calculated from the fitness value. Based on EFWA, Zheng proposes the Dynamic Search Fireworks Algorithm (dynFWA) [12] as a further improvement; in dynFWA, the firework with the smallest fitness value uses a dynamic explosion amplitude strategy. The variants mentioned above optimize performance by adjusting the explosion amplitude adaptively. [13] proposes a fireworks algorithm based on a loser-out tournament, which also uses an independent selection operator to select fireworks for the next generation.
Learning automata (LA) [14] is a kind of machine learning algorithm that can be used as a general-purpose stochastic optimization tool. A learning automaton maintains a state probability vector in which each component represents the reward probability of an action. The vector is updated through interactions with a stochastic unknown environment: the automaton tries to find the optimal action from a finite set of actions by applying actions to the environment constantly, and the environment returns a reinforcement signal that indicates the relative quality of the selected action. The automaton receives these signals and updates the vector according to its own strategy. When the termination criterion is satisfied, the optimal action has been found. So far, several PSO algorithms combined with LA have been proposed. Hashemi [15] proposes a PSO variant that uses LA to adaptively select the parameters of PSO. A PSO variant that integrates LA in a noisy environment is proposed by Zhang [16]; it uses the unique selection mechanism of LA to allocate re-evaluations adaptively and reduce computing resources.
Since the state probability vector of LA is updated constantly, it accumulates historical information and evaluates the quality of each action. It is therefore more reasonable to apply learning automata to determine the number of sparks of each firework than to use the current fitness value only. On the other hand, the probability vector converges gradually as the search proceeds, which leads to a strong local search ability in the late search stage.
In this paper, a Learning Automata-based Fireworks Algorithm (LAFWA) is proposed. By applying LA to FWA, fireworks obtain reasonable numbers of sparks, which leads to competitive performance: sparks are assigned only to promising fireworks, which brings a strong local search ability.
The rest of this paper is organized as follows. Section 2 reviews the related works of FWA and Learning Automata. Section 3 proposes the LAFWA. Experimental results based on the CEC 2013 benchmark suite are given in Sect. 4 and compared with its peers. Conclusions are drawn in Sect. 5.
2 Related Work
2.1 Fireworks Algorithm
This paper is based on GFWA [17]. In this section, GFWA will be introduced first. Without loss of generality, a minimization problem is considered as the optimization problem in this paper:
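The problem statement, rendered as an image in the source, is simply the unconstrained minimization

```latex
\min_{x} \; f(x), \qquad x \in \mathbb{R}^{D}
```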
where x is a vector in the solution space.
Explosion Strategy. GFWA follows the explosion strategy of dynFWA. In GFWA, the number of explosion sparks of each firework is calculated as follows:
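The formula itself is rendered as an image in the source; based on the conventional FWA spark-allocation rule that dynFWA inherits, it is presumably of the form

```latex
\lambda_i \;=\; \lambda \cdot
  \frac{\max_j f(X_j) \;-\; f(X_i) \;+\; \varepsilon}
       {\sum_{k=1}^{N}\bigl(\max_j f(X_j) - f(X_k)\bigr) \;+\; \varepsilon}
```

where \(\varepsilon\) is a small constant to avoid division by zero.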
where \(\lambda \) is a parameter to control the total number of explosion sparks. A firework with a smaller fitness value generates more sparks according to this formula. In addition, GFWA adopts the dynamic explosion amplitude update strategy of dynFWA. The explosion amplitude of each firework is calculated as follows:
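The amplitude update formula is an image in the source; the dynFWA rule it follows is presumably

```latex
A_i(t) \;=\;
\begin{cases}
  \rho^{+}\, A_i(t-1), & \text{if } f\bigl(X_i(t)\bigr) < f\bigl(X_i(t-1)\bigr) \\
  \rho^{-}\, A_i(t-1), & \text{otherwise}
\end{cases}
```

i.e., the amplitude is amplified after an improving generation and reduced otherwise.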
where \(A_{i}(t)\) and \(X_i(t)\) represent the explosion amplitude and the position of the i-th firework at generation t. \(\rho ^-\in (0,1)\) is the reduction coefficient and \(\rho ^+\in (1,+\infty )\) is the amplification coefficient. Sparks are generated uniformly within a hypercube whose radius is the explosion amplitude and whose center is the firework. Algorithm 1 shows how a firework generates sparks, where D is the dimension and \(B_U\) and \(B_L\) are the upper and lower bounds of the search space, respectively.
![figure a](http://media.springernature.com/lw685/springer-static/image/chp%3A10.1007%2F978-3-030-53956-6_6/MediaObjects/497235_1_En_6_Figa_HTML.png)
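Since Algorithm 1 appears only as an image, a minimal sketch of the spark-generation step in Python may help; the out-of-bounds mapping rule (re-sampling uniformly within the search range) is an assumption, as the exact rule in the figure is not visible.

```python
import numpy as np

def explode(firework, amplitude, n_sparks, lower, upper, rng=None):
    """Generate explosion sparks uniformly inside a hypercube centered
    at `firework` with radius `amplitude` (a sketch of Algorithm 1)."""
    rng = np.random.default_rng() if rng is None else rng
    dim = firework.shape[0]
    # uniform offsets in [-amplitude, amplitude] per dimension
    sparks = firework + rng.uniform(-amplitude, amplitude, size=(n_sparks, dim))
    # assumed mapping rule: out-of-bound coordinates are re-sampled
    # uniformly within [lower, upper]
    out = (sparks < lower) | (sparks > upper)
    sparks[out] = rng.uniform(lower, upper, size=out.sum())
    return sparks
```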
Guiding Vector. A mechanism called guiding vector (GV) is proposed in GFWA. A group of sparks with good quality and another group of sparks with bad quality are utilized to build a guiding vector. The GV guides a firework to move farther. Note that each firework only generates one guiding vector. The GV of i-th firework named \(\varDelta _{i}\) is calculated from its explosion sparks \(s_{i,j}(1\le j \le \lambda _{i})\) as follows:
where \(\sigma \) is a parameter to control the proportion of adopted explosion sparks and \(s_{i,j}\) denotes the spark of the i-th firework with the j-th smallest fitness value. A guiding spark (\(GS_{i}\)) is generated by adding the GV to the i-th firework, as shown in (5).
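Equations (4) and (5) are images in the source; following the GFWA paper [17], they are presumably

```latex
\varDelta_{i} \;=\; \frac{1}{\sigma\lambda_i}
  \left( \sum_{j=1}^{\sigma\lambda_i} s_{i,j}
       \;-\; \sum_{j=\lambda_i-\sigma\lambda_i+1}^{\lambda_i} s_{i,j} \right)
\tag{4}

GS_i \;=\; X_i + \varDelta_{i} \tag{5}
```

i.e., the guiding vector is the mean of the top \(\sigma\lambda_i\) sparks minus the mean of the bottom \(\sigma\lambda_i\) sparks.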
The main process of GFWA is described in Algorithm 2.
![figure b](http://media.springernature.com/lw685/springer-static/image/chp%3A10.1007%2F978-3-030-53956-6_6/MediaObjects/497235_1_En_6_Figb_HTML.png)
2.2 Learning Automata
LA with a variable structure can be represented as a quadruple \(\{\alpha , \beta , P, T\}\), where \(\alpha =\{\alpha _1,\alpha _2,\dots ,\alpha _r\}\) is a set of actions; \(\beta =\{\beta _1,\beta _2,\dots ,\beta _s\}\) is a set of inputs; \({P}=\{p_1, p_2, \dots ,p_r\}\) is a state probability vector of the actions and T is a pursuit scheme to update the state probability vector, \({P}(t+1)=T(\alpha (t), \beta (t) ,{P}(t))\). The most popular pursuit scheme, DP\(_{RI}\), is proposed in [18, 19]; it increases the state probability of the estimated optimal action and decreases the others. The pursuit scheme can be described as follows:
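The update rule is an image in the source; the standard discretized pursuit (reward-inaction) form [18, 19] is

```latex
p_j(t+1) \;=\; \max\bigl(p_j(t) - \varDelta,\, 0\bigr), \quad j \ne i, \\
p_i(t+1) \;=\; 1 - \sum_{j \ne i} p_j(t+1)
```

where \(\varDelta\) is the discretized step size.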
where the optimal action is the i-th action. Another famous pursuit scheme, DGPA, is proposed in [20]; it increases the state probability of the actions with higher reward estimates than the currently chosen action and decreases the others. It can be described as follows:
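This update rule is also an image in the source; up to a final normalization, one common statement of DGPA [20] is

```latex
p_j(t+1) \;=\; \min\bigl(p_j(t) + \tfrac{\varDelta}{k},\, 1\bigr)
  \quad \text{for the } k \text{ actions with higher reward estimates}, \\
p_j(t+1) \;=\; \max\bigl(p_j(t) - \tfrac{\varDelta}{r-k},\, 0\bigr)
  \quad \text{for the remaining actions.}
```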
where i represents the action selected this time and k is the number of actions whose reward estimate is not less than that of the i-th action. Zhang [21] proposes a new pursuit scheme, Last-position Elimination-based Learning Automata (LELA), inspired by a reverse philosophy. Z(t) is the set of actions whose probability is not zero at time t. LELA decreases the state probability of the estimated worst action in Z(t) and increases the others in Z(t). It can be described as follows:
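The LELA update is again an image in the source; consistent with the description above, with w the estimated worst action in Z(t), it is presumably of the form

```latex
p_w(t+1) \;=\; \max\bigl(p_w(t) - \varDelta,\, 0\bigr), \\
p_j(t+1) \;=\; p_j(t) + \frac{p_w(t) - p_w(t+1)}{|Z(t)| - 1},
  \quad j \in Z(t),\; j \ne w
```

so the probability mass removed from the worst action is shared equally by the surviving actions.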
3 Learning Automata-Based Fireworks Algorithm
3.1 m-DP\(_{RI}\)
In this paper, we modify DP\(_{RI}\) to make it more suitable for our algorithm. The classic DP\(_{RI}\) rewards the estimated optimal action and punishes the others. However, this pursuit scheme leads to a fast convergence of the state probability vector, which is harmful to the global search ability in the early search stage. In m-DP\(_{RI}\), the best m actions are rewarded instead of only the best one, and m decreases linearly as the search progresses to enhance the local search ability gradually. The update strategy can be expressed as follows:
where \(\varDelta \) is the step size, g is the current generation number, MG is the maximum number of generations allowed and M is the initial value of m. The state probability vector is sorted after the update to decide the m actions to be rewarded in the next generation.
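Since the update equations (12)-(14) are rendered as images in the source, a Python sketch of the step under the description above may help. The rounding form of the linear decrement and the equal sharing of the freed probability mass among the m best actions are assumptions.

```python
import numpy as np

def m_dpri_update(p, ranking, m, delta):
    """One m-DP_RI step (a sketch): reward the m best-ranked actions and
    punish the rest in discrete steps of `delta`.

    p       -- state probability vector (sums to 1)
    ranking -- action indices sorted from best to worst estimate
    """
    p = p.copy()
    best, rest = ranking[:m], ranking[m:]
    # punish: each non-rewarded action loses at most delta (floored at 0)
    loss = p[rest] - np.maximum(p[rest] - delta, 0.0)
    p[rest] -= loss
    # reward: the freed probability mass is shared equally by the m best
    p[best] += loss.sum() / m
    return p

def linear_m(M, g, MG):
    """Assumed form of the linear decrement of m from M down to 1."""
    return max(1, int(round(M * (1 - g / MG))))
```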
3.2 Assigning Sparks
In this paper, LA is applied to assign sparks to fireworks according to the state probability vector. A firework with a larger probability generates more sparks. As the probability vector converges during the search, the promising fireworks generate most of the sparks in the late search stage, so that the algorithm has a strong local search ability. n probability intervals P are calculated from the state probability vector p by (15) to assign sparks. Algorithm 3 shows how the LAFWA assigns sparks by the probability intervals P.
![figure c](http://media.springernature.com/lw685/springer-static/image/chp%3A10.1007%2F978-3-030-53956-6_6/MediaObjects/497235_1_En_6_Figc_HTML.png)
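As Algorithm 3 appears only as an image, a minimal sketch of the roulette-wheel assignment it describes might look like this (the per-spark sampling loop mirrors the interval lookup in the figure):

```python
import numpy as np

def assign_sparks(p, total_sparks, rng=None):
    """Assign `total_sparks` sparks to len(p) fireworks by roulette-wheel
    sampling on the state probability vector p (a sketch of Algorithm 3)."""
    rng = np.random.default_rng() if rng is None else rng
    intervals = np.cumsum(p)            # probability intervals P, as in (15)
    counts = np.zeros(len(p), dtype=int)
    for _ in range(total_sparks):
        r = rng.random()
        # the first interval whose upper end exceeds r gets the spark
        counts[np.searchsorted(intervals, r)] += 1
    return counts
```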
3.3 Learning Automata-Based Fireworks Algorithm
The procedure of LAFWA is given as the pseudo code shown in Algorithm 4 and explained as follows:
- Step 1 Initialization: Generate the positions of n fireworks randomly. Initialize the state probability vector p for assigning sparks evenly, as well as the step size \(\varDelta \), where each component of p represents the probability that a spark is assigned to the corresponding firework and \(\varDelta \) is the amount by which components of p decrease or increase.
- Step 2 Assign Sparks: Each of the \(\lambda \) sparks is assigned to one of the n fireworks according to Algorithm 3; a firework with a greater probability generates more sparks.
- Step 3 Perform Explosion: For each firework, the explosion amplitude is calculated by (3). Sparks are generated uniformly within a hypercube whose radius is the explosion amplitude and whose center is the firework. Generate the sparks by Algorithm 1.
- Step 4 Generate Guiding Sparks: Generate the guiding spark of each firework by (4) and (5).
- Step 5 Select Fireworks: Evaluate the fitness values of the sparks and guiding sparks. For each firework, select the best individual among its sparks, its guiding spark, and the firework itself as a new firework.
- Step 6 Update Probability: Update p according to (12) and (13) and sort p.
- Step 7 Decrease m Linearly: Complete the linear decrement of m by performing (14).
- Step 8 Termination Check: If any pre-defined termination criterion is satisfied, the algorithm terminates. Otherwise, repeat from Step 2.
![figure d](http://media.springernature.com/lw685/springer-static/image/chp%3A10.1007%2F978-3-030-53956-6_6/MediaObjects/497235_1_En_6_Figd_HTML.png)
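Steps 1-8 can be tied together in a short, self-contained sketch. This is an illustration of the control flow only, not the exact LAFWA: it omits the guiding spark of Step 4 and substitutes simplified amplitude and probability-update rules, all of which are assumptions of this sketch.

```python
import numpy as np

def lafwa_sketch(f, dim=10, n=5, total_sparks=50, MG=200, delta=0.01, M=2,
                 lo=-100.0, hi=100.0, seed=0):
    """Simplified LAFWA-style loop: roulette spark assignment from a
    probability vector, explosion, elitist selection, and an
    m-DP_RI-style probability update with linearly decreasing m."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))            # Step 1: fireworks
    fit = np.array([f(x) for x in X])
    p = np.full(n, 1.0 / n)                      # even probability vector
    A = np.full(n, (hi - lo) / 2)                # explosion amplitudes
    for g in range(MG):
        # Step 2: roulette-wheel spark assignment from p
        idx = np.searchsorted(np.cumsum(p), rng.random(total_sparks))
        counts = np.bincount(idx, minlength=n)
        improved = np.zeros(n, dtype=bool)
        for i in range(n):
            if counts[i] == 0:
                continue
            # Step 3: uniform sparks in the hypercube around firework i
            S = np.clip(X[i] + rng.uniform(-A[i], A[i], (counts[i], dim)),
                        lo, hi)
            fS = np.array([f(s) for s in S])
            j = fS.argmin()
            if fS[j] < fit[i]:                   # Step 5: keep the best
                X[i], fit[i] = S[j], fS[j]
                improved[i] = True
        # simplified dynamic amplitude: amplify on success, else reduce
        A *= np.where(improved, 1.2, 0.9)
        # Steps 6-7: reward the m best fireworks, punish the rest
        m = max(1, int(round(M * (1 - g / MG))))
        order = fit.argsort()
        loss = p[order[m:]] - np.maximum(p[order[m:]] - delta, 0.0)
        p[order[m:]] -= loss
        p[order[:m]] += loss.sum() / m
    return X[fit.argmin()], fit.min()            # Step 8 reached
```

Running it on a sphere function shows the loop steadily concentrating sparks on the best firework as p converges.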
4 Experimental Results and Comparisons
In this section, experiments are carried out to illustrate the advantages of LAFWA in comparison with four pioneering FWA variants.
4.1 Benchmark and Experimental Settings
The parameter settings of LAFWA are given in the following. The main parameters include:
- n: The number of fireworks.
- \(\lambda \): The total number of sparks.
- \(\rho ^-\) and \(\rho ^+\): The reduction and amplification factors.
- \(\varDelta \): The step size of LA.
- M: The initial value of m.
With a larger n, the algorithm can explore more, but each firework generates fewer sparks. In the proposed LAFWA, we set n = 10 to obtain a good global search ability in the early stage. The reduction and amplification factors \(\rho ^-\) and \(\rho ^+\) are two important parameters for dynamic search; we set them to 0.9 and 1.2 respectively, according to [12]. \(\varDelta \) and M are set to 0.01 and 4 according to our experiments.
The experimental results are evaluated on the CEC 2013 single objective optimization benchmark suite [22], which includes 5 unimodal functions and 23 multimodal functions (shown in Table 2). Standard settings, which have been widely used for testing algorithms, are adopted for the parameters such as the dimension and the maximum number of function evaluations. The search ranges of all 28 test functions are set to [−100, 100]\(^D \) and D is set to 30. Following the suggestions of this benchmark suite, each algorithm is repeated 51 times for each function and the maximal number of function evaluations in each run is 1000*D. All experiments are carried out using MATLAB R2016a on a PC with an Intel(R) Core(TM) i5-8400 running at 2.80 GHz with 8 GB RAM.
4.2 Experimental Results and Comparison
To validate the effectiveness of LAFWA, we compare it with four pioneering FWA variants: AFWA, dynFWA, COFFWA [24] and GFWA. The parameters of these four algorithms are set to the values suggested in their published papers. The results on solution accuracy are listed in Table 2. Boldface indicates the best result among all listed algorithms, "Mean" is the mean result of 51 independent runs, and "AR" is the average ranking of an algorithm, calculated as the sum of its rankings on the 28 functions divided by the number of functions. The smaller the AR, the better the performance. LAFWA shows outstanding convergence accuracy among all the listed FWA variants. On the unimodal functions, LAFWA performs well, ranking first on 3 of the 5 functions. For the challenging multimodal and composition functions, the global optimum is more difficult to locate; here LAFWA shows its superiority, ranking first on 15 and second on 6 of the 23 functions. Overall, the AR of LAFWA is 1.5, which ranks first among all competitors.
5 Conclusion
In this work, the Learning Automata-based Fireworks Algorithm (LAFWA) is proposed to assign sparks to fireworks more reasonably by using LA. The state probability vector is initialized evenly, so the global search ability is good in the early search stage. As the search proceeds, the probability vector converges, which leads to a strong local search ability in the late search stage. Experimental results on the CEC2013 benchmark functions show that the LAFWA outperforms several pioneering FWA variants. Future work will focus on improving the update strategy of the state probability vector of LA.
References
Tan, Y., Zhu, Y.: Fireworks algorithm for optimization. In: Proceedings of the International Conference on Swarm Intelligence, Beijing, China, pp. 355–364, June 2010
Li, J., Tan, Y.: A comprehensive review of the fireworks algorithm. ACM Comput. Surv. 52(6), 1–28 (2019)
He, W., Mi, G., Tan, Y.: Parameter optimization of local-concentration model for spam detection by using fireworks algorithm. In: Tan, Y., Shi, Y., Mo, H. (eds.) ICSI 2013. LNCS, vol. 7928, pp. 439–450. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38703-6_52
Imran, A.M., Kowsalya, M.: A new power system reconfiguration scheme for power loss minimization and voltage profile enhancement using fireworks algorithm. Int. J. Electr. Power Energ. Syst. 62, 312–322 (2014)
Abdelaziz, M.M., Elghareeb, H.A., Ksasy, M.S.M.: Hybrid heuristic algorithm for solving capacitated vehicle routing problem. Int. J. Comput. Technol. 12(9), 3845–3851 (2014)
Zheng, Z., Ying, T.: Group explosion strategy for searching multiple targets using swarm robotic. In: IEEE Congress on Evolutionary Computation (CEC) (2013)
Zheng, Z., Li, J., Jie, L., Ying, T.: Avoiding decoys in multiple targets searching problems using swarm robotics. In: 2014 IEEE Congress on Evolutionary Computation (CEC) (2014)
Hamou, H.A., Rahmani, A., Bouarara, H.A., Amine, A.: A fireworks algorithm for modern web information retrieval with visual results mining. Int. J. Swarm Intell. Res. 6(3), 1–23 (2015)
Liu, S., Zhang, Z., Guan, C., Zhu, L., Zhang, M., Guo, P.: An improved fireworks algorithm for the constrained single-row facility layout problem. Int. J. Prod. Res. 1–19 (2020)
Zheng, S., Janecek, A., Tan, Y.: Enhanced fireworks algorithm. In: IEEE Congress on Evolutionary Computation (CEC), pp. 2069–2077, June 2013
Li, J., Zheng, S., Tan, Y.: Adaptive fireworks algorithm. In: IEEE Congress on Evolutionary Computation (CEC), pp. 3214–3221, July 2014
Zheng, S., Janecek, A., Li, J., Tan, Y.: Dynamic search in fireworks algorithm. In: IEEE Congress on Evolutionary Computation (CEC), pp. 3222–3229, July 2014
Li, J., Tan, Y.: Loser-out tournament-based fireworks algorithm for multimodal function optimization. IEEE Trans. Evol. Comput. 22(5), 679–691 (2017)
Narendra, K.S., Thathachar, M.A.L.: Learning automata - a survey. IEEE Trans. Syst. Man Cybern. SMC–4(4), 323–334 (1974)
Hashemi, A.B., Meybodi, M.R.: A note on the learning automata based algorithms for adaptive parameter selection in PSO. Appl. Soft Comput. J. 11(1), 689–705 (2009)
Zhang, J., Xu, L., Li, J., Kang, Q., Zhou, M.: Integrating particle swarm optimization with learning automata to solve optimization problems in noisy environment. In: Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 1432–1437, October 2014
Li, J., Zheng, S., Tan, Y.: The effect of information utilization: introducing a novel guiding spark in the fireworks algorithm. IEEE Trans. Evol. Comput. 21(1), 153–166 (2017)
Oommen, B.J., Lanctot, J.K.: Discretized pursuit learning automata. IEEE Trans. Syst. Man Cybern. 20(4), 931–938 (1990)
Oommen, B.J., Agache, M.: Continuous and discretized pursuit learning schemes: various algorithms and their comparison. IEEE Trans. Syst. Man Cybern. 31(3), 277–287 (2001)
Agache, M., Oommen, B.J.: Generalized pursuit learning schemes: new families of continuous and discretized learning automata. IEEE Trans. Syst. Man Cybern. Part B Cybern. 32(6), 738–749 (2002)
Zhang, J., Cheng, W., Zhou, M.: Last-position elimination-based learning automata. IEEE Trans. Cybern. 44(12), 2484–2492 (2014)
Zambrano, H., Felipe, R.M.R.: Standard particle swarm optimization 2011 at CEC-2013: a baseline for future PSO improvements. In: IEEE Congress on Evolutionary Computation (CEC) (2013)
Liang, J.J., Qu, B.Y., Suganthan, P.N., Hernández-Díaz, A.G.: Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization, Zhengzhou University, China and Nanyang Technological University, Singapore, 201212, January 2013
Zheng, S., Li, J., Janecek, A., Tan, Y.: A cooperative framework for fireworks algorithm. IEEE/ACM Trans. Comput. Biol. Bioinf. 14(1), 27–41 (2017)
Zhang, J., Che, L., Chen, J. (2020). Learning Automata-Based Fireworks Algorithm on Adaptive Assigning Sparks. In: Tan, Y., Shi, Y., Tuba, M. (eds) Advances in Swarm Intelligence. ICSI 2020. Lecture Notes in Computer Science(), vol 12145. Springer, Cham. https://doi.org/10.1007/978-3-030-53956-6_6