Abstract
Although conventional control systems are simple and widely used, they may not be effective for complex and uncertain systems. This study proposes a Hermite broad-learning recurrent neural network (HBRNN) with a wide network structure and an internal feedback loop that enables good approximation of system dynamics without requiring a large number of training parameters. Furthermore, a Hermite broad-learning recurrent neural control (HBRNC) scheme with the HBRNN as the main controller is proposed; it requires no prior knowledge of the system dynamics and has no off-line learning phase. All HBRNN network parameters are updated according to parameter learning laws derived through the gradient descent approach. To prevent overtraining of the HBRNN network parameters, a discrete-type Lyapunov function is used to determine the least upper bound on the learning rates. Additionally, an adaptive learning rate (ALR) approach is designed to dynamically tune the learning rates within these bounds, thereby achieving an optimal trade-off between convergence speed and tracking error. Finally, the HBRNC system with the ALR is applied to a chaotic circuit and a reaction wheel pendulum, and its effectiveness is validated through simulations and experiments.
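The online update described in the abstract can be sketched in a few lines. The following is a minimal illustration only: the grow/shrink heuristic, the function name, and all numeric values are assumptions for demonstration, not the paper's exact ALR law, and the stability bound `eta_max` stands in for the Lyapunov-derived least upper bound.

```python
def alr_update(theta, grad, e, eta, eta_max, grow=1.05, shrink=0.7, e_prev=None):
    """One gradient-descent step with an adaptive learning rate (sketch).

    theta   : list of parameters being trained online
    grad    : gradient of the network output w.r.t. each parameter
    e       : current tracking error (scalar)
    eta     : current learning rate, kept inside (0, eta_max]
    eta_max : stability bound on the learning rate (e.g. from a Lyapunov analysis)
    The grow/shrink rule below is an illustrative heuristic, not the paper's law.
    """
    if e_prev is not None:
        # speed up while the tracking error keeps shrinking, back off otherwise
        eta = eta * grow if abs(e) < abs(e_prev) else eta * shrink
    eta = min(eta, eta_max)  # never exceed the stability bound
    theta = [t - eta * e * g for t, g in zip(theta, grad)]
    return theta, eta
```

Repeatedly applying such a step drives the error toward zero while the learning rate stays inside its stable range.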
Availability of data and materials
Not applicable.
Code availability
Not applicable.
Acknowledgements
The authors are grateful to the associate editor and the reviewers for their valuable comments. This work was supported by the Ministry of Science and Technology (MOST), Taiwan, the Republic of China, under contract MOST 110-2221-E-032-038-MY2.
Funding
The study was funded by the Ministry of Science and Technology of the Republic of China under Grant MOST 110-2221-E-032-038-MY2.
Author information
Authors and Affiliations
Contributions
We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed.
Corresponding author
Ethics declarations
Conflict of interest
We confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
Consent to participate
Not applicable.
Consent for publication
We confirm that there are no impediments to publication, including the timing of publication, with respect to intellectual property.
Ethics approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Theorem 1
Let \(\eta_\alpha\), \(\eta_\beta\), \(\eta_w\) and \(\eta_v\) be the learning rates for the parameter learning laws of the HBRNN. Define \(P_{\alpha \max } = {\mathop {\max }\limits_N} \left\| {P_\alpha (N)} \right\|\), where \(P_\alpha (N) = \frac{{\partial u_{nc} }}{\partial \alpha_j }\); \(P_{\beta \max } = {\mathop {\max }\limits_N} \left\| {P_\beta (N)} \right\|\), where \(P_\beta (N) = \frac{{\partial u_{nc} }}{\partial \beta_k }\); \(P_{w\max } = {\mathop {\max }\limits_N} \left\| {P_w (N)} \right\|\), where \(P_w (N) = \frac{{\partial u_{nc} }}{{\partial w_{jk} }}\); and \(P_{v\max } = {\mathop {\max }\limits_N} \left\| {P_v (N)} \right\|\), where \(P_v (N) = \frac{{\partial u_{nc} }}{{\partial v_{ji} }}\). Then the system stability can be guaranteed if \(\eta_\alpha\) and \(\eta_\beta\) are chosen as \(\eta_\alpha^* = \frac{1}{{m(V_{\max } )^2 }}\) and \(\eta_\beta^* = \frac{2}{n}\), respectively, in which \(V_{\max } = {\mathop {\max }\limits_j} \left| {V_j } \right|\), and \(\eta_w\) and \(\eta_v\) are chosen as \(\eta_w^* = \frac{2}{{mn(\beta_{\max } V_{\max } )^2 }}\) and \(\eta_v^* = \frac{2}{{n(\alpha_{\max } + \beta_{\max } w_{\max } )^2 }}\), respectively, in which \(\alpha_{\max } = {\mathop {\max }\limits_j} \left| {\alpha_j } \right|\), \(\beta_{\max } = {\mathop {\max }\limits_k} \left| {\beta_k } \right|\) and \(w_{\max } = {\mathop {\max }\limits_{j,k}} \left| {w_{jk} } \right|\).
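The learning-rate choices in Theorem 1 are simple functions of the layer sizes and the running maxima of the network signals. A minimal sketch of evaluating them, assuming \(m\) feature nodes and \(n\) enhancement nodes (the function name and data layout are illustrative; the network itself is not implemented here):

```python
def hbrnn_learning_rate_choices(V, alpha, beta, w):
    """Evaluate the learning rates stated in Theorem 1 (sketch).

    V     : feature-layer outputs V_j, length m
    alpha : output weights alpha_j of the feature layer, length m
    beta  : output weights beta_k of the enhancement layer, length n
    w     : feature-to-enhancement weights w_jk, m rows of n entries
    Variable names mirror the theorem's symbols.
    """
    m, n = len(w), len(w[0])
    V_max = max(abs(v) for v in V)
    a_max = max(abs(a) for a in alpha)
    b_max = max(abs(b) for b in beta)
    w_max = max(abs(x) for row in w for x in row)
    eta_alpha = 1.0 / (m * V_max**2)
    eta_beta = 2.0 / n
    eta_w = 2.0 / (m * n * (b_max * V_max)**2)
    eta_v = 2.0 / (n * (a_max + b_max * w_max)**2)
    return eta_alpha, eta_beta, eta_w, eta_v
```

In an online controller these maxima would be tracked over the iterations \(N\), so the rates shrink as the network signals grow and the stability condition keeps holding.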
Proof
Since \(P_\alpha (N) = \frac{{\partial u_{nc} }}{\partial \alpha_j } = V_j\) and \(P_\beta (N) = \frac{{\partial u_{nc} }}{\partial \beta_k } = W_k\), the following result can be concluded:
The upper bounds of \(P_w (N)\) and \(P_v (N)\) can be derived as follows:
From (A3) and (A4), the following inequalities can be obtained:
To ensure the system stability, consider a discrete-type Lyapunov function as follows:
where N denotes the iteration number. The change of the discrete-type Lyapunov function can be expressed as:
The error difference can be represented by:
where \(\Delta e(N)\) represents the change in the system output, and \(\Delta \alpha_j\), \(\Delta \beta_k\), \(\Delta w_{jk}\), and \(\Delta v_{ji}\) represent the parameter changes in the output layer, enhancement layer, and recurrent feature layer, respectively. With \(\xi = \frac{\partial e}{{\partial u_{nc} }}\) approximated by a positive constant chosen by the user, substituting (33)–(36), (A1), (A2), (A5) and (A6) into (A9) yields:
If the learning rates for the parameter learning laws of the HBRNN are selected as follows:
then the terms \(\left\| {1 - \eta_\alpha \xi^2 P_\alpha^T (N)P_\alpha (N)} \right\|\), \(\left\| {1 - \eta_\beta \xi^2 P_\beta^T (N)P_\beta (N)} \right\|\), \(\left\| {1 - \eta_w \xi^2 P_w^T (N)P_w (N)} \right\|\) and \(\left\| {1 - \eta_v \xi^2 P_v^T (N)P_v (N)} \right\|\) are all less than 1. It follows that \(\left\| {e(N + 1)} \right\| < \left\| {e(N)} \right\|\), so \(V_2 (N) > 0\) and \(\Delta V_2 (N) < 0\), and Lyapunov stability is guaranteed. Thus, the discrete-type Lyapunov approach yields the learning-rate ranges within which the ALR can efficiently train the HBRNN, where the learning rates for the parameter learning laws (33)–(36) of the HBRNN are designed as \(\eta_\alpha^* = \frac{\eta_\alpha }{2}\), \(\eta_\beta^* = \frac{\eta_\beta }{2}\), \(\eta_w^* = \frac{\eta_w }{2}\) and \(\eta_v^* = \frac{\eta_v }{2}\), respectively.□
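The contraction argument can be illustrated with a scalar sketch. All numeric values of xi and P below are arbitrary assumptions for demonstration, not taken from the paper; the point is only that a rate below the least upper bound contracts the error, and half the bound converges fastest.

```python
# Scalar analogue of the proof's error recursion:
#   e(N+1) = (1 - eta * xi**2 * P**2) * e(N),
# so |e(N)| decreases iff |1 - eta * xi**2 * P**2| < 1,
# i.e. 0 < eta < 2 / (xi**2 * P**2) -- the least upper bound on eta.
xi, P = 1.5, 2.0                      # illustrative values only
bound = 2.0 / (xi**2 * P**2)          # least upper bound on the learning rate

def error_trace(eta, e0=1.0, steps=5):
    """Iterate the scalar error recursion for a fixed learning rate."""
    e, trace = e0, [e0]
    for _ in range(steps):
        e = (1.0 - eta * xi**2 * P**2) * e
        trace.append(e)
    return trace

stable = error_trace(0.5 * bound)     # eta* = bound/2 zeroes the contraction factor
unstable = error_trace(1.5 * bound)   # above the bound, |e(N)| grows
```

Choosing exactly half the bound makes the contraction factor vanish in this scalar case, which is the sense in which \(\eta^* = \eta /2\) gives the best convergence speed; any rate above the bound makes the error diverge.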
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Hsu, CF., Chen, BR. Hermite broad-learning recurrent neural control with adaptive learning rate for nonlinear systems. Soft Comput 28, 6307–6326 (2024). https://doi.org/10.1007/s00500-023-09481-2