
Hermite broad-learning recurrent neural control with adaptive learning rate for nonlinear systems

  • Application of soft computing
  • Published in Soft Computing

Abstract

Although conventional control systems are simple and widely used, they may not be effective for complex and uncertain systems. This study proposes a Hermite broad-learning recurrent neural network (HBRNN) with a wide network structure and an internal feedback loop that enables good approximation of system dynamics without the need for a large number of training parameters. Furthermore, a Hermite broad-learning recurrent neural control (HBRNC) with HBRNN as the main controller is proposed, which requires no prior knowledge about the system dynamics and has no off-line learning phase. All the HBRNN network parameters are updated according to parameter learning laws through the gradient descent approach. To prevent network parameter overtraining of the HBRNN, a discrete-type Lyapunov function is used to determine the least upper bound for the learning rates. Additionally, an adaptive learning rate (ALR) approach is designed to dynamically fine-tune the learning rates within these specified limits, thereby achieving an optimal convergence speed between network parameters and tracking error. Finally, the HBRNC system with ALR is applied to a chaotic circuit and a reaction wheel pendulum, and its effectiveness is validated through simulation and experimentation.
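To make the update scheme described above concrete, the sketch below shows one gradient-descent step for a single HBRNN parameter group with its learning rate adaptively tuned but kept strictly inside a Lyapunov-derived upper bound. This is a hypothetical illustration, not the authors' implementation: the ALR tuning rule, the function name `alr_gradient_step`, and all numerical values are assumptions; only the "gradient step with a bounded, adaptively tuned rate" structure comes from the paper.

```python
import numpy as np

def alr_gradient_step(theta, grad, error, eta_bound, eta_init=1e-2):
    """One gradient-descent update for an HBRNN parameter group, with the
    learning rate adaptively tuned but clipped into the stable range
    (0, eta_bound) derived from the discrete-type Lyapunov analysis.

    theta     : parameter vector of one layer (hypothetical)
    grad      : d(error)/d(theta), assumed supplied by backpropagation
    eta_bound : least upper bound on this layer's learning rate
    """
    # Hypothetical ALR rule: shrink the step as gradient energy grows,
    # then clip strictly below the Lyapunov bound (the proof uses half).
    eta = eta_init / (1.0 + np.dot(grad, grad))
    eta = min(eta, 0.5 * eta_bound)
    return theta - eta * error * grad

# Toy usage with made-up numbers.
theta = np.array([0.3, -0.1])
theta = alr_gradient_step(theta, grad=np.array([0.2, 0.4]), error=0.5,
                          eta_bound=0.8)
```

Each of the four HBRNN parameter groups (output, enhancement, and recurrent feature layers) would receive such a step with its own bound, per the learning laws and the limits derived in the Appendix.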


Figs. 1–15


Availability of data and materials

Not applicable.

Code availability

Not applicable.


Acknowledgements

The authors are grateful to the associate editor and the reviewers for their valuable comments. This work was supported by the Ministry of Science and Technology (MOST), Taiwan, the Republic of China, under contract MOST 110-2221-E-032-038-MY2.

Funding

The study was funded by the Ministry of Science and Technology of the Republic of China under Grant MOST 110-2221-E-032-038-MY2.

Author information

Authors and Affiliations

Authors

Contributions

We confirm that the manuscript has been read and approved by all named authors and that there are no other persons who satisfied the criteria for authorship but are not listed.

Corresponding author

Correspondence to Chun-Fei Hsu.

Ethics declarations

Conflict of interest

We confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

Consent to participate

Not applicable.

Consent for publication

We confirm that there are no impediments to publication, including the timing of publication, with respect to intellectual property.

Ethics approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Theorem 1

Let \(\eta_\alpha\), \(\eta_\beta\), \(\eta_w\) and \(\eta_v\) be the learning rates for the parameter learning laws of the HBRNN, and let \(\xi = \frac{\partial e}{{\partial u_{nc} }}\) be a positive constant designed by the user. Define \(P_{\alpha \max } = {\mathop {\max }\limits_N} \left\| {P_\alpha (N)} \right\|\), where \(P_\alpha (N) = \frac{{\partial u_{nc} }}{\partial \alpha_j }\); \(P_{\beta \max } = {\mathop {\max }\limits_N} \left\| {P_\beta (N)} \right\|\), where \(P_\beta (N) = \frac{{\partial u_{nc} }}{\partial \beta_k }\); \(P_{w\max } = {\mathop {\max }\limits_N} \left\| {P_w (N)} \right\|\), where \(P_w (N) = \frac{{\partial u_{nc} }}{{\partial w_{jk} }}\); and \(P_{v\max } = {\mathop {\max }\limits_N} \left\| {P_v (N)} \right\|\), where \(P_v (N) = \frac{{\partial u_{nc} }}{{\partial v_{ji} }}\). Then system stability is guaranteed if the learning rates are chosen as \(\eta_\alpha^* = \frac{1}{{2m(\xi V_{\max } )^2 }}\), \(\eta_\beta^* = \frac{1}{2n\xi^2 }\), \(\eta_w^* = \frac{1}{{2mn(\xi \beta_{\max } V_{\max } )^2 }}\) and \(\eta_v^* = \frac{1}{{2(\xi (\alpha_{\max } + \sqrt {n} \beta_{\max } w_{\max } ))^2 }}\), in which \(V_{\max } = {\mathop {\max }\limits_j} \left| {V_j } \right|\), \(\alpha_{\max } = {\mathop {\max }\limits_j} \left| {\alpha_j } \right|\), \(\beta_{\max } = {\mathop {\max }\limits_k} \left| {\beta_k } \right|\) and \(w_{\max } = {\mathop {\max }\limits_{j,k}} \left| {w_{jk} } \right|\).

Proof

Since \(P_\alpha (N) = \frac{{\partial u_{nc} }}{\partial \alpha_j } = V_j\) and \(P_\beta (N) = \frac{{\partial u_{nc} }}{\partial \beta_k } = W_k\) with \(\left| {W_k } \right| \le 1\), the following bounds can be concluded:

$$ \left\| {P_\alpha (N)} \right\| < \sqrt {m} V_{\max } $$
(A1)
$$ \left\| {P_\beta (N)} \right\| < \sqrt {n} $$
(A2)

The upper bounds of \(P_w (N)\) and \(P_v (N)\) can be derived as follows:

$$ \begin{aligned}P_w (N) &= \frac{{\partial u_{nc} }}{{\partial w_{jk} }} = \frac{{\partial u_{nc} }}{\partial W_k }\frac{\partial W_k }{{\partial w_{jk} }} \\ &= \beta_k \left( {1 - W_k^2 } \right)V_j \\ &\le \beta_k V_j\end{aligned}$$
(A3)
$$ \begin{aligned}P_v (N) &= \frac{{\partial u_{nc} }}{{\partial v_{ji} }} = \left( {\frac{{\partial \,u_{nc} }}{\partial \,V_j } + \sum_{k = 1}^n {\frac{{\partial \,u_{nc} }}{\partial \,W_k }\frac{\partial \,W_k }{{\partial \,V_j }}} } \right)\frac{\partial \,V_j }{{\partial \,v_{ji} }} \\ &= \left( {\alpha_j + \sum_{k = 1}^n {\beta_k w_{jk} } \left( {1 - W_k^2 } \right)} \right)h_{ji}\\ &\le \alpha_j + \sum_{k = 1}^n {\beta_k w_{jk} } \left( {1 - W_k^2 } \right)\\ &\le \alpha_j + \sum_{k = 1}^n {\left| {\beta_k } \right|\left| {w_{jk} } \right|} \end{aligned}$$
(A4)

From (A3) and (A4), the inequalities can be obtained as:

$$ \left\| {P_w (N)} \right\| \le \left\| {\beta_k V_j } \right\| \le \left\| {\beta_k } \right\|\left\| {V_j } \right\| \le \sqrt {mn} \beta_{\max } V_{\max } $$
(A5)
$$ \begin{aligned}\left\| {P_v (N)} \right\| &\le \left\| {\alpha_j + \sum_{k = 1}^n {\left| {\beta_k } \right|\left| {w_{jk} } \right|} } \right\| \\ & \le \left\| {\alpha_j } \right\| + \left\| {\sum_{k = 1}^n {\left| {\beta_k } \right|\left| {w_{jk} } \right|} } \right\| \\ &\le \alpha_{\max } + \sqrt {n} \beta_{\max } w_{\max } \end{aligned}$$
(A6)

To ensure the system stability, consider a discrete-type Lyapunov function as follows:

$$ V_2 (N) = \frac{1}{2}e^2 (N) $$
(A7)

where \(N\) denotes the iteration index. The change in the discrete-type Lyapunov function can be expressed as:

$$ \Delta V_2 (N) = V_2 (N + 1) - V_2 (N) = \frac{1}{2}\left[ {e^2 (N + 1) - e^2 (N)} \right] $$
(A8)

The error difference can be represented by:

$$ \begin{aligned}e(N + 1) &= e(N) + \Delta e(N) \\ & = e(N) + \left[ {\frac{\partial e(N)}{{\partial \alpha_j }}} \right]^T \Delta \alpha_j + \left[ {\frac{\partial e(N)}{{\partial \beta_k }}} \right]^T \Delta \beta_k + \left[ {\frac{\partial e(N)}{{\partial w_{jk} }}} \right]^T \Delta w_{jk} + \left[ {\frac{\partial e(N)}{{\partial v_{ji} }}} \right]^T \Delta v_{ji} \end{aligned}$$
(A9)

where \(\Delta e(N)\) represents the change in the tracking error, and \(\Delta \alpha_j\), \(\Delta \beta_k\), \(\Delta w_{jk}\) and \(\Delta v_{ji}\) represent the parameter changes in the output layer, the enhancement layer and the recurrent feature layer, respectively. Defining \(\xi = \frac{\partial e}{{\partial u_{nc} }}\) as a positive constant designed by the user, and substituting (33)–(36), (A1), (A2), (A5) and (A6) into (A9), one obtains:

$$ \begin{aligned}\left\| {e(N + 1)} \right\| &= \left\| {e(N)\left( {1 - \eta_\alpha \xi^2 P_\alpha^T (N)P_\alpha (N)} \right)} \right. + e(N)\left( {1 - \eta_\beta \xi^2 P_\beta^T (N)P_\beta (N)} \right) \\ &\quad + e(N)\left( {1 - \eta_w \xi^2 P_w^T (N)P_w (N)} \right) \left. + e(N)\left( {1 - \eta_v \xi^2 P_v^T (N)P_v (N)} \right) \right\| \\ &\le \left\| {e(N)} \right\|\left\| {1 - \eta_\alpha \xi^2 P_\alpha^T (N)P_\alpha (N)} \right\| + \left\| {e(N)} \right\|\left\| {1 - \eta_\beta \xi^2 P_\beta^T (N)P_\beta (N)} \right\| \\ &\quad + \left\| {e(N)} \right\|\left\| {1 - \eta_w \xi^2 P_w^T (N)P_w (N)} \right\| +\left\| {e(N)} \right\|\left\| {1 - \eta_v \xi^2 P_v^T (N)P_v (N)} \right\| \end{aligned}$$
(A10)

If the learning rates for the parameter learning laws of the HBRNN are selected as follows:

$$ \eta_\alpha = \frac{1}{{(\xi P_{\alpha \max } )^2 }} = \frac{1}{{m(\xi V_{\max } )^2 }} $$
(A11)
$$ \eta_\beta = \frac{1}{{(\xi P_{\beta \max } )^2 }} = \frac{1}{n\xi^2 } $$
(A12)
$$ \eta_w = \frac{1}{{(\xi P_{w\max } )^2 }} = \frac{1}{{mn(\xi \beta_{\max } V_{\max } )^2 }} $$
(A13)
$$ \eta_v = \frac{1}{{(\xi P_{v\max } )^2 }} = \frac{1}{{(\xi (\alpha_{\max } + \sqrt {n} \beta_{\max } w_{\max } ))^2 }} $$
(A14)

then the terms \(\left\| {1 - \eta_\alpha \xi^2 P_\alpha^T (N)P_\alpha (N)} \right\|\), \(\left\| {1 - \eta_\beta \xi^2 P_\beta^T (N)P_\beta (N)} \right\|\), \(\left\| {1 - \eta_w \xi^2 P_w^T (N)P_w (N)} \right\|\) and \(\left\| {1 - \eta_v \xi^2 P_v^T (N)P_v (N)} \right\|\) are all less than 1. It follows that \(\left\| {e(N + 1)} \right\| < \left\| {e(N)} \right\|\), so \(V_2 (N) > 0\) and \(\Delta V_2 (N) < 0\), and Lyapunov stability is guaranteed. Thus, the discrete-type Lyapunov approach yields the learning-rate ranges within which the ALR can efficiently train the HBRNN, where the learning rates for the parameter learning laws (33)–(36) are designed as \(\eta_\alpha^* = \frac{\eta_\alpha }{2}\), \(\eta_\beta^* = \frac{\eta_\beta }{2}\), \(\eta_w^* = \frac{\eta_w }{2}\) and \(\eta_v^* = \frac{\eta_v }{2}\), respectively.□
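As a numerical sanity check on the bounds (A11)–(A14), the sketch below computes the four learning-rate upper limits and verifies that halving them (as the proof prescribes) keeps the error contraction factor \(|1 - \eta^* \xi^2 P^2|\) strictly below 1 for every gradient magnitude up to the corresponding \(P_{\max}\). The dimensions \(m\), \(n\) and the bounds \(\xi\), \(V_{\max}\), \(\alpha_{\max}\), \(\beta_{\max}\), \(w_{\max}\) are made-up illustrative values, not quantities from the paper.

```python
import math

def learning_rate_bounds(m, n, xi, V_max, alpha_max, beta_max, w_max):
    """Upper bounds (A11)-(A14) on the HBRNN learning rates."""
    eta_alpha = 1.0 / (m * (xi * V_max) ** 2)                          # (A11)
    eta_beta = 1.0 / (n * xi ** 2)                                     # (A12)
    eta_w = 1.0 / (m * n * (xi * beta_max * V_max) ** 2)               # (A13)
    eta_v = 1.0 / (xi * (alpha_max + math.sqrt(n) * beta_max * w_max)) ** 2  # (A14)
    return eta_alpha, eta_beta, eta_w, eta_v

# Made-up network dimensions and bounds, for illustration only.
m, n = 6, 4
xi, V_max, alpha_max, beta_max, w_max = 0.5, 1.0, 2.0, 1.5, 1.0

bounds = learning_rate_bounds(m, n, xi, V_max, alpha_max, beta_max, w_max)
P_maxes = (math.sqrt(m) * V_max,                         # (A1)
           math.sqrt(n),                                 # (A2)
           math.sqrt(m * n) * beta_max * V_max,          # (A5)
           alpha_max + math.sqrt(n) * beta_max * w_max)  # (A6)

for eta, P_max in zip(bounds, P_maxes):
    # Each bound equals 1 / (xi * P_max)^2, as in (A11)-(A14).
    assert abs(eta - 1.0 / (xi * P_max) ** 2) < 1e-12
    eta_star = eta / 2.0                  # ALR design value from the proof
    for frac in (0.1, 0.5, 0.99):         # any nonzero gradient, ||P|| <= P_max
        P = frac * P_max
        contraction = abs(1.0 - eta_star * xi ** 2 * P ** 2)
        assert contraction < 1.0          # error norm strictly decreases
```

With \(\eta^* = \eta/2\), the factor \(\eta^* \xi^2 P^2 = \tfrac{1}{2}(P/P_{\max})^2\) lies in \((0, \tfrac{1}{2}]\), so the contraction factor stays in \([\tfrac{1}{2}, 1)\), consistent with \(\|e(N+1)\| < \|e(N)\|\).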

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Hsu, CF., Chen, BR. Hermite broad-learning recurrent neural control with adaptive learning rate for nonlinear systems. Soft Comput 28, 6307–6326 (2024). https://doi.org/10.1007/s00500-023-09481-2

