System Identification Using Kalman Filters

  • Conference paper
Topics in Modal Analysis, Volume 7

Abstract

The present study focuses on non-intrusive Model Order Reduction (MOR) methods that can be seen as belonging to the category of system identification techniques. The system to analyze is treated as a black box, and the aim of the proposed techniques is the accurate modeling of the relationship between its input and output. In this framework, the paper presents two methodologies for the system identification of thermal problems. The first identifies a linear thermal system by means of an Extended Kalman Filter (EKF). The approach starts from an a priori guessed analytical model whose expression is assumed to describe appropriately the response of the system to be identified; the EKF is then used to estimate the model's transient states and parameters. However, this methodology does not extend to nonlinear systems because of the difficulty of the analytical model construction step. Therefore, a second approach, based on an Unscented Kalman Filter (UKF), is presented. Finally, a Finite Element (FE) model is used as a reference, and the good agreement between the FE results and the responses produced by the EKF and UKF methods in the linear case clearly demonstrates their value.


References

  1. Andrews HC, Patterson CL (1976) Singular value decompositions and digital image processing. IEEE Trans Acoust Speech Signal Process 24(1):26–53

  2. Berkooz G, Holmes P, Lumley JL (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech 25(1):539–575

  3. Boyce WE, DiPrima RC (1977) Elementary differential equations and boundary value problems. John Wiley & Sons, New York

  4. Craig R, Bampton MCC (1968) Coupling of substructures for dynamic analyses. AIAA J 6(7):1313–1319

  5. Dormand JR, Prince PJ (1980) A family of embedded Runge-Kutta formulae. J Comput Appl Math 6(1):19–26

  6. Guyan RJ (1965) Reduction of stiffness and mass matrices. AIAA J 3(2):380

  7. Julier SJ, Uhlmann JK (1996) A general method for approximating nonlinear transformations of probability distributions. Technical report, University of Oxford, Department of Engineering Science

  8. Julier SJ, Uhlmann JK (1997) A new extension of the Kalman filter to nonlinear systems. In: Proceedings of AeroSense: the 11th international symposium on aerospace/defence sensing, simulation and controls, Orlando, FL, pp 182–193

  9. LaViola JJ Jr (2003) A comparison of unscented and extended Kalman filtering for estimating quaternion motion. In: Proceedings of the 2003 American Control Conference, Denver, CO, pp 2435–2440

  10. Mathews JH, Fink KD (2004) Numerical methods using MATLAB. Prentice Hall, Upper Saddle River, NJ

  11. Moore BC (1981) Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans Autom Control 26(1):17–32

  12. Sorenson HW (1970) Least-squares estimation: from Gauss to Kalman. IEEE Spectr 7(7):63–68

Author information

Corresponding author

Correspondence to F. Abid.

Appendices

Model Construction Step Using EKF

The solution of Eq. (55.3) on the time interval \([t_{i},t_{f}]\) is given by [1]:

$$\displaystyle{ x_{r}(t_{f}) = x_{r}(t_{i})\,{e}^{\tilde{A}\,(t_{f}-t_{i})} +\int \limits _{ t_{i}}^{t_{f} }{e}^{\tilde{A}\,(t_{f}-\tau )}\tilde{B}\,u(\tau )\,d\tau }$$
(55.7)

where the matrix exponential is defined as \({e}^{\tilde{A}t} =\sum \limits _{ k=0}^{\infty }\frac{1} {k!}{(\tilde{A}t)}^{k}\). With \(t_{i} = t_{k}\) and \(t_{f} = t_{k+1}\), (55.7) becomes:

$$\displaystyle{ x_{r}(t_{k+1}) = x_{r}(t_{k})\,{e}^{\tilde{A}\,(t_{k+1}-t_{k})} +\int \limits _{ t_{k}}^{t_{k+1} }{e}^{\tilde{A}\,(t_{k+1}-\tau )}\tilde{B}\,u(\tau )\,d\tau }$$
(55.8)

Simplifying the notation by writing k instead of t<sub>k</sub> and assuming u(t) constant over the sampling interval \([t_{k},t_{k+1}]\), the discrete state-space model is written as follows:

$$\displaystyle{ \left.\begin{array}{l} x_{r_{k}} =\tilde{ A}_{d}x_{r_{k-1}} +\tilde{ B}_{d}u_{k-1} \\ y_{r_{k}} =\tilde{ C}_{d}x_{r_{k}}\\ \end{array} \right \}\,\tilde{A}_{d} = {e}^{\tilde{A}\,T}\,\,;\,\,\tilde{B}_{ d} =\int \limits _{ t_{k}}^{t_{k+1} }{e}^{\tilde{A}\,(t_{k+1}-\tau )}\tilde{B}\,d\tau =\int \limits _{ 0}^{T}{e}^{\tilde{A}\tau }\tilde{B}\,d\tau \,;\,\,\tilde{C}_{ d} =\tilde{ C} }$$
(55.9)

where \(x_{r_{k}}\) is the state vector of internal variables at time k, \(y_{r_{k}}\) the observation vector at time k, \(u_{k-1}\) the input data at time k − 1, and \((\tilde{A}_{d},\tilde{B}_{d},\tilde{C}_{d})\) the constitutive matrices of the discrete reduced-order model: \(\tilde{A}_{d} = \left [\begin{array}{*{20}c} {e}^{a_{1}T}&& \\ &\ddots & \\ &&{e}^{a_{n_{r}}T}\\ \end{array} \right ]\); \(\tilde{B}_{d} =\tilde{ A}^{-1}({e}^{\tilde{A}T}-I)\left [\begin{array}{l} b_{1}\\ \vdots \\ b_{n_{r}}\\ \end{array} \right ] = \left [\begin{array}{l} \frac{b_{1}} {a_{1}} ({e}^{a_{1}T} - 1)\\ \vdots \\ \frac{b_{n_{r}}} {a_{n_{r}}}({e}^{a_{n_{r}}T} - 1) \\ \end{array} \right ]\); \(\tilde{C}_{d} =\tilde{ C} = \left [\begin{array}{*{20}c} c_{11} & \cdots & c_{1n_{r}}\\ \vdots & \ddots & \vdots \\ c_{n_{r}1} & \cdots &c_{n_{r}n_{r}}\\ \end{array} \right ]\).
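Since \(\tilde{A}\) is diagonal, the discretization above reduces to closed-form expressions per mode. The following is a minimal Python sketch of Eq. (55.9) for the diagonal case; the function name and interface are illustrative, not from the paper:

```python
import numpy as np

def discretize_diagonal(a, b, T):
    """Exact zero-order-hold discretization of the diagonal reduced model
    x_r' = diag(a) x_r + b u, with u held constant over each step T (Eq. 55.9)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    Ad = np.diag(np.exp(a * T))            # diagonal entries e^{a_i T}
    Bd = (b / a) * (np.exp(a * T) - 1.0)   # (b_i / a_i)(e^{a_i T} - 1)
    return Ad, Bd

# Usage: a single mode with a_1 = -2, b_1 = 1 and sampling period T = 0.1
Ad, Bd = discretize_diagonal([-2.0], [1.0], 0.1)
```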

Since the objective of our procedure is the identification of parameters, these have to be included in the state vector. The state-transition and observation mappings thereby become nonlinear and will be denoted \(\tilde{f}_{d}\) and \(\tilde{h}_{d}\), respectively. The discrete model is then given by:

$$\displaystyle{ \left \{\begin{array}{lcl} x_{k}& =&\left [\begin{array}{l} x_{r_{k}} \\ \theta _{k}\\ \end{array} \right ] = \left [\begin{array}{l} \tilde{f}_{d_{k}}\left (x_{r_{k-1}},u_{k-1},\theta _{k-1}\right )\\ \end{array} \right ] \\ y_{k} & =&\tilde{h}_{d}(x_{r_{k}},\theta _{k-1})\\ \end{array} \right. }$$
(55.10)
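The augmentation in Eq. (55.10) simply appends the parameters to the state and carries them forward unchanged (a random walk once process noise is added). A hypothetical sketch, with names chosen here for illustration only:

```python
import numpy as np

def augmented_transition(x_r, theta, u, f_tilde):
    """Hypothetical augmented transition for Eq. (55.10): the reduced state is
    propagated by the parameter-dependent model f_tilde, while the parameter
    vector theta is carried over unchanged."""
    x_r_next = f_tilde(x_r, u, theta)
    return np.concatenate([x_r_next, theta])
```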

EKF and UKF Algorithms

55.2.1 Extended Kalman Filter (EKF)

Extended Kalman Filter algorithm

Description:

  1. Initialization:

     State mean and covariance at k = 0: \(\hat{x}_{0} = E\left [x_{0}\right ]\) and \(P_{0} = E\left [(x_{0} -\hat{ x}_{0}){(x_{0} -\hat{ x}_{0})}^{T}\right ]\)

  2. Prediction phase

     1. (a) Process model Jacobian: \(F_{k} = \left.\dfrac{\partial f_{k}} {\partial x} \right \vert _{x=\hat{x}_{k-1}}\)

     2. (b) Predicted state mean and covariance: \(\hat{x}_{k}^{-} = f_{k}(\hat{x}_{k-1},u_{k-1})\) and \(P_{k}^{-} = F_{k}P_{k-1}F_{k}^{T} + Q\)

  3. Correction phase

     1. (a) Measurement model Jacobian: \(H_{k} = \left.\dfrac{\partial h_{k}} {\partial x} \right \vert _{x=\hat{x}_{k}^{-}}\)

     2. (b) Measurement update:

       • Measurement prediction: \(\hat{y}_{k} = h_{k}\left (\hat{x}_{k}^{-}\right )\)

       • Innovation (residual term): \(\tilde{y}_{k} = y_{k} -\hat{ y}_{k}\)

       • Innovation covariance matrix: \(M_{k} = \mathop{cov}\left (\tilde{y}_{k}\right ) = H_{k}P_{k}^{-}H_{k}^{T} + R\)

     3. (c) Updated state mean and covariance:

       • Kalman gain matrix: \(K_{k} = P_{k}^{-}H_{k}^{T}M_{k}^{-1}\)

       • State update: \(\hat{x}_{k} =\hat{ x}_{k}^{-} + K_{k}\tilde{y}_{k}\)

       • Covariance update: \(P_{k} = \left (I - K_{k}H_{k}\right )P_{k}^{-}\)
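The prediction and correction phases above can be condensed into a single predict/correct cycle. A minimal Python sketch using the notation of the algorithm; the function interface is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

def ekf_step(x_hat, P, u, y, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: f, h are the process and measurement models;
    F_jac, H_jac return their Jacobians evaluated at the given point."""
    # Prediction phase
    F = F_jac(x_hat, u)                      # process model Jacobian F_k
    x_pred = f(x_hat, u)                     # predicted state mean
    P_pred = F @ P @ F.T + Q                 # predicted covariance
    # Correction phase
    H = H_jac(x_pred)                        # measurement model Jacobian H_k
    innov = y - h(x_pred)                    # innovation y_k - y_hat_k
    M = H @ P_pred @ H.T + R                 # innovation covariance M_k
    K = P_pred @ H.T @ np.linalg.inv(M)      # Kalman gain K_k
    x_new = x_pred + K @ innov               # state update
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred   # covariance update
    return x_new, P_new
```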

55.2.2 Unscented Transform (UT)

Unscented Transform

Let \(x \in {\mathbb{R}}^{n}\) be a Gaussian random vector with mean \(E\left [x\right ] =\bar{ x}\) and covariance \(E[(x -\bar{ x}){(x -\bar{ x})}^{T}] = P_{xx}\), and let y = g(x) with a general nonlinear function \(g: {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m}\).

1: Decomposition of the distribution into 2n + 1 sigma points \(\{\chi _{i},\omega _{i}\}_{i=0\,\ldots \,2n} = UT(\bar{x},P_{xx})\), where

$$\displaystyle{ \chi _{0} =\bar{ x}\ \ \ ;\ \ \ \omega _{0} = \frac{\kappa } {n+\kappa } }$$

$$\displaystyle{ \left.\begin{array}{l} \chi _{i} =\bar{ x} + \left [\sqrt{(n+\kappa )P_{xx}}\right ]_{i}\ \ \ ;\ \ \ \omega _{i} = \frac{1} {2(n+\kappa )} \\ \chi _{i+n} =\bar{ x} -\left [\sqrt{(n+\kappa )P_{xx}}\right ]_{i}\ \ \ ;\ \ \ \omega _{i+n} = \frac{1} {2(n+\kappa )}\\ \end{array} \right \}\,i = 1\,\ldots \,n }$$

N.B. The term \(\left [\sqrt{(n+\kappa )P_{xx}}\right ]_{i}\) represents the ith column vector of the matrix square root of (n + κ)P<sub>xx</sub> and is obtained via the Cholesky factorisation. The parameter κ is a scaling parameter and ω<sub>i</sub> the weight associated with each sigma point.
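The sigma-point construction above can be sketched in a few lines of Python; the Cholesky factor supplies the columns of the matrix square root. The function name is an illustrative assumption:

```python
import numpy as np

def unscented_transform_points(x_bar, Pxx, kappa=1.0):
    """Generate the 2n+1 sigma points and weights defined above,
    using a Cholesky factor of (n + kappa) * Pxx."""
    n = len(x_bar)
    S = np.linalg.cholesky((n + kappa) * Pxx)   # column i is [sqrt((n+k)Pxx)]_i
    chi = np.empty((2 * n + 1, n))
    w = np.empty(2 * n + 1)
    chi[0] = x_bar
    w[0] = kappa / (n + kappa)
    for i in range(n):
        chi[1 + i] = x_bar + S[:, i]            # plus column, weight 1/(2(n+k))
        chi[1 + n + i] = x_bar - S[:, i]        # minus column, same weight
        w[1 + i] = w[1 + n + i] = 1.0 / (2.0 * (n + kappa))
    return chi, w
```

By construction the weighted sample mean and covariance of the sigma points reproduce \(\bar{x}\) and \(P_{xx}\) exactly.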

55.2.3 Unscented Kalman Filter (UKF)

Unscented Kalman Filter algorithm

Description:

  1. Initialization:

     State mean and covariance at k = 0: \(\hat{x}_{0} = E\left [x_{0}\right ]\) and \(P_{0} = E\left [(x_{0} -\hat{ x}_{0}){(x_{0} -\hat{ x}_{0})}^{T}\right ]\)

  2. Prediction phase

     1. (a) Generation of 2n + 1 sigma points: \(\{\chi _{i,k-1},\omega _{i}\}_{i=0\,\ldots \,2n} = UT(\hat{x}_{k-1},P_{x_{k-1}})\)

     2. (b) Predicted state: \(\chi _{i,k}^{-} = f_{k}(\chi _{i,k-1},u_{k-1})\) and \(\hat{x}_{k}^{-} =\sum \limits _{ i=0}^{2n}\omega _{i}\chi _{i,k}^{-}\)

     3. (c) Predicted covariance: \(P_{x_{k}}^{-} =\sum \limits _{ i=0}^{2n}\omega _{i}(\chi _{i,k}^{-}-\hat{ x}_{k}^{-}){(\chi _{i,k}^{-}-\hat{ x}_{k}^{-})}^{T} + Q\)

  3. Correction phase

     1. (a) Measurement update: \(Y _{i,k} = h_{k}(\chi _{i,k}^{-})\)

     2. (b) Measurement prediction: \(\hat{y}_{k} =\sum \limits _{ i=0}^{2n}\omega _{i}Y _{i,k}\)

     3. (c) Innovation (residual term): \(\tilde{y}_{k} = y_{k} -\hat{ y}_{k}\)

     4. (d) Innovation covariance: \(P_{y_{k}} =\sum \limits _{ i=0}^{2n}\omega _{i}(Y _{i,k} -\hat{ y}_{k}){(Y _{i,k} -\hat{ y}_{k})}^{T} + R\)

     5. (e) Cross covariance: \(P_{x_{k}y_{k}} =\sum \limits _{ i=0}^{2n}\omega _{i}(\chi _{i,k}^{-}-\hat{ x}_{k}^{-}){(Y _{i,k} -\hat{ y}_{k})}^{T}\)

     6. (f) Updated state mean and covariance:

       • Kalman gain matrix: \(K_{k} = P_{x_{k}y_{k}}P_{y_{k}}^{-1}\)

       • State update: \(\hat{x}_{k} =\hat{ x}_{k}^{-} + K_{k}\tilde{y}_{k}\)

       • Covariance update: \(P_{x_{k}} = P_{x_{k}}^{-}- K_{k}P_{y_{k}}K_{k}^{T}\)
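As with the EKF, the UKF cycle above can be sketched compactly, regenerating the sigma points at each step and propagating them through the (possibly nonlinear) models. The interface below is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def ukf_step(x_hat, P, u, y, f, h, Q, R, kappa=1.0):
    """One UKF cycle: unscented transform of the current estimate,
    then the prediction and correction phases described above."""
    n = len(x_hat)
    S = np.linalg.cholesky((n + kappa) * P)        # columns: [sqrt((n+k)P)]_i
    chi = np.vstack([x_hat, x_hat + S.T, x_hat - S.T])
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Prediction phase
    chi_pred = np.array([f(c, u) for c in chi])    # sigma points through f_k
    x_pred = w @ chi_pred                          # predicted state mean
    P_pred = Q + sum(wi * np.outer(c - x_pred, c - x_pred)
                     for wi, c in zip(w, chi_pred))
    # Correction phase
    Y = np.array([h(c) for c in chi_pred])         # sigma points through h_k
    y_pred = w @ Y                                 # measurement prediction
    Pyy = R + sum(wi * np.outer(yi - y_pred, yi - y_pred)
                  for wi, yi in zip(w, Y))         # innovation covariance
    Pxy = sum(wi * np.outer(c - x_pred, yi - y_pred)
              for wi, c, yi in zip(w, chi_pred, Y))  # cross covariance
    K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain
    x_new = x_pred + K @ (y - y_pred)              # state update
    P_new = P_pred - K @ Pyy @ K.T                 # covariance update
    return x_new, P_new
```

For a linear model this cycle coincides with the ordinary Kalman filter update, which provides a convenient sanity check.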


Copyright information

© 2014 The Society for Experimental Mechanics

About this paper

Cite this paper

Abid, F., Chevallier, G., Blanchard, J.L., Dion, J.L., Dauchez, N. (2014). System Identification Using Kalman Filters. In: Allemang, R., De Clerck, J., Niezrecki, C., Wicks, A. (eds) Topics in Modal Analysis, Volume 7. Conference Proceedings of the Society for Experimental Mechanics Series. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6585-0_55

  • DOI: https://doi.org/10.1007/978-1-4614-6585-0_55

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4614-6584-3

  • Online ISBN: 978-1-4614-6585-0
