Abstract
The present study focuses on non-intrusive Model Order Reduction (MOR) methods that can be regarded as system identification techniques: the system to analyze is treated as a black box, and the aim is to model accurately the relationship between its input and output. In this framework, the paper deals with two methodologies for the system identification of thermal problems. The first identifies a linear thermal system by means of an Extended Kalman Filter (EKF). The approach starts from an a priori analytical model whose expression is assumed to describe appropriately the response of the system to be identified; the EKF is then used to estimate the model's transient states and parameters. This methodology does not, however, extend to nonlinear systems, owing to the difficulty of the analytical model construction step. A second approach, based on an Unscented Kalman Filter (UKF), is therefore presented. Finally, a Finite Element (FE) model is used as a reference, and the good agreement between the FE results and the responses produced by the EKF and UKF methods in the linear case demonstrates their value.
References
Andrews HC, Patterson CL (1976) Singular value decompositions and digital image processing. IEEE Trans Acoust Speech Signal Process 24(1):26–53
Berkooz G, Holmes P, Lumley JL (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech 25(1):539–575
Boyce WE, DiPrima RC (1977) Elementary differential equations and boundary value problems. John Wiley & Sons, New York
Craig R, Bampton MCC (1968) Coupling of substructures for dynamic analyses. AIAA J 6(7):1313–1319
Dormand JR, Prince PJ (1980) A family of embedded Runge-Kutta formulae. J Comput Appl Math 6(1):19–26
Guyan RJ (1965) Reduction of stiffness and mass matrices. AIAA J 3(2):380–380
Julier SJ, Uhlmann JK (1996) A general method for approximating nonlinear transformations of probability distributions. Technical report, University of Oxford, Department of Engineering Science
Julier SJ, Uhlmann JK (1997) A new extension of the Kalman filter to nonlinear systems. In: Proceedings of AeroSense: the 11th international symposium on aerospace/defence sensing, simulation and controls, Orlando, Florida, pp 182–193
LaViola JJ Jr (2003) A comparison of unscented and extended Kalman filtering for estimating quaternion motion. In: Proceedings of the 2003 American Control Conference, Denver, Colorado, pp 2435–2440
Mathews JH, Fink KD (2004) Numerical methods using MATLAB. Prentice Hall, Upper Saddle River, New Jersey
Moore BC (1981) Principal component analysis in linear systems: controllability, observability, and model reduction. IEEE Trans Autom Control 26(1):17–32
Sorenson HW (1970) Least-squares estimation: from Gauss to Kalman. IEEE Spectr 7(7):63–68
Appendices
Model Construction Step Using EKF
The solution of Eq. (55.3) on the time interval \([t_{i},t_{f}]\) is given by [1]:
where the exponential matrix is defined as \({e}^{\tilde{A}t} =\sum \limits _{ k=0}^{\infty }\frac{1} {k!}{(\tilde{A}t)}^{k}\). With \(t_{i} = t_{k}\) and \(t_{f} = t_{k+1}\), (55.7) becomes:
Simplifying the notation by writing k instead of t k and assuming u(t) constant over the sampling interval \([t_{k},t_{k+1}]\), the discrete state-space model is written as follows:
where \(x_{r_{k}}\) is the state vector of internal variables at time k, y k the observation vector at time k, u k − 1 the input data at time k − 1, and \((\tilde{A}_{d}\,\tilde{B}_{d}\,\tilde{C}_{d})\) the constitutive matrices of the discrete reduced-order model: \(\tilde{A}_{d} = \left [\begin{array}{*{20}c} {e}^{a_{1}T}&& \\ &\ddots & \\ &&{e}^{a_{n_{r}}T}\\ \end{array} \right ]\); \(\tilde{B}_{d} =\tilde{A}^{-1}({e}^{\tilde{A}T}-I)\left [\begin{array}{l} b_{1}\\ \vdots \\ b_{n_{r}}\\ \end{array} \right ] = \left [\begin{array}{l} \frac{b_{1}} {a_{1}} ({e}^{a_{1}T} - 1)\\ \vdots \\ \frac{b_{n_{r}}} {a_{n_{r}}}({e}^{a_{n_{r}}T} - 1) \\ \end{array} \right ]\); \(\tilde{C}_{d} =\tilde{ C} = \left [\begin{array}{*{20}c} c_{11} & \cdots & c_{1n_{r}}\\ \vdots & \ddots & \vdots \\ c_{n_{r}1} & \cdots &c_{n_{r}n_{r}}\\ \end{array} \right ]\).
Since the objective of our procedure is the identification of parameters, these have to be included in the state vector. The state-transition and observation maps associated with \(\tilde{A}_{d}\) and \(\tilde{C}_{d}\) thereby become nonlinear and will be denoted \(\tilde{f}_{d}\) and \(\tilde{h}_{d}\), respectively. The discrete model is then given by:
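As a minimal sketch of the discretization above (assuming a diagonal reduced model with hypothetical modal parameters \(a_{i}\), \(b_{i}\) and NumPy available), the discrete matrices \(\tilde{A}_{d}\) and \(\tilde{B}_{d}\) can be assembled as:

```python
import numpy as np

def discretize_modal(a, b, T):
    """Zero-order-hold discretization of the diagonal reduced model
    x_r' = diag(a) x_r + b u, sampled with period T."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    Ad = np.diag(np.exp(a * T))           # e^{a_i T} on the diagonal
    Bd = (b / a) * (np.exp(a * T) - 1.0)  # (b_i / a_i)(e^{a_i T} - 1)
    return Ad, Bd

# hypothetical modal parameters for illustration only
Ad, Bd = discretize_modal([-1.0, -5.0], [1.0, 2.0], T=0.1)
```

The closed-form column of \(\tilde{B}_{d}\) follows from the diagonal structure of \(\tilde{A}\); for a non-diagonal \(\tilde{A}\) one would evaluate \(\tilde{A}^{-1}({e}^{\tilde{A}T}-I)\tilde{B}\) with a matrix exponential instead.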
EKF and UKF Algorithms
55.2.1 Extended Kalman Filter (EKF)
Extended Kalman Filter algorithm

1: Initialization:

State mean and covariance at k = 0: \(\hat{x}_{0} = E\left [x_{0}\right ]\) and \(P_{0} = E\left [(x_{0} -\hat{ x}_{0}){(x_{0} -\hat{ x}_{0})}^{T}\right ]\)

2: Prediction phase

(a) Process model Jacobian: \(F_{k} = \left.\dfrac{\partial f_{k}} {\partial x}\right|_{x=\hat{x}_{k-1}}\)

(b) Predicted state mean and covariance: \(\hat{x}_{k}^{-} = f_{k}(\hat{x}_{k-1},u_{k-1})\) and \(P_{k}^{-} = F_{k}P_{k-1}F_{k}^{T} + Q\)

3: Correction phase

(a) Measurement model Jacobian: \(H_{k} = \left.\dfrac{\partial h_{k}} {\partial x}\right|_{x=\hat{x}_{k}^{-}}\)

(b) Measurement update:

Measurement prediction: \(\hat{y}_{k}= h_{k}\left (\hat{x}_{k}^{-}\right )\)

Innovation (residual term): \(\tilde{y}_{k} = y_{k} -\hat{ y}_{k}\)

Innovation covariance matrix: \(M_{k} = \operatorname{cov}\left (\tilde{y}_{k}\right )\,=\,H_{k}P_{k}^{-}H_{k}^{T} + R\)

(c) Updated state mean and covariance:

Kalman gain matrix: \(K_{k} = P_{k}^{-}H_{k}^{T}M_{k}^{-1}\)

State update: \(\hat{x}_{k} =\hat{ x}_{k}^{-} + K_{k}\tilde{y}_{k}\)

Covariance update: \(P_{k} = \left (I - K_{k}H_{k}\right )P_{k}^{-}\)
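The EKF recursion above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the example system (a hypothetical scalar model \(x_{k} = 0.9\,x_{k-1} + u_{k-1}\), \(y_{k} = x_{k}\)) is chosen only so the step can be exercised.

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F_jac, H_jac, Q, R):
    """One EKF iteration: prediction then correction (illustrative sketch)."""
    # Prediction phase
    F = F_jac(x, u)                      # process Jacobian at previous estimate
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Correction phase
    H = H_jac(x_pred)                    # measurement Jacobian at predicted state
    innov = y - h(x_pred)                # innovation (residual term)
    M = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(M)  # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# hypothetical scalar example: x_k = 0.9 x_{k-1} + u, y = x
f = lambda x, u: 0.9 * x + u
h = lambda x: x
F_jac = lambda x, u: np.array([[0.9]])
H_jac = lambda x: np.array([[1.0]])
x1, P1 = ekf_step(np.array([0.0]), np.eye(1), np.array([1.0]),
                  np.array([1.2]), f, h, F_jac, H_jac,
                  0.01 * np.eye(1), 0.1 * np.eye(1))
```

For parameter identification as described in the appendix, the unknown parameters would simply be appended to the state vector before defining `f` and its Jacobian.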
55.2.2 Unscented Transform (UT)
Unscented Transform
Let x ∈ ℝ n be a Gaussian random vector with \(E\left [x\right ] =\bar{ x}\) and \(E[(x -\bar{ x}){(x -\bar{ x})}^{T}]\,=\,P_{xx}\), and let g : ℝ n → ℝ m be a general nonlinear function, y = g(x).

1: Decomposition of the distribution into 2n + 1 sigma-points \(\{\chi _{i},\,\omega _{i}\}_{i=0\,\ldots \,2n} = UT(\bar{x},P_{xx})\), where

\(\chi _{0} =\bar{ x}\ \ \ ;\ \ \ \omega _{0} = \frac{\kappa } {(n+\kappa )}\)

\(\chi _{i} =\bar{ x} + \left[\sqrt{(n+\kappa )P_{xx}}\right]_{i}\ \ \ ;\ \ \ \omega _{i} = \frac{1} {2(n+\kappa )}\), i = 1 … n

\(\chi _{i+n} =\bar{ x} - \left[\sqrt{(n+\kappa )P_{xx}}\right]_{i}\ \ \ ;\ \ \ \omega _{i+n} = \frac{1} {2(n+\kappa )}\), i = 1 … n

N.B. The term \(\left [\sqrt{(n+\kappa )P_{xx}}\right ]_{i}\) represents the ith column vector of the matrix square root of (n + κ)P xx and is derived via the Cholesky factorisation. The parameter κ is a scaling parameter and ω i the weight associated with each sigma-point.
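The sigma-point construction above can be sketched as follows (a minimal NumPy illustration; the Cholesky factor's columns are used as in the N.B., and the caller must ensure n + κ > 0):

```python
import numpy as np

def unscented_transform_points(x_mean, P, kappa=0.0):
    """Generate the 2n+1 sigma-points and weights for mean x_mean, covariance P."""
    n = len(x_mean)
    S = np.linalg.cholesky((n + kappa) * P)  # matrix square root of (n+kappa)P
    chi = np.empty((2 * n + 1, n))
    w = np.empty(2 * n + 1)
    chi[0] = x_mean
    w[0] = kappa / (n + kappa)
    for i in range(n):
        chi[i + 1] = x_mean + S[:, i]        # chi_i     = xbar + [sqrt((n+k)P)]_i
        chi[i + 1 + n] = x_mean - S[:, i]    # chi_{i+n} = xbar - [sqrt((n+k)P)]_i
    w[1:] = 1.0 / (2.0 * (n + kappa))
    return chi, w

chi, w = unscented_transform_points(np.zeros(2), np.eye(2), kappa=1.0)
```

By construction the weighted sigma-points reproduce the mean \(\bar{x}\) and covariance \(P_{xx}\) exactly.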
55.2.3 Unscented Kalman Filter (UKF)
Unscented Kalman Filter algorithm

1: Initialization:

State mean and covariance at k = 0: \(\hat{x}_{0} = E\left [x_{0}\right ]\) and \(P_{0} = E\left [(x_{0} -\hat{ x}_{0}){(x_{0} -\hat{ x}_{0})}^{T}\right ]\)

2: Prediction phase

(a) Generation of 2n + 1 sigma-points: \(\{\chi _{i,k-1},\omega _{i}\}_{i=0\,\ldots \,2n} = UT(\hat{x}_{k-1},P_{x_{k-1}})\)

(b) Predicted state: \(\chi _{i,k}^{-} = f_{k}(\chi _{i,k-1},u_{k-1})\) and \(\hat{x}_{k}^{-} =\sum \limits _{ i=0}^{2n}\omega _{i}\chi _{i,k}^{-}\)

(c) Predicted covariance: \(P_{x_{k}}^{-} =\sum \limits _{ i=0}^{2n}\omega _{i}(\chi _{i,k}^{-}-\hat{ x}_{k}^{-}){(\chi _{i,k}^{-}-\hat{ x}_{k}^{-})}^{T} + Q\)

3: Correction phase

(a) Propagation through the measurement model: \(Y _{i,k} = h_{k}(\chi _{i,k}^{-})\)

(b) Measurement prediction: \(\hat{y}_{k} =\sum \limits _{ i=0}^{2n}\omega _{i}Y _{i,k}\)

(c) Innovation (residual term): \(\tilde{y}_{k} = y_{k} -\hat{ y}_{k}\)

(d) Innovation covariance: \(P_{y_{k}} =\sum \limits _{ i=0}^{2n}\omega _{i}(Y _{i,k} -\hat{ y}_{k}){(Y _{i,k} -\hat{ y}_{k})}^{T} + R\)

(e) Cross covariance: \(P_{x_{k}y_{k}} =\sum \limits _{ i=0}^{2n}\omega _{i}(\chi _{i,k}^{-}-\hat{ x}_{k}^{-}){(Y _{i,k} -\hat{ y}_{k})}^{T}\)

(f) Updated state mean and covariance:

Kalman gain matrix: \(K_{k} = P_{x_{k}y_{k}}P_{y_{k}}^{-1}\)

State update: \(\hat{x}_{k} =\hat{ x}_{k}^{-} + K_{k}\tilde{y}_{k}\)

Covariance update: \(P_{x_{k}} = P_{x_{k}}^{-}- K_{k}P_{y_{k}}K_{k}^{T}\)
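The UKF recursion can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the predicted sigma-points are propagated directly through the measurement model, as in the listing, and the example system (a hypothetical scalar model \(x_{k} = 0.9\,x_{k-1} + u_{k-1}\), \(y_{k} = x_{k}\)) is chosen only for demonstration.

```python
import numpy as np

def ukf_step(x, P, u, y, f, h, Q, R, kappa=1.0):
    """One UKF iteration built on the unscented transform (illustrative sketch)."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)       # matrix square root of (n+k)P
    chi = np.vstack([x, x + S.T, x - S.T])        # 2n+1 sigma-points (one per row)
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Prediction phase
    chi_pred = np.array([f(c, u) for c in chi])
    x_pred = w @ chi_pred
    P_pred = sum(w[i] * np.outer(chi_pred[i] - x_pred, chi_pred[i] - x_pred)
                 for i in range(2 * n + 1)) + Q
    # Correction phase
    Y = np.array([h(c) for c in chi_pred])        # propagate through h
    y_pred = w @ Y
    Pyy = sum(w[i] * np.outer(Y[i] - y_pred, Y[i] - y_pred)
              for i in range(2 * n + 1)) + R      # innovation covariance
    Pxy = sum(w[i] * np.outer(chi_pred[i] - x_pred, Y[i] - y_pred)
              for i in range(2 * n + 1))          # cross covariance
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    return x_pred + K @ (y - y_pred), P_pred - K @ Pyy @ K.T

# hypothetical scalar example, mirroring the EKF sketch
x1, P1 = ukf_step(np.array([0.0]), np.eye(1), np.array([1.0]), np.array([1.2]),
                  lambda x, u: 0.9 * x + u, lambda x: x,
                  0.01 * np.eye(1), 0.1 * np.eye(1))
```

Since no Jacobians appear, the same step handles nonlinear \(f\) and \(h\) unchanged, which is the practical advantage of the UKF over the EKF noted in the paper.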
Copyright information
© 2014 The Society for Experimental Mechanics
Cite this paper
Abid, F., Chevallier, G., Blanchard, J.L., Dion, J.L., Dauchez, N. (2014). System Identification Using Kalman Filters. In: Allemang, R., De Clerck, J., Niezrecki, C., Wicks, A. (eds) Topics in Modal Analysis, Volume 7. Conference Proceedings of the Society for Experimental Mechanics Series. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-6585-0_55
Print ISBN: 978-1-4614-6584-3
Online ISBN: 978-1-4614-6585-0