Abstract
Compared with PLS and PCA, independent component analysis (ICA) exploits higher-order statistical information of the signal (higher than second order) to extract its non-Gaussian characteristics. In recent years, ICA has been widely used as a fault diagnosis method in the field of non-Gaussian process monitoring.
Appendix
Proposition 1 Assume the kernel function is a radial basis function, such that \(k({\varvec{x}}_{i} ,{\varvec{x}}_{j} ) = \exp ( - \left\| {{\varvec{x}}_{i} - {\varvec{x}}_{j} } \right\|^{2} /c)\). Then, for a given sample \({\varvec{x}}^{b}\), if \({\varvec{x}}^{b} = {\varvec{0}}\), \(\Psi ({\varvec{x}}^{b} )\) is approximately zero.
Proof Assume that \({\varvec{X}}_{R} \in \Re^{{{\text{N}} \times {\text{m}}}}\) is the original training matrix containing \({\text{N}}\) samples, where \({\text{m}}\) is the number of variables in each sample, and that \({\varvec{x}}_{R}^{b} \in \Re^{{1 \times {\text{m}}}}\) is an original test sample. \({\varvec{X}}_{R}\) and \({\varvec{x}}_{R}^{b}\) are mean-centered as follows:
By performing the first-order Taylor series expansion, \(k({\varvec{x}}_{i} ,{\varvec{x}}_{j} )\) can be expressed as:
Let \({\text{dis}}({\varvec{x}}_{i} ,{\varvec{x}}_{j} )\) be the Euclidean distance between \({\varvec{x}}_{i}\) and \({\varvec{x}}_{j}\); that is:
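Combining the kernel definition in Proposition 1 with the distance notation above, the expansion step presumably takes the following form (a sketch; \({\text{dis}}({\varvec{x}}_{i} ,{\varvec{x}}_{j} )\) is assumed here to denote the squared Euclidean distance, consistent with the exponent of the kernel):

```latex
% First-order Taylor expansion of the RBF kernel about dis = 0
% (sketch; dis(x_i, x_j) = ||x_i - x_j||^2 is assumed):
k(\varvec{x}_i, \varvec{x}_j)
  = \exp\!\left(-\,\mathrm{dis}(\varvec{x}_i, \varvec{x}_j)/c\right)
  \approx 1 - \mathrm{dis}(\varvec{x}_i, \varvec{x}_j)/c
```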
Then, the Euclidean distance matrix of \({\varvec{X}}_{R}\) can be written as:
where \({\varvec{\varTheta}}_{{{\text{i}},{\text{j}}}} = {\text{dis}}({\varvec{x}}_{i} ,{\varvec{x}}_{j} )\) and \({\varvec{\varGamma}} = [{\varvec{x}}_{1} {\varvec{x}}_{1}^{{\text{T}}} ,{\varvec{x}}_{2} {\varvec{x}}_{2}^{{\text{T}}} , \ldots ,{\varvec{x}}_{{\text{N}}} {\varvec{x}}_{{\text{N}}}^{{\text{T}}} ]^{{\text{T}}} \in \Re^{{{\text{N}} \times 1}}\). Define the centering matrix \({\varvec{H}} = {\varvec{I}}_{{\text{N}}} - (1/N){\varvec{1}}_{{\text{N}}} {\varvec{1}}_{{\text{N}}}^{{\text{T}}}\); then we can obtain \({\varvec{1}}_{{\text{N}}}^{{\text{T}}} {\varvec{H}}^{{\text{T}}} = ({\varvec{H}}{\varvec{1}}_{{\text{N}}} )^{{\text{T}}} = ({\varvec{1}}_{{\text{N}}} - (1/N){\varvec{1}}_{{\text{N}}} {\varvec{1}}_{{\text{N}}}^{{\text{T}}} {\varvec{1}}_{{\text{N}}} )^{{\text{T}}} = {\varvec{0}}\).
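The annihilation property \({\varvec{1}}_{{\text{N}}}^{{\text{T}}} {\varvec{H}}^{{\text{T}}} = {\varvec{0}}\) is easy to verify numerically; a minimal plain-Python sketch (the sample count N is chosen arbitrarily for illustration):

```python
# Verify that the centering matrix H = I_N - (1/N) * 1_N 1_N^T
# annihilates the all-ones vector, i.e. H @ 1_N = 0.
N = 5

# Build H as a list of rows: identity minus the constant 1/N.
H = [[(1.0 if i == j else 0.0) - 1.0 / N for j in range(N)] for i in range(N)]

# H @ 1_N: each entry is simply the corresponding row sum of H.
H_ones = [sum(row) for row in H]
print(H_ones)  # every entry is (numerically) zero
```

Each row sum equals \(1 - N \cdot (1/N) = 0\), so centering any vector of constants yields the zero vector.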
According to Eqs. (9.113) and (9.114), the relationship between \({\varvec{K}}\) and \({\varvec{\varTheta}}\) can be expressed as:
Then, \({\varvec{K}}\) can be mean centered as:
For a certain test sample \({\varvec{x}}^{b}\), similar to Eqs. (9.114) and (9.115), its distance vector \({\varvec{\varTheta}}_{b}\) and kernel vector \({\varvec{k}}({\varvec{x}}^{b} )\) can be calculated as:
Perform the centralization operation on \({\varvec{k}}({\varvec{x}}^{b} )\) as:
If \({\varvec{x}}^{b} = {\varvec{0}}\), then \({\varvec{k}}({\varvec{x}}^{b} )^{*} = {\varvec{0}}\). After scaling, i.e., \(\overline{\varvec{k}}({\varvec{x}}^{b} ) = {\varvec{k}}({\varvec{x}}^{b} )^{*} /[{\text{trace}}({\varvec{K}}^{*} )/{\text{N}}]\), it is obvious that \(\overline{\varvec{k}}({\varvec{x}}^{b} ) = {\varvec{0}}\). From Eq. (9.99), the statistic of \({\varvec{x}}^{b}\) can be calculated as:
Substituting \(\overline{\varvec{k}}({\varvec{x}}^{b} ) = {\varvec{0}}\) into the above expression gives \(\Psi ({\varvec{0}}) = 0\); for the original (unexpanded) kernel the result holds only approximately, because of the first-order Taylor expansion.
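Proposition 1 can also be checked numerically. The sketch below uses toy data and a hypothetical kernel width; the double-centring formula for the test kernel vector is the standard one used in kernel PCA, which the chapter's Eqs. (9.115) and (9.117) are assumed to match:

```python
import math

# Numerical sketch of Proposition 1: with mean-centred training data and an
# RBF kernel whose width c is large (so the first-order Taylor expansion is
# accurate), the centred kernel vector of the test sample x^b = 0 is
# approximately zero. Data and c are illustrative choices.
X = [[1.0, -2.0], [0.5, 1.0], [-2.0, 0.5], [0.5, 0.5]]  # toy training data
N, m = len(X), len(X[0])

# Mean-centre the training data column-wise.
means = [sum(x[v] for x in X) / N for v in range(m)]
X = [[x[v] - means[v] for v in range(m)] for x in X]

c = 1e4  # large kernel width

def rbf(a, b):
    return math.exp(-sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / c)

K = [[rbf(X[i], X[j]) for j in range(N)] for i in range(N)]

xb = [0.0] * m                       # test sample at the origin
k = [rbf(xb, X[i]) for i in range(N)]

# Standard double-centring of the test kernel vector:
# k*_i = k_i - mean_j k_j - mean_j K_ji + mean_jl K_jl
row_mean = [sum(K[j][i] for j in range(N)) / N for i in range(N)]
k_mean = sum(k) / N
K_mean = sum(sum(row) for row in K) / (N * N)
k_star = [k[i] - k_mean - row_mean[i] + K_mean for i in range(N)]
print(k_star)  # entries are close to zero
```

Under the first-order approximation the four terms cancel exactly (the cross terms vanish because the training data have zero mean); the residual here comes only from the second-order remainder of the exponential.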
Proposition 2 For a rank-deficient matrix \({\varvec{R}} \in \Re^{{{\text{d}} \times {\text{l}}}}\), if SVD is performed on \({\varvec{RR}}^{{\text{T}}}\) as
then the property \({\varvec{P}}_{u}^{{\text{T}}} {\varvec{R}} = {\varvec{0}}\) holds.
Proof: According to the properties of SVD, for a given matrix \({\varvec{R}} \in \Re^{{{\text{d}} \times {\text{l}}}}\), if \(\gamma = {\text{rank}}({\varvec{R}}) < \min ({\text{d}},{\text{l}})\), then there exist two unitary matrices \({\varvec{U}} \in \Re^{{{\text{d}} \times {\text{d}}}}\) and \({\varvec{V}} \in \Re^{{{\text{l}} \times {\text{l}}}}\) such that
where \({\varvec{Q}} = {\text{diag}}(\sigma_{1} ,\sigma_{2} , \ldots ,\sigma_{\gamma } )\), and its diagonal elements are arranged in order \(\sigma_{1} \ge \sigma_{2} \ge \ldots \ge \sigma_{\gamma } > 0\).
According to the properties of SVD, it holds that
That is, matrices \({\varvec{R}}\) and \({\varvec{RR}}^{{\text{T}}}\) have the same left singular matrix. Partition \({\varvec{U}}\) as \({\varvec{U}} = [{\varvec{U}}_{1} \quad {\varvec{U}}_{2} ]\), with \({\varvec{U}}_{1} \in \Re^{{{\text{d}} \times \gamma }}\) and \({\varvec{U}}_{2} \in \Re^{{{\text{d}} \times ({\text{d}} - \gamma )}}\); then Eqs. (9.122) and (9.123) can be rewritten as
Left-multiplying Eq. (9.124) by the matrix \(\left[ \begin{gathered} {\varvec{U}}_{1}^{{\text{T}}} \hfill \\ {\varvec{U}}_{2}^{{\text{T}}} \hfill \\ \end{gathered} \right]\) gives:
It is obvious that \({\varvec{U}}_{2}^{{\text{T}}} {\varvec{R}} = {\varvec{0}}\). Comparing Eqs. (9.121) and (9.125) shows that \({\varvec{U}}_{2} = {\varvec{P}}_{u}\); thus \({\varvec{P}}_{u}^{{\text{T}}} {\varvec{R}} = {\varvec{0}}\).
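The property can be illustrated with a hand-constructed rank-deficient matrix, avoiding an explicit SVD routine (the vectors below play the role of the columns of \({\varvec{P}}_{u}\); the example is purely illustrative):

```python
# Numerical sketch of Proposition 2. R (3 x 2) has rank 1, and its range is
# spanned by (1, 2, 3). The vectors u2a and u2b are orthogonal to that range,
# i.e. they lie in the residual subspace spanned by the columns of P_u, so
# u2^T R must vanish.
R = [[1.0, 2.0],
     [2.0, 4.0],
     [3.0, 6.0]]          # second column = 2 * first column -> rank(R) = 1

u2a = [2.0, -1.0, 0.0]    # both vectors are orthogonal to (1, 2, 3),
u2b = [3.0, 0.0, -1.0]    # i.e. to the range of R

def left_multiply(u, M):
    # Compute u^T M for a vector u and matrix M (list of rows).
    return [sum(u[i] * M[i][j] for i in range(len(u))) for j in range(len(M[0]))]

print(left_multiply(u2a, R))  # [0.0, 0.0]
print(left_multiply(u2b, R))  # [0.0, 0.0]
```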
Copyright information
© 2024 Science Press
Cite this chapter
Kong, X., Luo, J., Feng, X. (2024). Non-Gaussian Process Monitoring and Fault Diagnosis. In: Process Monitoring and Fault Diagnosis Based on Multivariable Statistical Analysis. Engineering Applications of Computational Methods, vol 19. Springer, Singapore. https://doi.org/10.1007/978-981-99-8775-7_9
Print ISBN: 978-981-99-8774-0
Online ISBN: 978-981-99-8775-7
eBook Packages: Mathematics and Statistics (R0)