Part of the book series: Engineering Applications of Computational Methods (EACM, volume 19)


Abstract

Compared with PLS and PCA, independent component analysis (ICA) exploits higher-order statistical information of the signal (third order and above) to extract its non-Gaussian characteristics. In recent years, ICA has been widely used as a fault diagnosis method in the field of non-Gaussian process monitoring.
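To make the idea concrete, the following is a minimal sketch of ICA-based monitoring, assuming scikit-learn's FastICA as the ICA estimator and the I² statistic commonly used in the ICA-monitoring literature (the chapter's own variants, such as modified or kernel ICA, differ in detail):

```python
import numpy as np
from sklearn.decomposition import FastICA  # assumes scikit-learn >= 1.1

rng = np.random.default_rng(0)

# Non-Gaussian (super-Gaussian) sources mixed linearly: the setting ICA targets.
N, m = 500, 3
S = rng.laplace(size=(N, m))         # latent non-Gaussian sources
A = rng.normal(size=(m, m))          # unknown mixing matrix
X = S @ A.T                          # observed training data, one sample per row

ica = FastICA(n_components=m, whiten="unit-variance", random_state=0)
ica.fit(X)
W = ica.components_                  # estimated demixing matrix

def i2(x):
    """I^2 statistic: squared norm of the estimated independent components."""
    s = W @ (x - ica.mean_)
    return float(s @ s)

# Empirical 99% control limit from normal-operation training data.
limit = np.percentile([i2(x) for x in X], 99)

x_new = X[0] + 5.0                   # hypothetical faulty sample (shifted)
print(i2(x_new) > limit)             # True flags a potential fault
```

The control limit here is a simple empirical percentile; kernel-density estimators are a typical refinement.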



Appendix

Proposition 1 Assume the kernel function is a radial basis function, i.e., \(k(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\|\mathbf{x}_i - \mathbf{x}_j\|^2 / c)\). For a given sample \(\mathbf{x}^b\), if \(\mathbf{x}^b = \mathbf{0}\), then \(\Psi(\mathbf{x}^b)\) is approximately zero.

Proof Assume that \(\mathbf{X}_R \in \mathbb{R}^{N \times m}\) is the original training matrix containing \(N\) samples, where \(m\) is the number of variables in each sample, and that \(\mathbf{x}_R^b \in \mathbb{R}^{1 \times m}\) is an original test sample. \(\mathbf{X}_R\) and \(\mathbf{x}_R^b\) are mean-centered as follows:

$$ \mathbf{X} = \mathbf{X}_R - \frac{1}{N}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}} \mathbf{X}_R, $$
(9.110)
$$ \mathbf{x}^b = \mathbf{x}_R^b - \frac{1}{N}\mathbf{1}_N^{\mathrm{T}} \mathbf{X}_R. $$
(9.111)

By performing a first-order Taylor series expansion, \(k(\mathbf{x}_i, \mathbf{x}_j)\) can be expressed as:

$$ k(\mathbf{x}_i, \mathbf{x}_j) = \sum_{n=0}^{1} \frac{1}{n!}\left[ -\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{c} \right]^{n} + o(\Delta \mathbf{x}^2) = 1 - \frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{c} + o(\Delta \mathbf{x}^2). $$
(9.112)

Let \(\mathrm{dis}(\mathbf{x}_i, \mathbf{x}_j)\) denote the squared Euclidean distance between \(\mathbf{x}_i\) and \(\mathbf{x}_j\), so that:

$$ \mathrm{dis}(\mathbf{x}_i, \mathbf{x}_j) = \|\mathbf{x}_i - \mathbf{x}_j\|^2 = \mathbf{x}_i \mathbf{x}_i^{\mathrm{T}} + \mathbf{x}_j \mathbf{x}_j^{\mathrm{T}} - 2\mathbf{x}_i \mathbf{x}_j^{\mathrm{T}}. $$
(9.113)

Then, the squared Euclidean distance matrix of \(\mathbf{X}_R\) can be written as:

$$ \boldsymbol{\Theta} = \boldsymbol{\Gamma}\,\mathbf{1}_N^{\mathrm{T}} + \mathbf{1}_N \boldsymbol{\Gamma}^{\mathrm{T}} - 2\mathbf{X}_R \mathbf{X}_R^{\mathrm{T}}, $$
(9.114)

where \(\boldsymbol{\Theta}_{i,j} = \mathrm{dis}(\mathbf{x}_i, \mathbf{x}_j)\) and \(\boldsymbol{\Gamma} = [\mathbf{x}_1 \mathbf{x}_1^{\mathrm{T}}, \mathbf{x}_2 \mathbf{x}_2^{\mathrm{T}}, \ldots, \mathbf{x}_N \mathbf{x}_N^{\mathrm{T}}]^{\mathrm{T}} \in \mathbb{R}^{N \times 1}\). Define the centering operator \(\mathbf{H} = \mathbf{I}_N - (1/N)\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}}\); then \(\mathbf{1}_N^{\mathrm{T}} \mathbf{H}^{\mathrm{T}} = (\mathbf{H}\mathbf{1}_N)^{\mathrm{T}} = (\mathbf{1}_N - (1/N)\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}} \mathbf{1}_N)^{\mathrm{T}} = \mathbf{0}\).

According to Eqs. (9.112)–(9.114), the relationship between the kernel matrix \(\mathbf{K}\) (with \(\mathbf{K}_{i,j} = k(\mathbf{x}_i, \mathbf{x}_j)\)) and \(\boldsymbol{\Theta}\) can be expressed as:

$$ \mathbf{K} \approx \mathbf{1}_N \mathbf{1}_N^{\mathrm{T}} - \frac{1}{c}\boldsymbol{\Theta}. $$
(9.115)

Then, \(\mathbf{K}\) can be mean-centered as:

$$ \begin{aligned} \mathbf{K}^{*} &= \left(\mathbf{I}_N - \frac{1}{N}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}}\right)\mathbf{K}\left(\mathbf{I}_N - \frac{1}{N}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}}\right)^{\mathrm{T}} \approx \mathbf{H}\left(\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}} - \frac{1}{c}\boldsymbol{\Theta}\right)\mathbf{H}^{\mathrm{T}} \\ &= \mathbf{H}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}} \mathbf{H}^{\mathrm{T}} - \frac{1}{c}\mathbf{H}\boldsymbol{\Theta}\mathbf{H}^{\mathrm{T}} = -\frac{1}{c}\mathbf{H}\boldsymbol{\Theta}\mathbf{H}^{\mathrm{T}} \\ &= -\frac{1}{c}\mathbf{H}\left(\boldsymbol{\Gamma}\,\mathbf{1}_N^{\mathrm{T}} + \mathbf{1}_N \boldsymbol{\Gamma}^{\mathrm{T}} - 2\mathbf{X}_R \mathbf{X}_R^{\mathrm{T}}\right)\mathbf{H}^{\mathrm{T}} = \frac{2}{c}\mathbf{H}\mathbf{X}_R \mathbf{X}_R^{\mathrm{T}} \mathbf{H}^{\mathrm{T}} \\ &= \frac{2}{c}\left(\mathbf{I}_N - \frac{1}{N}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}}\right)\mathbf{X}_R \mathbf{X}_R^{\mathrm{T}} \left(\mathbf{I}_N - \frac{1}{N}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}}\right)^{\mathrm{T}} \\ &= \frac{2}{c}\mathbf{X}\mathbf{X}^{\mathrm{T}}. \end{aligned} $$
(9.116)
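As a numerical sanity check of Eq. (9.116), the sketch below (a NumPy illustration, assuming synthetic data and a kernel width \(c\) chosen large relative to the pairwise squared distances so that the first-order expansion of Eq. (9.112) is accurate) compares the exactly centered Gram matrix with the approximation \((2/c)\mathbf{X}\mathbf{X}^{\mathrm{T}}\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, c = 100, 4, 1e4                # large c keeps ||x_i - x_j||^2 / c small

X_R = rng.normal(size=(N, m))        # synthetic training data
H = np.eye(N) - np.ones((N, N)) / N  # centering operator H

# Exact centered Gram matrix K* = H K H^T with the RBF kernel.
Theta = ((X_R[:, None, :] - X_R[None, :, :]) ** 2).sum(-1)  # squared distances
K = np.exp(-Theta / c)
K_star = H @ K @ H.T

# First-order approximation (2/c) X X^T with X the centered data, Eq. (9.116).
X = H @ X_R
K_approx = (2.0 / c) * X @ X.T

print(np.abs(K_star - K_approx).max())  # orders of magnitude below K_approx's entries
```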

For a given test sample \(\mathbf{x}^b\), similar to Eqs. (9.114) and (9.115), its distance vector \(\boldsymbol{\Theta}_b\) and kernel vector \(\mathbf{k}(\mathbf{x}^b)\) can be calculated as:

$$ \boldsymbol{\Theta}_b = \mathbf{x}_R^b (\mathbf{x}_R^b)^{\mathrm{T}} \mathbf{1}_N^{\mathrm{T}} + \boldsymbol{\Gamma}^{\mathrm{T}} - 2\mathbf{x}_R^b \mathbf{X}_R^{\mathrm{T}}, $$
(9.117)
$$ \mathbf{k}(\mathbf{x}^b) \approx \mathbf{1}_N^{\mathrm{T}} - \frac{1}{c}\boldsymbol{\Theta}_b. $$
(9.118)

The centering of \(\mathbf{k}(\mathbf{x}^b)\) is then performed as:

$$ \begin{aligned} \mathbf{k}(\mathbf{x}^b)^{*} &= \left(\mathbf{k}(\mathbf{x}^b) - \frac{1}{N}\mathbf{1}_N^{\mathrm{T}} \mathbf{K}\right)\mathbf{H}^{\mathrm{T}} \approx \left(\mathbf{1}_N^{\mathrm{T}} - \frac{1}{c}\boldsymbol{\Theta}_b - \frac{1}{N}\mathbf{1}_N^{\mathrm{T}} \mathbf{K}\right)\mathbf{H}^{\mathrm{T}} \\ &\approx -\frac{1}{c}\boldsymbol{\Theta}_b \mathbf{H}^{\mathrm{T}} - \frac{1}{N}\mathbf{1}_N^{\mathrm{T}}\left(\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}} - \frac{1}{c}\boldsymbol{\Theta}\right)\mathbf{H}^{\mathrm{T}} \\ &= -\frac{1}{c}\left[\mathbf{x}_R^b (\mathbf{x}_R^b)^{\mathrm{T}} \mathbf{1}_N^{\mathrm{T}} + \boldsymbol{\Gamma}^{\mathrm{T}} - 2\mathbf{x}_R^b \mathbf{X}_R^{\mathrm{T}} - \frac{1}{N}\mathbf{1}_N^{\mathrm{T}}\left(\boldsymbol{\Gamma}\,\mathbf{1}_N^{\mathrm{T}} + \mathbf{1}_N \boldsymbol{\Gamma}^{\mathrm{T}} - 2\mathbf{X}_R \mathbf{X}_R^{\mathrm{T}}\right)\right]\mathbf{H}^{\mathrm{T}} \\ &= \frac{2}{c}\left(\mathbf{x}_R^b \mathbf{X}_R^{\mathrm{T}} - \frac{1}{N}\mathbf{1}_N^{\mathrm{T}} \mathbf{X}_R \mathbf{X}_R^{\mathrm{T}}\right)\left(\mathbf{I}_N - \frac{1}{N}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}}\right)^{\mathrm{T}} \\ &= \frac{2}{c}\left(\mathbf{x}_R^b - \frac{1}{N}\mathbf{1}_N^{\mathrm{T}} \mathbf{X}_R\right)\left(\mathbf{X}_R - \frac{1}{N}\mathbf{1}_N \mathbf{1}_N^{\mathrm{T}} \mathbf{X}_R\right)^{\mathrm{T}} = \frac{2}{c}\mathbf{x}^b \mathbf{X}^{\mathrm{T}}. \end{aligned} $$
(9.119)

If \(\mathbf{x}^b = \mathbf{0}\), then \(\mathbf{k}(\mathbf{x}^b)^{*} = \mathbf{0}\). After scaling, i.e., \(\overline{\mathbf{k}}(\mathbf{x}^b) = \mathbf{k}(\mathbf{x}^b)^{*} / [\mathrm{trace}(\mathbf{K}^{*})/N]\), it is obvious that \(\overline{\mathbf{k}}(\mathbf{x}^b) = \mathbf{0}\). From Eq. (9.99), the statistic of \(\mathbf{x}^b\) can be calculated as:

$$ \Psi(\mathbf{x}^b) = \overline{\mathbf{k}}(\mathbf{x}^b)\,\boldsymbol{\Sigma}\,\overline{\mathbf{k}}(\mathbf{x}^b)^{\mathrm{T}}. $$
(9.120)

Substituting \(\overline{\mathbf{k}}(\mathbf{x}^b) = \mathbf{0}\) into Eq. (9.120) gives \(\Psi(\mathbf{x}^b) \approx 0\), which completes the proof.
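Proposition 1 itself can be checked numerically in the same setting; the sketch below (same assumptions as the check of Eq. (9.116), with \(\boldsymbol{\Sigma}\) a placeholder positive-definite matrix, since any bounded \(\boldsymbol{\Sigma}\) yields \(\Psi \approx 0\) once \(\overline{\mathbf{k}}(\mathbf{x}^b) \approx \mathbf{0}\)) sets the test sample equal to the training mean so that its centered version is zero:

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, c = 100, 4, 1e4                # large c keeps the Taylor expansion accurate

X_R = rng.normal(size=(N, m))        # synthetic training data
x_Rb = X_R.mean(axis=0)              # test sample = training mean, so x^b = 0 in Eq. (9.111)

H = np.eye(N) - np.ones((N, N)) / N  # centering operator H
Theta = ((X_R[:, None, :] - X_R[None, :, :]) ** 2).sum(-1)
K = np.exp(-Theta / c)               # training Gram matrix
K_star = H @ K @ H.T

k_b = np.exp(-((X_R - x_Rb) ** 2).sum(-1) / c)   # kernel vector k(x^b)
k_b_star = (k_b - K.mean(axis=0)) @ H.T          # centered as in Eq. (9.119)
k_bar = k_b_star / (np.trace(K_star) / N)        # scaled kernel vector

Sigma = np.eye(N)                    # placeholder positive-definite Sigma
print(k_bar @ Sigma @ k_bar)         # Psi(x^b) of Eq. (9.120): approximately zero
```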

Proposition 2 For a rank-deficient matrix \(\mathbf{R} \in \mathbb{R}^{d \times l}\), if the SVD of \(\mathbf{RR}^{\mathrm{T}}\) is performed as

$$ \mathbf{RR}^{\mathrm{T}} = \left[ \mathbf{P}_r \quad \mathbf{P}_u \right] \begin{bmatrix} \boldsymbol{\Lambda} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{P}_r^{\mathrm{T}} \\ \mathbf{P}_u^{\mathrm{T}} \end{bmatrix}, $$
(9.121)

then the property \(\mathbf{P}_u^{\mathrm{T}} \mathbf{R} = \mathbf{0}\) holds.

Proof According to the properties of the SVD, for a given matrix \(\mathbf{R} \in \mathbb{R}^{d \times l}\) with \(\gamma = \mathrm{rank}(\mathbf{R}) < \min(d, l)\), there exist two orthogonal matrices \(\mathbf{U} \in \mathbb{R}^{d \times d}\) and \(\mathbf{V} \in \mathbb{R}^{l \times l}\) such that

$$ \mathbf{R} = \mathbf{U} \begin{bmatrix} \mathbf{Q} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^{\mathrm{T}}, $$
(9.122)

where \(\mathbf{Q} = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_\gamma)\), with diagonal elements arranged in descending order \(\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_\gamma > 0\).

According to the properties of SVD, it holds that

$$ \mathbf{RR}^{\mathrm{T}} = \mathbf{U} \begin{bmatrix} \mathbf{Q} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{V}^{\mathrm{T}} \mathbf{V} \begin{bmatrix} \mathbf{Q} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}^{\mathrm{T}} \mathbf{U}^{\mathrm{T}} = \mathbf{U} \begin{bmatrix} \mathbf{Q}^{2} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{U}^{\mathrm{T}}. $$
(9.123)

That is, matrices \(\mathbf{R}\) and \(\mathbf{RR}^{\mathrm{T}}\) have the same left singular matrix. Partition \(\mathbf{U}\) as \(\mathbf{U} = [\mathbf{U}_1 \quad \mathbf{U}_2]\), \(\mathbf{U}_1 \in \mathbb{R}^{d \times \gamma}\), \(\mathbf{U}_2 \in \mathbb{R}^{d \times (d - \gamma)}\), and partition \(\mathbf{V} = [\mathbf{V}_1 \quad \mathbf{V}_2]\) accordingly; then Eqs. (9.122) and (9.123) can be rewritten as

$$ \mathbf{R} = [\mathbf{U}_1 \quad \mathbf{U}_2] \begin{bmatrix} \mathbf{Q} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{V}_1^{\mathrm{T}} \\ \mathbf{V}_2^{\mathrm{T}} \end{bmatrix}, $$
(9.124)
$$ \mathbf{RR}^{\mathrm{T}} = [\mathbf{U}_1 \quad \mathbf{U}_2] \begin{bmatrix} \mathbf{Q}^{2} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{U}_1^{\mathrm{T}} \\ \mathbf{U}_2^{\mathrm{T}} \end{bmatrix}. $$
(9.125)

Left-multiplying Eq. (9.124) by \(\mathbf{U}^{\mathrm{T}} = \begin{bmatrix} \mathbf{U}_1^{\mathrm{T}} \\ \mathbf{U}_2^{\mathrm{T}} \end{bmatrix}\) gives:

$$ \begin{bmatrix} \mathbf{U}_1^{\mathrm{T}} \\ \mathbf{U}_2^{\mathrm{T}} \end{bmatrix} \mathbf{R} = \begin{bmatrix} \mathbf{Q} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{V}_1^{\mathrm{T}} \\ \mathbf{V}_2^{\mathrm{T}} \end{bmatrix}. $$
(9.126)

It is obvious that \(\mathbf{U}_2^{\mathrm{T}} \mathbf{R} = \mathbf{0}\). Comparing Eqs. (9.121) and (9.125) shows that \(\mathbf{U}_2 = \mathbf{P}_u\); thus \(\mathbf{P}_u^{\mathrm{T}} \mathbf{R} = \mathbf{0}\), which completes the proof.
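Proposition 2 is likewise easy to confirm numerically; a minimal NumPy sketch (the matrix sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
d, l, gamma = 6, 5, 3                # rank gamma < min(d, l)

# Build a rank-deficient R as a product of thin factors.
R = rng.normal(size=(d, gamma)) @ rng.normal(size=(gamma, l))

# SVD of R R^T (symmetric PSD, so this is also its eigendecomposition);
# the trailing columns of U correspond to the zero singular values.
U, s, _ = np.linalg.svd(R @ R.T)
P_u = U[:, gamma:]

print(np.abs(P_u.T @ R).max())       # numerically zero: P_u^T R = 0
```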


Copyright information

© 2024 Science Press

About this chapter


Cite this chapter

Kong, X., Luo, J., Feng, X. (2024). Non-Gaussian Process Monitoring and Fault Diagnosis. In: Process Monitoring and Fault Diagnosis Based on Multivariable Statistical Analysis. Engineering Applications of Computational Methods, vol 19. Springer, Singapore. https://doi.org/10.1007/978-981-99-8775-7_9
