Performance of Explainable AI Methods in Asset Failure Prediction

  • Conference paper
Computational Science – ICCS 2022 (ICCS 2022)

Abstract

Extensive research on machine learning models, most of which are black boxes, has created a great need for the development of Explainable Artificial Intelligence (XAI) methods. Complex machine learning (ML) models usually require an external explanation method to make their decisions understandable. Interpreting model predictions is crucial in many fields, e.g., predictive maintenance, where it is necessary not only to evaluate the state of an asset but also to determine the root causes of a potential failure. In this work, we present a comparison of state-of-the-art ML models and XAI methods, which we used to predict the remaining useful life (RUL) of aircraft turbofan engines. We trained five different models on the C-MAPSS dataset and used SHAP and LIME to assign numerical importance to the features. We compared the resulting explanations using stability and consistency metrics and evaluated them qualitatively by visual inspection. The obtained results indicate that SHAP outperforms the other methods in the fidelity of explanations. We also observe substantial differences in the explanations depending on the choice of model and XAI method, which points to a need for further research in the XAI field.
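The abstract mentions comparing SHAP and LIME explanations via stability and consistency metrics, but this excerpt does not reproduce the paper's exact definitions. As a rough illustration only, the sketch below assumes cosine similarity as the agreement measure: consistency compares the attribution vectors two XAI methods produce for the same instance, and stability averages pairwise agreement across attributions for similar (e.g., slightly perturbed) instances. The attribution values shown are hypothetical, not results from the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature-attribution vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def consistency(attr_method_1, attr_method_2):
    """Consistency: agreement between the attributions two XAI methods
    (e.g., SHAP and LIME) assign to the same instance and model."""
    return cosine_similarity(attr_method_1, attr_method_2)

def stability(attributions):
    """Stability: mean pairwise similarity of the attributions obtained
    for similar (e.g., slightly perturbed) input instances."""
    n = len(attributions)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine_similarity(attributions[i], attributions[j])
               for i, j in pairs) / len(pairs)

# Hypothetical per-sensor attributions for one engine's RUL prediction.
shap_attr = [0.42, -0.10, 0.31, 0.05]
lime_attr = [0.39, -0.08, 0.25, 0.12]

print(round(consistency(shap_attr, lime_attr), 3))
print(round(stability([shap_attr, shap_attr, lime_attr]), 3))
```

Both measures lie in [-1, 1], with 1 meaning perfect agreement; other choices (e.g., top-k feature overlap or Lipschitz-style local stability) are equally plausible and may be what the paper actually uses.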



Acknowledgements

This paper is part of the XPM (Explainable Predictive Maintenance) project, funded by the National Science Centre, Poland, under the CHIST-ERA programme, Grant Agreement No. 857925 (NCN UMO-2020/02/Y/ST6/00070).

Author information


Correspondence to Jakub Jakubowski.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Jakubowski, J., Stanisz, P., Bobek, S., Nalepa, G.J. (2022). Performance of Explainable AI Methods in Asset Failure Prediction. In: Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds) Computational Science – ICCS 2022. ICCS 2022. Lecture Notes in Computer Science, vol 13353. Springer, Cham. https://doi.org/10.1007/978-3-031-08760-8_40


  • DOI: https://doi.org/10.1007/978-3-031-08760-8_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08759-2

  • Online ISBN: 978-3-031-08760-8

  • eBook Packages: Computer Science, Computer Science (R0)
