How Explainable Is Explainability? Towards Better Metrics for Explainable AI

  • Conference paper

Part of the book series: Springer Proceedings in Complexity (SPCOM)

Included in the conference series: Research and Innovation Forum 2023 (RIIFORUM 2023)

Abstract

Although machine learning has been applied in innumerable domains, its models have usually operated as black boxes, i.e. without revealing the rationale behind their decisions. For human users, insufficient model transparency may result in a lack of trust in the technology, effectively hindering its development and adoption. This pressing societal need to understand the reasoning behind a model's decisions gave rise to the concept of explainable AI (xAI), which has since been the subject of extensive research. A number of techniques providing explainability have been proposed, yet no consensus has been reached on how to measure and evaluate the performance of these explainability methods. So far, the state-of-the-art literature has proposed two directions for evaluating explainable AI: the technical one and the human-centered one. Although the literature deems the technical approach the objective one, it still suggests supplementing the evaluation with the subjective, human-centered approach, which proves time-consuming and requires considerable effort. This paper highlights the need to enhance and improve the existing technical metrics so as to quantify the explainability of ML models in an objective, automated, time- and cost-effective way. The text presents the current state of the art in xAI evaluation metrics, discussing both human-centered and computer-centered approaches. Its contribution lies in its comprehensive discussion of the evaluation methods for explainable AI.
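To make the computer-centered direction concrete, below is a minimal illustrative sketch of one family of automated metrics: perturbation-based faithfulness. It is not the authors' method; the dataset, the mean-value masking strategy, and the use of the model's global feature importances as a stand-in for a per-instance explanation are all assumptions made purely for illustration.

```python
# Illustrative sketch (assumptions, not the paper's method): a simple
# perturbation-based "faithfulness" score. Features an explainer marks as
# important are masked with the training mean, and the drop in the model's
# confidence for its original prediction is recorded. A larger drop
# suggests the explanation reflects what the model actually relies on.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def faithfulness_drop(model, x, attributions, baseline, k=5):
    """Confidence drop after masking the k most-attributed features."""
    ranked = np.argsort(-np.abs(attributions))     # most important first
    x_masked = x.copy()
    x_masked[ranked[:k]] = baseline[ranked[:k]]    # mask top-k features
    cls = int(model.predict(x.reshape(1, -1))[0])  # original prediction
    p_orig = model.predict_proba(x.reshape(1, -1))[0, cls]
    p_masked = model.predict_proba(x_masked.reshape(1, -1))[0, cls]
    return p_orig - p_masked                       # higher = more faithful

# Stand-in "explanation": the model's global impurity-based importances.
# A real evaluation would plug in per-instance attributions (e.g. SHAP, LIME).
baseline = X.mean(axis=0)
score = faithfulness_drop(model, X[0], model.feature_importances_, baseline)
print(f"Confidence drop after masking the top-5 features: {score:.3f}")
```

A metric of this form can be computed automatically over an entire test set with no human in the loop, which is precisely the objective, time- and cost-effective evaluation that the paper argues the technical direction should provide.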


Acknowledgements

This research is funded under the Horizon Europe ULTIMATE Project, which has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No. 101070162.

Author information

Corresponding author

Correspondence to Michał Choraś.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Pawlicka, A., Pawlicki, M., Kozik, R., Kurek, W., Choraś, M. (2024). How Explainable Is Explainability? Towards Better Metrics for Explainable AI. In: Visvizi, A., Troisi, O., Corvello, V. (eds) Research and Innovation Forum 2023. RIIFORUM 2023. Springer Proceedings in Complexity. Springer, Cham. https://doi.org/10.1007/978-3-031-44721-1_52
