Abstract
Knowing the basis of decisions is essential for using Machine Learning (ML) in domains such as medical diagnosis, automated driving, and organizational decision-making. The LIME algorithm is an XAI method that can be applied to many black-box models. However, the local fidelity of its interpretable surrogate model can be low. This problem stems from LIME's sampling and weighting steps, and using an autoencoder is an effective way to mitigate it. In this study, we aim to improve the sampling and weighting of LIME simultaneously in classification tasks by using a conditional variational autoencoder (CVAE) and filtering the generated samples by class. Experiments were conducted to compare the local fidelity of the proposed and existing methods for neural network classifiers trained on three medical diagnostic data sets. The results show that the proposed method improves local fidelity compared to the existing methods. In addition, we visualized the distribution of samples in the autoencoder latent space and conducted comparative experiments that add and remove components of the existing and proposed methods to analyze the factors that improve fidelity.
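The sketch below is only an illustration of the approach summarized in the abstract, not the authors' implementation. It assumes a trained black-box classifier exposing predict_proba, a trained CVAE accessed through hypothetical cvae_encode/cvae_decode callables, an exponential kernel over latent-space distances for weighting, and a filtering rule that keeps only samples the black box assigns to the class being explained; all of these details are assumptions for the example.

# Minimal sketch (assumptions noted above) of explaining one instance with
# CVAE-based sampling, class filtering, and a weighted linear surrogate.
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(x, predict_proba, cvae_encode, cvae_decode,
                     n_samples=1000, kernel_width=0.75):
    """Return coefficients of a local linear surrogate of predict_proba near x."""
    target = int(np.argmax(predict_proba(x[None, :])[0]))  # class to explain

    # 1. Sample neighbours from the CVAE decoder, conditioned on the target
    #    class, around the latent code of x (instead of LIME's feature-space
    #    perturbation of the instance).
    z_mean, z_logvar = cvae_encode(x, target)
    z = z_mean + np.exp(0.5 * z_logvar) * np.random.randn(n_samples, z_mean.shape[-1])
    samples = cvae_decode(z, target)

    # 2. Filter: keep only samples the black box assigns to the target class,
    #    so the surrogate is fitted on the locally relevant decision region.
    proba = predict_proba(samples)
    keep = proba.argmax(axis=1) == target
    samples, proba = samples[keep], proba[keep]

    # 3. Weight the kept samples by proximity to x in latent space
    #    (exponential kernel; kernel_width is an illustrative choice).
    dist = np.linalg.norm(z[keep] - z_mean, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)

    # 4. Fit the interpretable (linear) surrogate on the black-box outputs.
    surrogate = Ridge(alpha=1.0).fit(samples, proba[:, target], sample_weight=weights)
    return surrogate.coef_, surrogate.intercept_

In use, x would be the instance to explain, predict_proba the neural network classifier, and the returned coefficients the per-feature explanation; local fidelity can then be measured by how closely the surrogate reproduces the black box on the weighted neighbourhood.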
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yasui, D., Sato, H., Kubo, M. (2023). Improving Local Fidelity of LIME by CVAE. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1903. Springer, Cham. https://doi.org/10.1007/978-3-031-44070-0_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44069-4
Online ISBN: 978-3-031-44070-0