Abstract
In this paper, we present a novel method for computing decision rules to build a more accurate interpretable machine learning model, denoted ExMo. The ExMo model consists of a list of IF...THEN... statements, each with a decision rule in its condition. ExMo therefore naturally explains a prediction through the decision rule that was triggered. ExMo uses a new approach to extract decision rules from the training data based on term frequency-inverse document frequency (TF-IDF) features. With TF-IDF, the extracted decision rules use feature values that are more relevant to each class, so they distinguish the positive and negative classes better than the decision rules of the existing Bayesian Rule List (BRL) algorithm, which are obtained by frequent pattern mining. The paper also shows that ExMo learns a qualitatively better model than BRL. Furthermore, ExMo demonstrates that the textual explanation can be presented in a human-friendly way, so that it is easily understood by non-expert users. We validate ExMo on several datasets of different sizes to evaluate its efficacy. Experimental validation on a real-world fraud detection application shows that ExMo is ≈20% more accurate than BRL and achieves accuracy comparable to that of deep learning models.
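The abstract describes two ingredients: scoring candidate feature-value terms per class with TF-IDF, and predicting with an ordered list of IF...THEN rules where the triggered rule doubles as the explanation. The sketch below is purely illustrative and not the authors' implementation; the token format (`feature=value`), the smoothed-IDF variant, and the toy data are all assumptions made for the example.

```python
from collections import Counter
import math

def tfidf_scores(class_docs):
    """class_docs: one token list per class ('feature=value' tokens).
    Returns, per class, a dict mapping term -> TF-IDF score."""
    n = len(class_docs)
    df = Counter()                       # document frequency per term
    for doc in class_docs:
        for term in set(doc):
            df[term] += 1
    scores = []
    for doc in class_docs:
        tf = Counter(doc)                # term frequency within the class
        scores.append({t: (tf[t] / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
                       for t in tf})
    return scores

def predict(rule_list, default_label, record):
    """Apply an ordered rule list: the first IF...THEN rule whose
    condition matches determines the label (and is the explanation)."""
    for condition, label in rule_list:
        if condition(record):
            return label
    return default_label

# Toy fraud-style data: one "document" of feature-value tokens per class.
fraud_doc = ["amount=high", "country=risky", "amount=high"]
legit_doc = ["amount=low", "country=safe", "country=risky"]
scores = tfidf_scores([fraud_doc, legit_doc])

# "amount=high" occurs only in the fraud class, so it outranks the
# shared term "country=risky" as a rule candidate for that class.
rules = [(lambda r: r.get("amount") == "high", "fraud")]
label = predict(rules, "legit", {"amount": "high", "country": "safe"})
```

The design point the toy example mirrors: terms shared by both classes receive a lower IDF weight, so class-specific feature values dominate the ranked rule candidates, which is the intuition behind the abstract's claim that TF-IDF-derived rules separate the classes better than frequent-pattern rules.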
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Mainali, P., Psychoula, I., Petitcolas, F.A.P. (2022). ExMo: Explainable AI Model Using Inverse Frequency Decision Rules. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science(), vol 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-05642-0
Online ISBN: 978-3-031-05643-7
eBook Packages: Computer Science (R0)