ExMo: Explainable AI Model Using Inverse Frequency Decision Rules

  • Conference paper
Artificial Intelligence in HCI (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13336)

Abstract

In this paper, we present a novel method to compute decision rules for building a more accurate interpretable machine learning model, denoted ExMo. The ExMo model consists of a list of IF...THEN... statements, each with a decision rule in its condition. In this way, ExMo naturally explains a prediction through the decision rule that was triggered. ExMo uses a new approach to extract decision rules from the training data based on term frequency-inverse document frequency (TF-IDF) features. With TF-IDF, decision rules whose feature values are more relevant to each class are extracted. Hence, the decision rules obtained by ExMo distinguish the positive and negative classes better than the decision rules used in the existing Bayesian Rule List (BRL) algorithm, which are obtained with a frequent pattern mining approach. The paper also shows that ExMo learns a qualitatively better model than BRL. Furthermore, ExMo demonstrates that the textual explanation can be phrased in a human-friendly way so that non-expert users can easily understand it. We validate ExMo on several datasets of different sizes to evaluate its efficacy. Experimental validation on a real-world fraud detection application shows that ExMo is approximately 20% more accurate than BRL and achieves accuracy similar to that of deep learning models.
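
To make the rule-extraction approach above concrete, the following is a minimal sketch, assuming scikit-learn, of how TF-IDF can rank discretised feature-value conditions by how specific they are to each class. It is not the authors' implementation; the toy transactions, token names, and variables are hypothetical.

```python
# A minimal sketch of the TF-IDF idea described in the abstract; it is
# NOT the authors' implementation. Each class is treated as one
# "document" made of discretised feature-value tokens, so TF-IDF scores
# highlight the conditions most specific to each class. The toy
# transactions and token names below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical training data: each sample is a bag of "feature=bin"
# tokens plus a class label (1 = fraud, 0 = genuine).
samples = [
    (["amount=high", "country=mismatch", "hour=night"], 1),
    (["amount=low", "country=match", "hour=day"], 0),
    (["amount=high", "country=mismatch", "hour=day"], 1),
    (["amount=low", "country=match", "hour=night"], 0),
]

# Build one "document" per class by pooling the tokens of its samples.
class_tokens = {}
for tokens, label in samples:
    class_tokens.setdefault(label, []).extend(tokens)
labels = sorted(class_tokens)
corpus = [" ".join(class_tokens[label]) for label in labels]

# TF-IDF: tokens shared by both classes (e.g. hour=day / hour=night)
# receive a low IDF weight, while class-specific tokens score high and
# therefore make good candidate conditions for that class's rules.
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
scores = vectorizer.fit_transform(corpus).toarray()
terms = vectorizer.get_feature_names_out()
for label, row in zip(labels, scores):
    ranked = sorted(zip(terms, row), key=lambda kv: -kv[1])
    print(f"class {label}: top rule conditions -> {ranked[:2]}")
```

Running the sketch ranks "amount=high" and "country=mismatch" at the top for the fraud class, illustrating how class-discriminative feature values surface as rule conditions rather than merely frequent ones.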

Author information

Corresponding author

Correspondence to Pradip Mainali.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mainali, P., Psychoula, I., Petitcolas, F.A.P. (2022). ExMo: Explainable AI Model Using Inverse Frequency Decision Rules. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_12

  • DOI: https://doi.org/10.1007/978-3-031-05643-7_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05642-0

  • Online ISBN: 978-3-031-05643-7

  • eBook Packages: Computer Science, Computer Science (R0)
