Evaluating the Interpretability of Threshold Operators

  • Conference paper
Knowledge Engineering and Knowledge Management (EKAW 2022)

Abstract

Weighted Threshold Operators are n-ary operators that compute a weighted sum of their arguments and verify whether it reaches a certain threshold. They have been extensively studied in the area of circuit complexity theory, as well as in the neural network community under the name of perceptrons. In Knowledge Representation, they have been introduced in the context of standard Description Logics (DL) languages by adding a new concept constructor, the Tooth operator (\(\nabla \!\!\!\nabla \)). Tooth expressions provide a powerful yet natural tool for representing local explanations of black box classifiers in the context of Explainable AI. In this paper, we present the results of a user study in which we evaluated the interpretability of tooth expressions and compared them with Disjunctive Normal Forms (DNF). We measured interpretability through accuracy, response time, confidence, and perceived understandability by human users. We expected tooth expressions to be generally more interpretable than DNFs. In line with our hypothesis, the study revealed that tooth expressions are generally faster to use, and that they are perceived as more understandable by users who are less familiar with logic. Our study also showed that the type of task, the type of DNF, and the background of the respondents affect the interpretability of the formalism used to represent explanations.
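As a rough illustration of the semantics described above (our own sketch, not code from the paper; the `tooth` helper is a hypothetical name), a tooth expression \(\nabla \!\!\!\nabla ^t ((C_1,w_1), \dots , (C_n,w_n))\) holds exactly when the weights of the satisfied arguments sum to at least the threshold \(t\):

```python
def tooth(threshold, weighted_args):
    """Weighted threshold (tooth) operator: True iff the weights of the
    satisfied arguments sum to at least `threshold`."""
    return sum(w for holds, w in weighted_args if holds) >= threshold

# tooth^1((A,1),(B,1)) behaves like the disjunction "A or B":
a, b = True, False
print(tooth(1, [(a, 1), (b, 1)]))  # True: A alone reaches the threshold
```

With equal weights and threshold 1 the operator reduces to disjunction; raising the threshold to the number of arguments yields conjunction, and intermediate thresholds express majority-style conditions that DNFs can only state by enumeration.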

This research is partially supported by Italian National Research Project PRIN2020 2020SSKZ7R and by unibz RTD2020 project HULA.

Notes

  1. Interpretability refers to the possibility of comprehending a black box model and of presenting the underlying basis for decision-making in a way that is understandable to humans [13].

  2. More precisely, non-nested tooth expressions cannot represent the XOR function; nested tooth expressions, however, can overcome this limitation.
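The footnote can be made concrete with a brute-force check (our own sketch, with a hypothetical `tooth` helper): no single weight/threshold assignment from a small integer grid reproduces XOR, whereas one level of nesting does — the general impossibility follows from XOR not being linearly separable.

```python
from itertools import product

def tooth(threshold, weighted_args):
    # True iff the weights of the satisfied arguments sum to >= threshold
    return sum(w for holds, w in weighted_args if holds) >= threshold

def xor(a, b):
    return a != b

# No non-nested tooth over {A, B} matches XOR on this integer grid.
flat_hits = [
    (wa, wb, t)
    for wa, wb, t in product(range(-3, 4), repeat=3)
    if all(tooth(t, [(a, wa), (b, wb)]) == xor(a, b)
           for a, b in product([False, True], repeat=2))
]
print(flat_hits)  # [] -- the grid contains no counterexample

# A nested tooth captures XOR: the inner tooth fires exactly when both
# A and B hold, and its weight -2 then cancels the outer sum.
def nested_xor(a, b):
    inner = tooth(2, [(a, 1), (b, 1)])          # plays the role of "A and B"
    return tooth(1, [(a, 1), (b, 1), (inner, -2)])

print(all(nested_xor(a, b) == xor(a, b)
          for a, b in product([False, True], repeat=2)))  # True
```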

References

  1. Allahyari, H., Lavesson, N.: User-oriented assessment of classification model understandability. In: SCAI 2011 Proceedings, vol. 227, pp. 11–19. IOS Press (2011)

  2. Barredo Arrieta, A., et al.: Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020). https://doi.org/10.1016/j.inffus.2019.12.012

  3. Bishop, C.M.: Pattern Recognition and Machine Learning, 5th edn. Information Science and Statistics. Springer, New York (2007). ISBN 9780387310732. https://www.worldcat.org/oclc/71008143

  4. Booth, S., Muise, C., Shah, J.: Evaluating the interpretability of the knowledge compilation map. In: Kraus, S. (ed.) Proceedings of IJCAI, pp. 5801–5807 (2019)

  5. Coba, L., Confalonieri, R., Zanker, M.: RecoXplainer: a library for development and offline evaluation of explainable recommender systems. IEEE Comput. Intell. Mag. 17(1), 46–58 (2022). https://doi.org/10.1109/MCI.2021.3129958

  6. Confalonieri, R., Coba, L., Wagner, B., Besold, T.R.: A historical perspective of explainable artificial intelligence. WIREs Data Min. Knowl. Disc. 11(1), e1391 (2021). https://doi.org/10.1002/widm.1391

  7. Confalonieri, R., Galliani, P., Kutz, O., Porello, D., Righetti, G., Troquard, N.: Towards knowledge-driven distillation and explanation of black-box models. In: Confalonieri, R., Kutz, O., Calvanese, D. (eds.) Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021) part of Bratislava Knowledge September (BAKS 2021), CEUR Workshop Proceedings, Bratislava, Slovakia, 18–19 September 2021, vol. 2998. CEUR-WS.org (2021)

  8. Confalonieri, R., Lucchesi, F., Maffei, G., Solarz, S.C.: A unified framework for managing sex and gender bias in AI models for healthcare. In: Sex and Gender Bias in Technology and Artificial Intelligence. Elsevier, pp. 179–204 (2022)

  9. Confalonieri, R., Weyde, T., Besold, T.R., Moscoso del Prado Martín, F.: Using ontologies to enhance human understandability of global post-hoc explanations of black-box models. Artif. Intell. 296 (2021). https://doi.org/10.1016/j.artint.2021.103471

  10. Confalonieri, R., Weyde, T., Besold, T.R., del Prado Martín, F.M.: Trepan reloaded: a knowledge-driven approach to explaining black-box models. In: Proceedings of the 24th European Conference on Artificial Intelligence, pp. 2457–2464 (2020). https://doi.org/10.3233/FAIA200378

  11. Craven, M.W., Shavlik, J.W.: Extracting tree-structured representations of trained networks. In: NIPS 1995, pp. 24–30. MIT Press (1995)

  12. Darwiche, A., Marquis, P.: A knowledge compilation map. J. Artif. Intell. Res. 17, 229–264 (2002). https://doi.org/10.1613/jair.989

  13. Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning (2017)

  14. Galliani, P., Kutz, O., Porello, D., Righetti, G., Troquard, N.: On knowledge dependence in weighted description logic. In: Calvanese, D., Iocchi, L. (eds.) GCAI 2019. Proceedings of the 5th Global Conference on Artificial Intelligence, EPiC Series in Computing, Bozen/Bolzano, Italy, 17–19 September 2019, vol. 65, pp. 68–80. EasyChair (2019)

  15. Galliani, P., Kutz, O., Troquard, N.: Perceptron operators that count. In: Homola, M., Ryzhikov, V., Schmidt, R.A. (eds.) Proceedings of the 34th International Workshop on Description Logics (DL 2021) part of Bratislava Knowledge September (BAKS 2021), CEUR Workshop Proceedings, Bratislava, Slovakia, 19–22 September 2021, vol. 2954. CEUR-WS.org (2021)

  16. Galliani, P., Righetti, G., Kutz, O., Porello, D., Troquard, N.: Perceptron connectives in knowledge representation. In: Keet, C.M., Dumontier, M. (eds.) EKAW 2020. LNCS (LNAI), vol. 12387, pp. 183–193. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61244-3_13

  17. Garcez, A.D., Gori, M., Lamb, L.C., Serafini, L., Spranger, M., Tran, S.N.: Neural-symbolic computing: an effective methodology for principled integration of machine learning and reasoning. IfCoLoG J. Log. Appl. 6(4), 611–631 (2019)

  18. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comp. Surv. 51(5), 1–42 (2018)

  19. Hind, M.: Explaining explainable AI. XRDS 25(3), 16–19 (2019). https://doi.org/10.1145/3313096

  20. Huysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., Baesens, B.: An empirical evaluation of the comprehensibility of decision table, tree and rule based predictive models. Decis. Support Syst. 51(1), 141–154 (2011)

  21. Mariotti, E., Alonso, J.M., Confalonieri, R.: A framework for analyzing fairness, accountability, transparency and ethics: a use-case in banking services. In: 30th IEEE International Conference on Fuzzy Systems, FUZZ-IEEE 2021, Luxembourg, 11–14 July 2021, pp. 1–6. IEEE (2021). https://doi.org/10.1109/FUZZ45933.2021.9494481

  22. Masolo, C., Porello, D.: Representing concepts by weighted formulas. In: Borgo, S., Hitzler, P., Kutz, O. (eds.) Formal Ontology in Information Systems - Proceedings of the 10th International Conference, FOIS 2018, Frontiers in Artificial Intelligence and Applications, Cape Town, South Africa, 19–21 September 2018, vol. 306, pp. 55–68. IOS Press (2018). https://doi.org/10.3233/978-1-61499-910-2-55

  23. Mittelstadt, B., Russell, C., Wachter, S.: Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* 2019, pp. 279–288. ACM Press, New York (2019). https://doi.org/10.1145/3287560.3287574

  24. Parliament and Council of the European Union: General Data Protection Regulation (2016)

  25. Porello, D., Kutz, O., Righetti, G., Troquard, N., Galliani, P., Masolo, C.: A toothful of concepts: towards a theory of weighted concept combination. In: Simkus, M., Weddell, G.E. (eds.) Proceedings of the 32nd International Workshop on Description Logics, CEUR Workshop Proceedings, Oslo, Norway, 18–21 June 2019, vol. 2373. CEUR-WS.org (2019)

  26. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd International Conference on Knowledge Discovery and Data Mining, KDD 2016, pp. 1135–1144. ACM (2016)

  27. Righetti, G., Masolo, C., Troquard, N., Kutz, O., Porello, D.: Concept combination in weighted logic. In: Sanfilippo, E.M., et al. (eds.) Proceedings of the Joint Ontology Workshops 2021 Episode VII, CEUR Workshop Proceedings, vol. 2969. CEUR-WS.org (2021)

  28. Righetti, G., Porello, D., Kutz, O., Troquard, N., Masolo, C.: Pink panthers and toothless tigers: three problems in classification. In: Cangelosi, A., Lieto, A. (eds.) Proceedings of the 7th International Workshop on Artificial Intelligence and Cognition, CEUR Workshop Proceedings, vol. 2483, pp. 39–53. CEUR-WS.org (2019)

  29. Rosch, E., Lloyd, B.B. (eds.): Cognition and Categorization. Lawrence Erlbaum, Hillsdale (1978)

  30. Vollmer, H.: Introduction to Circuit Complexity: A Uniform Approach. Springer, Heidelberg (1999). https://doi.org/10.1007/978-3-662-03927-4

Acknowledgment

The authors thank Oliver Kutz, Nicolas Troquard, Pietro Galliani, and Antonella De Angeli for taking the pre-test and providing precious feedback about the user study.

Author information

Corresponding authors

Correspondence to Guendalina Righetti, Daniele Porello or Roberto Confalonieri.

A Examples used in the questionnaires

  1.
    • DNF1: \( A \sqcup B \)

    • DNF2: \( A \sqcup (\lnot A \sqcap B)\)

    • DNF3: \( (A \sqcap B) \sqcup (\lnot A \sqcap B) \sqcup (A \sqcap \lnot B)\)

    • Tooth: \(\nabla \!\!\!\nabla ^1 ((A,1), (B,1))\)

  2.
    • DNF1: \( (\lnot A \sqcap C) \sqcup B \)

    • DNF2: \( (A \sqcap B) \sqcup (\lnot A \sqcap C) \sqcup (\lnot A \sqcap B \sqcap \lnot C)\)

    • DNF3: \((A \sqcap B \sqcap C) \sqcup (\lnot A \sqcap B \sqcap C) \sqcup (\lnot A \sqcap B \sqcap \lnot C) \sqcup (\lnot A \sqcap \lnot B \sqcap C) \sqcup (A \sqcap B \sqcap \lnot C)\)

    • Tooth: \(\nabla \!\!\!\nabla ^2 ((\lnot A,1), (B,2), (C,1)) \equiv \nabla \!\!\!\nabla ^1 ((A, -1), (B, 2), (C, 1))\)

  3.
    • DNF1: \((\lnot A \sqcap B) \sqcup C\)

    • DNF2: \((\lnot A \sqcap B) \sqcup (A \sqcap \lnot B \sqcap C) \sqcup (A \sqcap B \sqcap C) \sqcup (\lnot A \sqcap \lnot B \sqcap C) \)

    • DNF3: \( (\lnot A \sqcap B \sqcap C) \sqcup (\lnot A \sqcap B \sqcap \lnot C) \sqcup (A \sqcap \lnot B \sqcap C) \sqcup (A \sqcap B \sqcap C) \sqcup (\lnot A \sqcap \lnot B \sqcap C) \)

    • Tooth: \(\nabla \!\!\!\nabla ^2 ((A,-1), (B,2), (C,3))\)

  4.
    • DNF1: \((A \sqcap B) \sqcup (B \sqcap C) \sqcup (A \sqcap C)\)

    • DNF2: \((A \sqcap B) \sqcup (A \sqcap \lnot B \sqcap C) \sqcup (\lnot A \sqcap B \sqcap C)\)

    • DNF3: \((A \sqcap B \sqcap C) \sqcup (\lnot A \sqcap B \sqcap C) \sqcup ( A \sqcap \lnot B \sqcap C) \sqcup ( A \sqcap B \sqcap \lnot C)\)

    • Tooth: \(\nabla \!\!\!\nabla ^2 ((A,1), (B,1), (C,1))\)

  5.
    • DNF1: \((A \sqcap D) \sqcup (A \sqcap B \sqcap C) \sqcup (D \sqcap B)\sqcup (D \sqcap C)\)

    • DNF2: \((A \sqcap D) \sqcup (A \sqcap B \sqcap C \sqcap \lnot D) \sqcup (\lnot A \sqcap B \sqcap D)\sqcup (\lnot A \sqcap \lnot B \sqcap C \sqcap D)\)

    • DNF3: \((\lnot A \sqcap \lnot B \sqcap C \sqcap D) \sqcup (\lnot A \sqcap B \sqcap \lnot C \sqcap D) \sqcup (\lnot A \sqcap B \sqcap C \sqcap D)\sqcup (A \sqcap \lnot B \sqcap \lnot C \sqcap D) \sqcup (A \sqcap \lnot B \sqcap C \sqcap D) \sqcup (A \sqcap B \sqcap \lnot C \sqcap D) \sqcup (A \sqcap B \sqcap C \sqcap \lnot D)\sqcup (A \sqcap B \sqcap C \sqcap D)\)

    • Tooth: \(\nabla \!\!\!\nabla ^5 ((A,3), (B,1), (C,1), (D,4))\)

  6.
    • DNF1: \((A \sqcap B) \sqcup (A \sqcap C) \sqcup (A \sqcap D) \sqcup (B \sqcap D)\)

    • DNF2: \((A \sqcap B \sqcap \lnot D) \sqcup (\lnot A \sqcap B \sqcap C \sqcap D) \sqcup (A \sqcap \lnot B \sqcap C \sqcap \lnot D) \sqcup (\lnot A \sqcap B \sqcap \lnot C \sqcap D) \sqcup (A \sqcap D)\)

    • DNF3: \((\lnot A \sqcap B \sqcap \lnot C \sqcap D) \sqcup (\lnot A \sqcap B \sqcap C \sqcap D) \sqcup (A \sqcap \lnot B \sqcap \lnot C \sqcap D) \sqcup (A \sqcap \lnot B \sqcap C \sqcap \lnot D) \sqcup (A \sqcap \lnot B \sqcap C \sqcap D) \sqcup (A \sqcap B \sqcap \lnot C \sqcap \lnot D) \sqcup (A \sqcap B \sqcap \lnot C \sqcap D) \sqcup (A \sqcap B \sqcap C \sqcap \lnot D) \sqcup (A \sqcap B \sqcap C \sqcap D) \)

    • Tooth: \(\nabla \!\!\!\nabla ^3 ((A,2), (B, 1.5), (C, 1), (D, 1.5)) \)

  7.
    • DNF1: \((A \sqcap B) \sqcup (A \sqcap C \sqcap D) \sqcup (B \sqcap C \sqcap D)\)

    • DNF2: \( (A \sqcap B) \sqcup (\lnot A \sqcap B \sqcap C \sqcap D) \sqcup (A \sqcap \lnot B \sqcap C \sqcap D) \)

    • DNF3: \( (A \sqcap B \sqcap C \sqcap D) \sqcup (A \sqcap B \sqcap \lnot C \sqcap \lnot D) \sqcup (A \sqcap B \sqcap \lnot C \sqcap D) \sqcup (A \sqcap B \sqcap C \sqcap \lnot D) \sqcup (\lnot A \sqcap B \sqcap C \sqcap D) \sqcup (A \sqcap \lnot B \sqcap C \sqcap D) \)

    • Tooth: \(\nabla \!\!\!\nabla ^4 ((A, 2), (B, 2), (C, 1), (D, 1))\)
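The equivalences listed above can be checked mechanically by sweeping all truth assignments. A small sketch (our own, with a hypothetical `tooth` helper) confirms example 4, where the tooth expression encodes the majority pattern, and the two tooth forms given in example 2:

```python
from itertools import product

def tooth(threshold, weighted_args):
    # True iff the weights of the satisfied arguments sum to >= threshold
    return sum(w for holds, w in weighted_args if holds) >= threshold

# Example 4: (A and B) or (B and C) or (A and C) == tooth^2((A,1),(B,1),(C,1)),
# i.e. "at least two of A, B, C hold".
for a, b, c in product([False, True], repeat=3):
    dnf1 = (a and b) or (b and c) or (a and c)
    assert dnf1 == tooth(2, [(a, 1), (b, 1), (c, 1)])

# Example 2: tooth^2((not A,1),(B,2),(C,1)) == tooth^1((A,-1),(B,2),(C,1)),
# since (1 - A) + 2B + C >= 2 iff -A + 2B + C >= 1.
for a, b, c in product([False, True], repeat=3):
    assert tooth(2, [(not a, 1), (b, 2), (c, 1)]) == \
           tooth(1, [(a, -1), (b, 2), (c, 1)])

print("all checked equivalences hold")
```

The same sweep generalizes to the four-variable examples at the cost of one more loop variable, which is how the remaining DNF/tooth pairs can be validated.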

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Righetti, G., Porello, D., Confalonieri, R. (2022). Evaluating the Interpretability of Threshold Operators. In: Corcho, O., Hollink, L., Kutz, O., Troquard, N., Ekaputra, F.J. (eds) Knowledge Engineering and Knowledge Management. EKAW 2022. Lecture Notes in Computer Science(), vol 13514. Springer, Cham. https://doi.org/10.1007/978-3-031-17105-5_10

  • DOI: https://doi.org/10.1007/978-3-031-17105-5_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17104-8

  • Online ISBN: 978-3-031-17105-5
