The Use of Partial Order Relations and Measure Theory in Developing Objective Measures of Explainability

  • Conference paper
  • First Online:
Explainable and Transparent AI and Multi-Agent Systems (EXTRAAMAS 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13283)

Abstract

In this paper we describe the use of two mathematical constructs in developing objective measures of explainability. The first is measure theory, which has a long and interesting history and which establishes abstract principles for comparing the size of general sets. At least some of the underpinnings of this theory can equally well be applied to evaluate the degree of explainability of given explanations. However, we suggest that it is meaningless, or at least undesirable, to construct objective measures that allow the comparison of any two given explanations. Explanations might be incompatible, in the sense that integrating them results in decreasing rather than increasing explainability. In other words, explainability is best considered a partial order relation. Notwithstanding the usefulness of partial order relations and measure theory, it is unwise to apply these mathematical concepts unconditionally to the field of explainability. We demonstrate that the law of diminishing returns from economics offers a neat way to make these concepts applicable to the domain of explainability. The legal field is used as an illustration of the presented ideas.
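
The abstract only gestures at these constructions, and the paper's own formalism is not reproduced here. Purely as an illustrative sketch (not the author's method), one might model an explanation as a set of atomic statements, order explanations by set inclusion so that many pairs remain incomparable, and score each explanation with a monotone, measure-like quantity passed through a concave transform to capture diminishing returns. All names and the particular scoring function below are hypothetical.

```python
from math import log1p

# Hypothetical sketch: an explanation modeled as a frozenset of atomic statements.

def is_at_least_as_explainable(a: frozenset, b: frozenset) -> bool:
    """Partial order by set inclusion: a is at least as explainable as b
    when a contains every statement of b. Many pairs are incomparable,
    mirroring the claim that not every two explanations can be ranked."""
    return b <= a

def comparable(a: frozenset, b: frozenset) -> bool:
    """True only if the two explanations can be ordered at all."""
    return a <= b or b <= a

def explainability_score(e: frozenset) -> float:
    """Monotone, measure-like score of an explanation, passed through a
    concave transform so that each additional statement contributes less
    than the previous one (law of diminishing returns)."""
    return log1p(len(e))

if __name__ == "__main__":
    e1 = frozenset({"feature X exceeded threshold", "rule R applied"})
    e2 = e1 | {"precedent P cited"}               # extends e1
    e3 = frozenset({"sampling artifact suspected"})

    print(comparable(e1, e2))   # True: e1 is a subset of e2
    print(comparable(e1, e3))   # False: the two explanations are incomparable
    print(explainability_score(e2) - explainability_score(e1))  # small marginal gain
```

Under these assumptions, merging two incomparable explanations need not raise the score in any meaningful way, which is one way to read the abstract's point that integrating incompatible explanations can decrease rather than increase explainability.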

Notes

  1. https://www.cnbc.com/2022/02/11/fed-rate-debate-ukraine-tensions-could-jolt-markets-in-the-week-ahead.html.

  2. Ibid.

  3. See, e.g., http://geneontology.org/docs/ontology-documentation/.

  4. Definition from Encyclopaedia Britannica.

  5. From https://personalexcellence.co/blog/diminishing-returns/.

Author information

Correspondence to Wim De Mulder.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

De Mulder, W. (2022). The Use of Partial Order Relations and Measure Theory in Developing Objective Measures of Explainability. In: Calvaresi, D., Najjar, A., Winikoff, M., Främling, K. (eds) Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2022. Lecture Notes in Computer Science, vol 13283. Springer, Cham. https://doi.org/10.1007/978-3-031-15565-9_11

  • DOI: https://doi.org/10.1007/978-3-031-15565-9_11

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-15564-2

  • Online ISBN: 978-3-031-15565-9

  • eBook Packages: Computer Science, Computer Science (R0)
