Testing for Causality in Artificial Intelligence (AI)

Chapter in AI, Consciousness and The New Humanism

Abstract

In 1950, in a landmark paper on artificial intelligence (AI), Alan Turing posed a fundamental question: "Can machines think?" Towards answering it, he devised a three-party 'imitation game' (now famously known as the Turing Test) in which a human interrogator must distinguish a machine from another human using only written questions. Turing went on to argue against all the major objections to the proposition that 'machines can think'. In this chapter, we investigate whether machines can think causally. Having come a long way since Turing, today's AI systems and algorithms, such as machine learning (ML), deep learning (DL), and artificial neural networks (ANN), are very efficient at finding patterns in data by means of heavy computation and sophisticated information processing via probabilistic and statistical inference, not to mention the recent stunning human-like performance of large language models (ChatGPT and others). However, they lack an inherent capacity for true causal reasoning and judgement. Heralding our passage from the information revolution into an era of causal revolution, Judea Pearl proposed a "Ladder of Causation" to characterize graded levels of intelligence based on the power of causal reasoning. Despite the tremendous success of today's AI systems, Pearl placed these algorithms (DL/ML/ANN) on the lowest rung of this ladder, since they learn only by association and statistical correlation (like most animals and babies). Intelligent humans, on the other hand, are capable of interventional learning (second rung) as well as counterfactual and retrospective reasoning (third rung), aided by imagination, creativity, and intuitive reasoning. It is acknowledged that humans have a highly adaptable, rich, and dynamic causal model of reality which is non-trivial to program into machines. What specific factors make causal thinking so difficult for machines to learn? Is it possible to design an imitation game for causally intelligent machines (a causal Turing Test)? This chapter explores some possible ways to address these challenging and fascinating questions.
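The gap between the ladder's first two rungs can be made concrete with a toy simulation (an illustrative sketch, not taken from the chapter; the structural model and its coefficients are invented for the example). In a system where a hidden confounder Z drives both X and Y, the observational association between X and Y (rung one) overstates the true causal effect; an intervention via Pearl's do-operator (rung two) recovers it.

```python
import random

random.seed(0)

# Toy structural causal model (invented for illustration):
#   Z := U_z                (hidden confounder)
#   X := Z + U_x            (treatment)
#   Y := 2*X + 3*Z + U_y    (outcome; the true causal effect of X on Y is 2)

def sample(do_x=None):
    """Draw one sample; do_x overrides X's mechanism (the do-operator)."""
    z = random.gauss(0, 1)
    x = z + random.gauss(0, 1) if do_x is None else do_x
    y = 2 * x + 3 * z + random.gauss(0, 1)
    return x, y

n = 100_000

# Rung 1 (association): the regression slope of Y on X is confounded by Z.
obs = [sample() for _ in range(n)]
mx = sum(x for x, _ in obs) / n
my = sum(y for _, y in obs) / n
slope = (sum((x - mx) * (y - my) for x, y in obs)
         / sum((x - mx) ** 2 for x, _ in obs))  # approx. 3.5, not 2

# Rung 2 (intervention): comparing do(X=1) with do(X=0) removes the
# confounding and recovers the true causal effect.
y1 = sum(sample(do_x=1)[1] for _ in range(n)) / n
y0 = sum(sample(do_x=0)[1] for _ in range(n)) / n
effect = y1 - y0  # approx. 2

print(f"observational slope:   {slope:.2f}")
print(f"interventional effect: {effect:.2f}")
```

A purely associational learner trained on observational data from this system would report the biased slope; only an agent that can act on (or model interventions in) the system recovers the causal effect. Rung three, counterfactual reasoning, would additionally require inferring the latent noise terms for a specific observed case before intervening, which demands the full structural model.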


Notes

  1. 'Artificial intelligence' (or 'AI') was coined in 1956 by the American computer scientist and cognitive scientist John McCarthy, one of the founding fathers of the field.

References

  • Bishop, J. M. (2021). Artificial intelligence is stupid and causal reasoning will not fix it. Frontiers in Psychology, 11, 2603.


  • Hume, D. (1896). A treatise of human nature. Clarendon Press.


  • Kathpalia, A., & Nagaraj, N. (2021). Measuring causality. Resonance, 26(2), 191–210.


  • Kıcıman, E., et al. (2023). Causal reasoning and large language models: Opening a new frontier for causality. arXiv:2305.00050.

  • Lewis, D. (1973). Causation. The Journal of Philosophy, 70(17), 556–567.


  • Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Penguin.


  • Morris, W. E., & Brown, C. R. (2022). David Hume. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2022 ed.). https://plato.stanford.edu/archives/sum2022/entries/hume/.

  • Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect (1st ed.). Basic Books.


  • Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed., Global Edition). Pearson.


  • Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–457.


  • Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.


  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence (1st ed.). Knopf.


  • Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H. T., **, A., Bos, T., Baker, L., Du, Y., & Li, Y. (2022). LaMDA: Language models for dialog applications. arXiv:2201.08239.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.


  • Von Ahn, L., Blum, M., & Langford, J. (2003). CAPTCHA: Using hard AI problems for security. In Advances in Cryptology—EUROCRYPT 2003: International Conference on the Theory and Applications of Cryptographic Techniques. Lecture Notes in Computer Science (vol. 2656, pp. 294–311).


  • Willig, M., et al. (2023). Causal parrots: Large language models may talk causality but are not causal. Preprint, https://openreview.net/forum?id=tv46tCzs83 (under review, Transactions on Machine Learning Research).

  • Zhou, C., et al. (2023). A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT. arXiv:2302.09419.


Author information


Correspondence to Nithin Nagaraj.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Nagaraj, N. (2024). Testing for Causality in Artificial Intelligence (AI). In: Menon, S., Todariya, S., Agerwala, T. (eds) AI, Consciousness and The New Humanism. Springer, Singapore. https://doi.org/10.1007/978-981-97-0503-0_3
