Abstract
In 1950, in a landmark paper on artificial intelligence (AI), Alan Turing posed a fundamental question: “Can machines think?” Towards answering it, he devised a three-party ‘imitation game’ (now famously known as the Turing Test) in which a human interrogator, using only written questions, must correctly distinguish a machine from another human. Turing went on to argue against all the major objections to the proposition that ‘machines can think’. In this chapter, we investigate whether machines can think causally. Having come a long way since Turing, today’s AI systems and algorithms, such as deep learning (DL), machine learning (ML), and artificial neural networks (ANN), are highly efficient at finding patterns in data by means of heavy computation and sophisticated information processing via probabilistic and statistical inference, not to mention the recent stunning human-like performance of large language models (ChatGPT and others). However, they lack an inherent capacity for true causal reasoning and judgement. Heralding our entry into an era of causal revolution, succeeding the information revolution, Judea Pearl proposed a “Ladder of Causation” to characterize graded levels of intelligence based on the power of causal reasoning. Despite the tremendous success of today’s AI systems, Pearl placed these algorithms (DL/ML/ANN) on the lowest rung of this ladder, since they learn only by association and statistical correlation (like most animals and babies). Intelligent humans, on the other hand, are capable of interventional learning (second rung) as well as counterfactual and retrospective reasoning (third rung), aided by imagination, creativity, and intuitive reasoning. It is acknowledged that humans possess a highly adaptable, rich, and dynamic causal model of reality which is non-trivial to program into machines. What specific factors make causal thinking so difficult for machines to learn?
Is it possible to design an imitation game for causal intelligence machines (a causal Turing Test)? This chapter will explore some possible ways to address these challenging and fascinating questions.
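The gap between the first and second rungs of Pearl’s ladder can be made concrete with a small simulation (a hypothetical sketch for illustration, not taken from the chapter): a hidden common cause Z drives both X and Y, so a purely associational learner sees a strong correlation between X and Y, while an intervention do(X = x) — which severs Z’s influence on X — reveals that X has no causal effect on Y at all.

```python
import random

random.seed(0)

def observe(n=100_000):
    """Rung 1 (association): passively record (X, Y) pairs.
    A hidden confounder Z drives both X and Y, so they correlate
    even though X has no causal effect on Y."""
    data = []
    for _ in range(n):
        z = random.random() < 0.5                       # hidden common cause
        x = int(z) if random.random() < 0.9 else int(not z)
        y = int(z) if random.random() < 0.9 else int(not z)
        data.append((x, y))
    return data

def intervene(x_forced, n=100_000):
    """Rung 2 (intervention): do(X = x_forced) sets X from outside,
    cutting the Z -> X arrow; Y no longer depends on X's value."""
    ys = []
    for _ in range(n):
        z = random.random() < 0.5
        y = int(z) if random.random() < 0.9 else int(not z)
        ys.append(y)
    return ys

data = observe()
p_y_given_x1 = sum(y for x, y in data if x == 1) / sum(1 for x, y in data if x == 1)
p_y_given_x0 = sum(y for x, y in data if x == 0) / sum(1 for x, y in data if x == 0)
p_y_do_x1 = sum(intervene(1)) / 100_000
p_y_do_x0 = sum(intervene(0)) / 100_000

# Seeing vs. doing: the observational contrast is large, the
# interventional contrast is (statistically) zero.
print(f"P(Y=1 | X=1)     - P(Y=1 | X=0)     = {p_y_given_x1 - p_y_given_x0:+.3f}")
print(f"P(Y=1 | do(X=1)) - P(Y=1 | do(X=0)) = {p_y_do_x1 - p_y_do_x0:+.3f}")
```

A pattern-finding algorithm trained on the observational data alone would happily predict Y from X, yet that learned association says nothing about what would happen if we *set* X — which is precisely the distinction a causal Turing Test would have to probe.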
Notes
1. ‘Artificial intelligence’ (or ‘AI’) was coined in 1956 by the American computer scientist and cognitive scientist John McCarthy, one of the founding fathers of the field.
References
Bishop, J. M. (2021). Artificial intelligence is stupid and causal reasoning will not fix it. Frontiers in Psychology, 11, 2603.
Hume, D. (1896). A treatise of human nature. Clarendon Press.
Kathpalia, A., & Nagaraj, N. (2021). Measuring causality. Resonance, 26(2), 191–210.
Kıcıman, E., et al. (2023). Causal reasoning and large language models: Opening a new frontier for causality. arXiv:2305.00050.
Lewis, D. (1974). Causation. The Journal of Philosophy, 70(17), 556–567.
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Penguin.
Morris, W. E., & Brown, C. R. (2022). David Hume. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2022 ed.). https://plato.stanford.edu/archives/sum2022/entries/hume/.
Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect (1st ed.). Basic Books.
Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed., Global Edition). Pearson.
Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–457.
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence (1st ed.). Knopf.
Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H. T., Jin, A., Bos, T., Baker, L., Du, Y., & Li, Y. (2022). LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Von Ahn, L., Blum, M., & Langford, J. (2003). CAPTCHA: Using hard AI problems for security. In Advances in Cryptology—EUROCRYPT 2003: International Conference on the Theory and Applications of Cryptographic Techniques. Lecture Notes in Computer Science (vol. 2656, pp. 294–311).
Willig, M., et al. (2023). Causal parrots: Large language models may talk causality but are not causal. Preprint, under review at Transactions on Machine Learning Research. https://openreview.net/forum?id=tv46tCzs83.
Zhou, C., et al. (2023). A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT. arXiv:2302.09419.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Nagaraj, N. (2024). Testing for Causality in Artificial Intelligence (AI). In: Menon, S., Todariya, S., Agerwala, T. (eds) AI, Consciousness and The New Humanism. Springer, Singapore. https://doi.org/10.1007/978-981-97-0503-0_3
DOI: https://doi.org/10.1007/978-981-97-0503-0_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-0502-3
Online ISBN: 978-981-97-0503-0
eBook Packages: Philosophy and Religion (R0)