For a contextualist and content-related understanding of the difference between human and artificial intelligence

Abstract

The development of artificial intelligence necessarily raises the anthropological question of the difference between human and artificial intelligence, for two reasons: on the one hand, artificial intelligence tends to be conceived on the model of human intelligence; on the other hand, many types of artificial intelligence are designed to exhibit at least some features of what is conceived as human intelligence. In this article I address this anthropological question in two parts. First, I review and classify some of the main answers that have so far been proposed to this question. I argue that this variety of answers can be broadly classified into three categories, namely (1) a behaviorist, (2) a representational, and (3) a holistic understanding of human intelligence. Second, I propose an alternative way of understanding the difference between human and artificial intelligence, which is not essentialist but contextualist and content-related. Contrary to the possible answers that I analyse in the first section, this alternative model does not aim at grasping the essence of human intelligence, which may or may not be reproducible in principle by artificial intelligence. Rather, it situates the fundamental differences between human and artificial intelligence in the context of human existence and in the conceptual content of human intelligence, following the phenomenological description of one of its most fundamental features, namely its life-world. On the grounds of this approach, it is possible to argue that human and artificial intelligence could be distinct, even if one could prove that they are eidetically, i.e. by their essence, identical.

Data availability

Not applicable.

Notes

  1. Following William Robinson’s formula, artificial intelligence may have “premium intelligence”, i.e. not only a genuine form of intelligence that would not be reducible to the mere capacity of accomplishing a task (Robinson calls this reductive form of intelligence “task intelligence”), but also a form of intelligence that would illuminate the way in which human intelligence functions (Robinson, 2014, p. 65).

  2. The term “artificial intelligence” was first introduced in 1956 at the Dartmouth workshop, entitled more precisely the Dartmouth Summer Research Project on Artificial Intelligence. This workshop, which lasted eight weeks and brought together numerous mathematicians and scientists, is considered foundational for artificial intelligence as a distinct research field. However, the concept of artificial intelligence itself was present much earlier in the history of human thought. Thus, we can already find it in the philosophies of Leibniz and Descartes: Leibniz conceived a universal calculating truth machine, called the calculus ratiocinator (Leibniz, 1966), while Descartes considered the possibility of an automaton which would imitate human behaviour, such as human speech (Descartes, 1996).

  3. See also Bender et al., 2021, who compare large language models to parrots that repeat texts without understanding them, as well as Block, 1981, who presents an argument similar to Searle’s Chinese room argument, called the Blockhead thought experiment. This experiment conceives a machine that is able to carry on an appropriate conversation, thus seemingly passing the Turing Test, but without any form of intelligence, since the machine has a list of possible questions and appropriate corresponding answers, according to which it provides its outputs during a conversation (a minimal lookup-table sketch of such a machine follows these notes). As we can see, this experiment is grounded in an argument similar to Searle’s, namely the idea that a machine can lead a suitable conversation through the mere mechanical realization of an algorithm, without any understanding.

  4. The idea that syntax is a necessary condition for semantics is also present in Montague’s semantics, more precisely through its principle of compositionality, which stipulates that “the meaning of a compound expression is a function of the meanings of its parts and of the way they are syntactically combined” (Partee, 1984, p. 281; a toy compositional evaluator follows these notes). According to Richard Montague, this principle of compositionality, which requires a correct syntactic combination, applies both to natural and to artificial, formal languages (Montague, 1970, p. 373). It is not clear, however, whether this implies that an artificially intelligent program could in principle use a language endowed with semantics, since sentences produced by artificial intelligence can be endowed with semantics for the human being who reads or hears them, without this entailing that this semantic level is also present for the program itself, in other words, that the program understands the meaning of these sentences.

  5. Searle calls this orientation of the mental states towards the world “mind-to-world direction” (Searle, 1979, p. 77).

  6. A similar semantic theory, which presupposes a relation between language and the world, can be found in Fodor, 1978, p. 229, with a conclusion similar to that of Searle, namely that artificial intelligence is deprived of any semantics.

  7. In this context, i.e. in the First Logical Investigation, Husserl uses the notion of expression (Ausdruck), which designates a word or a complex of words.

  8. One could also argue, as does James Mensch, that Searle’s model does not take into account the specific structure of intentionality that would allow one to understand intentionality as a “rule-governed synthetic process”, which could in principle be realized by artificial intelligence, since Searle treats intentionality as an “irreducible primitive” (Mensch, 1991). Such an argument does not, however, bring into evidence the physical conditions of the emergence of intentionality, and does not even address the possibility that intentionality could be physically conditioned; hence, it indicates neither how intentionality could be instantiated by artificial intelligence, nor whether this would be possible.

  9. Daniel Dennett argues that neural systems are mechanistic but not mechanical, hence opening the possibility of conceiving a mechanism that would not be incompatible with rationality, and, one could say, also with mind (Dennett, 1973).

  10. The computational theory of mind has been particularly advocated by Hilary Putnam (Putnam, 1967) and Jerry Fodor (Fodor, 2008). One possible critique of this theory has been put forward by Ned Block through his thought experiment of the China brain, which imagines the realization of an artificial brain by connecting the entire population of China for one hour by means of two-way radios (Block, 1978; a schematic sketch follows these notes). Such an artificial brain would nevertheless not be endowed with a conscious mind, despite the fact that it simulates the neural network of a human brain. One simple objection to this argument would consist in arguing that such an artificial brain imitates a human brain only superficially. For this reason one cannot reasonably expect the emergence of a conscious mind from such an artefact.

    Another possible critique has been put forward by John Lucas (Lucas, 1964) and Roger Penrose (Penrose, 1994), who both argue that Gödel’s incompleteness theorem implies an anti-mechanist view of the human mind, in other words, that the mind cannot be understood as a set of algorithms governing neural networks.

  11. These two types of knowledge could also be described as know-that (declarative knowledge) and know-how (procedural knowledge) (Preston, 1993, p. 45).

  12. Indeed, he writes: “thus formulated, the problem [of the exclusively declarative nature of current artificial intelligence] has so far resisted solution. We predict it will continue to do so” (Dreyfus & Dreyfus, 1986, p. 99).

  13. This specific critique of what one could call a declarative representational model of intelligence is also shared by Paul Churchland (Churchland, 1986).

  14. It is possible, however, to introduce ambiguity and contingency into contemporary AI programs (Russell & Norvig, 2009).

  15. Such a broader understanding of the central role of the world in classical phenomenology avoids an exclusively Husserlian understanding of this question that is implied in Beavers’ use of the concept of “world constitution”.

  16. Although it is possible to argue that sociality cannot be reduced to intersubjectivity (see on this point Caminada, 2023), sociality nevertheless presupposes intersubjectivity, not in the form of an I-Thou relationship, but in that of an open-ended community of other real and potential subjects. Indeed, even if we accept the idea that the structure of sociality is determined by what Husserl calls common mind, i.e. a “structure of habits” and not of acts of consciousness, these habits are enacted, repeated and modified by concrete subjects in relation to other subjects. Moreover, they presuppose the horizon of other subjects by whom these habits are or could be accepted as normal.

  17. The notion of “social robot” is sometimes used in the literature (see for instance Gallagher, 2013), but it addresses not the question of the possibility of an artificial social world, but rather the possibility of adequate interactions between robots and human beings, based on social cognition.

  18. Indeed, as James Mensch points out, temporality, insofar as it is a fundamental structure of intentionality (and so also of consciousness, since intentionality is an eidetic and universal structure of consciousness), “shows itself as a rule-governed synthetic process” (Mensch, 1991), which can be formalized by a set of algorithms and is for this reason “capable of being instantiated both by machines and men” (Mensch, 1991).

  19. As James Dodd argues, the life-world is a fundament for the evidence of the meaning of concepts: the origin of the meaning of concepts is rooted in the life-world; conversely, forgetting these roots leads to a merely abstract meaning, which is not apprehended with evidence (Dodd, 2004, p. 14). In Husserl’s words, it leads to a “passive understanding” of the meaning of concepts, which requires a “reactivation” of their meaning in order to make it “self-evident” (Husserl, 1970b, p. 361).

  20. For a detailed account of the idea of artificial general intelligence see Goertzel, 2014.

  21. In his 1954-1955 lectures on institution and passivity, Merleau-Ponty argues, referring to Husserl’s theory of the origin of geometry, that there is no fixed and pre-determined relationship between the human life-world and conceptual formalizations, which he conceives as structures (“structurations”), but that “other structures, other formalizations are possible” (Merleau-Ponty, 2003, p. 92). If one single type of life-world, namely the human life-world, may lead to different conceptual formalizations, then a fortiori a distinct type of life-world, such as an artificial life-world, may lead to different formalizations.
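
The following is a minimal sketch of the lookup mechanism behind the Blockhead thought experiment discussed in note 3: a program that sustains a surface-level conversation through pure table lookup, with no parsing and no understanding. The tiny table and all names are illustrative inventions, not part of Block’s text.

```python
# Minimal sketch of Block's "Blockhead" (note 3): the machine appears
# conversational while doing nothing but table lookup. The table is an
# invented stand-in for Block's (vastly larger) list of question-answer pairs.

BLOCKHEAD_TABLE = {
    "hello": "Hello! How are you today?",
    "how are you?": "I am fine, thank you for asking.",
    "what is your name?": "My name is Blockhead.",
}

def blockhead_reply(question: str) -> str:
    """Return a canned answer by pure lookup; no parsing, no understanding."""
    key = question.strip().lower()
    return BLOCKHEAD_TABLE.get(key, "I do not know what to say to that.")

if __name__ == "__main__":
    for q in ["Hello", "How are you?", "What is your name?"]:
        print(q, "->", blockhead_reply(q))
```

The point of the sketch is that every apparently apt reply is produced by the same mechanical step, which is exactly why, on Block’s and Searle’s view, conversational adequacy alone cannot certify understanding.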
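Note 4 quotes Partee’s formulation of the principle of compositionality. The toy evaluator below illustrates, under invented assumptions (a tiny propositional fragment, nothing like Montague’s actual grammar), how the meaning of a compound can be computed from the meanings of its parts together with the syntactic rule that combines them.

```python
# Toy illustration of compositionality (note 4): the meaning of a compound is
# a function of the meanings of its parts and of their mode of syntactic
# combination. Atomic meanings and rules are invented for illustration.

ATOMIC_MEANINGS = {"it_rains": True, "it_snows": False}

# One meaning function per syntactic combination rule.
COMBINATION_RULES = {
    "and": lambda p, q: p and q,
    "or": lambda p, q: p or q,
    "not": lambda p: not p,
}

def meaning(expr):
    """[[expr]]: meanings of the parts + the combining rule -> meaning of the whole."""
    if isinstance(expr, str):          # atomic part
        return ATOMIC_MEANINGS[expr]
    rule, *parts = expr                # (rule, subexpression, ...)
    return COMBINATION_RULES[rule](*(meaning(p) for p in parts))

# [[it_rains and not it_snows]] = f_and([[it_rains]], f_not([[it_snows]]))
print(meaning(("and", "it_rains", ("not", "it_snows"))))  # True
```

The illustration is structural: the syntactic combination fixes which meaning function applies, as the principle requires; whether executing such a function amounts to understanding is precisely the question note 4 leaves open.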
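Note 10 describes Block’s China brain as an architecture: each citizen plays the role of one neuron, receiving radio signals and forwarding them according to a fixed rule. The schematic sketch below, with an invented four-node wiring and threshold rule, shows how such message passing can reproduce a neural network’s input-output behaviour while, on Block’s view, plausibly lacking a conscious mind.

```python
# Schematic sketch of the "China brain" (note 10): citizens as neurons,
# two-way radios as connections. Wiring and threshold are invented examples.
from typing import Dict, List, Set

# Who signals whom: citizen i radios every citizen in WIRING[i] when firing.
WIRING: Dict[int, List[int]] = {0: [2], 1: [2], 2: [3], 3: []}
THRESHOLD = 2  # a citizen "fires" after receiving this many signals

def step(firing: Set[int]) -> Set[int]:
    """One round of radio messages: count incoming signals, apply the rule."""
    inbox: Dict[int, int] = {}
    for sender in firing:
        for receiver in WIRING[sender]:
            inbox[receiver] = inbox.get(receiver, 0) + 1
    return {citizen for citizen, n in inbox.items() if n >= THRESHOLD}

# Citizens 0 and 1 fire, jointly driving citizen 2 over its threshold.
print(step({0, 1}))  # {2}
```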

References

  • Beavers, A. F. (2002). Phenomenology and artificial intelligence. Metaphilosophy, 33(1/2), 70–82.

  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: can language models be too big? Proceedings of the 2021 ACM conference on fairness, accountability, and transparency.

  • Block, N. (1978). Troubles with functionalism. Minnesota Studies in the Philosophy of Science, 9, 261–325.

  • Block, N. (1981). Psychologism and behaviorism. The Philosophical Review, 90, 5–43.

  • Brooks, R. A. (1989). A robot that walks: Emergent behaviors from a carefully evolved network. Neural Computation, 1, 153–162.

  • Brooks, R. A. (1991a). Intelligence without reason. Proceedings of the 12th International Joint Conference on Artificial Intelligence, vol. 1, 569–595.

  • Brooks, R. A. (1991b). Intelligence without representation. Artificial Intelligence Journal, 47, 139–159.

  • Caminada, E. (2023). Beyond intersubjectivism: Common mind and the multipolar structure of sociality after Husserl. Continental Philosophy Review, 56, 379–400.

  • Churchland, P. (1986). Some reductive strategies in cognitive neurobiology. Mind, 95(379), 279–309.

  • Dennett, D. C. (1973). Mechanism and responsibility, in Essays on Freedom of Action, Honderich, T. (Ed.) (157-184). London: Routledge and Kegan Paul. Reprinted in Dennett, D. C. (1978) Brainstorms (233-255). Cambridge, MA: Bradford Books.

  • Descartes, R. (1996). Discourse on the Method. In D. Weissman (Ed.), Discourse on the Method and meditations on First Philosophy (pp. 3–48). Yale University Press.

  • Dodd, J. (2004). Crisis and Reflection. An essay on Husserl’s Crisis of the European sciences. Kluwer Academic.

  • Dreyfus, H. (1972). What computers can’t do. MIT Press.

  • Dreyfus, H. (1981). From micro-worlds to knowledge representation: AI at an Impasse. In J. Haugeland (Ed.), Mind design (pp. 161–204). The MIT Press.

  • Dreyfus, H. (1992). What computers still can’t do. MIT Press.

  • Dreyfus, H., & Dreyfus, S. (1986). Mind over machine. Free Press.

  • Floridi, L. (1999). Philosophy and Computing: An introduction. Routledge.

  • Floridi, L. (2023). AI as Agency without Intelligence: On ChatGPT, large Language models, and other Generative models. Philosophy & Technology, 36, 15.

  • Fodor, J. A. (1978). Tom Swift and his procedural grandmother. Cognition, 6, 229–247.

  • Fodor, J. A. (2008). LOT 2: The language of thought revisited. Clarendon.

  • Froese, T., & Ziemke, T. (2009). Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artificial Intelligence, 173(3–4), 466–500.

  • Gallagher, S. (2013). You and I, robot. AI & Soc, 28, 455–460.

  • Goertzel, B. (2014). Artificial General Intelligence: Concept, State of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1–46.

  • Harnad, S. (1990). The Symbol Grounding Problem. Physica D: Nonlinear Phenomena, 42, 335–346.

  • Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1(1), 43–54.

  • Haugeland, J. (1985). Artificial Intelligence: The very idea. MIT Press.

  • Hoffmann, M., & Pfeifer, R. (2018). Robots as Powerful Allies for the Study of Embodied Cognition from the Bottom. In The Oxford Handbook of 4E Cognition, Newen, A., De Bruin, L. & Gallagher, S. (Eds.). Online Ed., Oxford Academic, 9 Oct. 2018.

  • Husserl, E. (1970a). Logical investigations, vol. 1, trans. J. N. Findlay. Routledge.

  • Husserl, E. (1970b). The crisis of European sciences and transcendental phenomenology: An introduction to phenomenological philosophy, trans. D. Carr. Evanston: Northwestern University Press.

  • Husserl, E. (1973). Experience and judgment: Investigations in a genealogy of logic, trans. J. S. Churchill & K. Ameriks. London: Routledge & Kegan Paul.

  • Husserl, E. (1977). Cartesian meditations: An introduction to phenomenology, trans. D. Cairns. The Hague: Martinus Nijhoff.

  • Husserl, E. (1982). Ideas pertaining to a pure phenomenology and to a phenomenological philosophy, first book: General introduction to a pure phenomenology, trans. F. Kersten. The Hague: Martinus Nijhoff.

  • Husserl, E. (1993). Die Krisis der europäischen Wissenschaften und die transzendentale Phänomenologie. Ergänzungsband: Texte aus dem Nachlass. Kluwer Academic. Husserliana XXIX.

  • Husserl, E. (1997). Thing and space: Lectures of 1907, trans. R. Rojcewicz. Dordrecht: Springer.

  • Johnson-Laird, P. N. (1977). Procedural semantics. Cognition, 5, 189–214.

  • Leibniz, G. W. (1966). Of the art of combination (1-11), trans. Parkinson, G. H. R. Clarendon.

  • Lucas, J. R. (1964). Minds, Machines, and Gödel. In A. R. Anderson (Ed.), Minds and machines (pp. 43–59). Prentice-Hall.

  • Mensch, J. R. (1991). Phenomenology and artificial intelligence: Husserl learns Chinese. Husserl Studies, 8(2), 102–127.

  • Mensch, J. R. (2006). Artificial Intelligence and the phenomenology of Flesh. PhaenEx, 1(1), 73–85.

  • Merleau-Ponty, M. (2003). L’Institution, la passivité. Notes de cours au Collège de France (1954-1955). Paris: Belin.

  • Montague, R. (1970). Universal grammar. Theoria, 36, 373–398; reprinted in Formal philosophy: Selected papers of Richard Montague, R. H. Thomason (Ed.), 1974, New Haven: Yale University Press, 7–27.

  • Partee, B. H. (1984). Compositionality. In F. Landman & F. Veltman (Eds.), Varieties of formal semantics: Proceedings of the 4th Amsterdam Colloquium (Groningen-Amsterdam Studies in Semantics, No. 3), 281–331. Dordrecht: Foris; reprinted in Compositionality in formal semantics: Selected papers by B. H. Partee (Explorations in Semantics 1), 2004, Malden, MA: Blackwell, 153–181.

  • Penrose, R. (1994). Shadows of the mind. Oxford University Press.

  • Preston, B. (1993). Heidegger and Artificial Intelligence. Philosophy and Phenomenological Research, LIII(1), 43–69.

  • Putnam, H. (1967). Psychophysical Predicates, in Art, Mind, and Religion, W. Capitan and D. Merrill (Eds.). Pittsburgh, PA: University of Pittsburgh Press. Reprinted as Putnam, H. [1975], Mind, Language, and Reality (429 - 440). Cambridge: Cambridge University Press.

  • Robinson, W. S. (2014). Philosophical challenges. In K. Frankish, & W. M. Ramsey (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 64–86). Cambridge University Press.

  • Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

  • Searle, J. (1979). What is an intentional state? Mind, New Series, 88(349), 74–92.

  • Searle, J. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3, 417–424.

  • Searle, J. (1997). The mystery of consciousness. New York Review of Books.

  • Smart, P., Heersmink, R., & Clowes, R. W. (2017). The cognitive ecology of the internet. In S. Cowley & F. Vallée-Tourangeau (Eds.), Cognition beyond the brain: Computation, interactivity and human artifice (2nd ed., pp. 251–282). Springer.

  • Turing, A. (1950). Computing Machinery and Intelligence. Mind, LIX, 433–460.

  • Varela, F. (1979). Principles of Biological Autonomy. North Holland.

Funding

This article received no funding.

Author information

Contributions

The author is the sole author of the paper.

Corresponding author

Correspondence to Veronica Cibotaru.

Ethics declarations

Ethical approval

Not applicable.

Informed consent

Not applicable.

Competing Interests

The author declares no competing interests.

Statement regarding research involving human participants and/or animals

Not applicable.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Cibotaru, V. For a contextualist and content-related understanding of the difference between human and artificial intelligence. Phenom Cogn Sci (2024). https://doi.org/10.1007/s11097-024-10004-z
