Intelligence as a Social Concept: a Socio-Technological Interpretation of the Turing Test

  • Research Article
  • Published in: Philosophy & Technology

Abstract

Alan Turing’s 1950 imitation game has been widely understood as a means of testing whether an entity is intelligent. Following a series of papers by Diane Proudfoot, I offer a socio-technological interpretation of Turing’s paper and present an alternative way of understanding both the imitation game and Turing’s concept of intelligence. Turing, I claim, saw intelligence as a social concept, meaning that possession of intelligence is a property determined by society’s attitude toward the entity. He realized that as long as human society held a prejudiced attitude toward machinery—seeing machines a priori as mindless objects—machines could not be said to be intelligent, by definition. He also realized, though, that if humans’ a priori, chauvinistic attitude toward machinery changed, the existence of intelligent machines would become logically possible. Turing thought that such a change would eventually occur: He believed that by the time scientists overcame the technological challenge of constructing sophisticated machines that could imitate human verbal behavior—i.e., do well in the imitation game—humans’ prejudiced attitude toward machinery would have altered in such a way that machines could be said to be intelligent. The imitation game, for Turing, was not an intelligence test, but a technological aspiration whose realization would likely involve a change in society’s attitude toward machines.


Availability of Data and Material

Not applicable.

Code Availability

Not applicable.

Notes

  1. Other works that have inspired my reading of Turing include Whitby (1996), Piccinini (2000), Boden (2006, 1346–1356), and Sloman (2013). In addition, an anonymous reviewer introduced me to a recent doctoral dissertation by Gonçalves (2021), with which I found common ground on several issues.

  2. Turing uses the terms “thinking (entity)” and “intelligent (entity)” interchangeably, as Piccinini (2000) and others have pointed out. I will not differentiate between the terms, although I will usually use the term “intelligence”.

  3. For literature reviews, see Saygin et al. (2000), Oppy and Dowe (2011), and Gonçalves (2021, chap. 2.2).

  4. For further references to behavioristic interpretations, see Proudfoot (2013), Copeland (2004, 434–435), and Moor (2001, 81–82).

  5. The most renowned anti-behavioristic charges against the imitation game were raised by Searle (1980) in his Chinese Room thought experiment, and by Block (1981, 1995) in his Blockhead/Aunt Bubbles Machine thought experiment.

  6. Other inductive interpretations include Watt (1996), Schweizer (1998), and Shieber (2007). Shieber suggests seeing the imitation game as an interactive proof: He shows that under any reasonable statistical measure, the chance of a non-intelligent entity faking intelligible answers over several rounds of the imitation game is negligible (an illustrative bound is sketched at the end of these notes). Gonçalves (2021), too, might be considered to promote an inductive interpretation, as he sees the imitation game as a means of discovering whether a machine possesses the non-observable property of “the ability to learn from one’s own experience”.

  7. Turing attended Wittgenstein’s lectures in Cambridge on the foundations of mathematics. Sadly, there are no records of them discussing issues directly related to machine intelligence. Nonetheless, I think Turing’s approach to intelligence is quite Wittgensteinian, as will soon be shown.

  8. Wittgenstein himself seems to hold that the possession of any mental property by an entity (e.g., consciousness, agency, free will) is always “from a certain perspective.” I think Turing would agree with such a generalization, but I will limit my discussion to Turing’s approach to intelligence alone. (Cf. Proudfoot (2017, 2020), who argues that Turing held a “response-dependent” approach to both intelligence and free will, and perhaps also to consciousness.)

  9. This is a suggestion I made in Danziger (2018).

  10. Turing’s approach as described in this section bears resemblance to Dennett’s “intentional stance” (Dennett, 1987a).

  11. I thank an anonymous reviewer for helpful comments regarding this section.

  12. Proudfoot’s formulation of Turing’s view of intelligence in terms of response-dependency includes explicit reference to the imitation game. My interpretation of Turing differs from Proudfoot’s in some essential points, including in the way intelligence is formalized as a response-dependent property; I present my view here and discuss Proudfoot’s interpretation later (section 4.5).

  13. Brynjarsdóttir (2008) shows that although Johnston, who had coined the term “response-dependence,” had originally mentioned “response-dependent concepts” (a term which, taken literally, is to be understood as referring to cases in which one’s conceptualization of a property depends on some subjective response), his account actually describes response-dependent properties (cases in which the property itself is ontologically dependent on some subjective response). There have been interesting attempts to formulate a response-dependent account of concepts (Pettit, 1991, 1998; Jackson & Pettit, 2002); according to the interpretation presented in this paper, though, it seems more suitable to say that Turing saw intelligence possession as a response-dependent property.

  14. Other formulations for response-dependent properties have been suggested. See Yates (2008) for an example of one such formulation that specifies the requirements of a prioricity (of the biconditional) and substantiality (of the terms specifying the conditions K).

  15. This is how Yates (2008) introduces the idea of a response-dependent property.

  16. In this paper, “displaying intelligent-like behavior” and “behaving in an intelligent-like manner” should be understood as “displaying behavior that under regular circumstances cannot be differentiated from that of a human”.

  17. Marvin Minsky (1986, 71), expressing a similar idea, says: “[T]he very concept of intelligence is like a stage magician’s trick. Like the concept of ‘the unexplored regions of Africa,’ it disappears as soon as we discover it.” Cf. McCorduck (2004, 204).

  18. Other remarks of Wittgenstein in this spirit are Wittgenstein (2009, §281) and Wittgenstein (1958, 47). The similarity between Turing’s and Wittgenstein’s approaches has been pointed out by Boden (2006, 1351) and Chomsky (2009, 104).

  19. Turing’s apparent agreement here with (what I called) “Wittgenstein’s iron curtain of language conventions” is the basis for the claim I made in section 3.1 above, namely that Turing, in his 1950 paper, did not see intelligence as defined by the viewpoint of some individual observer or another (as may have been his view in his earlier publications) but as defined by the attitude of society. Not unrelatedly, in Danziger (2018) I suggested that Turing’s refraining from explicitly stating in his 1950 paper that machines could think might imply that he had taken a step back from the stance expressed in his earlier writings.

  20. Turing’s sociological prediction has been discussed by Mays (1952, 149–151), Rapaport (2000), Beran (2014), and Gonçalves (2021). Rapaport (2000) suggests an interesting differentiation between two kinds of possible socio-linguistic changes: One consists of an extension of the scope of terms like “intelligence” to cover machines, similar to the extension of the scope of the term “flying” to cover movement-through-the-air of various aircraft, and not just of birds. Such an extension, says Rapaport, may be metaphorical; it does not necessarily imply that language users perceive the flight of airplanes (or machine intelligence) the same way they perceive the flight of birds (or human intelligence). Another possible change of the “general educated opinion,” according to Rapaport, is one in which people would come to see machines as possessing real intelligence. Although Turing described the change in humans’ attitude as a linguistic alteration, I think he had in mind Rapaport’s second type of change.

  21. Cf. Sloman (2013), who also claims that Turing saw the imitation game as a technological challenge.

  22. See Yampolskiy (2013) for a literature review and a more formal account of AI-completeness, including explicit reference to the imitation game.

  23. [Author's note: A “paper machine” was a person whose role was to execute a given algorithm step by step, in a fully mechanical manner; this was how a program’s functionality was tested before digital computers were available for use.]

  24. As mentioned, the passages quoted from Turing which I used to exemplify his descriptive manner were taken from his 1947 and 1948 papers. It should be noted that there seems to be a slight difference between the abilities of the digital computer emphasized in these papers and the ability highlighted in Turing’s 1950 paper. In the 1947 and 1948 papers, Turing stresses that digital computers could display those specific, unique abilities that he sees as the hallmarks of intelligence, such as the ability to learn from one’s mistakes and the ability to modify one’s own program (“instruction table”) during runtime. In the 1950 paper, it seems that Turing is no longer trying to convince the reader that digital computers could imitate some specific cognitive ability or another; instead, he stresses that digital computers could imitate the entire human cognitive system, as they could imitate the entire human brain. (Hodges (2014, 530) explains the difference between Turing’s 1948 and 1950 papers in a similar way. See Danziger (2018) for a detailed comparison between the approaches expressed in Turing’s 1947, 1948, and 1950 papers.) Despite this possible difference, in all three papers Turing describes the way humans would react (or actually do react) upon encountering such sophisticated machines, and the way humans’ attitude toward machinery would be influenced; he does not declare, though, that one should react this way or another, or that humans’ attitude toward machinery should change in some way or another, upon encountering sophisticated machines. This commonality between Turing’s papers is what allows me to explicate the descriptive manner in his 1950 paper by quoting from his 1947 and 1948 papers, where his descriptive intentions are more telling.

  25. Stevan Harnad (in Epstein et al., 2009, 48) remarks that what Turing here calls “solipsism” is actually the “other-minds” problem in philosophy.

  26. Turing’s reply to the argument from consciousness may imply that for him, not only intelligence, but also consciousness, is to be seen as a social concept (cf. fn. 8 above). See Michie (1993, 4–7); but cf. Copeland (2004, 566–567).

  27. Indeed, it seems that Turing wanted humans’ attitude toward machinery to change. Robin Gandy, who had been Turing’s student and close friend, says that Turing sought to persuade people that “computers were not merely calculating engines but were capable of behaviour which must be accounted as intelligent” (Gandy, 1996, 125). Likewise, Piccinini (2000), who holds a similar understanding of the role of the imitation game to the one presented in this paper, suggests that Turing hoped that “by experiencing the versatility of digital computers at tasks normally thought to require intelligence, people would modify their usage of terms like ‘intelligence’ and ‘thinking,’ so that such terms apply to the machines themselves” (Piccinini, 2000, 579). See Gonçalves (2021, chap. 1) for an analysis of Turing’s ambition(s); he concludes that Turing’s 1950 paper can be seen as the point in time at which Turing assumed his role as “prophet of the machines”.

  28. Cf. Gonçalves (2021, chap. 3), who claims that Turing’s imitation game should be seen as a thought experiment in science, as opposed to scholars who see it as a thought experiment in philosophy.

  29. Note, though, that things may be different with regard to Turing’s 1947 and 1948 papers. There, Turing may have actually been focusing on the intellectual status of the machine itself (see Danziger, 2018).

  30. See Proudfoot (2020, fn. 1) for interesting remarks by Marvin Minsky and Drew McDermott regarding this issue.

  31. This is in contrast to my own formulation of Turing’s response-dependent definition of intelligence in section 3.2 above, which includes no reference to the game.

  32. Bringsjord et al. (2001) suggest a criterion for intelligence called the restricted epistemic relation, which is similar to what Abramson saw as Turing’s epistemic-limitation condition, except that Bringsjord et al. do not think that Turing himself required this criterion. Accordingly, they suggest the Lovelace test for intelligence, by which a necessary condition for an entity’s being considered intelligent is that its creator does not know how it produces its answer. Abramson thinks that this is how Turing himself saw the imitation game.

  33. It may be noted that in the 1952 radio broadcast mentioned in section 3.4, Turing said that development of machines that do well in the imitation game would take “at least 100 years” (Turing et al., 1952, 495). Possibly, Turing realized that technological development of digital computers was progressing more slowly than he had expected, and so he updated the timeframe of his prediction accordingly, leaving it open-ended, with no deadline.

  34. Sloman (2013, 608) makes a similar point, and points out that while computers are now doing much cleverer things than in the past, people are becoming much harder to impress.

  35. While Turing regarded the entity’s external appearance as insignificant for intelligence attribution (Turing, 1950, 434), others have argued that external appearance may indeed have an impact on society’s attitude toward the entity, for better or worse. Moreover, such an impact is not necessarily “linear”: Under certain circumstances, cases of similarity between humans and other entities might actually sharpen the difference between them. For example, Mori’s “Uncanny Valley” hypothesis (Mori et al., 2012) suggests that an entity that is very similar to humans in one aspect but quite different from them in another might be perceived by humans as very un-human.

  36. Cf. Davidson (1990), who claims that Turing had seen this property as irrelevant for intelligence attribution.

  37. For a list of several other properties that may trigger intelligence attribution, see Torrance (2014, 25–26).

  38. See Rapaport (2000) for a discussion of cases in which there is a controversy between a subject and society regarding the subject’s mental properties.

  39. See Dennett (1987b) and Michie (1993) for discussions regarding humans’ role in drawing the borders of what they call the “charmed circle” of consciousness (or intelligence).

  40. Indeed, although Turing is often considered to be one of the pioneers of the computational theory of the mind (which would imply that he would not be inclined to see mental properties as emotional or social concepts), his view actually goes against computationalism (cf. Proudfoot, 2017, 2020).
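A brief illustration of the statistical point in note 6 above: the following is a minimal sketch of my own (not Shieber’s actual formalization), assuming that each round of questioning is independent and that a non-intelligent entity passes any single round with probability at most p < 1. The probability of its passing all n rounds then decays exponentially:

\Pr[\text{pass all } n \text{ rounds}] \;\le\; p^{\,n}, \qquad \text{e.g. } p = 0.9,\ n = 50 \;\Rightarrow\; 0.9^{50} \approx 0.005.

Under these assumed conditions, sustained success over many rounds is overwhelmingly improbable for an entity that cannot genuinely produce intelligible answers; this is the intuition behind reading the imitation game as an interactive proof.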

References

  • Abramson, D. (2008). Turing’s responses to two objections. Minds and Machines, 18, 147–167.

  • Abramson, D. (2011). Descartes’ influence on Turing. Studies in History and Philosophy of Science, 42, 544–551.

  • Beran, O. (2014). Wittgensteinian Perspectives on the Turing Test. Studia Philosophica Estonica, 7(1), 35–57.

  • Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90(1), 5–43.

  • Block, N. (1995). The mind is the software of the brain. In E. E. Smith & D. N. Osherson (Eds.), Thinking (pp. 377–425). MIT Press.

  • Boden, M. A. (2006). Mind as machine: A history of cognitive science. Oxford University Press.

  • Bringsjord, S., Bello, P., & Ferrucci, D. (2001). Creativity, the Turing test, and the (better) Lovelace test. Minds and Machines, 11, 3–27.

  • Brynjarsdóttir, E. M. (2008). Response-dependence of concepts is not for properties. American Philosophical Quarterly, 45(4), 377–386.

  • Chomsky, N. (2009). Turing on the “imitation game.” In Epstein et al. (2009, pp. 103–106).

  • Copeland, B. J. (Ed.). (2004). The essential Turing. Oxford University Press.

  • Davidson, D. (1990). Turing’s test. In K. Said, W. Newton-Smith, R. Viale, & K. Wilkes (Eds.), Modelling the mind (pp. 1–12). Clarendon Press.

  • Danziger, S. (2018). Where intelligence lies: Externalist and sociolinguistic perspectives on the Turing test and AI. In V.C. Müller (Ed.), Philosophy and theory of artificial intelligence 2017. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 44 (pp. 158–174). Springer.

  • Dennett, D. C. (1987a). The intentional stance. MIT Press.

  • Dennett, D. C. (1987b). Consciousness. In R. L. Gregory & O. L. Zangwill (Eds.), The Oxford companion to the mind (pp. 160–164). Oxford University Press.

  • Epstein, R., Roberts, G., & Beber, G. (Eds.). (2009). Parsing the Turing test: Philosophical and methodological issues in the quest for the thinking computer. Springer.

  • French, R. M. (1990). Subcognition and the limits of the Turing test. Mind, 99(393), 53–65.

  • Gandy, R. (1996). Human versus mechanical intelligence. In P. J. R. Millican & A. Clark (Eds.), Machines and thought: The legacy of Alan Turing (Vol. 1, pp. 125–136). Oxford University Press.

  • Gonçalves, B. (2021). Machines will think: Structure and interpretation of Alan Turing’s imitation game. Doctoral Dissertation, Faculty of Philosophy, Languages and Human Sciences, University of São Paulo, São Paulo. https://doi.org/10.11606/T.8.2021.tde-10062021-173217. Accessed 19 Apr 2022.

  • Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1, 43–54.

  • Hodges, A. (2014). Alan Turing: The enigma. Princeton University Press.

  • Jackson, F., & Pettit, P. (2002). Response-dependence without tears. Philosophical Issues, 12, 96–117.

  • Johnston, M. (1989). Dispositional theories of value. Proceedings of the Aristotelian Society, Supplementary Volumes, 63, 139–174.

  • Mays, W. (1952). Can machines think? Philosophy, 27, 148–162.

  • McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence. CRC Press.

  • Michie, D. (1993). Turing’s test and conscious thought. Artificial Intelligence, 60(1), 1–22.

  • Minsky, M. (1986). The society of mind. Simon & Schuster.

  • Moor, J. H. (1976). An analysis of the Turing test. Philosophical Studies, 30, 249–257.

  • Moor, J. H. (2001). The status and future of the Turing test. Minds and Machines, 11, 77–93.

  • Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley. IEEE Robotics & Automation Magazine, 19(2), 98–100.

  • Oppy, G., & Dowe, D. (2011). The Turing test. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. plato.stanford.edu/archives/spr2011/entries/turing-test. Accessed 13 Oct 2017.

  • Pettit, P. (1991). Realism and response-dependence. Mind, 100, 587–626.

  • Pettit, P. (1998). Terms, things and response-dependence. European Review of Philosophy, 3, 61–72.

  • Piccinini, G. (2000). Turing’s rules for the imitation game. Minds and Machines, 10, 573–582.

  • Proudfoot, D. (2013). Rethinking Turing’s test. The Journal of Philosophy, 110(7), 391–411.

  • Proudfoot, D. (2017). Turing and free will: A new take on an old debate. In J. Floyd & A. Bokulich (Eds.), Philosophical explorations of the legacy of Alan Turing (pp. 305–321). Springer Verlag.

  • Proudfoot, D. (2020). Rethinking Turing’s test and the philosophical implications. Minds and Machines, 30, 487–512.

  • Proudfoot, D. (2005). A new interpretation of the Turing test. The Rutherford Journal: The New Zealand Journal for the History and Philosophy of Science and Technology, 1. rutherfordjournal.org/article010113.html. Accessed 6 Oct 2020.

  • Rapaport, W. J. (2000). How to pass a Turing test. Journal of Logic, Language, and Information, 9(4), 467–490.

  • Saygin, A., Cicekli, I., & Akman, V. (2000). Turing test: 50 years later. Minds and Machines, 10, 463–518.

  • Schweizer, P. (1998). The truly total Turing test. Minds and Machines, 8, 263–272.

  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–424.

  • Shapiro, S. C. (2003). Artificial intelligence (AI). In A. Ralston, E. D. Reilly, & D. Hemmendinger (Eds.), Encyclopedia of computer science (pp. 89–93). Wiley.

  • Shieber, S. M. (2007). The Turing test as interactive proof. Noûs, 41(4), 686–713.

  • Sloman, A. (2013). Aaron Sloman absolves Turing of the mythical Turing test. In S. B. Cooper & J. van Leeuwen (Eds.), Alan Turing: His work and impact (pp. 606–611). Elsevier.

  • Tesler, L. (ca. 1970). Tesler’s theorem. http://www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html. Accessed 30 Apr 2020.

  • Torrance, S. (2014). Artificial consciousness and artificial ethics: Between realism and social relationism. Philosophy and Technology, 27(1), 9–29.

  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  • Turing, A. M., Braithwaite, R., Jefferson, G., & Newman, M. (1952). Can automatic calculating machines be said to think? Reprinted in Copeland (pp. 494–506).

  • Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Reprinted in Copeland (pp. 58–90).

  • Turing, A. M. (1947). Lecture on the automatic computing engine. Reprinted in Copeland (pp. 378–394).

  • Turing, A. M. (1948). Intelligent machinery. Reprinted in Copeland (pp. 410–432).

  • Watt, S. (1996). Naive psychology and the inverted Turing test. Psycoloquy, 7(14). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.2705&rep=rep1&type=pdf. Accessed 2 Nov 2017.

  • Whitby, B. (1996). The Turing test: AI’s biggest blind alley? In P. Millican & A. Clark (Eds.), Machines and thought: The legacy of Alan Turing (pp. 53–62). Clarendon Press.

  • Wittgenstein, L. (1958). The blue and brown books: Preliminary studies for the “philosophical investigations.” Harper & Row.

  • Wittgenstein, L. (1976). Wittgenstein’s lectures on the foundations of mathematics, Cambridge, 1939. Cornell University Press.

  • Wittgenstein, L. (2017). Lectures on freedom of the will. In V. A. Munz & B. Ritter (Eds.), Wittgenstein’s Whewell’s court lectures, Cambridge, 1938–1941, from the notes by Yorick Smythies (pp. 282–296). Wiley-Blackwell.

  • Wittgenstein, L. (2009). Philosophical investigations (4th ed.; G. E. M. Anscombe, P. M. S. Hacker, & J. Schulte, Trans.). Wiley-Blackwell.

  • Wright, C. (1992). Truth and objectivity. Harvard University Press.

  • Yampolskiy, R. V. (2013). Turing test as a defining feature of AI-completeness. In X. S. Yang (Ed.), Artificial intelligence, evolutionary computing and metaheuristics: In the footsteps of Alan Turing. Studies in Computational Intelligence, vol. 427 (pp. 3–17). Springer.

  • Yates, D. (2008). Philosophical Books, 49(4), 344–354.

Acknowledgements

I thank Orly Shenker, Oron Shagrir, Netanel Kupfer, and two anonymous reviewers, who have read various versions of this paper and added thoughtful comments. Special thanks to Nick Novelli, Nicola Damassino, and the participants of the R3T3-2018 Conference at the University of Edinburgh for stimulating discussions, and to Shira Kramer-Danziger for her assistance in editing and her wise advice.

Funding

Research for this paper was financially supported by the Sidney M. Edelstein Center for the History and Philosophy of Science, Technology, and Medicine at the Hebrew University of Jerusalem; and by the School of Philosophy, Psychology, and Language Sciences at the University of Edinburgh.

Author information

Corresponding author

Correspondence to Shlomo Danziger.

Ethics declarations

Competing Interests

The author declares no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Danziger, S. Intelligence as a Social Concept: a Socio-Technological Interpretation of the Turing Test. Philos. Technol. 35, 68 (2022). https://doi.org/10.1007/s13347-022-00561-z
