Abstract
The psychology of reasoning uses norms to categorize responses to reasoning tasks as correct or incorrect in order to interpret the responses and compare them across reasoning tasks. This raises the arbitration problem: any number of norms can be used to evaluate the responses to any reasoning task, and there doesn’t seem to be a principled way to arbitrate among them. Elqayam and Evans have argued that this problem is insoluble, so they call for the psychology of reasoning to dispense with norms entirely. Alternatively, Stupple and Ball have argued that norms must be used, but that the arbitration problem should be solved by favouring norms that are sensitive to the context, constraints, and goals of human reasoning. In this paper, I argue that the design of reasoning tasks requires the selection of norms that are indifferent to the factors that influence human responses to the tasks—which aren’t knowable during the task design phase, before the task has been given to human subjects. Moreover, I argue that the arbitration problem is easily dissolved: any well-designed task will contain instructions that implicitly or explicitly specify a single determinate norm, which settles what counts as a solution to the task—independently of the context, constraints, and goals of human reasoning. Finally, I argue that discouraging the use of these a priori task norms may impair the design of novel reasoning tasks.
Notes
Samuels et al. (2012) make a similar distinction between two things that can be evaluated: the exercises of cognitive capacities and the judgments that result from those exercises. I’ve proposed here that we can evaluate judgments in relation to both.
My conception of cognitive success is completely different from Schurz & Hertwig’s (2019). They argue that what is rational for the exercise of a cognitive capacity is whatever maximizes the likelihood of success. This is just a consequentialist conception of cognitive norms. My proposal is that task norms are distinct from cognitive norms, and that satisfying a task norm is a form of cognitive success distinct from being rational.
Davies et al. (1995) argue that the WST is specifiable in predicate logic, not propositional logic, so the norms of classical predicate logic, rather than those of classical propositional logic, should be used to categorize responses. Johnson-Laird & Wason (1970) recognize the same problem but suspect that it doesn’t make a significant empirical difference.
Oaksford & Wakefield (2003) design a modification of the WST that partly addresses the indeterminacy problem: it specifies the probabilities of the antecedent and consequent. However, their modified task fails to address the contradiction problem: their task instructions still ask subjects to evaluate the truth of the conditional, rather than asking them which card would maximize information gain.
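To make the information-gain norm concrete, here is a minimal sketch of the Bayesian calculation it rests on. The marginals are illustrative values of my own choosing, not Oaksford & Chater’s fitted parameters: each card is scored by how much turning it is expected to reduce uncertainty about whether the rule holds (a dependence hypothesis MD) or the antecedent and consequent are independent (MI).

```python
from math import log2

def entropy(p):
    """Shannon entropy (in bits) of the two-point distribution [p, 1 - p]."""
    return -sum(x * log2(x) for x in (p, 1 - p) if x > 0)

# Hypothetical marginals obeying the "rarity" assumption (p and q uncommon);
# these are illustrative numbers, not the published parameter estimates.
a, b = 0.1, 0.2      # P(p), P(q)
prior = 0.5          # prior probability of MD, the dependence hypothesis

# P(relevant hidden face | visible face, hypothesis) for each card.
# MD: the rule holds, so P(q | p) = 1 and the marginal P(q) = b is preserved.
# MI: p and q are statistically independent.
likelihoods = {
    "p":     {"MD": 1.0,               "MI": b},  # is the hidden face q?
    "not-p": {"MD": (b - a) / (1 - a), "MI": b},  # is the hidden face q?
    "q":     {"MD": a / b,             "MI": a},  # is the hidden face p?
    "not-q": {"MD": 0.0,               "MI": a},  # is the hidden face p?
}

def expected_info_gain(card):
    """Expected drop in uncertainty about MD vs. MI from turning `card`."""
    l_md, l_mi = likelihoods[card]["MD"], likelihoods[card]["MI"]
    gain = 0.0
    for p_md, p_mi in ((l_md, l_mi), (1 - l_md, 1 - l_mi)):  # both outcomes
        p_outcome = prior * p_md + (1 - prior) * p_mi
        if p_outcome > 0:
            posterior = prior * p_md / p_outcome  # Bayes' rule for P(MD | outcome)
            gain += p_outcome * (entropy(prior) - entropy(posterior))
    return gain

gains = {card: expected_info_gain(card) for card in likelihoods}
```

Under the rarity assumption the computed ordering is p > q > not-q > not-p, which is how the information-gain norm rationalizes the typical pattern of selections on the abstract task.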
To mitigate priming, he could provide them with examples of modus ponens alongside other distractor inferences.
I thank Reviewer 1 for raising this important challenge and for offering Skovgaard-Olsen et al. (2019) as an excellent example of one such task.
I expect that many philosophers experience the use of such double standards.
Subjects still fail at high rates if the antecedent and consequent propositions attribute arbitrary properties to the rule and cards (alphanumeric, geometric, color, etc.) (Wason, 1969) or if they attribute meaningful properties (Manktelow & Evans, 1979). But cognitive success can be partially restored if the antecedent and consequent propositions describe social rules in the context of enforcement (Cox & Griggs, 1982; Pollard & Evans, 1987).
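For reference, the classical-logic task norm that these success and failure rates presuppose can be made explicit in a short sketch (the card encoding is mine): under the material conditional, a card bears on the rule “if p then q” only if its hidden face could reveal a p-and-not-q counterexample, which singles out the p and not-q cards.

```python
def falsifies(has_p, has_q):
    """A card falsifies 'if p then q' exactly when it is p and not q."""
    return has_p and not has_q

# Each card shows one attribute; the other is on the hidden face.
# None marks the attribute that is unknown until the card is turned.
cards = {
    "p":     {"has_p": True,  "has_q": None},
    "not-p": {"has_p": False, "has_q": None},
    "q":     {"has_p": None,  "has_q": True},
    "not-q": {"has_p": None,  "has_q": False},
}

def worth_turning(card):
    """Turn a card iff some possible hidden face yields a falsifying instance."""
    known = cards[card]
    if known["has_p"] is None:
        return any(falsifies(p, known["has_q"]) for p in (True, False))
    return any(falsifies(known["has_p"], q) for q in (True, False))

selected = {c for c in cards if worth_turning(c)}  # the normatively correct picks
```

On this norm the correct selection is exactly {"p", "not-q"}: the not-p card can never falsify the rule, and the q card can at best confirm it.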
For example, they cite Putnam’s (1974) example that perturbations in the orbit of Uranus were known to contradict the prevailing Newtonian model of the solar system, but it wasn’t clear which part of the model was false until Neptune was discovered. By comparison, perturbations in the orbit of Mercury were also known to contradict the prevailing Newtonian model, and it wasn’t clear which part of the model was false until Newton’s theory of gravity was supplanted by Einstein’s theory of relativity (rather than by the discovery of the hypothesized planet Vulcan). So, they conclude that it wouldn’t be useful for us to acquire the capacity for falsification or modus tollens.
This explains why robust failures aren’t useful for studying cognitive control: cognitive control has the function of correcting errors in the exercise of cognitive capacities. But robust failure results from the rational exercise of irrelevant cognitive capacities, so cognitive control shouldn’t recognize the task errors as cognitive errors, and so shouldn’t intervene. Since these task failures aren’t failures of cognitive control, they give us little information about cognitive control.
This is an across-task arbitration problem: we have to arbitrate between well-defined tasks, each of which has a single determinate norm. This is distinct from the problem that Elqayam & Evans (2011) raise, which is a within-task arbitration problem: we supposedly have to arbitrate across multiple norms for a single task.
Klauer et al. (2007) and Ragni et al. (2018) both show that Oaksford & Chater’s (1994) model fails to adequately explain all 16 possible types of response to the WST. In a sense, this isn’t surprising: Oaksford & Chater attribute the same reasoning response to every subject, which is a very strong assumption. By comparison, Klauer et al. and Ragni et al. attribute different reasoning responses to different subjects using multinomial processing tree models. This is a much weaker assumption, which significantly increases the flexibility of their models. Still, Oaksford & Chater’s model is simpler (it uses 4 parameters rather than 10), so it is easier for me to discuss its advantages vis-à-vis the classical logic model. Hence, I’ll focus on their model, even though my point would be the same had I focused on Klauer et al.’s and Ragni et al.’s models instead. I thank Reviewer 1 for pressing this point.
Finally, if we have found a task norm that yields tasks eliciting very low rates of robust failure, then we should test whether the task norm corresponds to a cognitive norm by searching for non-normative modifications to the task (modifications that hold the task norm fixed) that restore high rates of robust failure. This can correct for misleading task norms, as in the case where high rates of success can be observed on the WST under trivalent logic even though subjects are maximizing expected information gain.
For example, I recently developed a novel kind of task design that explicitly requires the use of hard norms (Dewey, 2022). During the review process for that paper, though, I received strong criticism from some reviewers for using hard norms and was directed to Elqayam & Evans (2011). As a philosopher, I was confident in insisting on the use of normative assumptions and held my ground, but I worried that non-philosophers might not be so confident. That convinced me to write the current paper, to offer and defend an alternative to subjectivism, soft normativism, and descriptivism.
To their credit, Elqayam & Evans (2011) are sensitive to this. They recognize that the is-ought gap can be closed without creating an is-ought fallacy by adding an “implicit normative premise”, often known as a bridge premise. But they don’t seem to be sensitive to the fact that this undermines their original concern: whenever we draw normative conclusions from empirical premises, it’s always possible to rationalize our inference post hoc by adding an implicit normative bridge premise that validates our argument. For this reason, it seems quite infelicitous to accuse anyone of committing an is-ought fallacy. If we want to disagree with someone’s is-ought inferences, it’s much more felicitous to explicate their implicit normative bridge premise and then argue that the premise is false.
This is just an onus-shifting argument: if someone believes that norms aren’t empirical (in some sense) but other unobservable things are, then the onus is on them to show that there is a consistent, plausible way to do this.
Note that these resources are mostly focused on the norms used for evaluating moral reasoning (a favourite kind of reasoning among philosophers, including myself). However, similar arguments extend to other kinds of reasoning.
References
Anderson, J. R. (1990). The adaptive character of thought. Lawrence Erlbaum Associates, Inc.
Ballantyne, N. (2019). Epistemic trespassing. Mind, 128(510), 367–395. https://doi.org/10.1093/mind/fzx042.
Cohen, L. J. (1981). Can human irrationality be experimentally demonstrated? Behavioral and Brain Sciences, 4, 317–370.
Cox, J. R., & Griggs, R. A. (1982). The effects of experience on performance in Wason’s selection task. Memory & Cognition, 10(5), 496–502. https://doi.org/10.3758/BF03197653.
Dancy, J. (2017). Moral particularism. E. N. Zalta (ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2017/entries/moral-particularism/
Davies, P. S., Fetzer, J. H., & Foster, T. R. (1995). Logical reasoning and domain specificity. Biology and Philosophy, 10(1), 1–37. https://doi.org/10.1007/BF00851985.
De Neys, W. (2012). Bias and conflict: a case for logical intuitions. Perspectives on Psychological Science, 7(1), 28–38. https://doi.org/10.1177/1745691611429354.
De Neys, W. (2014). Conflict detection, dual processes, and logical intuitions: some clarifications. Thinking & Reasoning, 20(2), 169–187. https://doi.org/10.1080/13546783.2013.854725.
De Neys, W., Vartanian, O., & Goel, V. (2008). Smarter than we think: when our brains detect that we are biased. Psychological Science, 19(5), 483–489. https://doi.org/10.1111/j.1467-9280.2008.02113.x.
Dewey, A. R. (2022). Metacognitive control in single- vs. dual-process theory. Thinking & Reasoning, 1–36. https://doi.org/10.1080/13546783.2022.2047106.
Elqayam, S. (2011). Grounded rationality: a relativist framework for normative rationality. In K. Manktelow, D. Over, & S. Elqayam (Eds.), The science of reason: a festschrift for Jonathan St B. T. Evans (pp. 397–419). Psychology Press.
Elqayam, S. (2012). Grounded rationality: Descriptivism in epistemic context. Synthese, 189(1), 39–49. https://doi.org/10.1007/s11229-012-0153-4.
Elqayam, S., & Evans, J. S. B. T. (2011). Subtracting “ought” from “is”: Descriptivism versus normativism in the study of human thinking. Behavioral and Brain Sciences, 34(5), 233–248. https://doi.org/10.1017/S0140525X1100001X.
Elqayam, S., & Over, D. E. (2016). Editorial: From is to ought: The place of normative models in the study of human thought. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.00628
Enoch, D. (2014). Authority and reason-giving. Philosophy and Phenomenological Research, 89(2), 296–332.
Evans, J. St. B. T. (1993). Bias and rationality. In K. I. Manktelow & D. E. Over (Eds.), Rationality: psychological and philosophical perspectives (pp. 6–30). Routledge.
Evans, J. S. B. T. (2007). On the resolution of conflict in dual process theories of reasoning. Thinking & Reasoning, 13(4), 321–339. https://doi.org/10.1080/13546780601008825.
Evans, J. St. B. T., & Over, D. E. (1996). Rationality and reasoning. Psychology Press.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. https://doi.org/10.1257/089533005775196732.
Gigerenzer, G. (1991). From tools to theories: a heuristic of discovery in cognitive psychology. Psychological Review, 98(2), 254–267. https://doi.org/10.1037/0033-295X.98.2.254.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143. https://doi.org/10.1111/j.1756-8765.2008.01006.x.
Hattori, M. (1999). The effects of probabilistic information in Wason’s selection task: An analysis of strategy based on the ODS model. Proceedings of the 16th Annual Meeting of the Japanese Cognitive Science Society, 16, 623–626.
Hoover, J. D., & Healy, A. F. (2017). Algebraic reasoning and bat-and-ball problem variants: solving isomorphic algebra first facilitates problem solving later. Psychonomic Bulletin & Review, 24(6), 1922–1928. https://doi.org/10.3758/s13423-017-1241-8.
Hoover, J. D., & Healy, A. F. (2019). The bat-and-ball problem: stronger evidence in support of a conscious error process. Decision, 6(4), 369–380. https://doi.org/10.1037/dec0000107.
Hoover, J. D., & Healy, A. F. (2021). The bat-and-ball problem: a word-problem debiasing approach. Thinking & Reasoning, 1–32. https://doi.org/10.1080/13546783.2021.1878473.
Hume, D. (1739). A treatise of human nature. Being an attempt to introduce the experimental method of reasoning into moral subjects. Available at https://www.gutenberg.org/ebooks/4705
Johnson, E. D., Tubau, E., & De Neys, W. (2016). The doubting system 1: evidence for automatic substitution sensitivity. Acta Psychologica, 164, 56–64. https://doi.org/10.1016/j.actpsy.2015.12.008.
Johnson-Laird, P. N., & Wason, P. C. (1970). A theoretical analysis of insight into a reasoning task. Cognitive Psychology, 1(2), 134–148. https://doi.org/10.1016/0010-0285(70)90009-5.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus & Giroux.
Kahneman, D., & Frederick, S. (2005). A model of heuristic judgment. In The Cambridge handbook of thinking and reasoning (pp. 267–293). Cambridge University Press.
Klauer, K. C., Stahl, C., & Erdfelder, E. (2007). The abstract selection task: New data and an almost comprehensive model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(4), 680–703. https://doi.org/10.1037/0278-7393.33.4.680.
Korsgaard, C. M. (1996). The sources of normativity. Cambridge University Press.
Lewis, R. L., Howes, A., & Singh, S. (2014). Computational rationality: linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science, 6(2), 279–311. https://doi.org/10.1111/tops.12086.
Lieder, F., & Griffiths, T. L. (2020). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43. https://doi.org/10.1017/S0140525X1900061X
Maguire, B. (2015). Grounding the autonomy of ethics. In R. Shafer-Landau (Ed.), Oxford Studies in Metaethics (10 vol., pp. 188–215). Oxford University Press.
Manktelow, K. I., & Evans, J. St. B. T. (1979). Facilitation of reasoning by realism: effect or non-effect? British Journal of Psychology, 70, 477–488. https://doi.org/10.1111/j.2044-8295.1979.tb01720.x.
Oaksford, M., & Chater, N. (1994). A rational analysis of the selection task as optimal data selection. Psychological Review, 101(4), 608–631. https://doi.org/10.1037/0033-295X.101.4.608.
Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198524496.001.0001.
Oaksford, M., & Wakefield, M. (2003). Data selection and natural sampling: probabilities do matter. Memory & Cognition, 31(1), 143–154. https://doi.org/10.3758/BF03196089.
Oberauer, K., Wilhelm, O., & Diaz, R. R. (1999). Bayesian rationality for the WST? A test of optimal data selection theory. Thinking & Reasoning, 5(2), 115–144. https://doi.org/10.1080/135467899394020.
Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015). What makes us think? A three-stage dual-process model of analytic engagement. Cognitive Psychology, 80, 34–72. https://doi.org/10.1016/j.cogpsych.2015.05.001.
Pollard, P., & Evans, J. S. (1987). Content and context effects in reasoning. The American Journal of Psychology, 100(1), 41–60. https://doi.org/10.2307/1422641.
Putnam, H. (1974). The ‘corroboration’ of theories. In P. A. Schilpp (Ed.), The philosophy of Karl Popper (Vol. 2). Open Court.
Quine, W. V. (1948). On what there is. The Review of Metaphysics, 2(5), 21–38.
Ragni, M., Kola, I., & Johnson-Laird, P. N. (2018). On selecting evidence to test hypotheses: a theory of selection tasks. Psychological Bulletin, 144(8), 779–796. https://doi.org/10.1037/bul0000146.
Ridge, M., & McKeever, S. (2020). Moral particularism and moral generalism. E. N. Zalta (ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2020/entries/moral-particularism-generalism/
Russell, S. J. (1997). Rationality and intelligence. Artificial Intelligence, 94(1–2), 57–77. https://doi.org/10.1016/S0004-3702(97)00026-X.
Samuels, R., Stich, S., & Bishop, M. (2012). Ending the rationality wars: how to make disputes about human rationality disappear. In Collected papers (Vol. 2). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199733477.003.0009.
Schurz, G., & Hertwig, R. (2019). Cognitive success: a consequentialist account of rationality in cognition. Topics in Cognitive Science, 11(1), 7–36. https://doi.org/10.1111/tops.12410.
Simon, G., Lubin, A., Houdé, O., & De Neys, W. (2015). Anterior cingulate cortex and intuitive bias detection during number conservation. Cognitive Neuroscience, 6(4), 158–168. https://doi.org/10.1080/17588928.2015.1036847.
Skovgaard-Olsen, N., Kellen, D., Hahn, U., & Klauer, K. C. (2019). Norm conflicts and conditionals. Psychological Review, 126(5), 611–633. https://doi.org/10.1037/rev0000150.
Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: implications for the rationality debate? Behavioral and Brain Sciences, 23(5), 645–665. https://doi.org/10.1017/S0140525X00003435.
Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94(4), 672–695. https://doi.org/10.1037/0022-3514.94.4.672.
Stich, S. (1990). The fragmentation of reason: Preface to a pragmatic theory of cognitive evaluation. Cambridge: MIT Press.
Stupple, E. J. N., & Ball, L. J. (2014). The intersection between Descriptivism and Meliorism in reasoning research: further proposals in support of ‘soft normativism’. Frontiers in Psychology, 5, 1269. https://doi.org/10.3389/fpsyg.2014.01269.
Thompson, V. A. (2009). Dual-process theories: a metacognitive perspective. In J. S. B. T. Evans, & K. Frankish (Eds.), In two minds: dual processes and beyond (pp. 171–195). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199230167.003.0008.
Thompson, V. A., Turner, J. A. P., & Pennycook, G. (2011). Intuition, reason, and metacognition. Cognitive Psychology, 63(3), 107–140. https://doi.org/10.1016/j.cogpsych.2011.06.001.
Thompson, V. A., Turner, J. A. P., Pennycook, G., Ball, L. J., Brack, H., Ophir, Y., & Ackerman, R. (2013). The role of answer fluency and perceptual fluency as metacognitive cues for initiating analytic thinking. Cognition, 128(2), 237–251. https://doi.org/10.1016/j.cognition.2012.09.012.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124.
Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3), 273–281. https://doi.org/10.1080/14640746808400161.
Wason, P. C. (1969). Regression in reasoning? British Journal of Psychology, 60, 471–480.
Wason, P. C., & Johnson-Laird, P. N. (1972). Psychology of reasoning: structure and content. Harvard University Press.
Acknowledgements
I thank Reviewer 1 for their extensive comments and invaluable suggestions, which have significantly improved this paper. I also thank Sara Aronowitz and Mike Oaksford for their feedback, which helped shape earlier versions of this paper.
Ethics declarations
Conflict of interest
I have no conflicts of interest to declare.
Cite this article
Dewey, A.R. Arbitrating norms for reasoning tasks. Synthese 200, 502 (2022). https://doi.org/10.1007/s11229-022-03981-8