This paper argues that the well-recognized parallelism between habits-as-heuristics and habits-as-routines is real. Both kinds of habits are the outcomes of a single principle, rational choice. Rational choice informs the “cognitive economy,” which gives rise to habits-as-heuristics. It also informs what this paper calls the “physiological economy,” which generates habits-as-routines. Besides the heuristic/routine parallelism, the cognitive/physiological economy juxtaposition reveals two other parallelisms, amounting in total to a three-fold parallelism, as Table 1 sums up:

  1. Habit-as-Heuristic Parallels Habit-as-Routine: The cognitive/physiological economy juxtaposition entails that the habit-as-heuristic parallels the habit-as-routine.

  2. Cognitive Illusion Parallels Physiological Malfunction: The cognitive/physiological economy juxtaposition entails that the slipup of the habit-as-heuristic, known as “cognitive illusion”, parallels the slipup of the habit-as-routine, what this paper calls “physiological malfunction”.

  3. Cognitive Collapse Parallels Physiological Collapse: The cognitive/physiological economy juxtaposition entails that the breakdown of the habit-as-heuristic, what this paper calls “cognitive collapse”, parallels the breakdown of the habit-as-routine, what this paper calls “physiological collapse”.

Table 1 The three-fold parallelism

                          Habit                 Slipup                        Breakdown
  Cognitive economy       Habit-as-heuristic    Cognitive illusion            Cognitive collapse
  Physiological economy   Habit-as-routine      Physiological malfunction     Physiological collapse

This paper, in three parts, grounds the three-fold parallelism on rational choice. The first part, consisting of “An overview”, “Some clarifications” and “Why rationality? Why dual process theory?” sections, offers the general framework. The second part, consisting of “The cognitive economy”, “The bioeconomics of habits-as-heuristics” and “Cognitive illusion versus cognitive collapse” sections, focuses on the cognitive economy and its consequences. The third part, consisting of “The physiological economy”, “The bioeconomics of habits-as-routines” and “Physiological malfunctions versus physiological collapse” sections, focuses on the physiological economy and its consequences. “Conclusion” section concludes.

An overview

Diverse theories of decision making recognize the three phenomena underlying the three-fold parallelism—viz., habits per se, their slipups, and their breakdowns. Pointedly, dual process theory and single process theory, in all their varieties and differences, recognize these three phenomena. The question that this paper poses is how to explain the three-fold parallelism and explain it most efficiently, in the sense of using the smallest number of principles without difficulty, i.e., avoiding ad hoc qualifications.

Single process theory, which this paper takes to task, can easily explain the vertical dimension of Table 1, i.e., the cognitive/physiological economy juxtaposition. The contest is the horizontal dimension: how do the slipups and breakdowns relate, first, to the respective habits under focus and, second, to each other? What this paper proposes, the “rationality-based dual process theory,” can answer this question better than single process theory. The criterion of what is a “better” theory, again, is efficiency in the sense specified above.

Single process theory can offer a single principle, as is the case with the proposed rationality-based dual process theory. The two differ, however, with respect to their theoretical qualifications—whether these qualifications are ad hoc or not:

The Thesis: The dual process theory, once modified and grounded on the single principle of rational choice, provides a better explanation of the origin of habits, their slipups, and their breakdowns than the single process theory. The judgment of whether one theory is better than another relies on the parsimony of the principles used, while avoiding ad hoc auxiliary qualifications to explain anomalies.

Stated differently, given that the issue is explaining the horizontal axis of Table 1, what is the most parsimonious theory? The contest between the single and dual process theories concerns neither, as Table 1 shows, the existence of the phenomena, i.e., the vertical parallelism of the two economies, nor the empirical facts regarding the horizontal specification of the troubles that habits usually run into. The contest is about how to explain these troubles (slipups and breakdowns) in a parsimonious manner. Which theory can explain all the phenomena—viz., habits, their slipups, and their breakdowns—with the greatest efficiency?

The proposed rationality-based dual process theory can explain the first-fold parallelism with the greatest efficiency, i.e., without ad hoc auxiliary qualifications as defined by Lakatos (see Khalil, 1987). Namely, it can explain the origin of habits per se. Further, it can explain with great efficiency the second-fold parallelism, namely, the origin of slipups of habits per se. Additionally, it can explain with the greatest efficiency the third-fold parallelism, namely, the origin of breakdowns of habits per se.

The efficiency of a theory relies on two requirements. First, the theory must use the simplest principles. Second, the theory must explain all the pertinent phenomena easily, i.e., without resorting to ad hoc auxiliary qualifications (Khalil, 1989). Insofar as a researcher provides an efficient theory, i.e., one meeting the two requirements, it would be a rationality-based dual process theory, irrespective of the lexicon the researcher chooses to name his or her theory.

This is a bold statement. It entails meeting the following challenge: how do we explain the asymmetry of behavior—why is a decision maker (DM) ready to abandon the habit under focus in one circumstance, what is called above the “breakdown,” but not in another, what is called the “slipup”? To explain the asymmetry, the researcher must appeal to the cost–benefit calculus regarding expectations about future efforts and rewards of the habit under focus. Such an appeal relies on rational choice, irrespective of the lexicon the researcher chooses to name his or her theory.

Further, in the case of breakdown and the necessity of abandoning the habit under focus, the researcher must rely on rationality-based dual process theory to explain such abandonment. Rational choice theory would explain that decision makers (DMs) disengage their habitual decision making in favor of deliberative decision making under certain incentives. Such a switch entails, first, that there are two processes of decision making (the habitual and the deliberative) and, second, that there is a link between the two processes. The link is the rational choice principle, irrespective of the lexicon the researcher chooses to name his or her theory.

Kahneman (2011) employs the dual process theory to explain the biases (cognitive illusions) that he, along with Tversky, had uncovered in the 1970s. With the concept of “mental economy,” he acknowledges the relevance of the rational choice principle, but never openly and explicitly.

Stanovich (2004) does not shy away from explicitly recognizing the relevance of rationality. However, he thinks that there are two types of rationality—along with an additional one that does not concern us here. One type underpins the deliberative process while the other (inherited via biological evolution) underpins the intuitive process. Stanovich portrays the dual processes as dichotomous, as if there is no common principle underpinning both types of rationality.

This paper’s contribution lies, first, in improving Kahneman’s approach by explicitly acknowledging rationality and even extending it to the analysis of the physiological economy. Second, this paper disputes Stanovich’s dichotomous approach. There is a common and singular principle underpinning the operation of the dual processes of decision making—regarding either the cognitive or the physiological economy. The singularity of the rational choice principle is imperative if we want to establish why dual process theory is parsimonious, i.e., able to explain the origin of habits, their slipups, and their breakdowns—and, hence, why DMs switch from the intuitive to the deliberative process, and vice versa, in the face of the cost–benefit calculus they undertake each day.

Some clarifications

The lexicon

This paper uses the term “heuristics” to denote generalizations functioning as prior attitudes, first impressions, stereotypes, prejudices, and what economists call “beliefs”. Examples of heuristics include: “meals in restaurants along highways are generally low-quality”; “well-groomed men are usually on time for appointments and meetings”; and “attorneys are normally strategic thinkers.” This paper uses the term “routines” to denote a particular kind of behavior, namely, repertoires and patterned series of action. Examples of routines are: “buy Folgers coffee, ignoring the promotion of other brands”; “take route 33 to work, disregarding the day and hour of travel”; and “always take the elevator, disregarding the hour of the day.”

Further, this paper uses the term “bounded rationality” in the sense used by standard economists. This paper recognizes that standard economists have hijacked the term from Simon (1957)—which is ironic, since Simon coined the term as a critique of standard economics’ notion of rational choice.

Is Kahneman’s mental economy equivalent to the economist’s bounded rationality?

Many researchers suppose Kahneman-and-Tversky’s work challenges the standard economist approach (e.g., Pressman, 2006; Thaler, 2016). Thaler indeed requires a special mention. He is a pillar of behavioral economics—which the experimental findings of Kahneman and Tversky have inspired (Thaler, 2016). Thus, it behooves this paper to state clearly the relation of Kahneman’s mental economy and the standard economist’s bounded rationality.

Kahneman’s mental economy, as already intimated above, is not a departure from standard economics—contrary to what Thaler and many others suppose. Kahneman’s concept is almost identical to the standard economist’s concept of bounded rationality.

The impetus of Tversky and Kahneman’s early research was to show that DMs make non-rational choices, specifically, by failing to follow Bayesian probability inference. At least, this is the impression of many heterodox economists critical of the neoclassical approach (e.g., Pressman, 2006). The heterodox economists warmly embraced Tversky and Kahneman’s work based on the impression that they view habits as the default state of the behavior of DMs—what Thorstein Veblen and modern institutional economists suppose (e.g., Hodgson, 1997). These economists fail to realize that Kahneman and Tversky want to urge people to minimize reliance on the intuitive System 1 and to resort instead to what they originally thought to be the more rational deliberative System 2.

This paper proposes that the mature Kahneman, at least, became aware that the intuitive System 1 is rational as well—it economizes on cognitive cost. Thus, the deeper issue is what he calls the “mental economy”—how the brain may use either system depending on what minimizes costs the most. The mature Kahneman came to realize that his early work with Tversky was, in effect, affirming rational choice theory. That is, the mature Kahneman is consistent with the early Kahneman.

Berg (2014) reaches the same conclusion regarding the continuity of the two Kahnemans but states it in different terms: the early Kahneman (along with Tversky) shows how DMs make choices deviating from the predictions of standard (simplistic) rational choice theory, while Kahneman and Tversky implicitly adhered to such a theory. The later Kahneman explicitly adheres to such a theory. He came to recognize that cognitive processes are costly. Hence, the later Kahneman came to realize that the deviations uncovered by the early Kahneman are inevitable by-products of ex ante optimum heuristics—where Kahneman’s mental economy (cognitive cost) forces rational DMs to adopt such ex ante optimum heuristics.

That is, Kahneman’s early research with Tversky is only a challenge to a “straw man” version of rational choice theory. It never was a challenge to the sophisticated version that acknowledges that cognition is a scarce resource.

Interestingly, the mature Kahneman (2011, pp. 411–415) agrees with this assessment when he excuses some readers for thinking that his early work with Tversky poses a challenge to rational choice theory:

The definition of rationality as coherence is impossibly restrictive; it demands adherence to rules of logic that a finite mind is not able to implement. Reasonable people cannot be rational by that definition, but they should not be branded as irrational for that reason. Irrational is a strong word, which connotes impulsivity, emotionality, and a stubborn resistance to reasonable argument. I often cringe when my work with Amos [Tversky] is credited with demonstrating that human choices are irrational, when in fact our research only showed that Humans are not well described by the rational-agent model. (Kahneman, 2011, p. 411)

Note, Kahneman uses the capitalized term “Humans,” borrowing it from Thaler (2016), to indicate that his model of human behavior is more realistic than the rational choice model. He is not claiming that the two models are incompatible.

Even in the Introduction of that book, Kahneman takes great pains to emphasize that researchers should not take the biases that Tversky and he uncovered as evidence that human behavior is non-rational:

Much of the discussion of this book is about biases of intuition. However, the focus on error does not denigrate human intelligence, any more than the attention to diseases in medical texts denies good health. Most of us are healthy most of the time, and most of our judgments and actions are appropriate most of the time. As we navigate our lives, we normally allow ourselves to be guided by impressions and feelings, and the confidence we have in our intuitive generalizations and preferences is usually justified. But not always. We are often confident even when we are wrong, and an objective observer is more likely to detect our errors than we are (Kahneman, 2011, p. 4).

Kahneman’s early research with Tversky shows that his “Humans” are well-described by a nuanced rational-agent model, one radically differing from Thaler’s understanding of the contribution of Kahneman and Tversky. What adds to the confusion is that Thaler has a view of human behavior differing from Kahneman-and-Tversky’s while presenting it as a continuation of their view.

To be precise, for Thaler, actual humans, what he calls “Humans” as opposed to the “Econs” concocted by standard economics, misbehave. Unlike the “Econs,” “Humans” are the product of inclinations, temperaments, and emotions (Thaler, 2016). While Thaler does not have a coherent theory of what motivates his “Humans”, he characterizes the basis of the collection of misbehaviors as “quasi rationality”.

The focus of this paper is not Thaler’s view. Still, it is worth mentioning his view because, to add to the fog, Kahneman (2011) uses Thaler’s Humans/Econs distinction uncritically. This may potentially mislead readers into thinking that Kahneman is subscribing to Thaler’s view—when they are radically different. A thinker such as Kahneman, who posits the mental economy, i.e., the standard economists’ bounded rationality, cannot subscribe to Thaler’s Humans/Econs distinction at first approximation. Kahneman’s (2011) endorsement of the Humans/Econs lexicon can only reflect a stylistic ornament on the part of Kahneman to distinguish his model of heuristics from a naïve neoclassical, standard model that ignores cognitive cost.

Why rationality? Why dual process theory?

Why rationality?

While other principles might succeed as well, the rational choice principle can easily explain the first-fold parallelism: why does the cognitive economy resemble the physiological economy in a substantive sense? The first-fold parallelism entails an analysis of “habits” showing the term to be non-portmanteau—i.e., not just a term lumping together what are actually unrelated phenomena (in this case, habits-as-heuristics and habits-as-routines). The first-fold parallelism registers that “habits” is a concept in the substantive sense—i.e., habits-as-heuristics and habits-as-routines are the outcome of a singular principle of operation. This paper proposes that the singular principle is rational choice. That is, rational choice can explain both habits-as-heuristics and habits-as-routines.

This behooves us to define the concept “rational choice.” This paper defines it in the standard economist sense. It is useful to distinguish between two senses of rationality, what this paper calls the “decision sense” and the “command sense”. The decision sense of rationality involves the stipulation that DMs have consistent preferences. That is, each DM operates according to the completeness axiom (i.e., the DM can rank all possible bundles of goods) and the transitivity axiom (i.e., the DM can rank all possible bundles in a non-contradictory manner). Along with other technical axioms that allow theoreticians to employ the calculus of differentiation at the margin (e.g., the local non-satiation axiom), the DM can identify what is the best (optimal) decision.
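For concreteness, the two axioms admit a standard formal statement. Writing \(x \succsim y\) for “the DM ranks bundle \(x\) at least as highly as bundle \(y\)” over a choice set \(X\) (notation introduced here only for exposition), the axioms read:

\[
\text{Completeness: } \forall x, y \in X,\; x \succsim y \ \text{or}\ y \succsim x; \qquad
\text{Transitivity: } \forall x, y, z \in X,\; \bigl(x \succsim y \ \text{and}\ y \succsim z\bigr) \Rightarrow x \succsim z.
\]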

The other sense, the command sense of rationality, states that it is insufficient merely to identify the optimal decision. It is necessary to command the decision, i.e., execute it, for the action to qualify as rational. The command sense implies the decision sense but not vice versa. The command sense ensures the DM does not succumb to suboptimal options, as in the case of weakness of will. This proposed command/decision rationality distinction defies the assumption of revealed preference: whatever is executed must have been the optimal decision, as if the weakness of will problem and, as a corollary, the self-deception phenomenon do not exist.

Given the focus of this paper on habits—i.e., the paper is concerned with neither weakness of will nor self-deception—the distinction between the two senses of rationality is not pertinent. However, it is important to highlight. Researchers are tempted to abandon rational choice altogether because of the ubiquity of weakness of will and self-deception. Even if the two phenomena are ubiquitous, we need rational choice to identify them. Additionally, we need the distinction to show that the slipups and breakdowns of habits are unrelated to the phenomena of weakness of will and self-deception.

Habits amount to, up to a limit, information immunization. Habits allow the DM to ignore, to some extent, pertinent information, even when such information is freely available. Even when the pertinent information is free, the immunization is necessary, as there is a cost besides the cost of the goods that the DM purchases—namely, the cognitive cost. It is cognitively costly to process the information via methodical examination of the facts of the case. That is, it is mentally costly to undertake a case-by-case consideration, what is called here “deliberation,” to infer the appropriate generalization or the appropriate behavior. To economize on the cost of deliberation, the DM immunizes the self from pertinent information, even if it is freely available, and instead relies on good-enough heuristics, in the case of generalizations, or good-enough routines, in the case of behavior.
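To make the economizing logic explicit, the DM’s ex ante comparison can be sketched as a simple inequality—a stylized illustration only, with notation introduced here for exposition. The DM adopts the habit whenever the expected loss from occasional slipups falls short of the deliberation cost saved:

\[
p \cdot L \;<\; c_{d} - c_{h},
\]

where \(p\) is the (small) probability that the habit misfires, \(L\) is the loss when it does, \(c_{d}\) is the cognitive cost of case-by-case deliberation, and \(c_{h}\) is the lower cost of running the habit. The habit is “second-best” precisely because the left-hand side is accepted as the price of the savings on the right-hand side.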

Why dual process theory?

Why is single process theory insufficient? According to Kruglanski and Gigerenzer (2011), Osman (2004), and others (e.g., Gigerenzer & Regier, 1996; Keren & Schul, 2009), single process theory is more parsimonious than dual process theory.

Following the stipulation of Ockham’s razor (Khalil, 1989), scientists seek parsimony of theory. Otherwise, the statements scientists utter are no different from well-phrased everyday chatter. Thus, the demand of parsimony, pressed by Kruglanski, Gigerenzer, Osman, and others, is valid.

The issue is whether dual process theory violates parsimony. As modified here, dual process theory does not violate parsimony—rather, it confirms it. The modified theory offers a single principle, rational choice, as the operative principle underpinning the dual processes of decision making. Indeed, rational choice is the underpinning principle that shows the link between the dual processes, System 1 and System 2. As Bellini-Leite (2022) argues, received dual process theory suffers from the “unity problem”: how are System 1 and System 2 related? Further, the modified dual process theory, unlike single process theory, eschews anomalies related to the second and third folds of the parallelism.

However, first, what is dual process theory? The theory places the operation of habits—which allows decision makers to remain immune to pertinent information even when such information is freely available—within a process that is separate from another process involving deliberation. The deliberative process, by definition, does not ignore pertinent information, especially if it is freely available. Given this difference, we need the dual processes of decision making.

The idea that cognitive processes operate along dual tracks was long noted by ancient philosophers and modern thinkers. Modern psychologists articulated the old idea in clearer and testable forms (Evans, 2017). Stanovich (1999) employed the terms “System 1” and “System 2” to denote, respectively, the type of decision making involving habits, i.e., intuitive and fast, and the type of decision making involving inference, i.e., deliberative and slow.

As is the case for many new theories, researchers started to find weaknesses in the dual process theory. Critics have pointed out that the advocates of dual process theory,

  1. make vague claims;

  2. lack unequivocal evidence, as there is continuity between System 1’s and System 2’s reasoning;

  3. conflate rational/non-rational thinking with, respectively, the System 2/System 1 distinction;

  4. confuse the conscious/nonconscious distinction with, respectively, the System 2/System 1 distinction;

  5. fail to state unambiguously the “attribute clusters” associated with each system;

  6. neglect to relate the two systems to neurophysiology and evolutionary biology; and

  7. most importantly, violate parsimony (Ockham’s razor)—and probably more shortcomings could be added.

Others have responded to these shortcomings (e.g., Evans & Stanovich, 2013; Evans, 2017). Given its thesis, however, it behooves this paper to respond to the last, most important charge. Why is the proposed rationality-based dual process theory parsimonious?

The proposed theory can easily explain the second-fold parallelism: why do habits occasionally turn into cognitive illusions and physiological malfunctions, i.e., slipups per se? One can define slipups (biases of heuristics and routines) only if the underpinning habits-as-heuristics and habits-as-routines were rational. We cannot judge the biases as slipups without implicitly assuming the rational choice principle.

To see why, note that, as intimated above, when the DM adopts a habit—whether a heuristic or a routine—the DM must have decided that it is efficient on average given the ex ante available information. There will be situations where the DM comes to find out that the otherwise useful habit has produced a slipup—but only ex post. The DM makes such a finding in relation to facts that were outliers, appearing occasionally. In either case—whether the habit holds up or fails and appears as a slipup—the rational choice principle is at work.

If the DM had decided via deliberation, then the DM would have noticed the outlier facts ex ante. Hence, the DM would not have taken the decision that would have been a slipup under the intuitive System 1. In this scenario, let us call it “scenario A,” the deliberative System 2 is ex ante rational and ex post correct.

However, consider scenario B, where the DM is still deciding via deliberation. There could be cases where the DM took into consideration all the freely available information yet ex post judges the decision to be a mistake, considering the additional information discovered by acting on the decision. Or, what amounts to the same thing, the ex post outcome (the mistake) may reveal that the DM should have searched for more information prior to acting. The DM had ex ante (i.e., rationally) decided not to undertake such a search, given the ex ante cost–benefit calculation. In this scenario B, the deliberative System 2 is ex ante rational and ex post incorrect.

The difference between scenario A and scenario B is subtle: the intuitive System 1 is implicitly operative in scenario A, but not in scenario B. To pin down this subtle difference, we need to hypothesize that there are dual processes of decision making, where rational choice is pivotal in judging the outcome of each process.

Likewise, the proposed rationality-based dual process theory can explain the third-fold parallelism: why habits might turn into cognitive collapse and physiological collapse, i.e., breakdowns per se. Such breakdowns of habits-as-heuristics and -as-routines are the result of a “shock”, defined as a massive failure of such habits. We can define such breakdowns as the result of a shock only if the underpinning habits-as-heuristics and habits-as-routines were rational. We cannot judge the collapses as breakdowns without implicitly assuming the rational choice principle.

To see why, note that when the DM decides on a habit—whether a heuristic or a routine—the DM must have judged it, up to a limit, to be efficient. That is, it becomes inefficient when the habit, on average, leads to slipups that are more costly than the benefit arising from the saved cognitive or physiological cost. The DM makes such a judgment given the facts. Hence, the rational choice principle is at work.

In the case of breakdown, it behooves the DM to abandon the habit, whether a heuristic or a routine, i.e., to abandon the intuitive System 1. The DM must resort to the deliberative System 2. After some experience, the DM comes to adopt alternative habits, i.e., to engage again the intuitive System 1. To account for the disengagement and the engagement of the intuitive System 1, we need to hypothesize that there are dual processes of decision making operating based on rational choice. Such a rationality principle allows us to determine when to disengage and when to engage the intuitive System 1.

The outlier facts that turn habits into slipups, or the shocks that turn habits into breakdowns, force us to situate the DM front and center. The DM must decide when the intuitive System 1 remains efficient despite the slipups (i.e., cognitive illusions and physiological malfunctions). The DM must decide when the intuitive System 1 is no longer efficient considering the shocks causing the breakdown. The DM must decide that, with sufficient experience, he or she can retire the deliberative System 2 in favor of the intuitive System 1. To make sense of these decisions we need rationality-based dual process theory.

Anomalies facing the single process theory

Why is single process theory inefficient, contrary to the arguments of Kruglanski and Gigerenzer (2011) and others? Why should we rely on dual process theory, the name of which supposedly suggests “dual principles,” violating parsimony (Ockham’s razor)?

First, the dual processes are not dual principles. As this paper proposes, dual process theory is based on a single principle, rational choice. In this manner, it is similar to single process theory of the kind proposed by Simon, Gigerenzer, and collaborators. Namely, they propose that habits are the entry point. Such an entry point specifies a pre-given threshold of satisfaction, which Simon (1957) calls “satisficing.” If the organism’s habit-controlled behavior fails to reach this threshold within its ecological niche, the failure is the impetus for the organism to adopt the first habit it encounters that passes the threshold test.
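To fix ideas, the contrast between the satisficing rule and the rational-choice rule can be sketched in a few lines of Python. The sketch is purely illustrative: the function names, the payoff and cost functions, and the threshold are hypothetical placeholders, not part of Simon’s or Gigerenzer’s formal apparatus.

    def satisfice(candidate_habits, payoff, threshold):
        """Simon-style satisficing: adopt the first habit whose payoff clears a
        pre-given threshold; do not search for anything better."""
        for habit in candidate_habits:
            if payoff(habit) >= threshold:
                return habit
        return None  # nothing clears the threshold; keep the search going

    def optimize(candidate_habits, payoff, evaluation_cost):
        """Rational-choice selection: weigh each habit's payoff net of the cost
        of evaluating it and pick the best on balance."""
        return max(candidate_habits, key=lambda habit: payoff(habit) - evaluation_cost(habit))

The sketch makes visible where the question raised next bites: the satisficing rule takes the threshold as given, whereas the optimizing rule needs no such exogenous parameter.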

A question arises: what is the operative principle behind the threshold level? At first approximation, the approach of Simon and his followers supposes that it is pre-given. It is the benchmark upon which the DM may switch from one habit to another. In his attempt to explain altruism, Simon (1990) appeals to a basically neo-Darwinian theory of natural selection, modified by learning during ontogeny, to explain behavior. However, neo-Darwinian theory is also about optimization—an optimization that produces the same result as the optimization behind rational choice theory. So, to be consistent, Simon cannot appeal to neo-Darwinian theory and is, hence, left with no theory of the threshold level.

There are at least two other reasons why the rationality-based dual process theory is better than single process theory à la Simon and his followers. First, let us assume that there is a single process, where the organism is only ready to change a habit when a pre-given threshold is no longer satisfied (the satisficing notion). In the face of any decision, the DM must decide whether to deliberate or to decide using the habit. Let us say that the DM chooses deliberation, and the outcome turns out to be a mistake. Such a mistake is not a slipup as defined within the dual process theory. As in scenario B above, where the intuitive System 1 is not operative, the decision is ex ante rational and ex post incorrect. Now that the DM knows, the following day he or she would take the correct decision. He or she would have learned from the outcome what was too costly to search for the previous day.

However, if the slipup arises because of intuition, i.e., the use of a habit—such as having a bad meal at a restaurant even when the DM followed a heuristic—the DM does not dispense with the heuristic. The following day, the DM will continue to make the same choice, as the heuristic is correct on average. The DM expects occasional slipups, i.e., tolerates them up to a limit. Habits, by design, seem to be inelastic to information feedback, unlike deliberative decisions (Khalil, 2022).

It is hard for single process theory to explain the asymmetry: why DMs learn from one experience in the case of deliberation, but not in the case of intuition. That is, at first approximation, slipups as defined here, i.e., as insufficient for changing habits, are anomalous for single process theory. And if the single process theorist proclaims that it is hard to change a heuristic because of habits, he or she is effectively expounding dual process theory by another name.

Second, let us assume again that there is a single process of decision making. We have another kind of anomaly considering the possibility of breakdowns—i.e., cognitive and physiological collapses. If a breakdown takes place, the DM reconsiders the decision irrespective of whether he or she had taken it via deliberation or intuition. That is, there is no difference between the dual processes of decision making. However, and here is the anomaly, let us suppose the breakdown is the outcome of the intuitive System 1. Why would the breakdown force the DM to engage the deliberative System 2, but the slipups not? Single process theory cannot, at first approximation, answer this question. It is anomalous. And if the single process theorist proclaims that slipups are different from breakdowns, he or she is effectively expounding dual process theory by another name.

To sum up, a theory, such as the dual process theory, would rest on an ad hoc qualification if it appealed abruptly to a principle other than the one used to explain decision making. In this case, dual process theory uses the same principle—what Kruglanski and Gigerenzer (2011) correctly set up as a criterion in their critique of dual process theory. However, as this paper shows, the principle is not what Gigerenzer and collaborators advocate, viz., what they call “ecological rationality” (Gigerenzer et al., 2011). It is rational choice in the standard sense. It is rational choice that is based on the relevance of the cost–benefit calculus, which can only be relevant if we assume the completeness and transitivity axioms as stated above.

Dual process theory is an example of parsimony: it uses a single principle to explain a wide range of phenomena (Khalil, 1989). The fact that the DM may switch from System 1 to System 2 because of a shock, i.e., showing continuity between the two systems, is supportive of dual process theory. It shows the DM is not made up of dichotomous halves. Rather, based on a singular principle, viz., rational choice, the DM can suspend the intuitive System 1 in favor of the deliberative System 2 in the face of a shock. The DM also can suspend the deliberative System 2 in favor of the intuitive System 1 if circumstances settle down.

In short, to use Lakatos’ criterion of progressive scientific research program, the auxiliary qualification of dual processes of decision making is not ad hoc. The hypothesis of dual processes stems from the same principle explaining the possibility of a single process, the rational choice principle.

Figures 1 and 2 sum up the rationality-based dual process theory. They show, basically, the bioeconomics of decision making.

Fig. 1 The bioeconomics of the cognitive economy

Fig. 2 The bioeconomics of the physiological economy

Figure 1 demonstrates that DMs start with experience, the operation of the deliberative System 2. After a few encounters, DMs infer a rule-of-thumb or a stereotype, i.e., a heuristic, which is stored in the intuitive System 1. The transition from System 2 to System 1 expresses the cognitive economy, which is the bioeconomics of minimizing the cost of engaging System 2 on a case-by-case examination. However, the outcomes, namely, habits-as-heuristics, will occasionally become slipups, engendering biases, i.e., “cognitive illusions.” When a cognitive illusion arises, it amounts to a setback, but only in hindsight. It is important to note that, ex ante, habits-as-heuristics are efficient on average, i.e., the DM continues to adopt such heuristics after suffering from occasional cognitive illusions. Further, the more-or-less stable heuristics may suffer from a shock, as discussed below.

Figure 2 demonstrates the bioeconomics of routines. As with the bioeconomics of heuristics, DMs start with experience, the operation of the deliberative System 2. After a few encounters, DMs infer a good-enough routine stored in the intuitive System 1. The transition from System 2 to System 1 expresses the physiological economy, i.e., the bioeconomics of minimizing the cost of engaging System 2 regarding the merits of each behavior. However, the outcomes, namely, habits-as-routines, will occasionally become slipups, engendering deviations, i.e., “physiological malfunctions.” When a physiological malfunction emerges, it amounts to a mistake, but only in hindsight. It is important to note that, ex ante, habits-as-routines are efficient on average, i.e., the DM continues to adopt such routines even after suffering from occasional physiological malfunctions. In addition, the more-or-less stable routines may suffer from a shock, as discussed below.

The cognitive economy

Habits-as-heuristics

This section reviews four well-known behavioral biases that are not usually grouped together as the outcome of habits-as-heuristics, the working of the cognitive economy. The first is the preference reversal phenomenon, articulated by Lichtenstein and Slovic (1971) and Lindman (1971) in the psychological literature. Facing two gambles, DMs tend to choose the option with the least risk over the option with the greater monetary value. But when asked to place prices on the two options, they reverse their preferences: they place a higher price on the riskier option with the greater monetary value. It seems that DMs ignore the risk dimension when they value options, falling into the habit-as-heuristic of valuing options according to their face monetary reward.

The second illustration of habits-as-heuristics is the famous experiment regarding two dictionaries (Hsee et al., 1999). DMs place a greater price on a dictionary of 10,000 entries than on a dictionary of 20,000 entries whose cover is somewhat torn. But once the two are presented jointly, DMs reverse their preference. For Hsee, the joint presentation of the dictionaries uncovers a salient attribute that, otherwise, would have been missed in the separate presentation. Kahneman (2011, pp. 353–362) offers a rational choice explanation, but without calling it “rational.” Once the experimenter jointly presents the two dictionaries, DMs suspend the intuitive System 1 and start to engage the deliberative System 2. Participants start to notice what matters: the number of entries of each dictionary. When the options are presented separately, DMs engage only System 1: they judge the options by the cover.

The third example of habits-as-heuristics is the Wason selection task (Wason & Shapiro, 1971). The task is usually touted as an example of non-rational thinking (e.g., Johnson-Laird, 1999; Manktelow, 1999): only about 10% of participants offer the rational answer. However, a detailed examination of the Wason selection task should lead us to conclude that it rather expresses rational judgment. Following Margolis (2000), the experimenter’s question misleads DMs into automatically selecting the cards prompted by the question. The picking of already-mentioned cards involves less mental effort than thinking of alternative cards.

This matches another explanation many authors have proposed (e.g., Barkow et al., 1992). Namely, when presented with everyday examples as the content—e.g., the drinking-age regulations in a city or state—DMs are successful in solving the Wason selection task. This buttresses the rational choice explanation: the ability displayed in the drinking-age experiment was missing in the abstract version of the Wason selection task because DMs can perform logical reasoning when an example from everyday life lowers the cost of information processing.

The fourth illustration of habits-as-heuristics is the over-inference problem. The iconic example of the problem—known also as base rate neglect—is the hit-and-run accident committed by a taxi at night. Most participants in a questionnaire opined that the likelihood that the culprit is a blue taxi is close to 80%, following the witness’s 80% reliability while ignoring that the base rate of blue taxis in town is only 15%. Tversky and Kahneman initially supposed that DMs “automatically” ignore the base rate. They modified their position (Tversky & Kahneman, 1982) in light of the work of Ajzen (1977). Ajzen distinguishes between two kinds of base rate: the “incidental base rate,” which is a mere statistic, and the “causal base rate,” one that the DMs may use to construct a plausible theory of causality. Once the experimenters told the participants the causal base rate, the participants to a great extent took the base rate into consideration.
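For readers who want the arithmetic, Bayes’ rule with these numbers (15% of taxis are blue, the witness is correct 80% of the time) gives:

\[
P(\text{blue}\mid \text{witness says blue})
= \frac{0.80 \times 0.15}{0.80 \times 0.15 + 0.20 \times 0.85}
= \frac{0.12}{0.29} \approx 0.41.
\]

That is, the correct posterior is roughly 41%, not the 80% most participants report; the gap is the measure of the base rate neglect.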

Cognitive illusions

There are many other examples of habits-as-heuristics which produce cognitive illusions (see Kahneman, 2011, pp. 19–265). To focus on the four presented above, let us start with preference reversal. DMs generally choose the safe gamble. However, as mentioned above, when the experimenter asks about the monetary valuation of the different gambles, the participants reverse their choices. DMs place a higher value on the gamble with the greater face value, ignoring the risk dimension.

While such a reversal is a cognitive illusion, the heuristic of paying attention to one dimension over another—money over risk in this example—rather expresses rational choice. Given the quest after cognitive ease (cognitive economy), the experimenter’s question about monetary valuation invokes the heuristic of focusing only on the monetary dimension, ignoring the risk dimension. The heuristic has led the DMs to a cognitive illusion, a slipup. They ignored the true valuation of the two options as revealed when they were asked about their actual choice, and focused only on what the question invokes, the monetary dimension.

Hsee’s two-dictionaries experiment also amounts to a cognitive illusion. The appearance, i.e., whether the cover is torn or not, prompted the heuristic that appearances matter when one cannot use an alternative benchmark, such as how to evaluate the number of entries. The heuristic “deceived” the participants, thereby engendering the slipup.

Similarly, we can re-cast the Wason selection task as a cognitive illusion. In the task, the participants appear to have been “primed” to provide the wrong answer by the available words “red” and “4.” To think abstractly simply involves a cognitive cost that cannot be justified by the expected benefit.

Likewise, the base rate neglect is another example of cognitive illusions. It is an example of a slipup, showing how DMs dramatically fail to undertake Bayesian reasoning, where the cognitive cost is expensive relative to expected benefit.

The bioeconomics of habits-as-heuristics

System 2 and System 1

Kahneman’s “mental economy” can be called “bioeconomics”, as it amounts to the rational choice explanation of habits-as-heuristics and, as a corollary, cognitive illusions. Such bioeconomics uncovers a complementarity between the intuitive System 1 and the deliberative System 2 as depicted by dual process theory. The DM allocates tasks between the dual systems according to rational choice.

DMs find it justifiable to ignore the deliberative System 2 and to engage instead the intuitive System 1 when:

  i) the benefit of finding the truth is insignificant—relative to

  ii) the cost of finding the truth, which is high.

However, when the benefit is significant relative to the cost, such as when related to a job interview, the DM engages the deliberative System 2. Thus, rational choice is the basis of the DM’s judgment on whether to engage or suspend the intuitive System 1, the use of habits-as-heuristics.
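The allocation rule can be condensed into a short, purely illustrative Python sketch; the function name, arguments, and numerical stakes are hypothetical and serve only to display the structure of the cost–benefit comparison.

    def choose_process(benefit_of_truth, cost_of_truth):
        """Engage the deliberative System 2 only when the expected benefit of
        finding the truth justifies the cognitive cost; otherwise rely on the
        habit-based, intuitive System 1."""
        if benefit_of_truth > cost_of_truth:
            return "System 2 (deliberation)"
        return "System 1 (habit/heuristic)"

    # A trivial stake (which highway restaurant to trust) versus a high stake
    # (preparing for a job interview), holding the cognitive cost fixed:
    print(choose_process(benefit_of_truth=1.0, cost_of_truth=5.0))   # -> System 1
    print(choose_process(benefit_of_truth=50.0, cost_of_truth=5.0))  # -> System 2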

The Simon-Gigerenzer approach

However, what is the origin of habits-as-heuristics according to the Simon-Gigerenzer approach identified above? This approach is caught between two fronts—which complicates pinning down where the approach stands. On the first front, rational choice cannot be the origin of heuristics; on the second front, the Tversky/Kahneman approach cannot explain the behavioral biases (cognitive illusions) as deviations from rational choice.

On the first front, for the Simon-Gigerenzer approach, heuristics are modes of being expressing how the organism functions. Heuristics enable organisms to mesh with their ecological niche. Thus, DMs cannot select heuristics for the same reason they cannot select their own height, eye color, gender, or innate skills in running, mathematical acumen, and love of music. The DM is ready to improve such traits and skills or even replace them with alternatives—but only if there is a new ecological niche that renders the old heuristics obsolete. And the criterion of being obsolete is viability or functionality to sustain a specified level of wellbeing. The criterion can never be the finding of some new optimum à la rational choice.

On the second front, for the Simon-Gigerenzer approach, heuristics and whatever slipups they engender cannot be deviations from rationality. Rational options cannot be identified in the first place. The observed slipups, in fact, are exaggerated, expressing rather artificial laboratory setups, poor experimental designs, impoverished storytelling, or the use of language that misleads the participants.

There is a range of phenomena for which the Simon-Gigerenzer approach might be relevant. As argued elsewhere (Khalil, 2022), one’s mode of being is relevant to bonding: how one relates to one’s family and nation, ideology, and aspiration. Also, as argued elsewhere (Khalil, 2021), one’s mode of being is relevant to beliefs such as convictions and perspectives as in Rubin’s vase. Many non-standard economists have identified the limits of the neoclassical “bounded rationality”, namely, it lacks the toolkit to account for bonding, ideology, and visions (e.g., Denzau & North, 1994).

However, this paper focuses on habits in the sense of generalizations and routines. The Simon-Gigerenzer approach’s ruling out of the relevance of rational choice with regard to generalizations and routines leads to many anomalies. One of them, as discussed above, is the failure to explain the fluidity of, or link between, System 1 and System 2: how DMs may decide, if expected benefits and costs change, to suspend the intuitive System 1 or, as a result of sufficient experience, to suspend the deliberative System 2.

Cognitive illusion versus cognitive collapse

The “Shock” and cognitive miserliness

As Fig. 1 demonstrates, the severity of cognitive illusions, when it amounts to a “shock”, eventually leads to a breakdown, i.e., cognitive collapse. We may define the “shock” as the case when the expected cost starts to exceed the expected benefit of the heuristic. The resulting cognitive collapse prompts the DM to deliberate, i.e., to engage System 2 with respect to the heuristics at hand. The old heuristics can no longer act as the second-best optimal technology. As the DM repeatedly engages the deliberative System 2, and if the new set of constraints settles and becomes stable, the DM would opt for a new appropriate heuristic economizing on cognitive costs.

There is a need to distinguish between cognitive illusions and cognitive collapse. To call both “cognitive miserliness” (Fiske & Taylor, 1984), in the sense of cognitive economy, suggests a conflation. Cognitive illusions are the expected cost of an otherwise optimal cognitive economy and, hence, are tolerated. Cognitive collapse is a signal that the cost has exceeded the benefit and, hence, that the DM must abandon the heuristic under focus.
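The dividing line can be stated compactly (again, a stylized illustration): the DM tolerates slipups and retains the heuristic as long as

\[
\mathbb{E}[\text{cost of slipups}] \;\le\; \mathbb{E}[\text{deliberation cost saved}],
\]

and faces a shock—hence cognitive collapse and abandonment of the heuristic—once the inequality is reversed.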

Let us restrict the term “cognitive miser” to the case of excessive cognitive economy, as in the case of obstinate DMs who refuse to change their heuristics despite their collapse. That is, the obstinate DM clings to a heuristic that is no longer rational as a second-best technique. Such a cognitive miser violates the complementarity of the intuitive System 1 and the deliberative System 2. Instead, he or she allows the intuitive System 1 to dominate, unchecked, the choice in such situations.

Who is the prime mover—System 1 or System 2?

Considering the analysis of cognitive breakdown, the primitive or prime mover must be the deliberative System 2. This is contrary to the supposition in the received literature. Namely, it supposes that the prime mover is the intuitive System 1. This supposition may explain why many researchers—e.g., Kahneman, Tversky, and even their critics such as Gigerenzer and collaborators—conceive either heuristics or their slipups (cognitive illusions) as the entry-point of analysis.

If it is any indication, the literature names the intuitive system “System 1”—suggesting that it trumps, at first approximation, the deliberative System 2. In contrast, the proposed rationality-based dual process theory suggests that the deliberative System 2 trumps the intuitive System 1. The starting point is the DM: how the DM experiences the environment daily and, consequently, how the DM makes inferences. With greater experience, the inferences harden into habits-as-heuristics.

Pennycook et al. (2015) offer a model that implicitly recognizes the primacy of the DM, i.e., the deliberative System 2. In their model, people are ready to de-couple from a heuristic when they recognize it to be dysfunctional, i.e., expressing cognitive collapse. The model further identifies the case of rationalization, i.e., self-deception, where people may engage in justifying what has proven to be a broken-down heuristic. (They also distinguish such a collapse from what this paper calls “slipups,” the tolerated cognitive illusions or biases.) While Pennycook et al. confirm the proposed rationality-based dual process theory, they fall short of explicitly endorsing the criticality of rational choice with regard to heuristics.

The result of the experiment of Raoelison et al. (2020) also strongly supports the supposition that the deliberative System 2 is the entry-point. They start with the common supposition that the prime mover is the intuitive System 1, which predicts that people with higher cognitive capacity are more capable of noticing and correcting biases or errors. However, contrary to the prediction, they find that people with higher cognitive capacity are better at having correct intuitions. This result will not be surprising if one starts with the proposed theory, viz., the origin of the intuitive System 1 is the deliberative System 2.

The physiological economy

Explaining habits-as-routines

When DMs follow routines, they neglect information pertaining to the particular act. Such neglect seems non-rational given that the information is freely available and the outcome may engender a behavioral malfunction. So why would DMs adopt routines that could result in physiological malfunctions?

This question is wrongly posed, according to the routine-as-capability approach (e.g., Felin & Foss, 2012). It is so for the same reason that the Simon-Gigerenzer approach finds the question about the non-rationality of heuristics wrongly posed. For the routine-as-capability approach, routines are “ways of being.” They are the tissue binding the DM with his or her ecological niche, as if the DM and the environment amount to a single unit, where the routine expresses their union. The organism cannot select the routine on rational grounds. Hence, for such an approach, it does not make sense to ask why the DM adopts seemingly non-rational routines.

Stated differently, the question assumes that the DM first assesses the information and then undertakes acts to manipulate the environment. For the routine-as-capability approach, the DM first acts as embedded in the environment. The act generates an entity consisting of the organism and its environment that supposedly contextualizes the information. For such an approach, the information regarding facts about the environment is, at first approximation, ultimately internal—generated by the union of the organism and its environment.

As with the critique of the Simon-Gigerenzer approach regarding heuristics as defined here, the routine-as-capability approach is inappropriate over the range of routines where the DM stands as an external actor vis-à-vis the ecological niche. The DM is not embedded in his or her niche insofar as the routine is a second-best technology. As such, it is efficient on average, whereas deliberation over each act would produce a first-best that is not really best but suboptimal, given that deliberation involves physiological cost.

The grounding of routines on rational choice amounts to viewing them as second-best techniques, i.e., as the outcome of the operation of the intuitive System 1. This allows us to undertake a bioeconomic analysis of occasional slipups of routines, namely, what was called above “physiological malfunctions.” Moreover, the proposed rationality-based dual process theory permits us to analyze physiological malfunctions along the same line of analysis of cognitive illusions, i.e., as occasionally failed heuristics. As such, we may model physiological malfunctions as the slipups of the physiological economy—paralleling cognitive illusions as the slipups of the cognitive economy.

Physiological malfunctions

Pavlov’s dog is a good illustration of the class of physiological malfunctions. While the experiment has become proverbial, it is worth recasting it in light of the physiological economy. Upon hearing the bell that is associated with a meal offering, it is cost effective for the digestive system to salivate. If the experimenter withholds food on one occasion, it is still efficient for the digestive system to salivate: although salivating without food is a physiological malfunction, the routine is second-best, i.e., efficient on average.

Another illustration of the class of physiological malfunctions is the famous “Simon effect”: the location of the stimulus, even when it should not be relevant, influences how DMs perform the task. As the experiment of Simon and Rudell (1967) shows, when the experimenter informed DMs via the right ear to undertake a task on the left side, or vice versa, DMs exhibited significant reaction latency. However, the reaction time was normal (regardless of the age of the participants) if the task was in sync with the auditory input: both on the same side, whether left or right.

Simon (1969) himself offers the dominant explanation in the literature: it is natural for people to react in accordance with the physical source of the physiological input. This explanation amounts to the use of the rationality-based physiological economy. DMs intuitively want to touch the object on the right if the source of the physiological input is on the right, and to touch the object on the left if otherwise. This is the outcome of the intuitive System 1. There is a reaction latency, or other physiological malfunction, when there is interference. The interference gives rise to the slipup. The placement of the object in a position that is not in sync with the source of the physiological input is an occasional slipup of the habits-as-routines.

The Simon effect is an offshoot of the famous “Stroop Color and Word Test” (SCWT), or the “Stroop effect”, which does not involve a mismatch of location and physiological input as in the Simon effect, but a mismatch between two stimuli. People often take more time to read the word, say, “green” when it is written in red, or the word “yellow” written in blue. Obviously, the physiological economy involves a second-best routine: registering the actual color. Here, such a routine faces the slipup of mistaken reading, or reading at a slower pace than usual—i.e., a physiological malfunction.

We can also highlight salient examples of physiological malfunctions from everyday life. A housekeeper may move the salt from the right- to the left-hand cabinet permanently. Then, while cooking, when the cue is the need for salt, the housekeeper will continue to respond by reaching for the wrong cabinet for many days, if not weeks and months. Likewise, when one moves to a new house, the DM may find him- or herself walking or driving in the old direction for many days, if not weeks and months.

While the last cases prompt the DM to abandon a routine, the other cases do not. It behooves the DM to keep a routine, despite the occasional physiological malfunction, when the routine saves the physiological cost of case-by-case deliberation. The kept routine would be inelastic to pertinent and even freely available information, leading to the occasional physiological malfunctions that DMs tolerate.

Physiological malfunctions versus deleterious side-effects

Physiological malfunctions differ from what this paper calls “deleterious routines.” A routine is deleterious when it involves negative side-effects. For example, one may find a particular route to work second-best, i.e., efficient on average. Once a year, there might be an obstruction, such as construction or a flood, which makes the DM conclude ex post that it would have been better to have taken another route on that day. Still, the DM would keep the routine, ceteris paribus.

Meanwhile, the routine may entail a negative side-effect as well; say, the route involves a dirt road that calls for more frequent car-washing. As with expected physiological malfunctions, the DM must have already considered the deleterious side-effect of extra dirt when he or she adopted the routine. However, there is a slight difference between the two types of cost: the occasional physiological malfunction is a probabilistic cost, while the deleterious side-effect is a certain cost.

Take another example: the routine of cigarette smoking has deleterious side-effects, namely, the decline of the body’s ability to fight illnesses. This slightly differs from the probabilistic cost of physiological malfunctions—e.g., the likelihood of inadvertently starting a fire or causing bodily burns.

In one regard at least, it is helpful to keep the cost of slipups (physiological malfunctions) separate from the cost of deleterious side-effects. The separation allows us to identify the source of the abandonment of a routine: is abandonment prompted by greater awareness of deleterious side-effects, or by more severe physiological malfunctions leading to a shock?

The bioeconomics of habits-as-routines

The bioeconomics of action may explain the information inelasticity of habits-as-routines. Such bioeconomics, as in the case of the cognitive economy, is based on cost–benefit calculation.

Like habits-as-heuristics, which arise to economize on cognitive cost, habits-as-routines arise to economize on physiological cost. The routine is the outcome of the physiological economy: DMs adopt routines when the cost of computing the first-best action exceeds the benefit. For example, when a hot skillet touches the skin, the DM reacts immediately to minimize the damage. If the matter were left to deliberation, the damage would be greater, without additional benefit to justify the greater cost. Hence, the routinized response to a hot skillet, via the mechanics of bodily pain, enhances well-being.

Routinized responses also enhance well-being in other cases, e.g., when the coordination of a series of actions is costly. For instance, when the DM awakes in the morning and drives to work, the sequence of actions performed is well-settled into a routine. Therefore, the physiological economy assures, up to a limit, either a more significant benefit or, equivalently, a lower cost given the same benefit.

The DM adopts routines expressly to economize on physiological cost. Routines appear as a mundane, unreflective set of acts for the DM who has already adopted them. However, for the uninitiated, such as children taking their first steps, such adoption is challenging. Walking becomes a mundane, unreflective act once the child grows up, relegating the activity to a routine. Likewise, taking the first step is challenging for a DM who has lost the ability to walk after, say, a skiing accident. However, through the machinery of the physiological economy, what is challenging eventually becomes less so with practice, which leads to the adoption of a routine. The adopted routine allows the DM to put scarce physiological capabilities to other uses, facing new challenges requiring deliberative attention. Such an allocation of physiological cost between the deliberative System 2 on one hand, and the intuitive System 1 on the other, is the core of the bioeconomics of physiological economy. In such bioeconomics, the DM takes into consideration the expected cost of routines, namely, the possible appearance of physiological malfunctions.

Physiological malfunctions versus physiological collapse

The shock

As Fig. 2 shows, the shock leads to a breakdown, i.e., physiological collapse. This paper defines the “shock” as the dividing line separating physiological collapse from physiological malfunction. We have a breakdown when the expected cost of a routine exceeds its benefit. Consequently, the DM must suspend the routine and instead engage the deliberative System 2. Once the new set of constraints is settled and System 2 faces the same circumstances, the DM would notice that the decisions taken are repetitions of past decisions. Thus, it behooves the DM to turn the new behavioral pattern into a routine, engaging again the intuitive System 1.

It is possible that the DM clings to a routine that is no longer efficient out of anxiety. As in the case of the obstinate DM whose cognitive economy fails, the obstinate DM experiences a failed physiological economy. The DM fails to replace the inefficient routine—i.e., inefficient as a second-best pattern of behavior—with the deliberative System 2 in order, eventually, to find a new behavioral pattern.

We may call the anxiety-driven failure of the physiological economy “physiological miserliness.” Physiological miserliness parallels cognitive miserliness that characterizes the failed cognitive economy. Physiological miserliness can be literal as in the case of the behavior of misers, i.e., over-saving behavior. It also can involve excessive exercising, excessive attachment to rules, and so on, which can be summed up as “stiffness of will”. The anxiety can also generate contrary forms, such as succumbing to weakness of will. Examples of weakness of will include under-saving, under-exercising, reckless behavior, profligacy, and other disregards of efficient rules (Ibid.).

It is outside the scope of this paper to examine why anxiety may give rise to stiffness of will (physiological miserliness) as opposed to weakness of will (profligacy). Given the focus on stiffness of will, Fig. 2 highlights the difference between the adherence to a routine in the case of slipups and the adherence in the case of breakdowns. While the former is rational, the latter is non-rational. We need the rationality principle to delineate the two cases, at least conceptually. We also need the rationality principle to provide an endogenous account of the origin of routines: how after experience, the deliberative System 2 starts to yield to the intuitive System 1.

As for the neurological underpinning of the rationality principle, Glimcher and collaborators (see Glimcher & Fehr, 2013) study the neural underpinning of information processes that encode value, or what economists call “utility” or “benefit.” Glimcher was able to show the neural basis of the calculation of value in the face of risk, and how the brain discounts future returns as posited by standard economic models of inter-temporal allocation. Glimcher, a pioneer in the new field of neuroeconomics, shows that neural processes are structured to compute benefits and costs to advance the wellbeing of DMs, a key pillar of the standard rational choice approach.

Friston and collaborators developed Bayesian theory in new directions (Friston et al., 2010). They offer a theory of neural functioning, where the neural mechanism operates to minimize the free energy needed in computing decisions. The minimization of free energy is the other side of the coin of maximizing benefit—what textbook economics calls “duality theory”. Friston and collaborators use Bayesian computation models, where the probability assessment is based on the available information.
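For readers unfamiliar with the term, the textbook duality invoked here can be stated in its standard stylized form: under the usual regularity conditions, utility maximization subject to a budget and expenditure minimization subject to a utility target characterize the same optimum,

\[
\max_{x}\; u(x)\ \ \text{s.t.}\ \ p\cdot x \le m
\qquad\Longleftrightarrow\qquad
\min_{x}\; p\cdot x\ \ \text{s.t.}\ \ u(x) \ge \bar{u},
\]

where \(x\) is a bundle of goods, \(p\) the price vector, \(m\) the budget, and \(\bar{u}\) the utility attained in the maximization problem. Whether Friston’s free-energy minimization fits this template exactly is a further question, taken up next.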

It is still an open question whether Friston’s theory is a general theory of all kinds of decisions (see Bruineberg et al., 2021). For instance, it might not be capable of explaining aspiration, where the desire for distinction and achievement differs from the formation of heuristics and patterned behavior—a distinction well-recognized in the literature (e.g., Klein, 2018; Khalil et al., 2021). However, given that this paper focuses only on patterned behavior (routines), Friston’s theory is promising at least in accounting for the computation of predictive behavior that considers available information, a keystone of rational choice theory.

Conclusion

This paper proposes a rationality-based dual process theory. The proposed theory maintains that habits-as-heuristics and habits-as-routines are more-or-less optimal techniques: the former is part of the cognitive economy and the latter of the physiological economy. The paper shows the parallelism of the two economies, extending to the origin of the two kinds of habits, their slipups (cognitive illusions and physiological malfunctions), and their breakdowns (cognitive collapse and physiological collapse).