SELF-DECEPTION
If weakness of will is a pathology of agency, then it is natural to regard self-deception as a pathology of cognition. Self-deception is a species of motivated believing in which the cognition of a subject is driven by desire towards the embrace of some proposition—typically, "in the teeth of the evidence." Here we may think of the alcoholic, the terminal cancer patient, or the anorexic, who, even while in possession of compelling evidence of his condition, insists, sincerely, that it is just not so. Many investigators require that, more than this, the self-deceiver must be understood to bring about his deception intentionally and knowingly in pursuit of the doxastic embrace of some motivationally or affectively favored proposition. Were this so, self-deception would seem to involve the sort of deep or internal irrationality distinctive of weakness of will. For just as the weak-willed individual knowingly and intentionally acts against her judgment of what she takes herself to have best or sufficient reason to do, so the self-deceiver, on this picture of the phenomenon, knowingly violates her own norms or standards of reasoning—she comes to believe what she also believes there is insufficient reason to believe.
Producing a coherent account of how this is so has proved a vexing matter. Other investigators have argued that self-deception can be fully explicated without appeal to a subject's intentionally aiming to bring about her own deception against her current regard for the facts, and therefore without implicating this sort of deep irrationality. Notwithstanding these disputes, it seems clear that when we charge a subject with self-deception, we aim to offer both an explanation of how it is that a subject came to hold or retain a belief and a negative appraisal of the subject's belief-forming behavior.
In a quite literal way, the impetus behind the philosophical problem of self-deception springs from the force and puzzlement attached, in certain circumstances, to the question "How could he believe that?" We are all familiar with various unpleasant features of our cognitive lives, and there is no doubt that we do reason in ways that, as a matter of fact, violate epistemic norms that we endorse (the term 'epistemic,' meaning of or relating to knowledge, derives from the Greek epistēmē). The sources of such failures are many: we are subject to a profound confirmation bias, prone to be taken in by the vividness and salience of data (Nisbett and Ross 1980), forgetful and subject to fatigue, and so forth. Very plausibly, self-deception raises more pressing difficulties. In such cases, securing an answer to the question "How could he believe that?" compels us to reflect upon such issues as the nature of belief, doxastic agency, the unity of the self, and epistemic rationality and irrationality, among many others.
The Phenomenon
As suggested above, much controversy surrounds the effort to characterize the process of self-deception, the nature of the phenomenon itself, and the sort of irrationality characteristic of the phenomenon. Notwithstanding this disagreement, clear instances of what we call "self-deception" come readily to mind. The stock and shopworn example of the husband who, even though in possession of compelling evidence of his wife's infidelity, nonetheless insists upon her faithfulness is a case in point. Our husband may generate richly ornamented stories the apparent aim of which is to explain away what is, by our lights, dispositive evidence of his wife's affairs. He may focus upon the occasions on which his wife has displayed great solicitousness and affection towards him, and he may well regard these data as clear and compelling evidence of her continued love for him. Moreover, he may subject evidence that strongly points towards his wife's infidelity to sustained and withering critical scrutiny, while precipitately embracing data indicative of her continued faithfulness. In short, our hapless husband repeatedly searches for reassuring evidence and probes various hypotheses in a sustained and continuing fashion in order to arrive at and then to retain the favored belief against various threats. Core cases of self-deception would, then, appear to involve a subject engaging in strategies the aim of which is the embrace of some proposition(s).
Traditionalism about Self-Deception
How are we to characterize and explain such behavior? An approach to such cognitive misadventures that we can term "traditionalism" aims to assimilate the dynamics of self-deception to those of interpersonal deception. As Mary Haight writes: "[I]f A deceives B, then for some proposition(s) p, A knows that p; and either A keeps or helps to keep B from knowing that p, or A makes or helps to make B believe that ∼p, or both" (1980, p. 8). These lexical considerations (Mele 2001) for the traditionalist view, then, make it perfectly natural to characterize our self-deceived husband as knowing or believing that his wife is unfaithful and as aiming and ultimately succeeding in bringing it about that he comes to believe that she is, in fact, a loyal spouse. On such a model, the husband is not simply credulous, not merely stupid or epistemically careless; nor is he simply seduced by the salience or vividness of various data, or taken in by the confirmation bias. He is not, then, in the view of the traditionalist, merely a wishful thinker or believer. He aims at his own deception; he works hard to deceive himself. How else, we may ask ourselves, can he possibly believe that his wife is faithful? Why does he engage in such byzantine strategies, the apparent point of which is to avoid the implications of the evidence? Because he knows, or at the least strongly suspects, the truth—that she is unfaithful.
A typical traditionalist, then, will hold that our husband:
1. Believes that his wife is unfaithful (or believes that he ought rationally to believe that his wife is unfaithful).
2. Engages in intentional activity the aim of which is the acquisition of the belief that his wife is faithful.
3. Holds, at least for a time, both the belief adverted to in (1) and the belief adverted to in (2).
Donald Davidson, in an extremely influential essay titled "Deception and Division," embraced these three conditions. As he puts it in a much-cited passage:
The acquisition of a belief will make for self-deception only under the following conditions: A has evidence on the basis of which he believes that p is more apt to be true than its negation; the thought that p, or the thought that he ought rationally to believe that p, motivates A to act in such a way as to cause himself to believe the negation of p. The action involved may be no more than an intentional turning away from the evidence in favor of p, or it may involve the active search for evidence against p. All that self-deception demands of the action is that the motive originate in a belief that p is true … and that the action be performed with the intention of producing belief in the negation of p. Finally, and this is what makes self-deception a problem, the state that motivates the self-deception and the state it produces co-exist. (1985, p. 145)
It should be noted that Davidson's rationale for a contradictory or inconsistent belief requirement for self-deception is not—as it is on some accounts of self-deception (Demos 1960)—that the self-deceiver literally lies to himself. Davidson takes, very plausibly, the project of lying to oneself to require a self-defeating intention. Rather, Davidson takes the philosophical problem of self-deception to be a matter of our being forced to come to grips with a continuing and synchronous irrational or inconsistent state—a state he takes to be distinctive of self-deception. Davidson characterizes self-deception as a condition brought about by my intentionally causing myself to believe against what I also believe and continue to believe to be the weight of the evidence. As a result, and not surprisingly, Davidson argues that the characterization of such a state requires the postulation of mental partitions, divisions in the mind.
the puzzles of self-deception
Still, whatever the attractions of such an account, it is difficult to fathom just how this sort of mental gymnastics can be carried off. Immanuel Kant was, for example, clearly puzzled by the looming difficulties here; as he put it in his Metaphysical Principles of the Virtues, "Since a second person is required when one intends to deceive, deceiving oneself deliberately seems in itself to contain a contradiction" (cited in Darwall 1988, p. 411).
In a bit more detail, traditionalism has been taken by many to give rise to two difficulties. First, there is what Alfred Mele has termed the "static puzzle" (1987, 2001). The very state of mind of the self-deceiver might strike us as deeply puzzling. How can it be that the self-deceiver believes that p and also believes that not-p? There is no doubt, of course, that human beings often harbor inconsistent beliefs, where one of the beliefs is repressed or otherwise not currently or fully available to a subject's awareness. What is harder to understand is a case in which both such beliefs are fully available to a subject.
Second, such an account makes for a strategic puzzle. Annette Barnes puts a version of the difficulty so: if I am to be self-deceived, I must "as deceived, be taken in by a strategy that, as deceiver I know to be deceitful" (1997, p. 18). The self-deceiver might well, as Davidson suggests, intentionally turn his attention away from evidence supportive of the threatening belief and seek out evidence of the favored belief with the aim of inducing in himself the latter. But if this plan is to succeed, it seems that the self-deceiver must be wholly taken in by his own ruse. That is, a condition of success of such a project would appear to be that the conviction that he ought rationally to believe the epistemically sanctioned proposition be exiled or come to be regarded as epistemically undermined before he can come to accept that his favored proposition is true. This is, however, very near to the sort of gambit recommended by Blaise Pascal in order to induce belief in the existence of God. There is no doubt that we can intentionally bring about conditions the result of which is that we come to believe what, at the time we brought about those conditions, we took ourselves to have no good reason to believe. This, however, does not appear to make for the deep and synchronous irrationality stalked by Davidson.
It should be noted that more modest traditionalists, while rejecting the contradictory belief requirement, have argued that the cognitive biasing in self-deception must be intentional (Talbott 1995), or that the self-deceiver need only actively avoid troubling recalcitrant evidence (Bach 1997).
Notwithstanding the difficulties to which traditionalism about self-deception has been alleged to be prey, its attractions and allure are clear. It works admirably to capture some very powerful vernacular (and philosophical) intuitions about the phenomenon. Traditionalism would sharply distinguish self-deception from putatively less puzzling phenomena such as wishful believing, for the self-deceiver knowingly and actively brings about her deception, while the wishful believer is merely duped. Insofar as the self-deceiver succeeds in getting herself to believe what she also believes is not so, she would appear to be guilty of a profound form of epistemic irrationality. In addition, the sort of doxastic tension, instability, and fragility the traditionalist aims to describe has seemed to many the hallmark of self-deception. Lastly, insofar as the self-deceiver intentionally and knowingly brings about her deception, she is clearly blameworthy.
Predictably, perhaps, the modeling of self-deception upon interpersonal deception has tended to provoke three sorts of response. The first is outright skepticism about the phenomenon. As Mary Haight puts it: "[S]elf-deception is literally a paradox. Therefore it cannot happen" (1980, p. 73). The second response is a reconceptualization of self-deception as less a purely cognitive or doxastic affair and more an existential (or "actional") matter. Herbert Fingarette's pioneering work, Self-Deception (1969), is a notable example of this tack. He writes of the self-deceiver that he "is one who is in some way engaged in the world but who disavows the engagement, who will not acknowledge it even to himself as his. That is, self-deception turns upon the personal identity one accepts rather than the beliefs one has" (p. 66). In this respect, Fingarette's is a powerful development and reworking of themes from Jean-Paul Sartre's famous discussion of "bad faith." Finally, in the third response one can cleave to the interpersonal model in literal fashion but seek to avoid the difficulties via a very robust partitioning or homuncularist account of self-deception. This is David Pears's account. He writes that cases of self-deception are to be explicated by appeal to a "subsystem" or homunculus that "is built up around the wish for the irrational belief [e.g. the husband's belief that his wife is faithful]. Although it is a separate centre of agency within the whole person, it is, from its own point of view, entirely rational. It wants the main system to form the irrational belief, and is aware that it will not form it, if the [belief that there is no good reason to so believe] is allowed to intervene. So with perfect rationality it stops its intervention" (1984, p. 87; see also Pears 1986). Mark Johnston (1988) develops a series of powerful objections (e.g., "Why should the deceiving subsystem be interested in the deception?" [p. 64]) to homuncular explanations of self-deception.
Deflationist Accounts of Self-Deception
A second family of accounts, "deflationism," aims to circumvent many of the difficulties the traditionalist regards as fundamental to the posing of the problem of self-deception. Alfred Mele, Mark Johnston, and Annette Barnes have all developed noteworthy deflationist accounts. According to deflationists, self-deception is a matter of coming to believe that p as a consequence of biased cognitive processing that is itself the product of the various motivational states of the subject. Such accounts very often take their cue from a rejection of the lexical considerations in favor of traditionalism (Mele 1987, 1997, 2001; Barnes 1997; Johnston 1988). So, for example, it is plausibly argued that there are many clear cases of interpersonal deception that involve neither the deceiver's knowledge of the proposition the deceived comes to believe, nor intentional deception. But if this is so, there is no obvious reason to require these conditions when it comes to the characterization and, ultimately, the explanation of self-deception. Rather, if, for example, the process of deceiving oneself must be understood to be mediated by the subject's intention to come to believe the favored and epistemically suspect proposition, this must be established by appealing to the fact that an explanation of particular features of the phenomenon itself requires such intentional activity. This is what deflationists deny. Core cases of self-deception, it is insisted, can be fully explained without appeal to the psychological exotica characteristic of many versions of traditionalism.
Alfred Mele's is the most influential of deflationist accounts. According to Mele, the following conditions are jointly sufficient for a subject's entering self-deception in acquiring a belief that p.
1. The belief that p which S acquires is false.
2. S treats data relevant, or at least seemingly relevant, to the truth value of p in a motivationally biased way.
3. This biased treatment is a nondeviant case of S's acquiring the belief that p.
4. The body of data possessed by S at the time provides greater warrant for not-p than for p.
(2001, p. 51; see also Mele 1987, p. 127)
The account is notable for what it does not include. There is no requirement that the subject must intentionally bring about his deception, nor is there a contradictory belief requirement. It should be noted, as well, that the motivational states mentioned in (2) will typically be desires for states of affairs; for example, our husband's desire that his wife be faithful. This is to be distinguished from familiar traditionalist accounts according to which our husband not only desires that his wife be faithful but, in addition, desires that he believe (or come to believe) that his wife is faithful; it is by virtue of the possession of this latter desire that, by the lights of the traditionalist, the husband comes to deceive himself. (Dana Nelkin [2002] has argued that, on pain of counting cases that do not involve self-deception as self-deception, the deflationist, like the traditionalist, must appeal to a subject's desire to believe.)
Mele, in particular, has emphasized the ways in which the motivational states of a subject can harness various sources of cognitive bias. Our husband's desire that his wife be faithful may trigger positive misinterpretation of data, negative misinterpretation of data, and selective evidence gathering and attention. Moreover, familiar "cold" or unmotivated sources of bias may also be triggered by motivation. That our husband desperately wants his wife to be loyal may make data indicative of her faithfulness more vivid as well as more salient. (We do, after all, tend to think about the objects of our desires.) Additionally, it seems clear that motivation will influence the selection of which hypotheses we begin testing with and so may trigger the confirmation bias.
difficulties for deflationism
Needless to say, it has been argued that various features of core cases of self-deception render the deflationist account implausible. William Talbott (1995), for example, has argued not only that intentional self-deception is possible in a single coherent self but also that we must appeal to an agent's intention to bias her cognition in favor of a particular proposition, regardless of the truth of that proposition, if we are to explain various distinctive features of the phenomenon.
First, the process of self-deception might be regarded as too complex, too light-fingered and strategic to be the result of a non-intentional mechanism or process. Indeed, in core cases of self-deception—cases like that of our husband—the subject explains away just what needs to be explained away, he searches for just the evidence he needs in order to come to believe the favored proposition, he does not look just where he must not look, and so forth. This is just the sort of behavior characteristic of means-end rationality, and so of intentional behavior. Moreover, if the processes mediating self-deception are nonintentional, if such processes are "launched" as a simple result of our inhabiting various motivational states, why is it that human beings do not invariably bias their cognition in the direction of motivationally favored propositions? Happily, we do not always become self-deceived that p when we powerfully desire that p. Self-deception is in this sense "selective." Again, it would seem that an extremely plausible explanation of why it is that I do come to bias my cognition when I do is that I intend to do so. (It is to be emphasized that Talbott takes our self-deceptive intentions to bias our cognition to be unconscious intentions. Annette Barnes [1997] and Ariela Lazar [1999] have developed a number of powerful objections to the notion that unconscious intentions play a crucial role in the explanation of self-deception.)
facing the question: "p or not-p?"
Does the deflationist have the resources to respond to these difficulties? Much recent discussion of these issues has drawn on the social psychological investigation of lay hypothesis testing. Consider the task of any hypothesis tester—including the prospective self-deceiver. He faces questions of the form: "p or not-p?" The effort to settle any such question will involve costs to the agent in the form of time and energy spent in the task of hypothesis testing. What is central to this "pragmatic" account of hypothesis testing is another sort of cost involved in the settling of such questions: the cost of anticipated errors (Friedrich 1993; Trope and Liberman 1996). In aiming to settle a question, a subject aims to end her uncertainty, to reach her "confidence threshold," at which point hypothesis testing ends. As such, there will be costs associated with settling the question in favor of p when p is false (false positives), and costs associated with settling the question in favor of not-p when p is true (false negatives). In brief, what is crucial to this account is that, with regard to many such questions, the costs associated with such errors will be asymmetric rather than symmetric. As such, there will be what James Friedrich calls a "primary error," an error that the subject is preponderantly motivated to avoid. This error, not surprisingly, is fixed by the values, aims, and interests of the cognizer. Such asymmetric error costs, in turn, fix asymmetric confidence thresholds. The result is biased hypothesis testing and the striking appearance of intentional guidance toward the doxastic embrace of a favored proposition. As Friedrich (1993) puts it: "Lay hypothesis testers are always motivated by accuracy, in the sense that they want to detect and minimize particularly costly errors" (p. 357).
Consider the case of our husband. He must settle the question, "Does she or doesn't she?" His primary error is fixed by his desires and interests. As such, we can easily imagine that his primary error, the error he is most powerfully motivated to avoid, is the error of believing that his wife is unfaithful when she is not. This, then, generates asymmetric confidence thresholds. As a result, he will demand powerful and compelling evidence if he is to accept that she is unfaithful, while requiring relatively little data to accept that she is faithful. As this is so, the model predicts that our husband will subject data suggestive of her infidelity to powerful critical scrutiny whereas he accepts data suggestive of her fidelity without serious investigation. The account promises a nonintentional explanation of the apparently strategic behavior of core cases of self-deception. It should not be forgotten, of course, that hypothesis testing is typically an amalgam of the intentional and non-intentional. Any hypothesis tester who faces the question "p or not-p?" does aim to settle that question. She knows, as well, of the means of which she must avail herself (seeking evidence, asking questions of those "in the know," etc.) if she is to resolve her uncertainty. So the issue, it seems, is not whether the self-deceiver engages in any intentional behavior in coming to believe as she does. Rather, the issue is whether she must be understood to possess an intention to settle her question in some particular direction.
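The threshold mechanism can be made concrete with a toy model. The sketch below is merely illustrative: the sequential Bayesian updating, the function name test_hypothesis, and all of the numbers are assumptions introduced here for exposition, not Friedrich's or Trope and Liberman's own formalism. It shows how one and the same stream of evidence can lead a tester with symmetric confidence thresholds to accept not-p while leading a tester with the husband's asymmetric thresholds to stop early and accept p, though neither harbors any intention to settle the question in a particular direction.

# A toy model of pragmatic hypothesis testing with confidence thresholds.
# All names and numbers are hypothetical illustrations.

def test_hypothesis(evidence, prior, accept_p, accept_not_p):
    """Update P(p) on each datum; stop once a confidence threshold is met.

    `evidence` is a sequence of likelihood ratios P(datum|p) / P(datum|not-p);
    `accept_p` and `accept_not_p` are the confidence levels required to
    settle the question in favor of p and of not-p, respectively.
    """
    belief = prior
    for likelihood_ratio in evidence:
        odds = (belief / (1.0 - belief)) * likelihood_ratio  # Bayes, in odds form
        belief = odds / (1.0 + odds)
        if belief >= accept_p:
            return ("accept p", belief)
        if (1.0 - belief) >= accept_not_p:
            return ("accept not-p", belief)
    return ("suspend judgment", belief)

# p = "my wife is faithful." The stream opens with reassuring data
# (solicitousness, affection) but, taken whole, strongly favors not-p.
data = [2.0, 1.5, 0.5, 0.4, 0.5, 0.3, 0.4]

# A disinterested tester demands 95% confidence either way; the damning
# evidence eventually arrives, and he accepts not-p.
print(test_hypothesis(data, prior=0.5, accept_p=0.95, accept_not_p=0.95))

# The husband's primary error is believing "unfaithful" when she is not:
# he demands 99.9% confidence to accept not-p but a mere 70% to accept p,
# and so stops after the second reassuring datum, accepting p.
print(test_hypothesis(data, prior=0.5, accept_p=0.70, accept_not_p=0.999))

Nothing in the model aims at the conclusion that p; the bias issues entirely from where the stopping thresholds sit, and the thresholds, in turn, are fixed by the costs the tester attaches to each kind of error.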
Moreover, it seems that the pragmatic account of hypothesis testing offers an explication of why it is that we do not invariably come self-deceptively to bias our cognition in favor of what it is that we anxiously desire to be so and, so, promises at least a tentative response to the selectivity problem. Again, whether an individual engages in biased hypothesis testing will be determined by the full range of the subject's interests. So, for example, Talbott (1995) notes that hurtling down a steep mountain road and hearing unfamiliar and frightening noises when I depress my car's brakes, I am not likely to come to believe that there is nothing amiss, even though there is no doubt that I very much want it to be the case that my brakes are just fine. Indeed, given that the error costs associated with believing my brakes are in working order when they are not are terrifically vivid, I may be likely to come to believe in biased fashion that my brakes are failing. (For skepticism concerning whether a pragmatic account of hypothesis testing holds an answer to the selectivity problem in its full generality see Jose Bermudez [2000].)
This last example indirectly raises the problem of "twisted" or "unwelcome" cases of self-deception (Mele 1999, 2001; Barnes 1997; Lazar 1999; Scott-Kakures 2000). It is indeed a striking fact that self-deception is not always a matter of coming—in biased fashion—to believe just what is desired (directly or indirectly) to be so. Indeed, overprotective parents come in strikingly biased ways to believe that their children are suffering from grave illnesses. Some subjects come to believe, on the basis of scant evidence, that their spouses are unfaithful. And, of course, we all have our favorite hypochondriac. Though the matter is much disputed, such cases would appear to constitute at least a presumptive difficulty for familiar accounts of self-deception. Such cases do, however, appear to be explicable by appeal to the pragmatic account of hypothesis testing. Consider: A busy executive, driving to work, is nearly hit by a careless motorist as she nears her freeway on-ramp. As a result, it may be that she comes, later in her commute, to conclude that many drivers she passes are careless and so constitute a danger. This is not surprising, as she has been made vividly aware of the very high cost of failing to conclude that x is a bad driver if he is. As a result of these asymmetric error costs and the associated asymmetric confidence thresholds, she is apt to demand overwhelming evidence before concluding that x is a safe driver, and she is likely to require very little evidence to bring her to the conclusion that x is a bad driver.
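Under the same illustrative assumptions, the twisted case requires no new machinery; only the location of the primary error changes. Reusing the hypothetical test_hypothesis sketch from above, with p now the unwelcome proposition that a given driver is dangerous:

# The executive's primary error is concluding a driver is safe when he is
# not, so her thresholds flip: scant evidence settles the question in favor
# of p ("x is a dangerous driver"), while overwhelming evidence would be
# needed for not-p. Numbers again hypothetical.
ambiguous_driving = [1.5, 0.9, 1.5]  # mildly mixed likelihood ratios
print(test_hypothesis(ambiguous_driving, prior=0.5,
                      accept_p=0.65, accept_not_p=0.999))  # -> accepts p

On mixed and scant evidence she settles on the unwelcome belief, just the pattern the pragmatic account predicts.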
According to the deflationist, then, the irrationality present in self-deception is not an irrationality that requires us to appeal to the traditionalist's psychological machinery. Indeed, the irrationality present in self-deception is an irrationality with which we are all very familiar—it is a matter of biased reasoning. In this sense, self-deception, according to the deflationist, is not the cognitive pathology it has historically been understood to be. Much of the appeal of traditionalism springs from the intuition that only some distinctive cognitive pathology could explain the self-deceiver's turning away from the proper aim of belief: truth. In this way, it may well be that, for the deflationist, the price of making self-deception appear more familiar is that what we are apt to regard as "normal" hypothesis testing will come to seem more suspect and less familiar.
See also Weakness of the Will.
Bibliography
Bach, Kent. "Thinking and Believing in Self-Deception." Behavioral and Brain Sciences 20 (1997): 105.
Barnes, Annette. Seeing Through Self-Deception. Cambridge, U.K.: Cambridge University Press, 1997.
Bermudez, Jose. "Self-Deception, Intentions and Contradictory Beliefs." Analysis 60 (2000): 309–319.
Darwall, Stephen. "Self-Deception, Autonomy, and Moral Constitution." In Perspectives on Self-Deception, edited by Amélie Oksenberg Rorty and Brian McLaughlin. Berkeley: University of California Press, 1988.
Davidson, Donald. "Deception and Division." In Actions and Events: Perspectives on the Philosophy of Donald Davidson, edited by Ernest LePore and Brian McLaughlin. Oxford: Basil Blackwell, 1985. Reprinted in his Problems of Rationality. Oxford: Oxford University Press, 2004.
Davidson, Donald. "Paradoxes of Irrationality." In Philosophical Essays on Freud, edited by Richard Wollheim and James Hopkins. Cambridge, U.K.: Cambridge University Press, 1982. Reprinted in his Problems of Rationality. Oxford: Oxford University Press, 2004.
Demos, Raphael. "Lying to Oneself." Journal of Philosophy 57 (1960): 588–595.
Fingarette, Herbert. Self-Deception. Berkeley: University of California Press, 1969. Reprint 2000.
Friedrich, James. "Primary Error Detection and Minimization (PEDMIN) Strategies in Social Cognition: A Reinterpretation of Confirmation Bias Phenomena." Psychological Review 100 (1993): 298–319.
Haight, Mary. A Study of Self-Deception. Atlantic Highlands, NJ: Humanities Press, 1980.
Holton, Richard. "What Is the Role of the Self in Self-Deception?" Proceedings of the Aristotelian Society 101 (2001): 53–69.
Johnston, Mark. "Self-Deception and the Nature of Mind." In Perspectives on Self-Deception, edited by Amélie Oksenberg Rorty and Brian McLaughlin. Berkeley: University of California Press, 1988.
Lazar, Ariela. "Deceiving Oneself or Self-Deceived? On the Formation of Beliefs 'Under the Influence'." Mind 108 (1999): 265–290.
Mele, Alfred R. Irrationality: An Essay on Akrasia, Self-Deception, and Self-Control. New York: Oxford University Press, 1987.
Mele, Alfred R. "Real Self-Deception." Behavioral and Brain Sciences 20 (1997): 90–102.
Mele, Alfred R. Self-Deception Unmasked. Princeton, NJ: Princeton University Press, 2001.
Nelkin, Dana. "Self-Deception, Motivation, and the Desire to Believe." Pacific Philosophical Quarterly 83 (2002): 384–406.
Nisbett, Richard, and Lee Ross. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice Hall, 1980.
Noordhof, Paul. "Self-Deception, Interpretation and Consciousness." Philosophy and Phenomenological Research 67 (2003): 75–100.
Pears, David. "The Goals and Strategies of Self-Deception." In The Multiple Self, edited by Jon Elster. Cambridge, U.K.: Cambridge University Press, 1986.
Pears, David. Motivated Irrationality. Oxford: Clarendon Press, 1984.
Sartre, Jean-Paul. Being and Nothingness. Translated by Hazel Barnes. New York: Philosophical Library, 1956.
Scott-Kakures, Dion. "Motivated Believing: Wishful and Unwelcome." Noûs 34 (2000): 348–375.
Talbott, William. "Intentional Self-Deception in a Single Coherent Self." Philosophy and Phenomenological Research 55 (1995): 27–74.
Trope, Yaacov, and Akiva Liberman. "Social Hypothesis Testing: Cognitive and Motivational Mechanisms." In Social Psychology: Handbook of Basic Principles, edited by E. Tory Higgins and Arie W. Kruglanski. New York: Guilford, 1996.
Dion Scott-Kakures (2005)