The problem is in the interpretation. Some see this as a form of self-deception in which we are unconscious of the degree to which our system of self-deception will readjust our thinking in the future. I doubt this. We project easily into the future because it expresses our current emotional state. Verbal predictions regarding our future mental states may be a relatively recent invention with limited selective effects. The relevant trade-offs are already built into our behavior whatever our verbal predictions.
Certain exceptions to this rule also stand out. I remember “courting” a Nigerian beauty at a very safe distance at a club in Amsterdam for three hours without ever having the courage to approach her. When she left, she threw me a look of withering contempt that burned right into my soul. If a social psychologist had been there to measure my “affective forecasting,” I doubt I would have guessed that twenty-five years later, the memory would still sear my consciousness. I believe I would have predicted that within a year or two the whole evening would have been completely forgotten.
ARE ALL BIASES DUE TO SELF-DECEPTION?
A hallmark of self-deception is bias. Mere computational error is not enough. Such error is often randomly distributed around the truth and shows no particular pattern. Self-deception produces biases, patterns where the data point in one direction—usually that of self-enhancement or self-justification. Are there biases that are real but not driven by self-deception? Of course there are.
Consider the following. Sounds that are coming toward us are perceived as closer and louder than they really are, while the opposite is true for receding sounds. This is a bias and it has a perfectly good explanation. Approaching objects are inherently more dangerous than are receding ones—hence the value of earlier and more acute detection. Perhaps the organism is measuring distances in Darwinian units rather than Newtonian ones. From that viewpoint, there is no bias.
Or consider another example. From the top of a tree, the drop to the ground looks much farther than does the same distance viewed from the ground up. There is no social component to these biases. You are directly saving yourself—not trying to manipulate the opinions of others. Many other errors have similarly innocent explanations. Some are simple optical illusions, holes in our sensory system that produce startling biases under particular conditions. Others are general rules that work well in most situations but fail badly in some.
Of course the errors we make are very numerous. In the words of one psychologist, we can fall short, overreach, skitter off the edge, miss by a mile, take our eyes off the prize, or throw the baby out with the bathwater. And we can exaggerate our accomplishments, diminish our defects, and act vice versa regarding those of others. Many of these may serve self-deceptive functions but not all. Sometimes when we take our eyes off the prize, we have only been momentarily distracted; sometimes when we miss by a mile we have only (badly) miscalculated. At other times, it is precisely our intention to throw out the baby with the bathwater or to miss by a mile, so in principle we have to scrutinize our biases to see which ones serve the usual goal of self-enhancement or, in some other fashion, deception of others, and which ones subserve the function of rational calculation in our direct self-interest.
DENIAL AND PROJECTION
Denial and projection are fundamental psychological processes—the deletion (or negation) of reality and the creation of new reality. The one virtually requires the other. Projecting reality may require deleting some, while denial tends to create a hole in reality that needs to be filled. For example, denial of personal malfeasance may by necessity require projection onto someone else. Once years ago while driving I took a corner too sharply and my one-year-old baby fell over in the backseat and started to cry. I heard myself harshly berating her nine-year-old sister (my stepdaughter) for not supporting her—as if she should know by now that I like to take my corners on two wheels. The very harshness of my voice served to signal that something was amiss. Surely the child’s responsibility in this misdemeanor was, at most, 10 percent, the remaining 90 percent lying with me, but since I was denying my own portion, she had to endure a tenfold increase in hers. It is as if there is a “responsibility equation” such that decrease of one portion must necessarily be matched by an increase elsewhere.
A rather more serious example of denial and projection concerns 9/11. Any major disaster has multiple causes and multiple responsible parties. There’s nothing wrong with assigning the lion’s share of cause and responsibility to Osama bin Laden and his men, but what about creating a larger picture that looks back over time and includes us (US citizens) in the model, not so much directly causing it as failing to prevent it? If we were capable of self-criticism, what would we admit to? How did we, however indirectly, contribute to this disaster? Surely through repeated inattention to airline safety (see Chapter 9) but also in our foreign policy.
This final admission is often hardest to make and is almost never made publicly, but sensible societies sometimes guide behavior after the fact in a useful way. It is easy for personal biases to affect one’s answer here, but I will set out what seem to me to be obvious questions. To wit, are there no legitimate grievances against the United States and its reckless and sometimes genocidal (Cambodia, Central America) foreign policy in the past fifty years? Is there any chance that our blind backing of Israel—like all our “client states,” right or wrong, you’re our boys—has unleashed some legitimate anger elsewhere, among, say, Palestinians, Lebanese, Syrians, and those who identify with them or with justice itself? In other words, is 9/11 a signal to us that perhaps we should look at our foreign policy more critically and from the viewpoint of multiple others, not just the usual favored few? One need not mention this in public but can start to make small adjustments in private. Again, the larger message is that exterminating one’s enemies is not the only useful counterresponse to their actions, but becomes so if one’s own responsibility is completely denied and self-criticism aborted.
DENIAL IS SELF-REINFORCING
Denial is also self-reinforcing—once you make that first denial, you tend to commit to it: you will deny, deny the denial, deny that, and so on. In the voice-recognition experiments, not only do deniers deny their own voice, they also deny the denial. A person decides that an article on which he is a coauthor is not fraudulent. To do so, he must deny the first wave of incoming evidence, as he duly does. Then comes the second wave. Cave in? Admit fault and cut his losses? Not too likely. Not when he can deny once more and perhaps cite new evidence in support of denial—evidence to which he becomes attached in the next round. He is doubling down at each turn—double or nothing—and as nothing is what he would have gotten at the very beginning, with no cost, he is tempted to justify each prior mistake by doubling down again. Denial leads to denial, with potential costs mounting at each turn.
In trading stock, the three most important rules are “cut your losses, cut your losses, and cut your losses.” This is difficult to do because there is natural resistance. Gains are nice; we like to enjoy them, but to do so, we must sell a stock after it has risen in value; only then can we enjoy the profit. By the same token, we are loss averse. Loss feels bad and is to be avoided. One way to avoid a loss is to hold the stock after it has fallen—the loss is only on paper and the stock may soon rebound. Of course, as it sinks lower, one may wish to hold it longer still. This style of trading eventually puts one in a most unenviable position: holding a portfolio of losers. Indeed, this is exactly what happens. People trading on their own tend to sell their good stocks, buy less good ones, and hold on to their bad ones. Instead, “cut your losses, cut your losses, cut your losses.”
YOUR AGGRESSION, MY SELF-DEFENSE
One of the most common cases of denial coupled with projection concerns aggression—who is responsible for the fight? By adding one earlier action by the other party, we can always push causality back one link, and memory is notoriously weak when it comes to chronological order.
An analogy can be found in animal species that have evolved to create the illusion that they are oriented 180 degrees in the opposite direction and are moving backward instead of forward. For example, a beetle has its very long antennae slung underneath its body so they protrude out the back end, creating the illusion of a head. When attacked, usually at the apparent “head” end (that is, the tail), it rushes straight forward—exactly the opposite of what is expected, helping it to escape. Likewise, there are fish with two large, false eyespots on the rear end of their body, creating the illusion that the head is located there. The fish feed at the bottom, moving slowly backward, but again, when attacked at the apparent “head” end, they take off rapidly in the opposite direction. What is notable here is that the opposite of the truth (180 degrees) is more plausible than a smaller deviation from the truth (say, a 20-degree difference in angle of motion). And so also in human arguments. Is this an unprovoked attack or a defensive response to an unprovoked attack? Is causation going in this direction, or 180 degrees opposite? “Mommy, he started it.” “Mommy, she did.”
COGNITIVE DISSONANCE AND SELF-JUSTIFICATION
Cognitive dissonance refers to an internal psychological contradiction that is experienced as a state of tension or discomfort ranging from minor pangs to deep anguish. Thus, people will often act to reduce cognitive dissonance. The individual is seen to hold two cognitions—ideas, attitudes, or beliefs—that are inconsistent: “Smoking will kill you, and I smoke two packs a day.” The contradiction could be resolved by stopping cigarettes or by rationalizing their use: “They relax me, and they prevent weight gain.” Most people jump to the latter task and start generating self-justification in the face of a much more difficult (if healthier) choice. But sometimes there is only one choice, because the cost has already been suffered: you can rationalize it or live with the truth.
Take a classic case. Subjects were split into two groups: one comprised people who had to endure a painful or embarrassing test to join a group, the other people who paid only a modest fee. Then each was asked to evaluate the group based on a tape of a group discussion arranged to be as dull and near-incoherent as possible. Those who suffered the higher cost evaluated the group more positively than did those who paid the small entry fee. And the effect is strong. The low-cost people rated the discussion as dull and worthless and the people as unappealing and boring. This is roughly how the tape was designed to appear. By contrast, those who paid the high cost (reading sexually explicit material aloud in an embarrassing situation) claimed to find the discussion interesting and exciting and the people attractive and sharp.
How does that make sense? According to the prevailing orthodoxy, less pain, more gain, and the mind should measure accordingly. What we find is: more pain, more post-hoc rationalization to increase the apparent benefit of the pain. The cost is already gone, and you cannot get it back, but you can create an illusion that the cost was not so great or the return benefit greater. You can choose, in effect, to get that cost back psychologically, and that is exactly what most people do. This particular experiment has been replicated many times with the same result. But it is still not quite clear why this makes sense. Certainly it works in the service of consistency—since you suffered a larger cost, it must have been for a larger benefit. People can be surprisingly unconscious of this effect in their own behavior. Even when the experiment is fully explained and the evidence of individual bias demonstrated, people see that the general result is true but claim that it does not apply to them. They take an internal view of their own behavior, in which lack of consciousness of the manipulating factor means it is not a manipulating factor.
The need to reduce cognitive dissonance also strongly affects our reaction to new information. We like our biases confirmed and we are willing to manipulate and ignore incoming information to bring about that blessed state. This is so regular and strong as to have a name—the confirmation bias. In the words of one British politician, “I will look at any additional evidence to confirm the opinion to which I have already come.”
So powerful is our tendency to rationalize that negative evidence is often immediately greeted with criticism, distortion, and dismissal so that not much dissonance need be suffered, nor change of opinion required. President Franklin Roosevelt uprooted hundreds of thousands of Japanese-American citizens and interned them for the remainder of World War II, all based on anticipation of possible disloyalty for which no evidence was ever produced except the following classic from a US general: “The very fact that no sabotage has taken place is a disturbing and confirming indication that such action will be taken.”
Supplying a balanced set of information to those with divergent views on a subject, as we saw earlier in the case of capital punishment, does not necessarily bring the two sides closer together; quite the contrary. Facts counter to one’s biases have a way of arousing one’s biases. This can leave those with strong biases both the least informed and the most certain in their ignorance. In one experiment, people were fed politically congenial misinformation followed by an immediate correction. Many believed the misinformation more strongly after the correction than before it.
One important factor affecting the need for cognitive dissonance reduction is post-hoc rationalization of decisions that can no longer be changed. When women are asked to rank a set of household appliances in terms of attractiveness and then offered a choice between two appliances they have ranked equally attractive, they later rank the one they chose as more attractive than the one they rejected, apparently solely based on ownership. A very simple study showing how people value items more strongly after they have committed to them focused on people buying tickets at a racetrack. Right after they bought their ticket, they were much more confident that it was a good choice than while waiting in line with the intention of buying the same ticket. One upshot of this effect is that people like items more when they cannot return them than when they can, despite the fact that they say they like the option to return items.
A bizarre and extreme case of cognitive dissonance reduction occurs in men sentenced to life imprisonment without the possibility of parole for a crime—let us say a spousal murder, using a knife repeatedly. Surprisingly few will admit that the initial act was a mistake. Quite the contrary: they may be aggressive in its defense. “I would do it again in a second; she deserved everything she got.” It is difficult for them to resist reliving the crime, fantasizing again about the victim’s terror, pain, unanswered screams for help, and so on. They are justifying something with horribly negative consequences (for themselves as well now) that they cannot change. Their fate is instead to relive the pleasures of the original mistake, over and over again.
SOCIAL EFFECTS OF COGNITIVE DISSONANCE REDUCTION
The tendency of cognitive dissonance resolution to drive different individuals apart has been described in terms of a pyramid. Two individuals can begin very close on a subject—at the top of a pyramid, so to speak—but as contradictory forces of cognitive dissonance come into play and self-justification ensues, they may slide down the pyramid in different directions, emerging far apart at the bottom. As two experts on the subject put it:

We make an early, apparently inconsequential decision and then we justify it to reduce the ambiguity of the approach. This starts a process of entrapment—action, justification, further action—that increases our intensity and commitment and may take us far from our original intentions or principles.
As we saw in Chapter 5, this process may be an important force driving married couples toward divorce rather than reconciliation. What determines the degree to which any given individual is prone to move down the pyramid when given the choice is a very important (unanswered) question.
A novel implication of cognitive dissonance concerns the best way to turn a possible foe into a friend. One might think that giving a gift to another would be the best way to start a relationship of mutual giving and cooperation. But it is the other way around—getting the other person to give you a gift is often the better way of inducing positive feelings toward you, if for no other reason than to justify the initial gift. This has been shown experimentally: subjects cajoled into giving a person a gift later rate that person more highly than those not so cajoled. The following folk expression from more than two hundred years ago captures the counterintuitive form of the argument (given reciprocal altruism):

He that has once done you a kindness
will be more ready to do you another
than he whom you yourself have obliged.
COGNITIVE DISSONANCE IN MONKEYS AND YOUNG CHILDREN
It is of some interest to know whether animals show cognitive dissonance and at what age children show such effects. Birds often show the human bias of preferring items (in their case, food) for which they have worked harder over identical items achieved through less work. The same is sometimes true of rats.
A more novel set of experiments shows that when a monkey is forced to choose between two items it is equally fond of (say, a blue M&M over a red one), it will then prefer another color (say, a yellow M&M) over the one it just rejected (red), as if needing consistency. That is, having rejected red once, to remain consistent it must do so again. But if the initial choice is made by the human experimenter (blue over red), this either has no effect on the monkey’s subsequent choice or the monkey then chooses the one the human kept for itself, as if this must be the better one.
Nearly identical experiments run on four-year-olds produce nearly identical results. When the children are forced to choose between two equivalent objects, they continue to reject the one they rejected the first time, as if staying true to themselves. That is, having rejected one, the child acts as if there must have been a good reason and rejects it again. This occurs even if the child does not see which item it chose until after having made its choice. Once again, as with the monkeys, when the experimenter makes the choice instead of the child, this either has no effect on how the child chooses or it chooses the one the experimenter kept for itself, as if this must be the better one.
The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life Page 18