Wired for Culture: Origins of the Human Social Mind
If I rush out into the street to push you out of the way of an oncoming vehicle, I act altruistically. My altruism benefits you, but potentially at great cost to myself. We don’t expect altruism to flourish on its own, because one party benefits at the expense of another, and it was just this asymmetry that undermined the four simple strings offering assistance to each other. At the other extreme, I behave selfishly when I take advantage of your altruism. For instance, maybe I surreptitiously eat more of my share of our stored food. Selfish behavior is tantalizing because of its immediate benefits; but of course if everyone behaves selfishly, the shared food will quickly evaporate and the selfish players will be thrown into unending conflict.
Sometimes people behave spitefully, as when they do something that might be costly to themselves, in an attempt to hurt someone else even more. If I see you standing near the edge of a cliff, why should I not just push you off? I will rid myself of a potential competitor and at little cost to myself, perhaps just some torn clothing as you grasp at me when you go over the edge. This is an example we must take seriously because we know the thought that it could happen to us or that we might do it to someone else crosses our minds. In fact, in the 1990s in New York, travelers on the subway were still routinely warned not to get too near the edge, and not just for the obvious reason that a train might hit them. City authorities had judged there were a sufficient number of unpredictable people about who might not be able to resist the temptation to push someone off the edge.
But even though spite crosses people’s minds, it is unlikely to evolve because my spiteful actions toward you don’t just benefit me in getting rid of you as a competitor, they help everyone else who competes with you. My spiteful actions toward you then become acts of altruism toward these others—and that spite might be costly for me to perform. For this reason, spite is thought to be rare in nature, and so it is something of a puzzle why for many of us the thought of spiteful revenge is often attractive, and spite itself so sweet. One possibility we will see later in this chapter is that spite might have evolved as a way people can advertise to others that they are not the sorts of people who can be taken for granted. Our spite then pays its way by bringing us better outcomes in future encounters.
This leaves the fourth social behavior—cooperation—as when two parties exchange favors. This sounds attractive, but even cooperation is difficult to get established, since it is easy for one of the parties to succumb to temptation and not return the favor. Economists and evolutionary biologists use the so-called prisoner’s dilemma as a vivid metaphor to illustrate this point. The police round up two people suspected of being involved in a joint crime. They haven’t enough evidence to convict either one without a confession. They put them into separate jail cells and tell them both that if they confess and implicate the other one, they will be treated leniently. They are also told that if they don’t confess and their accomplice does, they will be treated harshly, with a long jail sentence. What would you do? You could be loyal (cooperative) and not say a word, and hope that your partner does the same, and you will both be released. On the other hand, you are worried that your partner will sell you down the river. So, you act first, implicating your partner in the crime. But of course your partner has done the same thing to you, and you both end up being convicted.
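The logic of the dilemma can be sketched in a few lines of Python. The payoff numbers below (5, 3, 1, 0) are the conventional illustrative values used in the game-theory literature, not figures from the text; "C" stands for staying silent (cooperating with your accomplice) and "D" for confessing (defecting):

```python
# One-shot prisoner's dilemma: why defection dominates.
# First element of each pair is my payoff, second is my partner's.
PAYOFF = {
    ("C", "C"): (3, 3),  # both stay silent: both do well
    ("C", "D"): (0, 5),  # I stay loyal, you implicate me: I get the long sentence
    ("D", "C"): (5, 0),  # I implicate you while you stay silent
    ("D", "D"): (1, 1),  # we betray each other: both convicted
}

def best_reply(their_move):
    """Whichever move the other prisoner makes, defecting pays me more."""
    return max(("C", "D"), key=lambda mine: PAYOFF[(mine, their_move)][0])

print(best_reply("C"))  # -> D
print(best_reply("D"))  # -> D
```

Because "D" is the best reply to both of the partner's possible moves, two self-interested players in a one-shot game end up at mutual defection, even though mutual cooperation would have paid each of them more.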
The prisoner’s dilemma teaches us that if Axelrod and Hamilton are right that cooperators enjoy the benefits of life disproportionately, then cooperation has to overcome a big problem. It is that if two people are only going to meet once, it will pay them to act selfishly. And worse, evolution has created tendencies and dispositions in us to recognize when this is true and to act on it. If I am starving and see some apples out of reach in a tree, I might ask you, a stranger passing by, if I can climb up on your shoulders to get them. But when I climb down, I might then run off, not offering any to you. Or, if I do offer them to you, you might grab them all and run off, not giving me any. It might not be the nice or polite thing to do, but it will often pay to be greedy like this. That is, unless you are going to see the person again. Now it might pay to cooperate in hopes that your partner will remember this and cooperate with you.
Robert Trivers formalized this idea in the early 1970s, calling it “reciprocal altruism,” and it involves a sort of promise of exchange between two unrelated parties: I help you now in exchange for help from you at a later time. If this exchange brings you both more than you could get on your own, then cooperation should flourish. Trivers also realized that even this simple act brings with it a truckload of possibilities for exploitation. The reason is that in every act of reciprocal exchange, initially only one of the two parties benefits, and does so at a cost to the other. The helper in these exchanges is taking a risk that the help will be returned. The dilemma this causes is nowhere better illustrated than in the scene from countless detective films in which the good guys hand over money to the bad guys as ransom for some wretched person who has been kidnapped. The good guys hold the suitcase containing the money just out of reach of the kidnappers, who in turn hold their hostage just out of reach of the good guys. They have to do this: how can the good guys know the bad guys won’t just take the money and run? How do the bad guys know the good guys have put the right amount of money in the suitcase?
For reciprocal altruism to work, the person you decide to cooperate with must also genuinely want to cooperate with you. Trivers recognized this would create an entire evolved psychology of traits and emotions surrounding every exchange. These include friendship, gratitude, sympathy, guilt, a heightened sensitivity to cheaters, generosity, withholding of help from people who do not reciprocate, a sense of justice or fairness, and even forgiveness. Each of these either encourages us to enter into cooperative relationships or protects us once we do. The fragility of reciprocal altruism and the psychological complexity even its simple acts require might be why it is surprisingly difficult to find in nature, outside of humans. The best-known example in the animal kingdom is that of vampire bats, and even that one is controversial. Vampire bats have long lives and they spend them living in colonies in which they see the same individuals repeatedly. They prey on mammals and birds at night, obtaining a blood meal from bites they inflict with their sharp teeth. Sometimes a bat will fail to obtain a meal and return to the colony hungry. A missed meal is not a problem for a large animal such as a human, but it can cause a small animal to starve to death in a single night, because its higher metabolic rate means it burns through its energy reserves quickly. Nevertheless, a well-fed vampire bat can afford to share some of its meal, and will, in some instances, regurgitate some of it to a starving one, keeping it alive.
Why help each other this way? Why not let the other die the better to reduce competition for future meals? The reason might be that vampire bats repeatedly confront a risky environment in which, on any given night, through no fault of their own, a meal might be hard to come by. In these circumstances, if I save you from starvation now by sharing some of my meal, you may live to help me another day when I have been unlucky. The example is controversial because the bats in a colony are often relatives and so the behaviors might be little more than help directed at kin. Still, the argument tells us that when we might see someone again it can pay to be kind, even at a cost to ourselves, rather than to compete or even fight, especially if that person might repay our kindness at a later time.
The expectation that you will see someone again can restrain your tendencies to cheat them, but that alone is not enough to promote reciprocity. Let’s imagine I know that after another ten rounds of exchanges in which I give you something and you give me something in return, we will never see each other again. After our ninth exchange, I quietly change my tactic. I will accept your offering but withhold mine. I have taken advantage of you, and unpleasant as my behavior is, it makes sense if I am trying to maximize my payoffs. But wait, if you also know that we are going to finish after ten rounds you will do the same to me, and on our tenth exchange both of us will betray the other’s trust. It gets worse. Knowing you will betray me on the tenth exchange, I will act before you and defect on the ninth. Of course, you will have done the same. So, we step back to our eighth exchange and the same thing happens, and this backward spiral goes right back to our first encounter. A fixed end point means that I will want to cheat you and worry that if I don’t, you might cheat me; so we will both cheat each other right back to the beginning.
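This backward spiral is what game theorists call backward induction, and it can be sketched directly. The function below is an illustrative reconstruction of the reasoning, not a model from the text: it marks the final round as a defection (no future remains, so defection dominates) and then propagates that conclusion back one round at a time.

```python
# A sketch of backward induction in a game with a known, fixed end point.
def rational_moves(total_rounds):
    """If both players know the game ends after `total_rounds`,
    defection propagates backward from the last round to the first."""
    moves = [None] * total_rounds
    moves[-1] = "D"  # final round: no shadow of the future, so defect
    for i in range(total_rounds - 2, -1, -1):
        # My partner will defect in round i+1 no matter what I do now,
        # so cooperating in round i buys me nothing: I defect here too.
        moves[i] = "D" if moves[i + 1] == "D" else "C"
    return moves

print(rational_moves(10))
# -> ['D', 'D', 'D', 'D', 'D', 'D', 'D', 'D', 'D', 'D']
```

However long the game, a known end point unravels cooperation all the way back to the first encounter, which is why the text insists on interactions "with no known end point."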
Stable cooperation requires more than just the possibility of a future encounter. It depends upon extended and durable interactions, with no known end point. As soon as one party thinks it can do better by cutting and running, it will often pay it to do so, and the cooperative enterprise can quickly unravel. Robert Axelrod called this “the shadow of the future.” In an unexpected way, the future reaches back in time to influence our present behavior. We can see it as a loose or statistical way of linking the fates of two cooperators. The ease with which we appreciate the influence of this expectation and incorporate it into our own actions reminds us that many of our dispositions are those we expect of a species that has evolved to live for long periods of time around sets of people we might expect to see over and over. Of course, that is precisely what the structure of our cultural survival vehicles ensures, and this makes them a powerful source for promoting cooperation.
A sense that the shadow of the future is shortening causes us almost immediately to withdraw, even if imperceptibly, from friends who announce their intention to move away, or from work colleagues who might change jobs. It is also why lame-duck political figures are so weak. An acquaintance of mine, the head of a large research organization, once told me that for only about eighteen months of his four-year term as chief executive was he able to be effective. The first year, he said, was spent learning the ropes and gaining people’s confidence. The next eighteen months were reasonably fruitful. But with around eighteen months left, people started to withdraw, and, knowing he would soon be replaced, they became less fearful of his reproaches. It is a dilemma faced by prime ministers and presidents around the world—at least those who are elected—and it all comes back to “the shadow of the future.”
Axelrod points out that marriage exploits the shadow of the future in the wedding vows “’til death do us part.” In the absence of a belief in the afterlife, this is about as long a shadow of the future as any relationship can be expected to produce. Whether or not this is an argument for making divorce difficult to obtain, we do know that durable exchanges can even help enemies to get along. The most striking illustration of this was the live-and-let-live system that spontaneously arose during the trench warfare of World War I. Enemy combat units, facing each other from their trenches and engaging in daily bouts of deadly warfare across no-man’s-land, evolved sophisticated measures to avoid killing each other. Artillery would be fired at the same time every day, and always a bit short. Snipers would aim high. Famously, some of these enemies even shared Christmas gifts and played soccer one Christmas Day. Commanders had to use threats of courts-martial to break up these spontaneous reciprocal relationships.
Even with a long shadow of the future, the prospect of a defection looms large in any cooperative relationship. Someone might “forget” to return your favor, or simply make a mistake and fail to return your kindness. What should you do? If you do nothing, they might get the idea they can cheat you every now and then. Experiments with volunteers and studies using computers to simulate cooperation have shown that a simple strategy of repaying kindness with kindness and betrayal with revenge is surprisingly effective. If your partner betrays you, punish them. Axelrod called it “tit for tat.” It is not very costly to you, and defectors quickly learn that they will not be tolerated.
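Tit for tat is simple enough to state in one line: cooperate on the first move, then copy whatever your partner did last. A minimal sketch (the `play` harness and the `always_defect` opponent are illustrative scaffolding, not from the text):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate first; thereafter copy the partner's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A pure cheater, for contrast."""
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    """Run two strategies against each other for a number of rounds."""
    a_hist, b_hist = [], []
    for _ in range(rounds):
        a = strategy_a(a_hist, b_hist)
        b = strategy_b(b_hist, a_hist)
        a_hist.append(a)
        b_hist.append(b)
    return a_hist, b_hist

a, b = play(tit_for_tat, always_defect, rounds=5)
print(a)  # -> ['C', 'D', 'D', 'D', 'D']: exploited once, then retaliates
```

Against a cheater, tit for tat loses only the opening round before retaliating; against another tit-for-tat player, it cooperates forever. That asymmetry is why defectors "quickly learn that they will not be tolerated."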
On the other hand, Mahatma Gandhi famously pointed out that this simple “eye for an eye” strategy “makes the whole world blind” as formerly happy cycles of cooperation can disintegrate into endless cycles of punishment and revenge. Indeed, anthropologists’ accounts of tribal conflict cite tit-for-tat cycles of revenge and counterrevenge in response to homicides and thievery as the most common cause of skirmishes and warfare between groups. In War Before Civilization, Lawrence Keeley recounts the history of violence between two groups in New Guinea that he discreetly labels A and B:
Village A owed village B a pig as reward for B’s help in a previous war in which the latter had killed one of A’s enemies. Meanwhile, a man from village A heard some (untrue) gossip that a man from village B had seduced his young wife; so, with the aid of a relative, he assaulted the alleged seducer. Village B then “overreacted” to this beating by making two separate raids on village A, wounding a man and a woman…. These two raids by village B led to a general battle in which several warriors on both sides were wounded, but no one was killed… later… a warrior from village B, to avenge a wound suffered by one of his kinsmen during the battle, ambushed and wounded a village A resident. The following day battle was resumed and a B villager was killed. After this death, the war became general: all the warriors of both villages, plus various allies, began a series of battles and ambushes that continued intermittently for the next two years.
One way to end tit-for-tat cycles of revenge and counterrevenge might be to acquire the dispositions that encourage you to exterminate your enemy in a great rush of violence. It might be just such dispositions that fuelled the brutality we saw earlier between the Tutsi and Hutus. On the other hand, if cooperation has been valuable in our past, then we might expect it to have given us strategies of forgiveness as a way of avoiding these cycles. And indeed a strategy of ignoring the first act of betrayal, then waiting before resuming cooperation, can be shown to work better than tit for tat. Assume the betrayal was a mistaken judgment, a moment of weakness, or maybe just a slipup. This allows groups of generous and forgiving cooperators to overcome the occasional bout of moral weakness or mere mistake from someone within their ranks.
An even better strategy is more wily and self-serving. It is sometimes confusingly called win-stay, lose-shift, even though it is subtly different from the straightforward version of that strategy we saw in Chapter 3. Colloquially, we might think of it as a mild form of sulking, but with an added twist. In this setting, a person responds with cooperation so long as the other person is cooperative—if you are winning, you stay. Confronted by a betrayal, you don’t respond with punishment; rather you simply withhold cooperation—this is the sulking, and it corresponds to “if you lose, shift tactics.” On the next exchange, though, you switch back to cooperating, and continue to cooperate so long as the other person cooperates.
This form of win-stay, lose-shift is the policy we follow when we have a brief argument and then make up. It gives people a second chance, and if they take that chance, cooperation is maintained. If they don’t and continue to betray you, win-stay, lose-shift again switches back to withholding cooperation. By merely withholding cooperation rather than overtly punishing someone who has betrayed you, the strategy avoids having a series of exchanges dissolve into cycles of betrayal and revenge. At the same time it makes it clear to cheaters and others who might “free-ride” on your goodwill that their behavior won’t work, and it offers incentives in the forms of glimpses of what cooperation can look like.
But what is the twist? This strategy also has a self-serving trick up its sleeve. Every now and then it tries defecting. Why would it do this? Remember, natural selection is not about goodness and light; it is about strategies that promote replicators. For all the win-stay, lose-shift strategist knows, it might be playing against a Good Samaritan who always behaves cooperatively, or simply a gullible person who always does the nice thing. The win-stay, lose-shift strategy cunningly exploits them by defecting. A Good Samaritan will nevertheless continue to cooperate, so win-stay, lose-shift, being on a winning streak, stays and exploits them again. The success of this strategy against other forgiving but less wily strategies tells us to expect that natural selection might have built into us an emotion for taking advantage of the weak or gullible—an emotion that we sadly cannot easily deny is part of our species.
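The whole strategy, sulking and twist together, fits in one small function. This is a sketch of the behavior the text describes, not a model from it: stay with your move while the partner cooperates, shift your move when they defect, and occasionally probe with a defection to test for a pushover. The `probe_rate` parameter is an illustrative assumption.

```python
import random

def win_stay_lose_shift(my_history, their_history, probe_rate=0.05, rng=random):
    """Win-stay, lose-shift with an exploitative twist (a sketch).

    Win (partner cooperated last round): repeat my own last move.
    Lose (partner defected): shift to the opposite move, i.e. sulk,
    then give them a second chance on the following exchange.
    Occasionally probe with a defection in case the partner is a
    Good Samaritan who will tolerate being exploited.
    """
    if not my_history:
        return "C"  # open cooperatively
    if rng.random() < probe_rate:
        return "D"  # the self-serving twist: try a defection
    if their_history[-1] == "C":
        return my_history[-1]  # winning: stay with what I did
    return "D" if my_history[-1] == "C" else "C"  # losing: shift
```

Note what the shift rule does: after I cooperate and am betrayed, I withhold cooperation (the sulk); if the betrayal continues, the next shift returns me to cooperation (the second chance). And because "stay" repeats whatever was winning, a probe defection against an always-cooperator keeps paying, so the strategy keeps exploiting, exactly the wily behavior described above.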
AN EXPECTATION FOR FAIRNESS
OF ALL the emotions associated with getting acts of reciprocity to work, our expectation for fairness is perhaps the most intriguing and explosive. If forgiveness and generosity are like investments in keeping a cooperative relationship going, our sense of fairness is more like a police force. It is the emotion behind our belief that it is wrong for others to take advantage of us, and it might take the form in our own minds of our conscience, telling us that it is wrong to take advantage of others. It can be schizophrenic in its effects, capable of producing violence on the one hand, and startling altruism on the other. Its violent side disposes us to punish people whose actions reveal them as selfish, and for this reason it is sometimes called moralistic aggression. We like to think it is something only others do, but honking horns at people who cut into traffic, or heckling people who jump lines are commonplace instances of moralistic aggression deriving from a sense that someone’s actions are not fair.
Once, travelling in Vienna, I was waiting for a tram, and even as it was arriving and the doors were still opening an older woman on board, wrapped in a head scarf, wagged her finger disapprovingly and hissed at me indignantly. Evidently I wasn’t leaping forward quickly enough to help a younger woman with a baby in a pushchair who was preparing to clamber down the stairs to the sidewalk. In 2009, a man in the city of Guangzhou in China threatened to commit suicide by jumping from a bridge. His presence on the bridge caused traffic jams, and eventually he was approached by a passer-by who shoved him over the edge, telling a newspaper later on that he was fed up with the desperate man’s “selfish activity.”