Such go-between behavior has been repeatedly observed by my team in a variety of chimpanzee groups. It allows male rivals to approach each other without taking initiative, without making eye contact, and perhaps without losing face. But more importantly: a third party steps in to ameliorate relationships in which she herself is not directly involved.
Policing by high-ranking males shows the same sort of community concern. These males break up fights among others, sometimes standing between them until the conflict calms down. The evenhandedness of male chimpanzees in this role is truly remarkable, as if they place themselves above the contestants. The pacifying effect of this behavior has been documented in both captive (de Waal 1984) and wild chimpanzees (Boehm 1994).2
A recent study of policing in macaques has shown that the entire group benefits. In the temporary absence of the usual performers of policing, the remaining group members see their affiliative networks deteriorate and the opportunities for reciprocal exchange dwindle. It is no exaggeration to say, therefore, that in primate groups a few key players can exert extraordinary influence. The group as a whole benefits from their behavior, which enhances social cohesion and cooperation. How and why policing behavior evolved is a separate issue, but its pervasive effect on group dynamics is undeniable (Flack et al. 2005; 2006).
The idea that individuals can make a difference for the group has been taken a giant step further in our own species. We actively insist that each individual try to make a difference for the better. We praise deeds that contribute to the greater good and disapprove of deeds that undermine the social fabric. We approve and disapprove even if our immediate interests are not at stake. I will disapprove of individual A stealing from B not only if I am B, or if I am close to B, but even if I have nothing to do with A and B except for being part of the same community. My disapproval reflects concern about what would happen if everyone started acting like A: my long-term interests are not served by rampant stealing. This rather abstract yet still egocentric concern about the quality of life in a community is what underpins the “impartial” and “disinterested” perspective stressed by Philip Kitcher and Peter Singer, which is at the root of our distinction between right and wrong.
Chimpanzees do distinguish between acceptable and unacceptable behavior, but their distinctions remain closely tied to immediate consequences, especially consequences for themselves. Thus, apes and other highly social animals seem capable of developing prescriptive social rules (de Waal 1996; Flack et al. 2004), of which I will offer just one example:
One balmy evening at the Arnhem Zoo, when the keeper called the chimps inside, two adolescent females refused to enter the building. The weather was superb. They had the whole island to themselves and they loved it. The rule at the zoo was that none of the apes would get fed until all of them had moved inside. The obstinate teenagers caused a grumpy mood among the rest. When they finally did come in, several hours late, they were assigned a separate bedroom by the keeper so as to prevent reprisals. This protected them only temporarily, though. The next morning, out on the island, the entire colony vented its frustration about the delayed meal by a mass pursuit ending in a physical beating of the culprits. That evening, they were the first to come in. (adapted from de Waal 1996: 89)
However impressive such rule enforcement may be, our species goes considerably further in this than any other. From a very young age onward we are subjected to judgments of right and wrong, which become so much part of how we see the world that all behavior shown and all behavior experienced passes through this filter. We put social thumbscrews on everyone, making sure that their behavior fits expectations.3 We thus build reputations in the eyes of others, who may reward us through so-called “indirect reciprocity” (Trivers 1971; Alexander 1987).
Moral systems thus impose myriad constraints. Behavior that promotes a mutually satisfactory group life is generally considered “right” and behavior that undermines it “wrong.” Consistent with the biological imperatives of survival and reproduction, morality strengthens a cooperative society from which everyone benefits and to which most are prepared to contribute. In this sense Rawls (1972) is on target; morality functions as a social contract.
Level 3: Judgment and Reasoning
The third level of morality goes even further, and at this point comparisons with other animals become scarce indeed. Perhaps this reflects just our current state of knowledge, but I know of no parallels in animals for moral reasoning. We, humans, follow an internal compass, judging ourselves (and others) by evaluating the intentions and beliefs that underlie our own (and their) actions. We also look for logic, such as in the above discussion in which moral inclusion based on sentience clashes with moral duties based on ancient loyalties. The desire for an internally consistent moral framework is uniquely human. We are the only ones to worry about why we think what we think. We may wonder, for example, how to reconcile our stance towards abortion with the one towards the death penalty, or under which circumstances stealing may be justifiable. All of this is far more abstract than the concrete behavioral level at which other animals seem to operate.
This is not to say that moral reasoning is totally disconnected from primate social tendencies. I assume that our internal compass is shaped by the social environment. Every day, we notice the positive or negative reactions to our behavior, and from this experience we derive the goals of others and the needs of our community. We make these goals and needs our own, a process known as internalization. Moral norms and values are not argued from independently derived maxims, therefore, but born from internalized interactions with others. A human being growing up in isolation would never arrive at moral reasoning. Such a “Kaspar Hauser” would lack the experience to be sensitive to others’ interests, hence lack the ability to look at the world from any perspective other than his or her own. I thus agree with Darwin and Smith (see Christine Korsgaard’s commentary) that social interaction must be at the root of moral reasoning.
I consider this level of morality, with its desire for consistency and “disinterestedness,” and its careful weighing of what one did against what one could or should have done, uniquely human. Even though it never fully transcends primate social motives (Waller 1997), our internal dialogue nevertheless lifts moral behavior to a level of abstraction and self-reflection unheard of before our species entered the evolutionary scene.
NAILS IN COFFIN
It is good to hear that my “sledgehammer” approach to Veneer Theory (VT) comes down to beating a dead horse (Philip Kitcher) that was silly to begin with (Christine Korsgaard). The only one to have ridden this horse, Robert Wright, now denies having wholeheartedly done so, if at all, whereas Peter Singer defends VT on the grounds that certain aspects of human morality, such as our impartial perspective, appear to be an overlay, hence a sort of veneer.
The latter is quite a different kind of veneer, though. Singer hints at the prominence of level 3 (judgment and reasoning) in the larger scheme of human morality, but I doubt that he would advocate disconnecting this level from the other two. This is, however, exactly what VT has tried to achieve by outright denying level 1 (the moral sentiments) and stressing level 2 (social pressure) at the expense of everything else. VT presents moral behavior as nothing more than a way of impressing others and building favorable reputations, hence Ghiselin’s (1974) equation of an altruist with a hypocrite and Wright’s (1994: 344) comment that “To be moral animals, we must realize how thoroughly we aren’t.” In the words of Korsgaard, VT depicts the human primate as “a creature who lives in a state of deep internal solitude, essentially regarding himself as the only person in a world of potentially useful things—although some of those things have mental and emotional lives and can talk or fight back.”
VT occupies an almost autistic universe. One only needs to inspect the indexes of its defenders’ books to notice that they rarely if ever mention empathy, or other-directed emotions in general. Even though empathy can be overridden by more pressing concerns4—which is why universal empathy is such a fragile proposal—its very existence should give pause to anyone depicting us as out only for ourselves. The human tendency to involuntarily flinch at seeing another in pain profoundly contradicts VT’s notion of us as self-obsessed. All scientific indications are that we are hardwired to be in tune with the goals and feelings of others, which in turn primes us to take these goals and feelings into account.
Huxley and his followers have tried to drive a wedge between morality and evolution, a position that I attribute to an excessive focus on natural selection. The mistake is to think that a nasty process can only produce nasty outcomes, or as Joyce (2006: 17) recently put it: “the basic blunder [is] confusing the cause of a mental state with its content.” Absent natural moral inclinations, the only hope VT has for humanity is the semi-religious notion of perfectibility: with great effort we may be able to lift ourselves up by our own bootstraps.5
Is VT really too easily countered to be taken seriously, as Philip Kitcher argues? Remember that VT has dominated evolutionary writing for three decades, and lingers still. During this time, anyone who thought differently was labeled “naive,” “romantic,” “soft-hearted,” or worse. I will be more than happy, however, to let VT rest in peace. Maybe the present discussion will serve as the final nail in its coffin. We urgently need to move from a science that stresses narrowly selfish motives to one that considers the self as embedded in and defined by its social environment. This development is well underway in both neuroscience, which increasingly studies shared representations between self and other (e.g., Decety and Chaminade 2003), and economics, which has begun to question the myth of the self-regarding human actor (e.g., Gintis et al. 2005).
FACES OF ALTRUISM
Finally, a few words on selfish versus altruistic motives. This seems like a straightforward distinction, but it is confused by the special way in which biologists employ these terms. First, “selfish” is often a shorthand for self-serving or self-interested. Strictly speaking, this is incorrect, as animals show a host of self-serving behaviors without the motives and intentions implied by the term “selfish.” For example, to say that spiders build webs for selfish reasons is to assume that a spider, while spinning her web, realizes that she is going to catch flies. More than likely, spiders are incapable of such predictions. All we can say is that spiders serve their own interests by building webs.
In the same way, the term “altruism” is defined in biology as behavior costly to the performer and beneficial to the recipient regardless of intentions or motives. A bee stinging me when I get too close to her hive is acting altruistically, since the bee will perish (cost) while protecting her hive (benefit). It is unlikely, however, that the bee knowingly sacrifices herself for the hive. The bee’s motivational state is hostile rather than altruistic.
So, we need to distinguish intentional selfishness and intentional altruism from mere functional equivalents of such behavior. Biologists use the two senses almost interchangeably, but Philip Kitcher and Christine Korsgaard are correct to stress the importance of knowing the motives behind behavior. Do animals ever intentionally help each other? Do humans?
I add the second question even if most people blindly assume an affirmative answer. We show a host of behaviors, though, for which we develop justifications after the fact. It is entirely possible, in my opinion, that we reach out and touch a grieving family member or lift up a fallen elderly person in the street before we fully realize the consequences of our actions. We are excellent at providing post hoc explanations for altruistic impulses. We say such things as “I felt I had to do something,” whereas in reality our behavior was automatic and intuitive, following the common human pattern that affect precedes cognition (Zajonc 1980). Similarly, it has been argued that much of our moral decision-making is too rapid to be mediated by the cognition and self-reflection often assumed by moral philosophers (Greene 2005; Kahneman and Sunstein 2005).
We may therefore be less intentionally altruistic than we like to think. While we are capable of intentional altruism, we should be open to the possibility that much of the time we arrive at such behavior through rapid-fire psychological processes similar to those of a chimpanzee reaching out to comfort another or sharing food with a beggar. Our vaunted rationality is partly illusory.
Conversely, when considering the altruism of other primates, we need to be clear on what they are likely to know about the consequences of their behavior. For example, the fact that they usually favor kin and reciprocating individuals is hardly an argument against altruistic motives. This argument would only hold if primates consciously considered the return benefits of their behavior, but more than likely they are blind to these. They may evaluate relationships from time to time with respect to mutual benefits, but to believe that a chimpanzee helps another with the explicit purpose of getting help back in the future is to assume a planning capacity for which there is little evidence. And if future payback does not figure in their motivation, their altruism is as genuine as ours (table 3).
If one keeps separate the evolutionary and motivational levels of behavior (known in biology as “ultimate” and “proximate” causes, respectively), it is obvious that animals show altruism at the motivational level. Whether they also do so at the intentional level is harder to determine, since this would require them to know how their behavior impacts the other. Here I agree with Philip Kitcher that the evidence is limited even if not wholly absent for large-brained nonhuman mammals, such as apes, dolphins, and elephants, for which we do have accounts of what I call “targeted helping.”
TABLE 3
Taxonomy of Altruistic Behavior
Note: Altruistic behavior falls into four categories, depending on whether or not it is socially motivated and whether the actor intends to benefit the other or itself. The vast majority of altruism in the animal kingdom is only functionally altruistic in that it takes place without an appreciation of how the behavior will impact the other and absent any prediction of whether the other will return the service. Social mammals sometimes help others in response to distress or begging (socially motivated helping). Intentional helping may be limited to humans, apes, and a few other large-brained animals. Helping motivated purely by expected return benefits may be rarer still.
Early human societies must have been optimal breeding grounds for survival-of-the-kindest aimed at family and potential reciprocators. Once this sensibility had come into existence, its range expanded. At some point, sympathy for others became a goal in and of itself: the centerpiece of human morality and an essential aspect of religion. It is good to realize, though, that in stressing kindness, our moral systems are enforcing what is already part of our heritage. They are not turning human behavior around, only underlining preexisting capacities.
CONCLUSION
That human morality elaborates upon preexisting tendencies is, of course, the central theme of this volume. The debate with my colleagues made me think of Wilson’s (1975: 562) recommendation three decades ago that “the time has come for ethics to be removed temporarily from the hands of philosophers and biologicized.” We currently seem in the middle of this process, not by pushing philosophers aside but by including them, so that the evolutionary basis of human morality can be illuminated from a variety of disciplinary angles.
To neglect the common ground with other primates, and to deny the evolutionary roots of human morality, would be like arriving at the top of a tower to declare that the rest of the building is irrelevant, that the precious concept of “tower” ought to be reserved for its summit. While making for good academic fights, semantics are mostly a waste of time. Are animals moral? Let us simply conclude that they occupy several floors of the tower of morality. Rejection of even this modest proposal can only result in an impoverished view of the structure as a whole.
1 This view is congruent with Singer’s (1972) argument that increased affluence brings increased obligations to those in need.
2 My popular books do not always present the actual data on which conclusions are based. For example, the claim that high-ranking males police intragroup conflicts in an impartial manner was based on 4,834 interventions analyzed by de Waal (1984). One male, Luit, showed a lack of correlation between his social preferences (measured by association and grooming) and interventions in open conflict. Only Luit showed this dissociation: interventions by other individuals were biased in favor of friends and family. My remark about Luit that “there is no room in this policy for sympathy and antipathy” (de Waal 1998 [1982]: 190) thus summarizes well-quantified aspects of his behavior.
3 Our experiments on inequity aversion concerned expectations about reward division (Brosnan and de Waal 2003; Brosnan et al. 2005). In response to Philip Kitcher, it should be noted that it is unclear that inequity aversion has much to do with altruism. Another pillar of human morality, as important as empathy and altruism, is reciprocity and resource distribution. The reactions of primates faced with unequal rewards fall under this domain, showing that they watch what they gain relative to others. Cooperation is not sustainable without a reasonably equal reward distribution (Fehr and Schmidt 1999). Monkeys and apes react negatively to receiving less than someone else, which is indeed different from reacting negatively to receiving more, but the two reactions may be related if the second reflects anticipation of the first (i.e., if individuals avoid taking more so as to forestall negative reactions in others to such behavior). For a discussion of how these two forms of inequity aversion may relate to the human sense of fairness, see de Waal (2005: 209–11).
4 Given a choice between an action that benefits only themselves and an action that benefits both themselves and a companion, chimpanzees seem to make no distinction. Under these circumstances, they only help themselves (Silk et al. 2005). The authors titled their study “Chimpanzees are indifferent to the welfare of unrelated group members,” even if all that they demonstrated was that one can create a situation in which chimpanzees consider the welfare of others secondary. I am sure one can do the same with people. When hundreds of people rush into a store that has a rare item for sale, such as a popular Christmas toy, they surely exhibit little regard for the welfare of others. No one, however, would conclude from this that people are incapable of such regard.