An example of this paradoxical process can be seen in the media spotlight that has recently been shining on certain crimes in the United States. Sexual assaults on college campuses, for instance, receive vastly more media attention today than they did in the past. A Google Trends search finds over ten times as many news stories on campus sexual assault in 2016 as there were five years earlier. The word “epidemic” frequently crops up in these articles, which probably contributes to the fact that four in ten Americans believe that the United States currently fosters a “rape culture” in which sexual violence is the norm; only three in ten disagree. But this is not remotely true. Sexual assault is no more common on college campuses than among same-aged adults who are not in college. And like most other kinds of crime, rates of sexual assault are decreasing, not increasing, both on campuses and off, according to the Bureau of Justice Statistics. There is no epidemic, other than an epidemic of awareness. Now, this epidemic of awareness may be a good thing if it results in an epidemic of concern about these crimes that contributes to their continued decline—and I hope it does. But the media spotlight is not without downsides, one of which is a massively distorted public perception of the frequency of sexual assault, which in turn worsens broader perceptions about gender relations, law enforcement, and human nature itself.
Hardwired cognitive biases, exacerbated by biased media coverage, help to explain why people’s beliefs about human nature are mathematically incompatible with reality. Amazingly, people’s beliefs are even incompatible with the knowledge that they have about themselves, although you might assume that self-knowledge would be more resistant to distortion. (It’s not.)
In the illuminating Common Cause UK Values Survey, pollsters found, consistent with the results of other surveys, that negativity bias had successfully distorted the respondents’ views of human nature. About half of the respondents reported that, in general, people place more importance on selfish values like dominance over others, influence, and wealth than on compassionate values like social justice, helpfulness, and honesty. At least, respondents believed these things to be true about other people. When asked about their own values, a substantial majority (74 percent) of these same respondents reported that they themselves placed more weight on compassionate values than on selfish ones.
Obviously, one of these two findings has to be wrong. It can’t be simultaneously true that most people value compassion more and that most people value selfishness more. So what should we believe: what people say about themselves, or what they say about others? The pollsters took several steps to reduce the odds that the discrepancy resulted from respondents merely bragging or puffing themselves up. They concluded that the primary source of the problem was respondents’ overly negative perceptions of others. Ultimately, after comparing people’s actual reported values with their perceptions of others’ values, the pollsters calculated that a whopping 77 percent of the sample underestimated how much their fellow Brits valued compassion.
Now, 77 percent is high, but it’s not 100 percent. Not all respondents had equally cynical views of human nature, and the likelihood that a given respondent would underestimate others’ compassion was not randomly distributed. A reliable predictor of cynicism about others’ values was a respondent’s own values: respondents who themselves valued compassion very little also perceived others as valuing compassion very little. Conversely, those who valued compassion more tended to believe that others valued compassion more as well. Psychologists call this pattern the false consensus effect: people believe that their own values and beliefs reflect what the average person values and believes more closely than is actually the case. As a result, people who are themselves highly selfish tend to believe that others are too, whereas those who are highly compassionate believe that others are as well. Think, for example, of Anne Frank, who concluded, despite all she had seen and experienced, that “people are truly good at heart,” or Nelson Mandela, who believed that “our human compassion binds us one to the other.” Or Martin Luther King Jr., who in his Nobel Prize acceptance speech said, “I refuse to accept the view that mankind is so tragically bound to the starless midnight of racism and war that the bright daybreak of peace and brotherhood can never become a reality.… I believe that unarmed truth and unconditional love will have the final word in reality.” Or Mahatma Gandhi, who proclaimed, “Man’s nature is not essentially evil. Brute nature has been known to yield to the influence of love. You must never despair of human nature.”
None of these people can possibly be accused of naïveté. All knew horrors beyond what any human being should have to witness or experience. But all were people whose compassion for others persisted despite their own experiences, and whose faith in the compassion of others remained undimmed.
By contrast, those who themselves are callous or cruel tend to falsely believe that their values represent the consensus. Compare the beliefs of Frank, Mandela, King, and Gandhi to the beliefs of Richard Ramirez, the notorious serial murderer—nicknamed the “Night Stalker”—who brutalized and killed thirteen people during the 1980s. Although his actions placed him far outside the bounds of normal human behavior, he viewed himself as relatively typical, once asking, “We are all evil in some form or another, are we not?” and claiming that “most humans have within them the capacity to commit murder.” The serial murderer Ted Bundy concurred, warning, “We serial killers are your sons, we are your husbands, we are everywhere.” Even Adolf Hitler framed his own horrific misdeeds as reflecting basic human nature, retorting, when questioned about his brutal treatment of Jews, “I don’t see why man should not be just as cruel as nature.” Perhaps Josef Stalin best demonstrated the relationship between the possession and perception of human vice: he once proclaimed that he trusted no one, not even himself.
I realize that many people will persist in believing that people are fundamentally and uniformly selfish and callous by nature, regardless of the objective evidence to the contrary. But the evidence also suggests that rigid adherence to this belief says much more about the person who espouses it than it does about people in general.
Resist the temptation, then, to believe only the most pessimistic messages about human nature. Consider the evidence I’ve given you, as well as the evidence of your own eyes. The next time you see or read or hear about a callous or terrible thing that some person or group of people has done—or hear someone bemoaning how awful people in general, or some group of people in particular, are—don’t succumb to negativity bias without a fight. Stop for a moment to remember how genuinely variable people are and ask: Is that terrible thing really representative of people as a whole? Is it even likely to be representative of what that person or group is like? In some cases it may be—for example, when a psychopath commits a truly heinous crime. But such an act is the vanishingly rare exception, not the rule.
Don’t limit your stopping and thinking to the bad things either. The many acts of kindness and generosity that happen every day around all of us can fade into the scenery if we let them. When you see or hear or read about (or commit!) an act of genuine kindness or generosity, please take a moment to notice it and to remember how much goodness there is in the world.
There are many reasons that this approach is worthwhile; perhaps the most important is that trust in others can become a self-fulfilling prophecy, a fact that has been demonstrated using simulated social interactions. Perhaps the most famous such simulation is the Prisoner’s Dilemma. In this paradigm, a player is told that he and his partner each have two options in every round of the game. They can choose to cooperate with each other, in which case both will get a medium-sized reward of, say, $3. Alternatively, they can choose to defect. If both players defect, they both get only $1. Where things get interesting is if one player decides to cooperate and the other defects. In this case, the cooperator gets nothing at all and the defector gets $5. The hitch is that the players are not allowed to communicate while making their decisions. They must choose what to do—to trust or mistrust each other—before learning of their partner’s decision.
In any given round of the game, the payoff structure ensures that it is always more rational to defect than to cooperate. If his partner defects, a player will get $1 if he also defects, but $0 if he cooperates. If his partner cooperates, the player gets $5 if he defects, but $3 if he also cooperates. And yet, when people play this game, they overwhelmingly cooperate. Why would this be? It happens because the game typically involves multiple rounds—often an indeterminate number of them—and so each player’s partner has the opportunity to pay him back, for better or worse, as the game goes on. This makes the Prisoner’s Dilemma a good model of reciprocity-based altruism. Cooperating in any given round requires a player to make short-term sacrifices that benefit his partner under the assumption that the partner will reciprocate in the future. And in the Prisoner’s Dilemma, as in real life, partners usually do.
Early studies found that the most successful strategy in repeated rounds of the Prisoner’s Dilemma is one called tit-for-tat: start out cooperating, then do whatever your partner did in the previous round. If he cooperated, so do you; if he defected, defect right back. Those who use this strategy tend to win out in the long term. The fact that tit-for-tat entails cooperating on the opening move is key. It demonstrates that starting from the assumption that even perfect strangers are probably trustworthy is the more advantageous approach for everyone, because that assumption usually leads to an upward spiral of cooperation and increasing trust.
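To make the dynamic concrete, here is a minimal sketch of the repeated game in Python, using the payoffs described above ($3 each for mutual cooperation, $1 each for mutual defection, and $5 versus $0 when one player defects on a cooperator). The twenty-round length and the comparison strategies are illustrative assumptions, not details from any particular study.

```python
# Minimal sketch of a repeated Prisoner's Dilemma with the payoffs from the text.
PAYOFFS = {  # (my move, partner's move) -> my payoff in dollars
    ("C", "C"): 3,  # both cooperate
    ("D", "D"): 1,  # both defect
    ("D", "C"): 5,  # I defect on a cooperator
    ("C", "D"): 0,  # I cooperate and get defected on
}

def tit_for_tat(partner_history):
    """Cooperate on the first move, then copy whatever the partner did last."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    """Defect every round, no matter what the partner does."""
    return "D"

def play(strategy_a, strategy_b, rounds=20):
    """Play repeated rounds and return (total for A, total for B)."""
    total_a = total_b = 0
    seen_by_a, seen_by_b = [], []  # each player's record of the other's past moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        total_a += PAYOFFS[(move_a, move_b)]
        total_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))      # (60, 60): mutual trust pays best overall
print(play(always_defect, always_defect))  # (20, 20): mutual mistrust pays worst
print(play(tit_for_tat, always_defect))    # (19, 24): the defector wins only a little, once
```

Two tit-for-tat players end up with $60 apiece, while two habitual defectors end up with only $20 apiece: the upward spiral of cooperation described above, in miniature.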
Trust, in other words, becomes a self-fulfilling prophecy.
In my interactions with altruistic kidney donors, I have gotten a peek into the worlds that such a prophecy can create in real life. Like the most compassionate respondents in the Common Cause UK Values Survey, altruists’ own deep-seated compassion and kindness often lead them to assume the best of others, and to be fairly open with and trusting of people, even those they don’t know well. As one altruist averred: “I always say: Everybody helps people in some way or another. There are just different ways to do it.” Another concurred, saying: “I think generally people are good, and I think people would like to do the right thing.” For my research team and me, it’s been one of the most remarkable aspects of working with them—being treated with the trust and warmth of old friends by people we have just met.
I think this view of the world helps explain altruists’ decisions to donate as well. When I ask ordinary adults why they wouldn’t donate a kidney to a stranger, they often cite concern that the recipient might not truly deserve it—that person could be a criminal or a drug abuser or otherwise just not quite trustworthy. But altruistic kidney donors don’t seem to adopt this viewpoint. As one of them told us, “Everyone is going to live their own life and make their own decisions, and some of them are going to be bad and some of them are going to be good. But nobody is that bad to not deserve a normal life.” Or as another said, “Everyone’s life is equally valuable. There’s no reason to pick or choose.” The fact that altruists are willing to give a kidney to literally anyone means that they must start from the belief that whoever is selected to receive it will be someone who deserves life and health and compassion.
You might be tempted to conclude that maybe altruists are just suckers, but that’s not it. In one computer simulation study we conducted, altruists were as willing as anyone else to penalize people who actually acted unfairly. But their default assumption—their starting point with people who are totally unknown to them—seems to be trust. This approach to the world and the people who populate it seems to result in more positive interactions than a mistrustful approach would, and over time these interactions reinforce altruists’ perceptions of the basic goodness of the people around them.
Who wouldn’t want to live in a world like that?
2. Caring requires more than just compassion.
Understanding that care requires more than compassion is a really, really important aspect of understanding altruism. It suggests that a heightened capacity for compassion is not the only thing that fosters extraordinary altruism. What makes acts of extraordinary altruism—from altruistic kidney donations to my roadside rescue to Lenny Skutnik’s dive into the Potomac—extraordinary is that they are undertaken to help a stranger. Most of us would make sacrifices for close friends and family members—people whom we love and trust and with whom we have long-standing relationships—but these sorts of sacrifices can be easily accommodated by established theories like kin selection and reciprocity, which dictate that altruism is preferentially shown toward relatives and socially close others and that these forms of altruism are at least in part self-serving. When people violate this dictum by sacrificing for an anonymous stranger, however, their actions suggest that they possess the somewhat unusual belief that anyone is just as deserving of compassion and sacrifice as a close family member or friend would be. Think of it as alloparenting on overdrive.
Recent data we collected in my laboratory allowed us to mathematically model this feature of extraordinary altruism. The paradigm we used is called the social discounting task, which was originally developed by the psychologists Howard Rachlin and Bryan Jones. Rachlin and Jones were seeking to understand how people’s willingness to sacrifice for others changes as the relationship between them becomes more distant. In the task they created, respondents make a series of choices about sacrificing resources for other people. Each choice presents the respondent with two options. They can either choose to receive some amount of resources (say, $125) for themselves or split an equal or larger amount (say, $150) evenly with another person, in which case each person receives $75. In this example, choosing to share would result in sacrificing $50 ($125 − [$150 ÷ 2] = $50) to benefit the other person.
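The arithmetic of a single choice is simple enough to sketch in a few lines; the dollar amounts below are the ones from the example above, and the variable names are purely illustrative.

```python
# One hypothetical choice from a social discounting task, using the amounts above:
# keep $125 for yourself, or split $150 evenly with another person.
keep_alone = 125                     # payoff for keeping the money yourself
shared_total = 150                   # amount that would be split evenly if you share
each_share = shared_total / 2        # $75 to you and $75 to the other person
sacrifice = keep_alone - each_share  # $50 forgone to benefit the other person
print(each_share, sacrifice)         # 75.0 50.0
```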
The identity of the other person varies throughout the task. In some trials, the respondent is asked to imagine sharing the resources with the person who is closest to them in their life, whoever that may be. Imagine the person closest to you in your life. Would you accept $75 instead of $125 so that this person could get $75 as well? You probably would—me too. In other trials, respondents are asked to imagine that the other person is someone more distant: their second-, fifth-, or tenth-closest relationship, all the way out to their one-hundredth-closest relationship. Typically, the one-hundredth-closest person on anyone’s list is not remotely close at all and may be only barely familiar—perhaps a cashier at a local store, or someone seen in passing at the office or at church. Now, would you settle for $75 instead of $125 so that someone this distant from you—someone whose name you might not even know—could receive $75? Maybe, maybe not.
Rachlin and Jones, and others as well, find that the choices people make during this task describe a very reliable hyperbolic decline as a function of social distance. This means that people reliably sacrifice significant resources for very close others, but their willingness to sacrifice drops off sharply thereafter. For example, most respondents, when given the choice to receive $155 for themselves or to share $150 with their closest loved one, choose to share. In other words, they will forgo getting an extra $80 for themselves so that their loved one can get $75 instead. This choice indicates that respondents place even more value on the sacrificed money when it is shared with a loved one—who otherwise would get nothing—than they would if they had kept it for themselves. But as the relationship in question moves from a respondent’s closest relationship to their second-closest relationship to someone in position 10 or 20, the average person’s willingness to sacrifice declines by about half. By positions 50 through 100, most people will sacrifice only about $10 to bestow $75 on a very distant other. This pattern holds up across multiple studies and subject populations and across disparate cultures. It also holds up whether the money in question is real or hypothetical. Rachlin and Jones’s term for this hyperbolic drop-off, social discounting, refers to the fact that people discount the value of a shared resource as the person with whom it is shared becomes more socially distant.
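For readers who want the functional form behind the phrase “hyperbolic decline,” the curve typically fit in this literature can be sketched as follows; the parameterization is the conventional one rather than a set of values taken from any particular study:

$$v = \frac{V}{1 + kN}$$

Here $V$ is the undiscounted value of the shared reward, $N$ is the other person’s social distance (their position on the respondent’s list), and $k$ is a fitted constant indexing how steeply generosity falls off. Because the decline is hyperbolic rather than, say, linear, the discounted value drops sharply across the closest positions and then flattens out at greater distances, matching the pattern of a steep early drop and a long shallow tail described above.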
Can social discounting help to explain the difference between extraordinary altruists, who really do make enormous sacrifices for very distant others, and everyone else? Obviously, money is not a kidney. Sharing it does not require undergoing general anesthesia or major surgery. But in other ways the task is not a bad parallel to donating a kidney. When a living donor sacrifices their own kidney for another person, their choice to give their extra kidney away means that they place even more value on it when it is shared with another person—who otherwise would have no functioning kidney at all—than they do on keeping it for themselves. Think back to kidney donor Harold Mintz’s question: if your mother was going to die of renal failure tomorrow and your kidney could save her, would you give it to her? If you answered yes, we can say that you would rather sacrifice half of your total renal resources than leave your mother with none. This is exactly the choice that thousands of living donors make every year. Now, what if the person who needs a kidney is your friend, or your boss, or a neighbor? Would you sacrifice half your renal resources so that they could have some rather than none? If this was a harder choice, you have just discounted the value of your shared kidney.
Our data suggest that social discounting helps us understand the real-life choices that altruistic kidney donors make. The kidney donors and controls in our study—who were matched on every variable we could think of, including gender, age, race, average income, education, IQ, even handedness—completed a version of Rachlin and Jones’s social discounting task. Over and over again they made choices about whether they would prefer to keep resources for themselves or share them with close and distant others. Tabulating the results, my student Kruti Vekaria and I first looked at how altruists responded when choosing to sacrifice for the people closest to them. We found that they looked almost exactly like our controls. The data for the two groups overlapped completely, with nearly everyone willing to sacrifice the maximum amount ($85, in this case) to share money with their loved one.