Recall those charts and tables showing the recency of these scientific discoveries. If you believe that starting tonight, at midnight, something will happen and science will stop, that there will be no new publications, findings, or knowledge relevant to this book, that we now know everything there is, then it is clear what one’s stance should be—there are some rare domains where extremes of biological dysfunction cause involuntary changes in behavior, and we’re not great at predicting who undergoes such changes. In other words, the homunculus is alive and well.
But if you believe that any more knowledge will accrue, you’ve just committed to either the view that any evidence for free will ultimately will be eliminated or the view that, at the very least, the homunculus will be jammed into ever tinier places. And with either of those views, you’ve also agreed that something else is virtually guaranteed: that people in the future will look back at us as we look back at purveyors of leeches, bloodletting, and trepanation, at the fifteenth-century experts who spent their days condemning witches, and will think, “My God, the things they didn’t know then. The harm that they did.”
Archaeologists do something impressive, reflecting disciplinary humility. When archaeologists excavate a site, they recognize that future archaeologists will be horrified at their primitive techniques, at the destructiveness of their excavating. Thus they often leave most of a site untouched to await their more skillful disciplinary descendants. For example, astonishingly, more than forty years after excavations began, less than 1 percent of the famed Qin dynasty terra-cotta army in China has been uncovered.
Those adjudicating trials don’t have the luxury of adjourning for a century until we really understand the biology of behavior. But at the very least the system needs the humility of archaeology, a sense that, above all else, we shouldn’t act irrevocably.
But what do we actually do in the meantime? Simple (which is easy for me to say, looking at the legal world from the soothing distance of my laboratory): probably just three things. One is easy, one is very challenging to implement, and the third is nearly impossible.
First the easy one. If you reject free will and the discussion turns to the legal system, the crazy-making, inane challenge that always surfaces is that you’d do nothing about criminals, that they’d be free to walk the streets, wreaking havoc. Let’s trash this one instantly—no rational person who rejects free will actually believes this, would argue that we should do nothing because, after all, the person has frontal damage, or because, after all, evolution selected for the damaging trait when it was once adaptive, or because, after all . . . People must be protected from individuals who are dangerous. The latter can no more be allowed to walk the streets than a car with faulty brakes can be allowed to be driven. Rehabilitate such people if you can; send them to the Island of Misfit Toys forever if you can’t and they are destined to remain dangerous. Josh Greene and Jonathan Cohen of Princeton wrote an extremely clearheaded piece on this, “For the Law, Neuroscience Changes Nothing and Everything.” Where neuroscience and the rest of biology change nothing is in the continued need to protect the endangered from the dangerous.30
Now for the nearly impossible issue, the one that “changes everything”—the issue of punishment. Maybe, just maybe, punishment must at times be used within a behaviorist framework, as part of rehabilitation, part of making recidivism unlikely by fostering expanded frontal capacity. Punishment is also implicit in the very process of denying a dangerous individual their freedom by removing them from society. But precluding free will precludes punishment being an end in and of itself, punishment imagined to “balance” the scales of justice.
It is the punisher’s mind-set where everything must be changed. The difficulty of this is explored in the superb book The Punisher’s Brain: The Evolution of Judge and Jury (2014) by Morris Hoffman, a practicing judge and legal scholar.31 He reviews the reasons for punishment: As we see from game theory studies, because punishment fosters cooperation. Because it is in the fabric of the evolution of sociality. And most important, because it can feel good to punish, to be part of a righteous and self-righteous crowd at a public hanging, knowing that justice is being served.
This is a deep, atavistic pleasure. Put people in brain scanners, give them scenarios of norm violations. Decision making about culpability for the violation correlates with activity in the cognitive dlPFC. But decision making about appropriate punishment activates the emotional vmPFC, along with the amygdala and insula; the more activation, the more punishment.32 The decision to punish, the passionate motivation to do so, is a frothy limbic state. As are the consequences of punishing—when subjects punish someone for making a lousy offer in an economic game, there’s activation of dopaminergic reward systems. Punishment that feels just feels good.
It makes sense that we’ve evolved such that it is limbic froth that is at the center of punishing, and that a pleasurable dopaminergic surge rewards doing so. Punishment is effortful and costly, ranging from forgoing a reward when rejecting a lowball offer in the Ultimatum Game to our tax dollars paying for the dental plan of the prison guard who operates the lethal injection machine. That rush of self-righteous pleasure is what drives us to shoulder the costs. This was shown in one neuroimaging study of economic game play. Subjects alternated between being able to punish lousy offers at no cost and having to spend points they had earned to do so. And the more dopaminergic activation during no-cost punishment, the more someone would pay to punish in the other condition.33
Thus the nearly impossible task is to overcome that. Sure, as I said, punishment would still be used in an instrumental fashion, to acutely shape behavior. But there is simply no place for the idea that punishment is a virtue. Our dopaminergic pathways will have to find their stimulation elsewhere. I sure don’t know how best to achieve that mind-set. But crucially, I sure do know we can do it—because we have before: Once people with epilepsy were virtuously punished for their intimacy with Lucifer. Now we mandate that if their seizures aren’t under control, they can’t drive. And the key point is that no one views such a driving ban as virtuous, pleasurable punishment, believing that a person with treatment-resistant seizures “deserves” to be banned from driving. Crowds of goitrous yahoos don’t excitedly mass to watch the epileptic’s driver’s license be publicly burned. We’ve successfully banished the notion of punishment in that realm. It may take centuries, but we can do the same in all our current arenas of punishment.
Which brings us to the huge practical challenge. The traditional rationales behind imprisonment are to protect the public, to rehabilitate, to punish, and finally to use the threat of punishment to deter others. That last one is the practical challenge, because such threats of punishment can indeed deter. How can deterrence be preserved once punishment is no longer treated as a virtue? The broadest type of solution, making the public believe that imprisonment involves horrific punishments when in reality it doesn’t, is incompatible with an open society. Perhaps the loss of freedom that occurs when a dangerous person is removed from society must be deterrence enough. Perhaps some conventional punishment will still be needed if it is sufficiently deterring. But what must be abolished are the views that punishment can be deserved and that punishing can be virtuous.
None of this will be easy. When contemplating the challenge to do so, it is important to remember that some, many, maybe even most of the people who were prosecuting epileptics in the fifteenth century were no different from us—sincere, cautious, and ethical, concerned about the serious problems threatening their society, hoping to bequeath their children a safer world. They were just operating with an unrecognizably different mind-set. The psychological distance from them to us is vast, separated by the yawning chasm that was the discovery of “It’s not her, it’s her disease.” Having crossed that divide, the distance we now need to go is far shorter—it merely consists of taking that same insight and being willing to see its valid extension in whatever directions science takes us.
The hope is that when it comes to dealing with humans whose behaviors are among our worst and most damaging, words like “evil” and “soul” will be as irrelevant as when considering a car with faulty brakes, that they will be as rarely spoken in a courtroom as in an auto repair shop. And crucially, the analogy holds in a key way, extending to instances of dangerous people without anything obviously wrong with their frontal cortex, genes, and so on. When a car is being dysfunctional and dangerous and we take it to a mechanic, this is not a dualistic situation where (a) if the mechanic discovers some broken widget causing the problem, we have a mechanistic explanation, but (b) if the mechanic can’t find anything wrong, we’re dealing with an evil car; sure, the mechanic can speculate on the source of the problem—maybe it’s the blueprint from which the car was built, maybe it was the building process, maybe the environment contains some unknown pollutant that somehow impairs function, maybe someday we’ll have sufficiently powerful techniques in the auto shop to spot some key molecule in the engine that is out of whack—but in the meantime we’ll consider this car to be evil. Car free will also equals “internal forces we do not understand yet.”*34
Many who are viscerally opposed to this view charge that it is dehumanizing to frame damaged humans as broken machines. But as a final, crucial point, doing that is a hell of a lot more humane than demonizing and sermonizing them as sinners.
POSTSCRIPT: NOW FOR THE HARD PART
Well, so much for the criminal justice system. Now on to the really difficult part, which is what to do when someone compliments your zygomatic arches.
If we deny free will when it comes to the worst of our behaviors, the same must also apply to the best. To our talents, displays of willpower and focus, moments of bursting creativity, decency, and compassion. Logically it should seem as ludicrous to take credit for those traits as to respond to a compliment on the beauty of your cheekbones by thanking the person for implicitly having praised your free will, instead of explaining how mechanical forces acted upon the zygomatic arches of your skull.
It will be so difficult to act that way. I am willing to admit that I have acted egregiously in this regard. My wife and I have brunch with a friend, who serves fruit salad. We proclaim, “Wow, the pineapple is delicious.” “They’re out of season,” our host smugly responds, “but I lucked out and found a decent one.” My wife and I express awestruck worship—“You really know how to pick fruit. You are a better person than we are.” We are praising the host for this supposed display of free will, for the choice made at the fork in life’s road that is pineapple choosing. But we’re wrong. In reality, genes had something to do with the olfactory receptors our host has that help detect ripeness. Maybe our host comes from a people whose deep and ancient cultural values include learning how to feel up a pineapple to tell if it’s good. The sheer luck of the socioeconomic trajectory of our host’s life has provided the resources to prowl an overpriced organic market playing Peruvian folk Muzak. Yet we praise our host.
I can’t really imagine how to live your life as if there is no free will. It may never be possible to view ourselves as the sum of our biology. Perhaps we’ll have to settle for making sure our homuncular myths are benign, and save the heavy lifting of truly thinking rationally for where it matters—when we judge others harshly.
Seventeen
War and Peace
Let’s review some facts. The amygdala typically activates when seeing a face of another race. If you’re poor, by the time you’re five, your frontal cortical development probably lags behind average. Oxytocin makes us crappy to strangers. Empathy doesn’t particularly translate into compassionate acts, nor does refined moral development translate into doing the harder, right thing. There are gene variants that, in particular settings, make you prone toward antisocial acts. And bonobos aren’t perfectly peaceful—they wouldn’t be masters of reconciliation if they didn’t have conflicts to reconcile.
All this makes one mighty pessimistic. Yet the rationale for this book is that, nonetheless, there’s ground for optimism.
Thus this final chapter’s goals are (a) to evidence that things have improved, that many of our worst behaviors are in retreat, our best ones ascendant; (b) to examine ways to improve this further; (c) to derive emotional support for this venture, to see that our best behaviors can occur in the most unlikely circumstances; (d) and finally, to see if I can actually get away with calling this chapter “War and Peace.”
SOMEWHAT BETTER ANGELS
When it comes to our best and worst behaviors, the world is astonishingly different from that of the not-so-distant past. At the dawn of the nineteenth century, slavery occurred worldwide, including in the colonies of a Europe basking in the Enlightenment. Child labor was universal and would soon reach its exploitative golden age with the Industrial Revolution. And there wasn’t a country that punished mistreatment of animals. Now every nation has outlawed slavery, and most attempt to enforce that; most have child labor laws, rates of child labor have declined, and it increasingly consists of children working alongside their parents in their homes; most countries regulate the treatment of animals in some manner.
The world is also safer. Fifteenth-century Europe averaged 41 homicides per 100,000 people per year. Currently only El Salvador, Venezuela, and Honduras, at 62, 64, and 85, respectively, are worse; the world averages 6.9, Europe averages 1.4, and there are Iceland, Japan, and Singapore at 0.3.
Here are things that are rarer in recent centuries: Forced marriages, child brides, genital mutilation, wife beating, polygamy, widow burning. Persecution of homosexuals, epileptics, albinos. Beating of schoolchildren, beating of beasts of burden. Rule of a land by an occupying army, by a colonial overlord, by an unelected dictator. Illiteracy, death in infancy, death in childbirth, death from preventable disease. Capital punishment.
Here are things invented in the last century: Bans on the use of certain types of weapons. The World Court and the concept of crimes against humanity. The UN and the dispatching of multinational peacekeeping forces. International agreements to hinder trafficking of blood diamonds, elephant tusks, rhino horns, leopard skins, and humans. Agencies that collect money to aid disaster victims anywhere on the planet, that facilitate intercontinental adoption of orphans, that battle global pandemics and send medical personnel to any place of conflict.
Yes, I know, I’m an utter naïf if I think laws are universally enforced. For example, in 1981 Mauritania became the last country to ban slavery; nevertheless, today roughly 20 percent of its people are slaves, and the government has prosecuted a total of one slave owner.1 I recognize that little has changed in many places; I have spent decades in Africa living around people who believe that epileptics are possessed and that the organs of murdered albinos have healing powers, where beating of wives, children, and animals is the norm, five-year-olds herd cattle and haul firewood, pubescent girls are clitoridectomized and given to old men as third wives. Nonetheless, worldwide, things have improved.
The definitive account of this is Pinker’s monumental The Better Angels of Our Nature: Why Violence Has Declined.2 It’s a scholarly work that’s gut-wrenchingly effective in documenting just how bad things once were. Pinker graphically describes the appalling historical inhumanity of humans. Roughly half a million people died in the Roman Colosseum to supply audiences of tens of thousands the pleasure of watching captives raped, dismembered, tortured, eaten by animals. Throughout the Middle Ages, armies swept across Eurasia, destroying villages, killing every man, consigning every woman and child to slavery. Aristocracy accounted for a disproportionate share of violence, savaging peasantry with impunity. Religious and governmental authorities, ranging from Europeans to Persians, Chinese, Hindus, Polynesians, Aztecs, Africans, and Native Americans, invented means of torture. For a bored sixteenth-century Parisian, entertainment might consist of a cat burning, execution of a “criminal” animal, or bearbaiting, where a bear, chained to a post, would be torn apart by dogs. It is a sickeningly different world; Pinker quotes the writer L. P. Hartley: “The past is a foreign country: they do things differently there.”
Better Angels has provoked three controversies:
Why Were People So Awful Then?
For Pinker the answer is clear. Because people had always been so awful. This is chapter 9’s debate—when was war invented, was ancestral hunter-gatherer life about Hobbes or Rousseau? As we saw, Pinker is in the camp holding that organized human violence predates civilization, stretching back to our last common ancestor with chimps. And as reviewed, most experts convincingly disagree, suggesting that data have been cherry-picked, hunter-horticulturalists mislabeled as hunter-gatherers, and newfangled sedentary hunter-gatherers inappropriately grouped with traditional nomadic ones.
Why Have People Gotten Less Awful?
Pinker’s answer reflects two factors. He draws on the sociologist Norbert Elias, whose notion of the “civilizing process” centered on the fact that violence declines when states monopolize force. That is coupled with spread of commerce and trade, fostering realpolitik self-restraint—recognizing that it’s better to have this other person alive and trading with you. Their well-being begins to matter, prompting what Pinker calls an “escalator of reasoning”—an enlarged capacity for empathy and Us-ness. This underlies the “rights revolution”—civil rights, women’s rights, children’s rights, gay rights, animal rights. This view is a triumph of cognition. Pinker yokes this to the “Flynn effect,” the well-documented increase in average IQ over the last century; he invokes a moral Flynn effect, as increasing intelligence and respect for reasoning fuel better Theory of Mind and perspective taking and an increased ability to appreciate the long-term advantages of peace. In the words of one reviewer, Pinker is “not too fainthearted to call his own culture civilized.”3