It is only later that the mind returns to the other men in the picture. The faceless ones, automatons with guns, the members of the firing squad. Although their pull on the viewer is more gradual than that of the doomed men in the light, it is just as powerful. For those French soldiers were the agents of one of the most magnetic of human dramas, the taking of life. How could they do it? What did they feel? Did they feel? Perhaps they were evil, or perhaps they were coerced. Or perhaps they truly believed in the necessity of their actions.
One of the great horrors of a world that provides its Goyas with endless such scenes to paint is the possibility that after the smoke clears and the corpses are removed, the executioners give the executions no thought whatever. More likely, though, at least some of those faceless men do reflect on their work, and do feel guilt over it or at least fear that others will someday judge them as well. In fact, such remorse seems pervasive enough that firing squads have evolved in a way that, in some circumstances, accommodates it. Central to that evolution are some subtly distorting aspects of human cognition that have much to do with how people go about killing people, how they judge people, and how they set priorities for their resources. And oddly, such cognitive distortions may have something to do with how some of us feel about doing science.
Underpinning these suggestions is work done over the past twenty years, principally by two psychologists, the late Amos N. Tversky of Stanford University and Daniel Kahneman of Princeton University. Tversky and Kahneman have shown that one can present people with two choices that, in terms of formal logic, are equivalent, yet one choice may be strongly and consistently preferred and may carry an emotional weight altogether different from that of the other choice. On the other hand, one can offer people two logically disparate choices, yet they may be seen as equivalent. Such apparent contradictions could come about in several ways: they could be caused by some commonly held cognitive biases identified by Tversky and Kahneman; they could be the outcome of the way the choices are presented; or they could be a reflection of the makeup of each individual.
Consider this scenario: You are told about a young woman who in college was much involved in leftist and progressive causes, a true social activist, committed and concerned. You are then told that the young woman has since gone on to one of four careers, and you are asked to rate the likelihood of her having pursued each one: (1) an organizer of farmworkers (good chance, you say); (2) a bank teller (seems highly unlikely); (3) an environmental activist (another likely outcome); and (4) a bank teller who is active in the feminist movement (well, surely that is much more likely than her having become a mere bank teller; at least she’s politically involved some of the time). Thus you rate choice 4 as being more likely than choice 2. Logically, however, it is impossible for choice 4, which features two constraints (bank teller and activist), to be more likely than choice 2, which has only one of those two constraints.
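To make the slip explicit, here is the scenario in bare probability notation (my gloss, not Tversky and Kahneman's):

\[
P(\text{teller and feminist}) \;=\; P(\text{teller}) \times P(\text{feminist} \mid \text{teller}) \;\le\; P(\text{teller}),
\]

since no conditional probability can exceed 1. A conjunction can never be more probable than either of its parts, no matter how vivid the added detail makes the story.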
Much of Tversky and Kahneman’s work has gone into figuring out why people sometimes make that distortion, embedding something more likely inside something less likely. Another aspect of their work identifies ways people regard scenarios as unequal when in fact they are formally equivalent:
You are a physician and you have a hundred sick people on your hands. If you perform treatment A, twenty people will die. If you opt for treatment B, everyone has a 20 percent chance of dying. Which one do you choose?
The alternative scenario runs like this:
You have a hundred sick people. If you perform treatment A, eighty people will live. If you opt for treatment B, everyone has an 80 percent chance of living. Which one do you choose?
The two scenarios are formally identical; they are merely framed differently. But it turns out that for the first scenario, which states things in terms of death, people will prefer option B, whereas for the second scenario, stated in terms of survivorship, people prefer option A. Thus when thinking about life, people prefer certainty; when thinking about death, they prefer odds, because it is always conceivable the odds can be beaten.
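The equivalence is a matter of simple expected values (a back-of-the-envelope check, using the numbers from the scenarios above):

\[
\underbrace{20}_{\text{A: certain deaths}} \;=\; \underbrace{100 \times 0.20}_{\text{B: expected deaths}}, \qquad
\underbrace{80}_{\text{A: certain survivors}} \;=\; \underbrace{100 \times 0.80}_{\text{B: expected survivors}}.
\]

The arithmetic is identical in both frames; only the labels on it change.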
What if Tversky and Kahneman’s scheme were applied to an early-nineteenth-century militia bent upon taking fatal revenge on some prisoners? At the time, a single shot fired from a moderate distance was often not enough to kill a person. Thus, some options: one man could stand at a distance and shoot at the prisoner five times; or five men could stand at the same distance and shoot at the prisoner once each.
Five-times-one and one-times-five are formally equivalent. So why did the firing squad evolve? I suspect it had something to do with a logical distortion it allowed each participant: if it takes one shot from each of five men to kill a prisoner, each participant has killed only one-fifth of a man. And on some irrational level, it is far easier then to decide that you have not really killed someone or, if you possess extraordinary powers of denial, that you have not even contributed to killing someone.
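Under the simplest possible model (independent shots, each lethal with probability \(p\); the model is mine, not part of the historical record), the prisoner's odds are identical either way:

\[
P(\text{death}) \;=\; 1 - (1 - p)^5,
\]

whether the five shots come from one rifle or from five. What the squad changes is only the bookkeeping of guilt: \(5 \times \tfrac{1}{5} = 1\) killing, however finely the fraction is sliced.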
Why do I think the firing squad was an accommodation to guilt, to the perception of guilt, and to guilty consciences? Because of an even more intriguing refinement in the art of killing people. By the middle of the nineteenth century, when a firing squad assembled, it was often the case that one man’s rifle would be loaded at random with a blank cartridge. Whether each member of the squad could tell if he had fired the blank—by the presence or absence of a recoil at the moment of shooting—was irrelevant. Each man could go home that night with the certainty that he could never be accused, for sure, of having played a role in the killing.
Of course, the firing squad wasn’t always the chosen method of execution, and that is where Tversky and Kahneman’s work says something about the emotional weight of particular killings. If civilians were being killed—members of an unruly urban populace rising up in threatening protest—a single close-range shot to the head, a bayoneting, or any such technique would be suitable. If the victim was some nondescript member of the enemy army, a firing squad was assembled in which there was rarely a random blank. But if, by the addled etiquette of nineteenth-century warfare, the victim was someone who mattered, an accomplished and brave officer of the enemy or a comrade turned traitor, the execution would be a ceremony filled with honorifics and ambivalence.
For example, military law for the Union army during the American Civil War specified the rules of executing a traitor or deserter. Explicit instructions were given to cover a range of details: how the enlisted men in attendance should be lined up to ensure that everyone would see the shooting; the order in which the execution party marched in; the music to be played at various points by the band of the prisoner’s regiment—and exactly when the provost marshal was supposed to put a blank into one of the guns, out of sight of the firing squad.
It all seems so quaint. But the same traditions persist today. In the American states that allow executions, lethal injection is fast becoming the method of choice. In states more “backward” about the technology of execution, the killing is done by hand. But among the cutting-edge states, a $30,000 lethal-injection machine is used. Its benefits, extolled by its inventor at the wardens’ conventions he frequents, include dual sets of syringes and dual stations with switches for two people to throw at the same time. A computer with a binary-number generator randomizes which syringe is injected into the prisoner and which ends up in a collection vial—and then erases the decision. The state of New Jersey even stipulates the use of execution technology with multiple stations and a means of randomization. No one will ever know who really did it, not even the computer.
Such rites of execution are part of a subtle cognitive game. The formal set of circumstances exemplified by the multiple executioners who contribute to a singular execution has challenged societies in other forms. If five people must shoot to kill a man, then on some logical level no one of the shooters is a murderer. A different version of that principle is central to the idea of causation as taught in the law schools. Suppose two men start fires simultaneously, at opposite ends of a property. The fires merge and burn down the property. Who is responsible for the damage? The logic adopted by nineteenth-century American courts would have given solace to any participant in a firing squad. Each of the two arsonous defendants can correctly make the same point: If I had not set the fire, the property would still have burned. So how can I be guilty? Both of them would have walked free.
By 1927 a different interpretation rose out of a landmark court decision, Kingston v. Chicago and NW Railroad. The case did indeed begin with two fires—one caused by a locomotive and the other of unknown origin; the fires then joined to burn down the plaintiff’s property. Before Kingston, judges would have ruled that the guilt of the singular burning of a property could not be distributed among multiple parties. But in the Kingston case, the courts declared for the first time that guilt for a singular burning, a singular injury, a singular killing, could be distributed among contributing parties. The decision prompted an almost abashed disclaimer from the judge: were the railroad able to get off free, he said, “the injustice of such a doctrine sufficiently impeaches the logic upon which it is founded.” It was as if he had to apologize for being compassionate instead of logical.
His decision, however, was no more or less logical than the earlier ones that would have freed the railroad of wrongdoing. Again, it is a matter of Tverskian and Kahnemanesque framing: two fires join and burn down a property. The issue of guilt depends on whether the judges are more emotionally responsive to the defendants (“If I hadn’t been there, the place would still have burned. How could I be guilty?”), or more responsive to the plaintiff (“My home was burned by these people”). Daniel J. H. Greenwood of the University of Utah College of Law, who has thought long and hard about the 1927 philosophical transition, thinks the change reflects the general social progressiveness of the time: it shed some vestiges of the legal thought that had reflected the interests of robber barons and trusts. Judges began to frame cases logically in a way that would make them think first of the individual and damaged plaintiff, and that made it more difficult for anyone to hide behind corporate aggregateness. Suddenly it was possible for a number of people to be statistically guilty for a singular event.
The act of placing a blank among the bullets for the firing squad is even subtler in its cognitive implications. For example, on what occasions are people allowed the comfort of a metaphorical blank? Criminal law in the United States requires the unanimity of twelve jurors in a decision. You can bet that when the all-white jury acquitted the police in the first Rodney King trial, the jurors desperately wished for a system that would have allowed them to vote eleven-to-one for acquittal. Each one could then have hinted to the enraged world that he or she had been the innocent in that travesty. Yet the system does not allow such blanks, probably because the desire for perceived unanimity when the State forces its citizens to judge one another outweighs the desire of the State to protect the citizens who do the judging.
An even more intriguing aspect of the metaphorical blank is what one does with it cognitively. When a member of the two-man lethal-injection team goes home the night after the execution—and assuming he has a twinge of conscience—he does not think, “I am a killer,” or “Today, I contributed to a killing,” or even “I have a 50 percent chance of having helped kill someone today.” More likely, he would frame the same logic in terms of statistical innocence: “I have a 50 percent chance of not having helped kill someone.” Or he might even find a way of rationalizing a fraction of innocence into an integer: “One of us didn’t really do it. Why shouldn’t it be me?” Or: “I know what the dummy button feels like. It was mine.” Given the duality of perceiving the event as one of fractional guilt or fractional innocence, people not only bias toward the latter, but also are replete with clever rationalizations that distort the matter further until it is the certain integer of innocence. In other circumstances the bias goes the other way. An archetypally villainous industrialist, bloated and venal, contemplates a new profitable venture for his factory. His minions of advisers, on the basis of their calculations, tell him that the toxins his factory will dump into the drinking water will probably lead to three cancer deaths in his town of 100,000. Naturally he decides to go ahead with his remunerative plans. But that night he surely doesn’t think in integers: “Today, for profit, I have consigned three innocents to their deaths.” Instead, all his cognitive biases lead him to frame the matter in terms of statistical guilt: “All it does is increase the cancer risk .003 percent for each person. Trivial. Charles, you may serve me my dinner.”
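The industrialist’s arithmetic, for what it is worth, is internally consistent (my check, not his):

\[
\frac{3 \text{ deaths}}{100{,}000 \text{ people}} \;=\; 0.00003 \;=\; 0.003\%,
\]

which is precisely the point: the same three integer deaths, restated as a fractional risk, no longer feel like killing anyone.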
Tversky uses the term “tendencies” to describe the practice of conceiving guilt and innocence in fractions. He reserves the term “frequencies” to describe the bias toward viewing the same facts in integers. If one has done something bad, it is no accident that one thinks fractionally—or, statistically speaking, in terms of distributed tendencies, a world without complete faces. If one has done something good, the pull is toward frequencies and integers.
Thinking in frequencies is also the easiest way of getting someone to consider another person’s pains. It is overwhelmingly likely that if it is ever decided in the courts to hold tobacco companies responsible for killing endless numbers of people, the decision will not be a result of a class action suit, with claims of distributed pain. Rather, it will be because a jury will understand the pain of one individual who personifies being killed by cigarettes. “People think less extensionally when they think of tendencies than when they think of frequencies,” says Tversky. In other words, as every journalist knows, empathy is grounded in a face, an individual story, a whole number of vulnerability.
The work of Tversky and Kahneman teaches scientists all kinds of lessons about cognitive pitfalls, including the fairly obvious ones people confront as they struggle with problems that invariably have multiple causes. For example, is the causality distributed among so many agents that people cannot perceive it? Do people distort their work with a cognitive bias toward finding a single magic bullet? We who do science all could learn a thing or two from Tversky and Kahneman. But there is an aspect of their work that tugs at my emotions. It comes from another of their scenarios:
In a population, all the deaths are attributable to two diseases, each accounting for half the deaths. You have a choice. You can discover something that cures all the known cases of one disease, or you can discover something that cures half the cases of each disease.
By now it should be clear that the two options are formally equivalent: 1 × 1/2 is equal to (1/2 × 1/2) + (1/2 × 1/2). Yet people show a strong bias toward curing all the cases of one disease. That kind of integer satisfies a sense of closure. In spite of our poetry and abstract theorems and dreams, our cognitive pull is toward being concrete and tangible—cross that disease off the list of things to worry about. And as a scientist, that is what plagues me emotionally.
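Spelled out, with each disease accounting for half of all deaths:

\[
1 \times \tfrac{1}{2} \;=\; \tfrac{1}{2}, \qquad
\left(\tfrac{1}{2} \times \tfrac{1}{2}\right) + \left(\tfrac{1}{2} \times \tfrac{1}{2}\right) \;=\; \tfrac{1}{4} + \tfrac{1}{4} \;=\; \tfrac{1}{2}.
\]

Either choice prevents exactly half of the deaths; only one of them feels like finishing something.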
In 1977 a group of biomedical scientists from the World Health Organization inoculated the populace of a town called Merca, in the hinterlands of Somalia, and thereby accomplished something extraordinary: they eradicated the last known case of smallpox on this planet. I think of that moment often, and always with the envy of knowing I will never accomplish something like that. I had a similar sense when I was a postdoctoral fellow at the Salk Institute for Biological Studies. I would occasionally see Jonas Salk at seminars and feel in awe of him, a scientist who knew an extraordinary closure.
I will never achieve that sense, not just because I am not as skillful a biomedical scientist, but also because of the way science is typically done these days. Now the scientific arena is one in which a coherent picture of a problem emerges from the work of teams of people in dozens of laboratories, in which diseases often have multiple causes, in which biological messengers have multiple effects, and in which a long, meandering route of basic research might eventually lead to clinical trials. The chances that one person can single-handedly bring closure to a biological research problem grow ever more remote.
My own research is at an extreme of distributed causality. In my laboratory, I study how stress, and a class of hormones secreted during stress, can endanger neurons in the brain and make those cells less likely to survive neurological disasters such as cardiac arrest and seizure. In other words, I do not study so much how stress can damage the brain as how it may exacerbate the toxicity of neurological insults. It is by no means clear yet whether things really work that way, let alone how important a variable stress is. But if every scientific fantasy of mine comes true—if, inconceivably, every experiment I go near works perfectly—I will be able to demonstrate that stress is an exacerbating factor in the neurological damage that affects hundreds of thousands of people each year. And if every crackpot therapeutic idea I have works, the new knowledge will lead to a way of decreasing the brain damage, at least a little bit, in each of those people.
Even in my fantasy world, though, stress would be only one of many statistical villains. It may be a factor in many diseases that cause brain damage, but, at best, it is only a small factor; its importance would be distributed over the huge number of cases. With deep knowledge of the problem, the best I could hope for is to bring about statistical good—in effect, to save a hundredth or a thousandth of a life here and there. That would be wonderful, indeed. Still, there is the pull of integers, the pull of whole numbers.
Few scientists will ever save or solve in integers rather than in fractions. Saving in integers is the realm of clinical medicine; there one deals with one person at a time. Moreover, saving lives in integers is the realm of an era of science that for the most part is past, when the lone investigator could conceivably vanquish a disease. Instead, in the present world we scientists have factors that modulate and synergize and influence and interact, but rarely cause. It is abundantly true that 1 × 1/2 is equal to (1/2 × 1/2) + (1/2 × 1/2). But we have a cognitive bias toward the former, and we are in a profession that must exalt extreme versions of the latter.