The Most Human Human
That core, that essence, that meaning, seems to have migrated in the past few millennia, from the whole body to the organs in the chest (heart, lungs, liver, stomach) to the one in the head. Where next?
Consider, for instance, the example of the left and right hemispheres.
The human brain is composed of two distinct “cerebral hemispheres” or “half brains”: the left and the right. These hemispheres communicate via an extremely “high bandwidth” “cable”—a bundle of roughly 200 million axons called the corpus callosum. With the exception of the data being ferried back and forth across the corpus callosum, the two halves of our brain operate independently—and rather differently.
The Split Brain
So: where are we?
Nowhere is this question raised more shockingly and eerily than in the case of so-called “split-brain” patients, whose hemispheres—usually as a result of surgical procedures aimed at reducing seizures—have been separated and can no longer communicate. “Joe,” a split-brain patient, says, “You know, the left hemisphere and right hemisphere, now, are working independent of each other. But, you don’t notice it … It doesn’t feel any different than it did before.”
It’s worth considering that the “you”—in this case, a rhetorical stand-in for “I”—no longer applies to Joe’s entire brain; the domain of that pronoun has shrunk. It now refers only to the left hemisphere, which happens to be the dominant hemisphere for language. Only that half, you might say, is speaking.
At any rate, Joe is telling us that “he”—or, his left hemisphere—doesn’t notice anything different. But things are different, says his doctor, Michael Gazzaniga. “What we can do is play tricks, by putting information into his disconnected, mute, non-talking right hemisphere, and watch it produce behaviors. And out of that, we can really see that there is, in fact, reason to believe that there’s all kinds of complex processes going on outside of his conscious awareness of his left half-brain.”
In one of the more eerie experiments, Gazzaniga flashes two images—a hammer and a saw—to different parts of Joe’s visual field, such that the hammer image goes to his left hemisphere and the saw to his right. “What’d you see?” asks Gazzaniga.
“I saw a hammer,” Joe says.
Gazzaniga pauses. “So, just close your eyes, and draw with your left hand.” Joe picks up a marker with his left hand, which is controlled by his right hemisphere. “Just let it go,” says Gazzaniga. Joe’s left hand draws a saw.
“That looks nice,” says Gazzaniga. “What’s that?”
“Saw?” Joe says, slightly confused.
“Yeah. What’d you see?”
“Hammer.”
“What’d you draw that for?”
“I dunno,” Joe, or at any rate his left hemisphere, says.
In another experiment, Gazzaniga flashes a chicken claw to a split-brain patient’s “talking” left hemisphere and a snowbank to the “mute” right hemisphere. The patient draws a snow shovel, and when Gazzaniga asks him why he drew a shovel, he doesn’t hesitate or shrug. Without missing a beat, he says, “Oh, that’s simple. The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed.” Of course, this explanation is completely false.
The left hemisphere, it seems, is constantly drawing cause-and-effect inferences from experience, constantly attempting to make sense of events. Gazzaniga dubs this module, or whatever it is exactly, the “interpreter.” The interpreter, split-brain patients show us, has no problem and no hesitation confabulating a false causation or a false motive. Actually, “lying” is putting it too strongly—it’s more like “confidently asserting its best guess.” Without access to what’s happening in the right hemisphere, that guess can sometimes be purely speculative, as in this case. What’s fascinating, though, is that this interpreter doesn’t necessarily even get it right all the time in a healthy brain.
To take a random example: a woman undergoing a medical procedure had her “supplementary motor cortex” stimulated electrically, producing uncontrollable laughter. But instead of being bewildered by this inexplicable outburst, she acted as if anyone in her position would have cracked up: “You guys are just so funny standing around!”
I find it so tragic that when an infant cries, the parents sometimes have no idea what might be the cause of the cry—hunger? thirst? dirty diaper? fatigue? If only the child could tell them! But no, they must simply run down the list—here’s some food, no, still crying, here’s a new diaper, no, still crying, here’s your blanket, maybe you need a nap, no, still crying … But it occurs to me that this also describes my relationship to myself. When I am in a foul mood, I think, “How’s my work going? How’s my social life? How’s my love life? How much water have I had today? How much coffee have I had today? How well have I been eating? How much have I been exercising? How have I been sleeping? How’s the weather?” And sometimes that’s the best I can do: eat some fruit, jog around the neighborhood, take a nap, and on and on till the mood changes and I think, “Oh, I guess that was it.” I’m not much better than the infant.
Once, in graduate school, after making what I thought was not a trivial, but not a particularly major, life decision, I started to feel kind of “off.” The more off I felt, the more I started to rethink my decision, and the more I rethought the decision—this was on the bus on the way to campus—the more I started feeling nauseous, sweaty, my blood running hot and cold. “Oh my God!” I remember thinking. “This is actually a much bigger deal than I thought!” No, it was simply that I’d caught the stomach flu going around the department that month.
You see this happen—“misattribution”—in all sorts of fascinating studies. For instance: researchers have shown that people look more attractive to you when you’re walking across a suspension bridge or riding a roller coaster. Apparently, the body generates all this jitteriness, which is actually fear, but the rational mind says something to the effect of, “Oh, butterflies in the stomach! But obviously there’s nothing to be afraid of from a silly little roller coaster or bridge—they’re completely safe. So it must be the person standing next to me that’s got me all aflutter …” In a Canadian study, a woman gave her number to male hikers either just before they reached the Capilano Suspension Bridge or in the middle of the bridge. Those who met her on the bridge were twice as likely to call and ask for a date.
Someone who can put together extremely compelling reasons for why they did something can get themselves out of hot water more often than someone at a loss for why they did it. But just because a person gives you a sensible explanation for a strange or objectionable behavior, and gives it honestly, doesn’t mean the explanation is correct. And the ability to spackle something plausible into the gap between cause and effect doesn’t make the person any more rational, or responsible, or moral, even though we’ll pretty consistently judge them so.
Says Gazzaniga, “What Joe, and patients like him, and there are many of them, teaches us, is that the mind is made up of a constellation of independent, semi-independent, agents. And that these agents, these processes, carry on a vast number of activities outside of our conscious awareness.”
“Our conscious awareness”—our! The implication here (which Gazzaniga later confirms explicitly) is that Joe’s “I” pronoun may have always referred mostly, and primarily, to his left hemisphere. So, he says, do ours.
Hemispheric Chauvinism: Computer and Creature
“The entire history of neurology and neuropsychology can be seen as a history of the investigation of the left hemisphere,” says neurologist Oliver Sacks.
One important reason for the neglect of the right, or “minor,” hemisphere, as it has always been called, is that while it is easy to demonstrate the effects of variously located lesions on the left side, the corresponding syndromes of the right hemisphere are much less distinct. It was presumed, usually contemptuously, to be more “primitive” than the left, the latter being seen as the unique flower of human evolution. And in a sense this is correct: the left hemisphere is more sophisticated and specialised, a very late outgrowth of the primate, and especially the hominid, brain. On the other hand, it is the right hemisphere which controls the crucial powers of recognising reality which every living creature must have in order to survive. The left hemisphere, like a computer tacked onto the basic creatural brain, is designed for programs and schematics; and classical neurology was more concerned with schematics than with reality, so that when, at last, some of the right-hemisphere syndromes emerged, they were considered bizarre.
The neurologist V. S. Ramachandran echoes this sentiment:
The left hemisphere is specialized not only for the actual production of speech sounds but also for the imposition of syntactic structure on speech and for much of what is called semantics—comprehension of meaning. The right hemisphere, on the other hand, doesn’t govern spoken words but seems to be concerned with more subtle aspects of language such as nuances of metaphor, allegory and ambiguity—skills that are inadequately emphasized in our elementary schools but that are vital for the advance of civilizations through poetry, myth and drama. We tend to call the left hemisphere the major or “dominant” hemisphere because it, like a chauvinist, does all the talking (and maybe much of the internal thinking as well), claiming to be the repository of humanity’s highest attribute, language.
“Unfortunately,” he explains, “the mute right hemisphere can do nothing to protest.”
Slightly to One Side
This odd focus on, and “dominance” of, the left hemisphere, says arts and education expert (and knight) Sir Ken Robinson, is evident in the hierarchy of subjects within virtually all of the world’s education systems:
At the top are mathematics and languages, then the humanities, and at the bottom are the arts. Everywhere on Earth. And in pretty much every system too, there’s a hierarchy within the arts. Art and music are normally given a higher status in schools than drama and dance. There isn’t an education system on the planet that teaches dance every day to children the way we teach them mathematics. Why? Why not? I think this is rather important. I think math is very important, but so is dance. Children dance all the time if they’re allowed to; we all do. We all have bodies, don’t we? Did I miss a meeting? Truthfully, what happens is, as children grow up, we start to educate them progressively from the waist up. And then we focus on their heads. And slightly to one side.
That side, of course, being the left.
The American school system “promotes a catastrophically narrow idea of intelligence and ability,” says Robinson. If the left hemisphere, as Sacks puts it, is “like a computer tacked onto the basic creatural brain,” then by identifying ourselves with the goings-on of the left hemisphere, by priding ourselves on it and “locating” ourselves in it, we start to regard ourselves, in a manner of speaking, as computers. By better educating the left hemisphere and better valuing and rewarding and nurturing its abilities, we’ve actually started becoming computers.
Rational Agents
You see the same left-hemisphere bias in the field of economics. Emotions are considered barnacles on the smooth hull of the mind. Decisions should be made, to the greatest extent possible, in their absence—and, as much as possible, calculatingly, even algorithmically.
“If you had asked Benjamin Franklin, ‘How should I go about making up my mind?’ ” says Baba Shiv of the Stanford Graduate School of Business, “what he would have advised you to do is, list down all of the positives and all the negatives of your present option, list down all of the positives and all the negatives of the alternative that you have. And then choose that option that has got the greatest number of positives and the least number of negatives.”
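Shiv’s paraphrase is mechanical enough to be written out in a few lines of code. What follows is a minimal sketch, assuming a bare tally of pros minus cons (Franklin’s own “moral algebra” also weighted and cancelled entries against each other); the job-offer options are invented purely for illustration.

```python
def franklin_choice(options):
    """Pick the option with the most positives and the fewest negatives.

    `options` maps each option's name to a (positives, negatives) pair of lists.
    This is the bare tally Shiv describes; note that it has no way to break a
    tie, which is exactly the situation the chapter turns to later.
    """
    return max(options, key=lambda name: len(options[name][0]) - len(options[name][1]))


# Hypothetical example: weighing two job offers.
offers = {
    "offer A": (["higher salary", "shorter commute"], ["less interesting work"]),
    "offer B": (["more interesting work"], ["long commute", "lower salary"]),
}
print(franklin_choice(offers))  # -> offer A (2 pros, 1 con beats 1 pro, 2 cons)
```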
This analytical, emotionless notion of ideal decision making was codified into the “rational agent” model of economic theory. The model consumer or investor, it reckoned, would somehow have access to all possible information about the market, instantly distill it all, and make the perfect choice. Shockingly, real markets, and real investors and consumers, don’t work this way.
But even when economists came to recognize that omniscient rationality was not the right model to use, they seemed more interested in treating this as a shortfall than as a boon. Consider 2008’s Predictably Irrational, in which behavioral economist Dan Ariely argues against the rational-agent model by highlighting the various human behaviors that don’t accord with it. A victory for re-assimilating the various neglected and denigrated capacities of the self? A glance at the jacket blurbs is enough to produce a resounding no, revealing the light in which we are meant to read these deviations from economic theory. “How we can prevent being fooled,” says Jerome Groopman, Recanati Professor of Medicine at Harvard Medical School. “The weird ways we act,” says business writer James Surowiecki. “Foibles, errors, and bloopers,” says Harvard psychologist Daniel Gilbert. “Foolish, and sometimes disastrous, mistakes,” says Nobel laureate in economics George Akerlof. “Managing your emotions … so challenging for all of us … can help you avoid common mistakes,” says financial icon Charles Schwab.12
Now, some of what passes for “irrationality” in traditional “rational” economics is simply bad science, cautions Daniel Kahneman, Nobel laureate from Princeton. For instance, given a choice between a million dollars and a 50 percent chance of winning four million dollars, the “rational” choice is “obviously” the latter, whose “expected outcome” is two million dollars, double the first offer. Yet most people say they would choose the former—fools! Or are they? It turns out to depend on how wealthy you are: the richer you are, the more inclined you are toward the gamble. Is this because wealthier people are (as demonstrated by being rich) more logical? Is this because less wealthy people are blinded by an emotional reaction to money? Is it because the brain is, tragically, more averse to loss than excited by gain? Or perhaps the wealthy person who accepts the gamble and the less wealthy person who declines it are, in fact, choosing completely appropriately in both cases. Consider: a family deep into debt and about to default on their home could really use that first million; the added three million would be icing on the cake but wouldn’t change much. The “quadruple or nothing” offer just isn’t worth betting the farm—literally. Whereas for a billionaire like Donald Trump, a million bucks is chump change, and he’ll probably take his chances, knowing the odds favor him. The two choose differently—and both choose correctly.
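A minimal sketch of that reasoning, assuming a logarithmic utility of wealth (a standard stand-in for diminishing returns; the log assumption and the sample wealth levels are illustrative, not Kahneman’s): under it, an extra million simply buys less the more you already have, and the crossover between preferring the sure million and preferring the gamble lands near $500,000 of existing wealth.

```python
import math

def utility(wealth_in_millions):
    """Log utility: a simple model of diminishing returns on wealth."""
    return math.log(wealth_in_millions)

def take_the_sure_million(wealth):
    return utility(wealth + 1)

def take_the_gamble(wealth):
    """A 50 percent chance of four million, a 50 percent chance of nothing."""
    return 0.5 * utility(wealth + 4) + 0.5 * utility(wealth)

# A family close to default, a modestly comfortable one, and a billionaire.
# Under log utility the gamble starts to win once existing wealth exceeds $0.5M.
for wealth in (0.1, 1.0, 1000.0):
    sure, gamble = take_the_sure_million(wealth), take_the_gamble(wealth)
    choice = "gamble" if gamble > sure else "sure million"
    print(f"wealth ${wealth}M: sure={sure:.3f}, gamble={gamble:.3f} -> take the {choice}")
```

Same offer, different wealth, different choices, and under these assumptions both are equally defensible.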
At any rate, examples like this one aside, the prevailing attitude seems clear: economists who subscribe to the rational-choice theory and those who critique it (in favor of what’s known as “bounded rationality”) both think that an emotionless, Spock-like approach to decision making is demonstrably superior. We should all aspire to throw off our ape ancestry to whatever extent we can—alas, we are fallible and will still make silly emotion-tinged “bloopers” here and there.
For centuries this has been, and by and large continues to be, the theoretical mainstream: not just economics but Western intellectual history at large is full of examples of the creature needing the computer. But examples of the reverse, of the computer needing the creature, have been much rarer and more marginal—until lately.
Baba Shiv says that as early as the 1960s and ’70s, evolutionary biologists began to ask—well, if the emotional contribution to decision making is so terrible and detrimental, why did it develop? If it was so bad, wouldn’t we have evolved differently? The rational-choice theorists, I imagine, would respond by saying something like “we’re on our way there, but just not fast enough.” In the late ’80s and through the ’90s, says Shiv, neuroscientists “started providing evidence for the diametric opposite viewpoint” to rational-choice theory: “that emotion is essential for and fundamental to making good decisions.”
Shiv recalls a patient he worked with “who had an area of the emotional brain knocked off” by a stroke. After a day of doing some tests and diagnostics for which the patient had volunteered, Shiv offered him a free item as a way of saying “thank you”—in this case, a choice between a pen and a wallet. “If you’re faced with such a trivial decision, you’re going to examine the pen, examine the wallet, think a little bit, grab one, and go,” he says. “That’s it. It’s non-consequential. It’s just a pen and a wallet. This patient didn’t do that. He does the same thing that we would do, examine them and think a little bit, and he grabs the pen, starts walking—hesitates, grabs the wallet. He goes outside our office—comes back and grabs the pen. He goes to his hotel room—believe me: inconsequential a decision!—he leaves a message on our voice-mail mailbox, saying, ‘When I come tomorrow, can I pick up the wallet?’ This constant state of indecision.”
USC professor and neurologist Antoine Bechara had a similar patient, who, needing to sign a document, waffled between the two pens on the table for a full twenty minutes.13 (If we are some computer/creature hybrid, then it seems that damage to the creature’s forces and impulses leaves us vulnerable to computer-type problems, like processor freezing and halting.) In cases like this there is no “rational” or “correct” answer. So the logical, analytical mind just flounders and flounders.
In other decisions where there is no objectively best choice, where there are simply a number of subjective variables with trade-offs between them (airline tickets are one example, houses another, and Shiv includes “mate selection”—a.k.a. dating—among these), the hyperrational mind basically freaks out, something that Shiv calls a “decision dilemma.” The nature of the situation is such that additional information probably won’t even help. In these cases—consider the parable of the donkey that, halfway between two bales of hay and unable to decide which way to walk, starves to death—what we want, more than to be “correct,” is to be satisfied with our choice (and out of the dilemma).