The Tale of the Dueling Neurosurgeons: The History of the Human Brain as Revealed by True Stories of Trauma, Madness, and Recovery


by Sam Kean


  In the course of its evolution the left brain also took on the crucial role of master interpreter. Neuroscientists have long debated whether split-brain folks have two independent minds running in parallel inside their skulls. That sounds spooky, but some evidence suggests yes. For example, split-brain people have little trouble drawing two different geometric figures (like ⊔ and ⊏) at the same time, one with each hand. Normal people bomb this test. (Try it, and you’ll see how mind-bendingly hard it is.) Some neuroscientists scoff at these anecdotes, saying the claims for two separate minds are exaggerated. But one thing is certain: two minds or no, split-brain people feel mentally unified; they never feel the two hemispheres fighting for control, or feel their consciousness flipping back and forth. That’s because one hemisphere, usually the left, takes charge. And many neuroscientists argue that the same thing happens in normal brains. One hemisphere probably always dominates the mind, a role that Michael Gazzaniga called the interpreter. (Per George W. Bush, you could also call it “the decider.”)

  Normally, having an interpreter/decider benefits people: we avoid cognitive dissonance. But in split-brain patients, the know-it-allness of the left brain can skew their thinking. In one famous experiment Gazzaniga flashed two pictures to a split-brain teenager named P.S.—a snowscape to his right brain and a chicken claw to his left brain. Next, Gazzaniga showed P.S. an array of objects and had him pick two. P.S.’s left hand grabbed a snow shovel, his right hand a rubber chicken. So far, so good. Gazzaniga then asked him why he’d picked those things. P.S.’s linguistic left brain knew all about the chicken, of course, but remained ignorant of the snowscape. And, unable to accept that it might not know something, his left-brain interpreter devised its own reason. “That’s simple,” P.S. said. “The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed.” He was completely convinced of the truth of what he’d said. Less euphemistically, you could call the left-brain interpreter a part-time confabulator.

  Split-brain patients confabulate in other circumstances, too. As we’ve seen, thoughts and sensory data cannot cross over from the left hemisphere to the right hemisphere, or vice versa. But it turns out that raw emotions can cross over: emotions are more primitive, and can bypass the corpus callosum by taking an ancient back alley in the temporal lobe. In one experiment scientists flashed a picture of Hitler to a split-brain woman’s left side. Her right brain got upset and (the right brain being dominant for emotions) imposed this discomfort onto her left brain. But her linguistic left brain hadn’t seen Hitler, so when asked why she seemed upset, she confabulated: “I was thinking about a time when someone made me angry.” This trick works with pictures of funeral corteges and smiley faces and Playboy bunnies, too: people frown, beam, or titter, then point to some nearby object or claim that some old memory bubbled up. This result seems to reverse neurological cause and effect, since the emotion came first and the conscious brain had to scramble to explain it. Makes you wonder how much we actually grasp about our emotions in everyday life.

  Along those lines, split-brain people can help illuminate certain emotional struggles we face. Consider P.S., the teenager who confabulated about chickens and shovels. In another experiment scientists flashed “girlfriend” to his right hemisphere. In classic split-brain fashion, he claimed he saw nothing; but in classic teenage fashion, he giggled and blushed. His left hand then used some nearby Scrabble tiles to spell L-I-Z. When asked why he’d done that, he said he didn’t know. He certainly wouldn’t do anything as stupid as like a girl. Tests also revealed conflicting desires in his right and left brain. P.S. attended a fancy finishing school in Vermont, and when asked what he wanted to do for a living, his left brain bid him say “Draftsman,” a respectable career. Meanwhile his left hand spelled out “automobile race[r]” with tiles. His brain even betrayed a red/blue political divide: post-Watergate, his left brain expressed sympathy for President Nixon, while his right brain hinted it was glad to see Tricky Dick go. When facing a crisis or controversy, we often talk about feeling torn or being of two minds. Perhaps those aren’t just metaphors.*

  This left-right asymmetry within the brain affects how we read emotions in other people as well. Imagine simple line drawings of two half-smiley, half-frowny faces, one with the smile on the left side of the face, one with the frown on the left. In a literal sense, these faces are equal parts sad and happy. But to most people the emotion on the left side (from the viewer’s point of view) dominates, and determines the overall emotional tenor. That’s because whatever’s in your left visual field taps into the emotion-dominant and face-dominant right brain. Along those lines, if you bisect a person’s photograph and view each half independently, people usually think he “looks like” the left half more than the right half.

  Artists have long exploited this left-right asymmetry to make their portraits more dynamic. Generally, the left half of someone’s face (the side controlled by the emotive right brain) is more expressive, and surveys in European and American art museums have found that something like 56 percent of men and 68 percent of women in portraits face the left side of the canvas and thereby show more of the left side of the face. Crucifixion scenes of Jesus suffering on the cross showed an even stronger bias, with over 90 percent facing left. (By chance alone, you’d expect closer to 33 percent, since subjects could face left, right, or straight ahead.) And this bias held no matter whether the artists themselves were left- or right-handed. Whether this happens because the sitters prefer to display their more expressive left side or because the artists themselves find that side more interesting isn’t clear. But the bias seems universal: it shows up even in high school yearbook photos. A leftward pose also allows the artist to center the sitter’s left eye on the canvas. In this position most of her face appears on the canvas’s left side, where the face-hungry right hemisphere can study it.

  There are exceptions to this leftward bias in portraiture, but even these are telling. The highly ambidextrous Leonardo often broke convention and drew right-facing profiles. But perhaps his most classic piece, the Mona Lisa, faces left. Another exception is that self-portraits often face right. Artists tend to paint self-portraits in a mirror, however, which makes the left half of the face appear on the right side of the canvas. So this “exception” might actually confirm the bias. Finally, one study found that prominent scientists, at least in their official portraits for the Royal Society in England, usually face right. Perhaps they simply preferred to seem cooler and less emotional—more the stereotypical rationalist.

  In contrast to portraits, art in general doesn’t show a leftward bias, not in all cultures. In Western paintings, the so-called glance curve—the line the eye naturally follows—does often travel left to right. In art from east Asia, the glance curve more often runs right to left, more in line with reading habits there. A similar bias exists in theater: in Western theaters, as soon as the curtain rises, audiences look left in anticipation; in Chinese theaters, audiences swivel right.

  The reason we show a left-right preference for some things (portraits) but not others (landscapes) probably traces back to our evolutionary heritage as animals. Animals can safely ignore most left-right differences in the environment: a scene and its mirror image are more or less identical with regard to food, sex, and shelter. Even smart and discriminating animals—such as rats, who can distinguish squares from rectangles pretty easily—struggle in telling mirror images apart. And human beings, being more animal than not, can be similarly oblivious about left/right differences, even with our own bodies. Russian drill sergeants in the 1800s got so fed up with illiterate peasants not knowing left from right that they’d tie straw to one leg of recruits, hay to the other, then bark, “Straw, hay, straw, hay!” to get them to march in step. Even brainiacs like Sigmund Freud and Richard Feynman admitted to having trouble telling right and left apart. (As a mnemonic, Freud made a quick writing motion with his right hand; Feynman peeked at a mole on his left.) There’s also a famous (right-facing) portrait of Goethe showing him with two left feet, and Picasso apparently shrugged at (mis)printed reversals of his works, even when his signature ran the wrong way.

  So why then do humans notice any left-right differences? In part because of faces. We’re social creatures, and because of our lateralized brains, a right half-grin doesn’t quite come off the same as a left half-grin. But the real answer lies in reading and writing. Preliterate children often reverse asymmetric letters like S and N because their brains can’t tell the difference. Illiterate artisans who made woodblocks for books in medieval times were bedeviled by the same problem, and their Ƨs and Иs added a clownish levity to dry Latin manuscripts. Only the continual practice we get when reading and writing allows us to remember that these letters slant the way they do. In fact, in all likelihood only the advent of written scripts a few millennia ago forced the human mind to pay much attention to left versus right. It’s one more way that literacy changed our brains.

  Of the three great “proving otherwise”s in Sperry’s career, the split-brain work was the most fruitful and the most fascinating. It made Sperry a scientific celebrity and brought colleagues from around the world to his lab. (Although not a schmoozer, Sperry did learn to host a decent party, with folk dancing and a drink called split-brain punch—presumably so named because a few glasses would cleave your mind in two.) The split-brain work entered the popular consciousness as well. Writer Philip K. Dick drew on split-brain research for plot ideas, and the entire educational theory of “left-brain people” versus “right-brain people” derives (however loosely) from Sperry and crew.

  Sperry’s early proving otherwises probably deserved their own Nobel Prizes, but the split-brain work finally catapulted him to the award in 1981. He shared it with David Hubel and Torsten Wiesel, who’d proved how vision neurons work, thanks to a crooked slide. As lab rats, none of the three had much use for formal attire, and Hubel later recalled hearing a knock on his hotel room door just before the Nobel ceremony in Stockholm. Sperry’s son was standing there, his dad’s white bow tie limp in his hand: “Does anyone have any idea what to do with this?” Hubel’s youngest son, Paul, nodded. Paul had played trumpet in a youth symphony back home and knew all too well about tuxedos. He ended up looping and knotting the bow ties for the geniuses.

  Winning a Nobel didn’t quench Sperry’s ambitions. By the time he won the prize, in fact, he’d all but abandoned his split-brain research to pursue that eternal MacGuffin of neuroscience, the mind-body problem. Like many before him, Sperry didn’t believe that you could reduce the mind to mere chirps of neurons. But neither did he believe in dualism, the notion that the mind can exist independently of the brain. Sperry argued instead that the conscious mind was an “emergent property” of neurons.

  An example of an emergent property is wetness. Even if you knew every last quantum factoid about H₂O molecules, you’d never be able to predict that sticking your hand into a bucket of water feels wet. Massive numbers of particles must work together for that quality to emerge. The same goes for gravity, another property that surfaces almost magically on macro scales. Sperry argued that our minds emerge in an analogous way: that it takes large numbers of neurons, acting in coordinated ways, to stir a conscious mind to life.

  Most scientists agree with Sperry up to this point. More controversially, Sperry argued that the mind, although immaterial, could influence the physical workings of the brain. In other words, pure thoughts somehow had the power to bend back and alter the molecular behavior of the very neurons that gave rise to them. Somehow, mind and brain reciprocally influence each other. It’s a bracing idea—and, if true, might explain the nature of consciousness and even provide an opening for free will. But that’s a doozy of a somehow, and Sperry never conjured up any plausible mechanism for it.

  Sperry died in 1994 thinking his work on consciousness and the mind would be his legacy. Colleagues begged to differ, and some of them think back on Sperry’s final years (as with Wilder Penfield’s late work) with a mixture of disbelief and embarrassment. As one scientist commented, work on the fuzzier aspects of consciousness repels everyone but “fools and Nobel laureates.” Nevertheless, Sperry was right about one thing: explaining how human consciousness emerges from the brain has always been—and remains today—the defining problem of neuroscience.

  CHAPTER TWELVE

  The Man, the Myth, the Legend

  The ultimate goal of neuroscience is to understand consciousness. It’s the most complicated, most sophisticated, most important process in the human brain—and one of the easiest to misunderstand.

  September 13, 1848, proved a lovely fall day, bright and clear with a little New England bite. Around 4:30 p.m., when the mind might start wandering, a railroad foreman named Phineas Gage filled a drill hole with gunpowder and turned his head to check on his men. Victims in the annals of medicine almost always go by initials or pseudonyms. Not Gage: his is the most famous name in neuroscience. How ironic, then, that we know so little else about the man.

  The Rutland and Burlington Railroad Company was clearing away some rock outcroppings near Cavendish, in central Vermont, that fall, and had hired a gang of Irishmen to blast their way through. While good workers, the men also loved brawling and boozing and shooting guns, and needed kindergarten-level supervision. That’s where the twenty-five-year-old Gage came in: the Irishmen respected his toughness, business sense, and people skills, and they loved working for him. Before September 13, in fact, the railroad considered Gage the best foreman in its ranks.

  As foreman, Gage had to determine where to place the drill holes, a job that was half geology, half geometry. The holes reached a few feet deep into the black rock and had to run along natural joints and rifts to help blow the rock apart. After the hole was drilled, the foreman sprinkled in gunpowder, then tamped the powder down, gently, with an iron rod. This completed, he snaked a fuse into the hole. Finally an assistant poured in sand or clay, which got tamped down hard, to confine the bang to a tiny space. Most foremen used a crowbar for tamping, but Gage had commissioned his own rod from a blacksmith. Instead of a crowbar’s elongated S, Gage’s rod was straight and sleek, like a javelin. It weighed 13¼ pounds and stretched three feet seven inches long (Gage stood five-six). At its widest the rod had a diameter of 1¼ inches, although the last foot—the part Gage held near his head when tamping—tapered to a point.

  Around 4:30 Gage’s crew apparently distracted him; they were loading some busted rock onto a cart, and it was near quitting time, so perhaps they were a-whooping and a-hollering. Gage had just finished pouring some powder into a hole, and turned his head. Accounts differ about what happened next. Some say Gage tried to tamp the gunpowder down with his head still turned, and scraped his iron against the side of the hole, creating a spark. Some say Gage’s assistant (perhaps also distracted) failed to pour the sand into the hole, and when Gage turned back he smashed the rod down hard, thinking he was packing inert material. Regardless, a spark shot out somewhere in the dark cavity, and the tamping iron reversed thrusters.

  Gage was likely speaking at that instant, with his jaw open. The iron entered point first, striking Gage point-blank below the left cheekbone. The rod destroyed an upper molar, pierced the left eye socket, and passed behind the eye into his brainpan. At this point things get murky. The size and position of the brain within the skull, as well as the size and position of individual features within the brain itself, vary from person to person—brains vary as much as faces do. So no one knows exactly what got damaged inside Gage’s brain (a point worth remembering). But the iron did enter the underbelly of his left frontal lobe and plow through the top of his skull, exiting where babies have their soft spots. After parabola-ing upward—it reportedly whistled as it flew—the rod landed twenty-five yards distant and stuck upright in the dirt, mumblety-peg-style. Witnesses described it as streaked with red and greasy to the touch from fatty brain tissue.

  The rod’s momentum threw Gage backward and he landed hard. Amazingly, though, he claimed he never lost consciousness, not even for an eyeblink. He merely twitched a few times on the ground, and was talking again within a few minutes. He walked to a nearby oxcart and climbed in, and someone grabbed the reins and giddyupped. Despite the injury Gage sat upright for the mile-long trip into Cavendish, then dismounted with minimal assistance at the hotel where he was lodging. He took a load off in a chair on the porch and even chatted with passersby, who could see a funnel of upturned bone jutting out of his scalp.

  Two doctors eventually arrived. Gage greeted the first by angling his head and deadpanning, “Here’s business enough for you.” Doctor one’s “treatment” of Gage hardly merited the term: “the parts of the brain that looked good for something, I put back in,” he later recalled, and threw the “no good” parts out. Beyond that, he spent much of his hour with Gage questioning the veracity of the witnesses. You’re sure? The rod passed through his skull? On this point the doctor also queried Gage himself, who—despite all expectation—had remained utterly calm and lucid since the accident, betraying no discomfort, no pain, no stress or worry. Gage answered the doctor by pointing to his left cheek, which was smeared with rust and black powder. A two-inch flap there led straight into his brain.

  Finally, Dr. John Harlow arrived around 6 p.m. Just twenty-nine years old, and a self-described “obscure country physician,” Harlow spent his days treating people who’d fallen from horses and gotten in carriage accidents, not neurological cases. He’d heard nothing of the new theories of localization simmering in Europe and had no inkling that, decades later, his new patient would become central to the field.

 
