Hawking lectures with painstaking slowness but utter clarity—time having no beginning, radio waves doing the impossible and escaping from black holes. Equations appear that we can follow, interlaced with a puckish, arrogant humor.
The kid is worse. As Hawking struggles, he sits bored, tossing his chalk in the air, focused on an attractive woman in the front row. He drops the chalk and forces the mummy through the arduous task of starting again. Once, he translates a sentence and Hawking becomes agitated—the kid got it wrong. Hawking repeats himself. Glowering, the kid mutters, “You’re not making yourself understood.”
Who the hell is this kid? Slowly, it dawns on us. Hawking, in his Cambridge chair once occupied by Newton, has this kid as his student. And his chosen voice. This must be one incredibly smart guy and, not coincidentally, a deep intimate. Something is shifting in our picture of the mummy brain. Part of the hagiography of Hawking is that before the illness, he was a showboat, ostentatiously functioning at half speed to barely finish some task before returning to a party, all with a dissolute brilliance. He must have been just like this kid. And it hits us: they planned this, the whole show—the careening wheelchair, the “You’re not in church,” the kid’s insouciance—to desanctify. The conspiracy we sense is so intimate that it is not a question of Hawking having been like the kid, or of this kid speaking for him. Instead, he is the kid for this hour. The mummy brain is gone. Instead, we’re listening to an entertaining lecture by a cocky Cambridge don. One who just happens to require inhabiting two bodies to pull it off.
Hawking’s act was metaphor and theater and, one assumes, temporary. The possibility of prolonged dissociation between numbers of bodies and consciousnesses has not often interested neuroscientists. One exception, however, is with split-brain patients. While the brain is roughly symmetrical, its functions can be lateralized—the left and right sides subsume different tasks. The left hemisphere typically specializes in language, while the right excels in nonverbal spatial abilities, facial recognition, music. Popular knowledge of such lateralization has fostered absurd New Age ideas that people go about every conceivable behavior with differing “Left Brain” or “Right Brain” styles. Nonetheless, the science of lateralization is solid.
The two hemispheres communicate by a hefty cable of connections called the corpus callosum. In one type of epilepsy, a seizure beginning in one hemisphere can trigger a mirrored seizure in the corresponding part of the other hemisphere, the two ricocheting back and forth across the corpus callosum. In the 1960s, surgical severing of the callosum was tried on such epileptics, halting the seizures. However, the person was left with two disconnected hemispheres. Roger Sperry, who won the Nobel Prize for this work, designed brilliant experiments to feed information to only one hemisphere, or differing information to each simultaneously. He showed that the hemispheres could function separately, and with differing analytical strengths. For example, an object could be shown to the visual field such that the information only entered the verbal hemisphere, and the person could then readily identify the picture. But when the same information was presented to the nonverbal hemisphere, the person couldn’t even state that he had seen anything—yet could identify the object by touch.
Each hemisphere could learn, remember, reason, have opinions, initiate behavior, be self-aware, have a sense of time and of future, and generate emotions. This raised the messy possibility that there were now two individuals occupying that skull. At an extreme was the idea that this was not only the case for split-brain patients, but that we all normally consist of two separate individuals, yoked together by the corpus callosum. Prompted by this, the psychologist Julian Jaynes wrote the highly eccentric The Origin of Consciousness in the Breakdown of the Bicameral Mind. He argued that a coherent sense of self, a well-bounded ego, developed only some 3,000 years ago. Before that, the brain was “bicameral” (i.e., two-chambered), with minimal integration of the two hemispheres. Instead, one hemisphere spoke, either metaphorically or literally, and the other hemisphere obeyed, attributing that voice to the gods. Jaynes claimed that the modern sense of ego represents a breakdown of that bicamerality, that schizophrenics remain bicameral, and supported his ideas with mountains of trivia from archeology, mythology, the classics, and the Bible. The near consensus among savants was that the book was dizzyingly erudite, stimulating, and ultimately loony.
Sperry rejected the notion that there were two individuals inside anyone’s head, and most agreed. While split-brain patients could be manipulated into displaying two independent cognitive styles, the underlying opinions, memories, and emotions were the same. This was explained anatomically. Even if the corpus callosum was cut, more primordial, deeper structures of the brain, critical to emotion and physiological regulation, remained connected. Split-brain brains are not really split into two, but instead form a “Y.” There might be two separate consciousnesses, one paying attention to a speaking voice, the other to background music, one navigating through town by remembering names of streets, the other by remembering a spatial map of the town’s appearance—yet it is still the same individual. One body, one person.
A battle as to how many selves can reside within one body has often raged over the issue of multiple personality disorder. Different facets of our personality dominate in different settings, sufficiently so that we may act like “a different person” when with a boss rather than a subordinate, or with a woman instead of a man. But we are not literally different people. In individuals with multiple personality disorder, however, separate personalities take full control of the person’s behavior at different times. Most mental health professionals agree that there are individuals in whom the different facets of personality are so different, so disjointed and dissociated, as to be diseased. These patients often have suffered horrific childhood abuse, and it is theorized that the compartmentalizing of the different personalities evolved as a protective strategy. What is wildly controversial is whether these nonoverlapping identities truly represent different personalities, how this works biologically, and how often this occurs.
At one extreme are clinicians who claim to have had hundreds of such patients (bringing up the inevitable jokes about billing each personality separately). They cite studies showing that when the personalities of their patients shift, so too do eyeglass or medication prescriptions. Some of these practitioners even glory in the multiplicity of personalities. Their goal is not to bring about integration into a single core personality, as that represents some sort of drab, monochromatic surrender (some blather about paternalistic monotheism always seems to pop up around this point in the discussion), but instead to enable the patient to make happy, productive use of the various personalities.
At the other extreme are clinicians who have apoplexy over this, claiming that a true multiple personality patient is seen once in a career, that the “different eyeglasses” stories are only stories. The patriarchs of psychiatry have generally taken this view. In the latest edition of psychiatry’s bible, the Diagnostic and Statistical Manual, there have been careful changes concerning the disorder. Its diagnosis no longer involves the “existence” of multiple “personalities.” Instead, the prerequisite is the “presence” of “dissociated identities” (and, in fact, the disease is now called “dissociative identity disorder”). In other words, the key is that the patient identifies herself in multiple ways—and the experts won’t touch with a ten-foot pole whether those identities really constitute personalities. Moreover, to paraphrase a psychiatrist who helped make those changes, the point is not that these people might have more than one personality but that, when the pieces are put together, they really have less than one.
“Multiple personality” disorder and split-brain patients raise the possibility of fragmentation of the self. Far more plausible is the idea of the self occasionally making room for another. Freudians believe that this can occur and can reflect profound psychopathology.
In the face of failure or loss, we all mourn with a certain sadness and withdrawal. And just as surely as we all can get depressed, most of us heal. Even in the face of great tragedy, such as the loss of a loved one, we can usually find the means, eventually, to feel that it is not the end of the world. However, some of us, in the face of such loss, fall into prolonged and incapacitating sadness—“melancholia,” to use Freud’s terminology, or a major depression, as we would now call it. Along with the usual symptoms seen in someone mourning, the deeply depressed individual typically shows a self-hatred, claiming the death was their fault, wallows in guilt over ancient behaviors, and engages in punishing self-destructive behavior. “In [the healthy state of] mourning,” Freud wrote, “it is the world which has become poor and empty; in melancholia it is the ego itself.” Major depression is “aggression turned inward,” as Freud’s colleague Karl Abraham termed it.
Why should the mourning typical of most of us give way to incapacitating sadness and self-hatred in some? The root, Freud thought, lay in ambivalence—the dead individual was not only loved but also hated—and the person’s response to that ambivalence. Critically, the depressive is left unconsciously angry after the loss—anger for being abandoned by the dead person, for their previous conflicts, for the impossibility now of ever resolving them. As such, the depressive identifies with, incorporates, internalizes features of the lost individual, carries them inside. This is no antiseptic homage. The emotional adversary is gone, and there is no choice but to reconstruct the lost person internally, and then carry on the battle.
In this, Freud made a critical observation. The features of the lost individual that are internalized, the opinions and habits, are not just any, but those that were most hated. “If one listens patiently to a melancholic’s many and various self-accusations, one cannot in the end avoid the impression that often the most violent of them are hardly at all applicable to the patient himself, but that with insignificant modifications they do fit someone else, someone whom the patient loves or has loved or should love.” By carrying on the most hated traits, one can still argue (“You see, don’t you hate when I do that, can you believe I put up with that for fifty years?”) and, through the terrible pain of a major depression, punish oneself for arguing.
Thus, for the Freudian, the major depressive’s ego boundaries dissolve to allow parts of the lost individual to be carried within. Freud thought this explained the greatest psychodynamic ill of all—suicide. The ego can never consent to its own destruction; suicide requires the internalization of the other so completely that the suicide is more akin to murder.
Thus, science has occasionally considered cases of the self-fragmenting (or withering sufficiently to make room for another), cases where more than one self might dwell in one body. There has been even less concern for the possibility of one self occupying more than a single body. And, all things considered, these musings haven’t generated much certainty, in any modern scientific sense, as to what really constitutes a “self.” None of this had ever been of much interest to me as a scientist until recently, when I was personally exposed to a breakdown of the boundaries of self. At first, I could fully explain it with my science, viewing its extreme manifestations as pathology.
As my father aged, he suffered cognitive problems secondary to neurological damage, to the point where he would often not know what decade it was, his location, the names of his grandchildren. Along with this, his ego boundaries began to dissolve. Gradually, he purloined bits of my life—I, his only son. There were similarities already. Long ago, he had done medical research, as I do now; he had been a professor, as am I. We had always shared tastes, styles, and temperaments. But as he aged, the details of our lives began to intertwine. When I moved to San Diego, his navy years, recounted endlessly, suddenly included stretches spent in San Diego, so that soon we were sharing the same opinions and anecdotes about the town, forty years apart. When I moved to San Francisco, his entry to the United States switched from New York’s Ellis Island to San Francisco, his reported first view of America including a Golden Gate Bridge that did not yet exist in the year of his entry. Wherever I went to lecture, it would now be a university where he was once visiting faculty. His medical research, cut short by the Depression, had been in cancer biology, but now he was full of contentless memories of an interest in neurobiology, my own subject. I don’t believe it was competitiveness, or a need for us to have more in common. The problem was too much in common already. He identified passionately with whatever achievements I might have, and we knew how much I was him without the bad luck of refugee status, world wars, and the Depression, privileged with the rewards that his obsessive hard work had provided me, topped by perhaps a half century of life ahead as his own shadows lengthened. And as the fog of his disorientation swept in, he needed someone else’s stories and became less certain of where he ended and I began.
This felt more than a bit intrusive, but I was well defended with an armamentarium of labels and diagnoses and detached, condescending understanding, a world where something disturbing is housebroken by turning it into a lecture. “. . . As another feature of the demented patient, one occasionally sees . . .” At night, he’d wander the house, agitatedly reporting the presence of angry strangers, of long-dead colleagues. If things were that bad, come on, cut him some slack if he’s also confused about which of the two of us was falling in love with Californian redwoods. You see, there’s been some neurological damage.
His recent death knocked me off that diagnostic high horse, as it became me who developed problems with boundaries. It started manageably enough. I began to spout his sayings and mannerisms. This was not Freudian melancholia—while I was plenty sad, I wasn’t clinically depressed, I wasn’t awash in ambivalence and anger, and the behaviors of his that I seized hadn’t irritated me for decades with an Oedipal itch. They were the insignificant quirks that had made him him, and with which I now festered. I arranged the utensils as he did, hummed a favorite Yiddish tune of his throughout the day, looked at the landscape as I never had but as he always did. Soon, I had forsaken wearing my blue flannel shirts in order to wear the blue flannel ones of his that I had brought back with me. I developed an interest in his profession—architecture—for the first time. I grew up amid his blueprints and drafting board, but had remained indifferent to the subject. Yet now, I found myself absentmindedly drawing floor plans of my apartment, or trying to understand three-point perspective, something he had tried to teach me without success.
This seemed reasonable. When I was younger, signs that I carried bits of him in me would have triggered bristly Oedipal denial, defensive nit-picking about subtle differences. But I could deal with a certain amount of this with equanimity, homage without the Freudian bile. But things took a troubling turn.
When I had spent a week of mourning at the family house, I observed the magnitude of his final frailty, the bottles of nitroglycerin everywhere. I took one back to California and kept it close to me for weeks. I would make love to my wife, work out in the gym, attend a lecture, and always, the bottle would be nearby—on a nightstand, in a sweat-jacket pocket, amid my papers. There was a day when I briefly misplaced it, and everything stopped for an anxious search. It was not that I had lost a holy relic of his suffering, an object to show my children someday to teach them about the man they hadn’t known. This was urgent; I felt vulnerable. Was my heart now diseased, or was it his diseased heart somewhere inside me that I now vigilantly stood by to medicate?
What the hell was this all about? I don’t believe in God or gods, seraphs or angels, transmigrations or transmutations of souls. I don’t believe in souls, for that matter, or in UFOs, with or without Elvis on board. Was it my sense of hard-assed individualism that was feeling unnerved by this intermingling, or was I feeling his hard-assed sense of individualism being unnerved?
The height of the confusion came a month later, with the final lecture for the class that I was teaching. My mother, exhausted with caring for him and finally persuaded to leave him with a nurse and come for a vacation, had earlier sat in on my class, and the students had cheered when I introduced her. They had done a good thing. Four days later, he was dead, classes were canceled, and afterward, many of the students had expressed warm, supportive thoughts. I had come to feel close to all four hundred of them. At the end of the final lecture, I thought to tell them something about what a spectacular lecturer my father had been, things I had learned from his teaching that might apply to their lives. I intended to subject them to a eulogy about him, but something became confused and soon, wearing his shirt, I was lecturing for him, offering the frail advice of an octogenarian.
I warned them, amid their plans to tackle difficult problems in life and to be useful and productive, that they should prepare for setbacks, for the realization that each commitment entailed turning their backs on so many other things—like knowing their children, for example. And this was not me speaking, still with a sheltered optimism about balancing parenting with the demands of science, but he with his weathered disappointments and the guilt and regret he expressed in his later years that he was always working when I was a boy. I told them that I knew they wanted to change the world but that they should prepare for the inconceivable—someday, they would become tired. At the end, wondering whether this much emotion was setting me up for one of his angina attacks, I said good-bye for him to an ocean of twenty-year-olds rippling with life and future. And that night I put away the nitroglycerin.
During that month, my head swam with unlikely disorders from my textbooks to explain this intermingling. A year later, safe again on my battlefield of individuation, that time has begun to make more sense. I feel sure that what I went through need not merit a diagnosis, and I don’t think anymore that his own earlier confusion about the boundaries of the two of us really had anything to do with his neurological problems. It is a measure of the pathologic consequences of my training as a scientist that I saw pathology that was not there, and a measure of the poverty of our times that I could only feel as a brief flicker something intrinsic to our normal human experience.
The Trouble with Testosterone Page 17