
Tales from Both Sides of the Brain: A Life in Neuroscience


by Michael S. Gazzaniga


  Now think in terms of layers: five of them when it comes to clocks. Seeing the device in terms of layers, its architecture becomes evident, as does the way all mechanical clocks work. There is the energy layer, the distribution layer, the escapement layer, the controller layer, and the time indicator layer. First, a clock needs energy to make it work, so a spring needs to be wound up. That energy has to be stored and then slowly released. Second, wheels distribute that energy throughout the clock. Third, the escapement mechanism stops the energy from escaping all at once. Fourth, the controller mechanism controls the escapement function. Finally, all of this comes together at the fifth layer, which indicates the time. Notice that, as you move up through the layers, no layer predicts the functional role of the next. The energy layer has nothing to do with the escapement layer, and so on.

  Now note that each layer is flexible and largely independent. It is easy to swap in a new energy layer: The spring could be replaced by a weight and gravity, or possibly by batteries and motors, provided these are compatible with the core architecture. If, however, you changed to a new architecture, say, solid-state electronics, most of the old parts would be obsolete. With the new architecture, you would still have a variety of swappable energy sources, including solar, but they are different from what can be swapped into a mechanical clock. The spring is gone, as are weight and gravity. Up at the time indicator layer, an infinite number of user interfaces indicating the time can be used, all of which are also independent and swappable: The new clock could even look like the old clock on the outside. So with layering, a great variety on the outside can hide a common core, or a common behavior can be implemented in many different ways. As Doyle says, “Without layering you don’t get this, or understand it.” Again, without the organizing idea of layers, it would be extremely difficult to describe how a simple mechanical clock works or to build one. Over time, just as clockmakers figured out which parts work best, which sizes to use, which leveraging system to use, which wheels and springs, and so on, natural selection did the same for our brains.
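  To make the layering idea concrete, here is a minimal software sketch of a layered clock (purely illustrative; the class names and numbers are invented, not taken from Doyle or from real clockmaking). Each layer sits behind a fixed interface, so the energy layer can be swapped, spring for weight, without touching anything above it; changing the architecture itself, by contrast, would make most of the parts obsolete.

```python
from abc import ABC, abstractmethod

# Each layer is defined only by its interface; any implementation that
# honors the interface can be swapped in without touching the other layers.

class EnergyLayer(ABC):
    @abstractmethod
    def release_energy(self) -> float:
        """Return a small, steady parcel of stored energy."""

class MainSpring(EnergyLayer):
    def __init__(self, stored: float = 100.0):
        self.stored = stored
    def release_energy(self) -> float:
        parcel = min(1.0, self.stored)
        self.stored -= parcel
        return parcel

class WeightAndGravity(EnergyLayer):
    def __init__(self, height: float = 50.0):
        self.height = height
    def release_energy(self) -> float:
        parcel = min(1.0, self.height)
        self.height -= parcel
        return parcel

class Clock:
    """The core architecture: layers wired together in a fixed order."""
    def __init__(self, energy: EnergyLayer):
        self.energy = energy   # layer 1: energy (swappable)
        self.ticks = 0         # layers 2-4 collapsed here for brevity
    def tick(self):
        if self.energy.release_energy() > 0:   # distribution/escapement stand-in
            self.ticks += 1
    def indicate_time(self) -> str:            # layer 5: time indicator
        return f"{self.ticks} ticks"

# Swapping the energy layer changes nothing else about the clock.
spring_clock = Clock(MainSpring())
weight_clock = Clock(WeightAndGravity())
for _ in range(3):
    spring_clock.tick()
    weight_clock.tick()
print(spring_clock.indicate_time(), weight_clock.indicate_time())
```

  The point of the sketch is only that the swap happens at one layer; the distribution, escapement, controller, and indicator layers never need to know.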

  The tension at this point is that on the one hand, it seems that an abstraction is just another layer and thus a thing, a something you can hang your hat on. On the other hand is the view that an abstraction is not a mysterious thing, but a way of handling all the parts. Very recent eye-opening work by the neuroscientist Giulio Tononi and his colleagues has quantified how the layers might interact, and how the macro layers may indeed jump into the causal chain of command, just like Sperry suggested fifty years ago.19 The battle for understanding the difference, if any, between “supersede” and “supervene” is on.

  WINDING DOWN

  As I have already said, fifty years ago, all that neuroscientists thought about were simple linear relationships: A makes B happen, and a thorough description of A fully explains B. It was a reductionist heaven, and even today, that is how most neuroscientists view their work. This line of thinking left many of us banging our heads against the wall when we attempted to conceptualize how the mind might actually be understood through our toils with the brain. We continued performing and interpreting linear experiments, putting off the bigger questions of how it all works together. Insights from people like Doyle, suggesting that we should think about the mind as interconnected networks of layers instead of linear relationships, are appreciated in some quarters. These insights, however, are hardly pervasive. Luckily, the general intellectual landscape has begun to change. The field of cell and molecular biology has come to realize that the object of its study is not to be understood in terms of working out linear pathways but instead by looking at the multiple interactions of a dynamic system.

  On March 28, 2001, the cover of Time magazine showed a picture of the cancer drug Gleevec with the headline, “There is new ammunition in the war against cancer. These are the bullets.”20 In 2001, most cancer biologists thought cancer was caused by a mutated protein, and this mutated protein caused the cells to rapidly proliferate and avoid death. The simple thought was that if the protein could be inhibited, then cancer would be eliminated. The drug Gleevec inhibits a mutated protein (Bcr-Abl) that is found only in certain types of chronic myeloid leukemia and in gastrointestinal stromal tumors. In patients with these cancers, taking Gleevec inhibits the mutated protein and cures the cancer. Unfortunately, these seem to be the only two cancers that respond in this way.

  Researchers rapidly started identifying other mutated proteins in cancers and designing drugs to inhibit their activity. For example, in many melanomas, a mutation in another gene (BRAF) causes rapid proliferation of cells along with protection from cell death. A drug was designed to inhibit BRAF activity. When this drug was given to melanoma cells carrying the BRAF mutation, the cells started to die, but before long they came back and grew rapidly despite the treatment. Researchers soon figured out that BRAF works in a network to promote cell proliferation, not in a single linear pathway. When BRAF gets inhibited, the network shifts, enabling another protein (CRAF) to promote proliferation.

  As a consequence of these many findings, the new thinking by cancer biologists is that there is not one single mutation that drives cancer. Instead, a whole signaling network changes to drive the cancer. To kill cancers, the network must be targeted in multiple places. Since 2006, the entire field of cell and molecular biology has realized that it is dealing with systems with feedback loops, controls, compensatory networks, and all kinds of distant forces impinging on and involved with any one single function it might be interested in. The complex architecture of a cell suggests that the complex architecture of the brain is at least as challenging and may be very similar in some respects.
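  The network idea can be caricatured in a few lines of code (a toy, not a biological model; the wiring is stripped down to the two proteins mentioned above). Proliferation is driven if either route can carry the upstream signal, so knocking out one route merely shifts the load to the other, and only hitting the network in more than one place shuts the output down.

```python
# Toy redundant signaling network (illustrative only; not a real pathway model).
# "Proliferation" results if *either* BRAF or CRAF can pass the upstream signal.

def proliferation(signal: bool, inhibited: set[str]) -> bool:
    braf_active = signal and "BRAF" not in inhibited
    craf_active = signal and "CRAF" not in inhibited   # compensatory route
    return braf_active or craf_active

print(proliferation(True, set()))              # True: untreated cells proliferate
print(proliferation(True, {"BRAF"}))           # True: the network shifts to CRAF
print(proliferation(True, {"BRAF", "CRAF"}))   # False: target the network in multiple places
```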

  Let’s look again at the fifty-year-old results of Case W.J., which remain telling. Disconnection of one part of the brain from another does prove that specific nerve pathways are important: Their duties may range from signaling basic sensory and motor information all the way up to complex informational exchanges between the two half brains that deal with such things as orthographic and phonological information. At another level, however, W.J. didn’t seem a whit different from his preoperative state. He walked, he talked, he understood the world as usual, and he gave his big engaging smile right on cue. He also had those islands of specialized functions: Only his left hemisphere could manage language, and only his right hemisphere could grasp spatial relationships.

  As the ensuing fifty years of research unfolded, these initial glimpses of human brain organization were deepened and put into a broader context. We now know just how specific the brain can be in its local processing. We know the brain is full of modules. In fact, a fundamental strategy of the brain is to reduce any new challenge to a module that can operate more or less automatically, outside the immediate mechanisms of cognitive control.

  All of this, of course, brings us back to the question, How do all of those peripheralized modules interact to produce the glorious psychological unity we all enjoy? Are they massively and intricately exchanging a code of some kind, or is it something else? Is it more like a society where all the citizens (modules) vote and out of that comes (emerges) democracy, which in turn constrains those that vote? Or, let’s try a related metaphor, the orchestra.

  In the spring of 2013, I was asked to deliver the keynote address at the annual meeting of the Association for Psychological Science in Washington, D.C. The meeting lasts a few days and is jam-packed with empirical studies on simple to complex animals, behaviors, brains, and societies. Four days of data, and most of it good. I decided to kick things off in my lecture with an “orchestra” metaphor, and, in doing so, a phrase popped into my head that I couldn’t shake. I found myself telling the crowd, “The brain works more by local gossip than by central planning.” In the world of tweets, only minutes later, I was stuck with it. Sheesh, now I had to explain myself. I felt like our split-brain patient J.W. must have felt. A behavior burped out of me from a largely silent processor that had been calculating away on life’s events and then, suddenly, presented itself at the APS meeting. Great. Now I had to bring it into my cognitive flow, and my interpreter module had to explain it. So I did my best, and this is an approximation of what I said.

  Think of all of those different musical instruments that have to be coordinated for an orchestra to produce music. The musicians all have a shared musical language and are all reading from the same script, but the conductor has to keep them cued up to do their thing at the right moment in time and at the right amplitude. At first glance, it appears that the individual players are not all connected directly, but they are connected through feedback loops via the conductor, a giant hub, who coordinates the overall timing. When all of this is done exquisitely, music is made to the delight of all. I once saw Skitch Henderson take over the baton from a less skilled conductor. With the exact same musicians playing the exact same piece, the house went from ho-hum to shaking with glory for hours. Each musical player has hard constraints on what he can do in time and space: He has to play the same song, at the same tempo, and on a specific instrument that has to be played with specific body parts. Being part of a symphony orchestra, however, requires coordination of all of the players, even though they do not seem to be directly communicating with each other. Yet the coordination of all of those localized processors, locally doing their thing, seems to be the key. The conductor appears to do it in an orchestra; how does the brain do it?

  Still, the orchestra metaphor plays along with a comfortable linear way of thinking, the notion that something is in charge, or, well, orchestrating all the parts. Something was missing from this analogy—something big. Then I saw a YouTube clip of Leonard Bernstein magnificently standing in front of an orchestra yet not overtly conducting. His hands were not moving at all; he simply reacted, giving positive feedback with his face as the musicians did their thing. The local processors, the modules, were keyed in and expressing themselves right on the money. Bernstein was there to enjoy it, revel in it, not direct it. What the heck? He wasn’t controlling anything. The orchestra was working all by itself! Something had to be happening—what was it? It seemed as if something like local gossip must indeed be at work. The separate musicians were doing their thing like the parts in a mechanical watch, except that local interactions and cueing were going on, too.
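  What “local gossip” coordination can look like is easy to caricature with a toy simulation (a standard coupled-oscillator sketch, offered only as an analogy; the parameters are arbitrary, and it is not a model of the brain or of an orchestra). Each player nudges its timing toward its two immediate neighbors only, and the ensemble still falls into a common beat with no conductor anywhere in the loop.

```python
import math
import random

# Toy "local gossip" coordination: each oscillator adjusts its phase toward
# its two nearest neighbors only. No central conductor is involved.
# (A simplified coupled-oscillator sketch; all parameters are arbitrary.)

N, COUPLING, STEPS = 12, 0.3, 200
random.seed(0)
phases = [random.uniform(0, math.pi) for _ in range(N)]  # start out of step

def spread(ph):
    """Rough measure of disagreement: 1 minus the length of the mean phase vector."""
    x = sum(math.cos(p) for p in ph) / len(ph)
    y = sum(math.sin(p) for p in ph) / len(ph)
    return 1 - math.hypot(x, y)

print("initial spread:", round(spread(phases), 3))
for _ in range(STEPS):
    new = []
    for i, p in enumerate(phases):
        left, right = phases[i - 1], phases[(i + 1) % N]   # neighbors only
        nudge = math.sin(left - p) + math.sin(right - p)
        new.append(p + COUPLING * nudge)
    phases = new
print("final spread:", round(spread(phases), 3))  # far smaller: local nudges pull the players into step
```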

  After the talk, a dinner had been arranged and during cocktails, the exceptionally talented Ted Abel, a molecular neurobiologist, remarked to me that he was a clarinetist and played in many an orchestra. He said, “You know, even though the conductor is up in front of you, the real action is how the players cue each other. With the clarinet, making a rightward swirl versus a leftward swirl as you’re playing cues your colleagues on where you are going with the piece. It is like local gossip.”

  The metaphor snaps the mind to another view, that the old linear idea of flow of information in the brain may be wrongheaded. Is the brain really organized like the pony express, letters being passed from one outpost to another until somehow it all works? I don’t think so. Sure, connections are important and are the heart of the split-brain story. Sure, specialized regions do specific things, which is the heart of modern brain imaging studies. Sure, individual variation of human capacity reflects variations in brain structure, functions, and experience. But how does it all work? What is the architecture of the system that allows it to do all of the wonderful things organisms like humans do on a second-by-second basis?

  On a larger canvas, I see this as the question for mind/brain research. One problem with framing the question as “How does mental unity come out of a modular brain?” is that, currently, young graduate students of neuroscience are not commonly trained with the tools to understand such an architecture. It requires new skills and knowledge from a new array of experts who, in the main, are housed in engineering departments. Fortunately, some others see it this way, too, and after two and a half years of seemingly endless and pointless paperwork, a group of us have established a new graduate program at the University of California, Santa Barbara, aimed at bringing control and dynamical thinking to issues in neuroscience.

  Nicholas Meyer, the great writer and director of several Star Trek films, recently observed that Shakespeare never gave stage directions in his works. Johann Sebastian Bach didn’t give musical directions, either. Two of the greatest artists in the history of the world were the original believers in the “less is more” principle. It used to be that the audience’s job was to infer the meaning of a story, to bring their minds into alignment with it. They were to abstract a work of art into their own narrative and participate in the artistry themselves. Meyer observed that that has all but fallen away in the modern narrative. Everyone expects to be told how stories come out, and nothing is left to infer.

  In closing, I would like to point out that Darwin gave the world another brilliant play, the theory of evolution. Scientists have been studying this masterpiece for more than a hundred and fifty years, with regular offerings about how his description of natural selection actually works. Unlike Shakespeare and Bach and more like a scientist, he would have told us how it worked if he had known. But like a playwright, he didn’t send us down a preconceived path he might have held. He left the issue open for the scientific community of the future to figure out. He cleverly set up the question by observing that small differences of form and function arise within any group of animals. With time, members carrying a small difference that conferred a survival and reproductive advantage prevailed over those members of the species that didn’t share the trait, and the trait became dominant.

  But look around. Look at the vast amount of variation in the animal kingdom. How could that all actually occur? The first approximate answer came about fifty years ago when it was discovered that heritable variation does not occur without mutations to the DNA of an organism. That was a huge insight and, of course, was built on the long-established knowledge base about DNA, which dates back to 1869. Yet can rare and random mutational events explain all of the variation we observe? It doesn’t seem possible, and Darwin’s puzzle has been hanging over the scientific community for decades.

  Two inventive biologists, Marc Kirschner, head of systems biology at Harvard, and John Gerhart, from the University of California, Berkeley, have tackled the problem head-on in their dazzling book, The Plausibility of Life, and have set a new stage for thinking about Darwin’s dilemma and, with it, an architecture for biologic life. Building on advances in molecular genetics of the last thirty years, they argue that there is something called “facilitated variation.” It goes like this: It is now known that there are “conserved core processes” that make and operate an animal. Kirschner and Gerhart say these processes “are pretty much the same whether we scrutinize a jellyfish or a human. . . . The components and genes are largely the same in all animals. Almost every exquisite innovation that one examines in animals, such as an eye, hand or beak, is developed and operated by various of these conserved core processes and components. . . . We suggest that it is the regulation of these (core) processes. Regulatory components determine the combinations and amounts of core processes to be used in all the special traits of the animal.”21

  This has all the ring of a layered architecture. Indeed, it is now known that gene expression is regulated by other genes: A gene codes for a protein that regulates the expression of other genes. The takeaway idea here is that all of the variation we see in the natural world is the result of mutations that occur in a small number of regulatory genes, not in the thousands upon thousands of workhorse genes attending to the business of the body. The much smaller set of regulatory genes controls the replication, activation, and deactivation of the multitude of specific genes that do the work of the organism. Mutate a regulatory gene and there can be a huge effect. Consequently, the fact that mutations are rare is consistent with the variation we see, and a possible theory exists to explain why such mutations are so effective. The only way for Kirschner and Gerhart to have gotten to this incredible insight was to abandon simple linear thinking and to think about layered systems.
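  The takeaway lends itself to a cartoon in code (invented names and numbers, meant only to illustrate the leverage of a regulatory layer, not real genetics): a handful of regulatory switches decide which blocks of conserved workhorse genes are expressed, so a single change at the regulatory layer redraws a large part of the organism’s profile while the workhorse genes themselves stay untouched.

```python
# Cartoon of facilitated variation (illustrative only; not real genetics).
# A few regulatory genes switch large blocks of conserved "workhorse" genes
# on or off; the workhorse genes themselves are never changed.

WORKHORSE_GENES = [f"core_process_{i}" for i in range(1, 21)]  # conserved core processes

# Each regulatory gene controls a block of workhorse genes.
REGULATORS = {
    "reg_A": WORKHORSE_GENES[0:10],
    "reg_B": WORKHORSE_GENES[10:20],
}

def expressed(regulator_state: dict[str, bool]) -> list[str]:
    """Return the workhorse genes switched on by the active regulators."""
    genes = []
    for reg, targets in REGULATORS.items():
        if regulator_state[reg]:
            genes.extend(targets)
    return genes

ancestor = {"reg_A": True, "reg_B": False}
mutant   = {"reg_A": True, "reg_B": True}   # a single regulatory change...

print(len(expressed(ancestor)), "core processes active in the ancestor")
print(len(expressed(mutant)), "core processes active after one regulatory mutation")
```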

  For students of mind and brain research, it’s time to hitch up our pants, take a deep breath, and realize that much of the low-hanging fruit in neuroscience has been picked and packaged. The simple models have taken us only so far. My own view is that it is time to realize that the deep problems remain in full view and the answers are ripe for harvesting. Our job is to go after the deep problems with gusto and infer the answers from the underlying plots of the human play in front of us. It is a fabulous way to spend one’s life.

  EPILOGUE

  MOST OF US CAN THINK BACK TO THE PEAK EXPERIENCES IN OUR LIVES, most of which are highly personal. If life has been good to us, they are gratifying and ground our lives with personal meaning. For me, that afternoon at Caltech fifty years ago, when W.J.’s right brain completed an act of which his own left hemisphere had no knowledge, was, and continues to be, seared into my mind. I was stunned. This event landed me in a world of human inquiry that had an almost timeless origin that I certainly did not sense then. More than fifty years later, as I continue to try to understand the full meaning of that elementary and original finding, I do realize that I have only participated in this saga and still do not have its ending. No one does, and no one will for some time.

  Gratifyingly, much has been learned from split-brain research. Starting with the original characterization that surgically disconnecting the two half brains resulted in someone with two minds, all the way to today’s counterintuitive view that each of us actually has multiple minds that seem to be able to implement decisions into action, split-brain research has revealed and continues to reveal some of the brain’s well-kept secrets. Nonetheless, what magical tricks the brain uses for taking a confederation of local processors and linking them to make what appears to be a unified mind, a mind with a personal psychological signature, is still a big unknown and the central question of neuroscience.

 
