  To: Michael S. Gazzaniga

  From: George A. Miller

  Re: “COGNITIVE SCIENCE”

  An intense undergraduate, in the sharp panic of an identity crisis, rushed to his professor: “I don’t know who I am. Tell me, who am I?” The professor replied wearily, “Please, who’s asking the question?”

  The story flashed to mind recently when a friend, who watches science with the eyes of a biologist, asked: “What do cognitive scientists want to know?” Anyone capable of posing such a question must already know the answer. To know is to have direct cognition of. Obviously, scientists of cognition want to have direct cognition of having direct cognition. Any etymologist could tell you that.

  What would a biologist accept as an answer? Something deep is called for. My friend is not asking about computers, or simulations, or logical formalisms, or the latest methods of psychological experimentation—none of that ancillary horseshit that fills so much of the conversation of cognitive scientists. A deeper answer is that cognitive scientists want to know the cognitive rules that people follow and the knowledge representations that those rules operate on. But this language—cognitive rules, knowledge representations—is precisely the kind of smoke that started my friend looking for a fire.

  Let us begin with a question that we can answer: What do biologists want to know? Biologists want to discover the molecular logic of the living state. What is the molecular logic of the living state? Simple. It is the set of principles that, in addition to the principles of physics and chemistry, operate to govern the behavior of inanimate matter in living systems. (That is an almost direct quotation from the introduction to a biochemistry textbook.)

  Is this the kind of answer a biologist expects when he asks what cognitive scientists want to know? If so, perhaps we can construct an answer based on this model of what an answer should be. Because I am a little slow at these games, however, I shall take three steps to get where I am going. First, I will substitute psychologists for biologists. No substitution seems required for molecular logic; I assume that “molecular” in this context means “susceptible to analysis,” and is not limited to the analysis of matter into chemical molecules. And then I will substitute conscious for living, because I consider consciousness to be the constitutive problem for psychology, just as life is the constitutive problem for biology. Now I have achieved the following: Psychologists want to discover the molecular logic of the conscious state. So far so good. But now what do we mean by molecular logic of the conscious state? Let’s see if substitution leads anywhere: the set of principles that, in addition to the principles of physics, chemistry, and biology, operate to govern the behavior of inanimate matter in conscious systems. These substitutions say little more than that psychology is the next step in the positivistic hierarchy of sciences. The result sounds pretty good to me, but can I follow through? That is to say, the biochemist whose formulation I have borrowed as my model had a large and impressive textbook full of biological principles to illustrate what he was talking about. What do I have?

  One thing I do not have is behaviorism, because most behaviorists are dedicated to the proposition that consciousness is irrelevant to the science of psychology. Another thing I do not have is artificial intelligence, because computer simulations have no need for psychological principles, or, for that matter, for the distinction between living and nonliving systems.

  What I seem to have is a way of looking at psychology, a criterion to keep in mind while thumbing through psychological handbooks. It might be formulated like this: Any behavior that is unaffected by the state of consciousness of the behaving system is of no concern to psychology. Dreaming, for example, is a concern of psychology, because if you wake up—if your state of consciousness changes—dreaming is affected. . . . The ability to violate some principle by an act of will is now the critical test that the principle in question is one that is relevant to psychology. . . . The problem, however, is that my friend did not ask what psychologists want to know. He asked what cognitive scientists want to know.

  A second set of substitutions can be tried, therefore. Suppose we substitute states of knowledge for the conscious state. Then we obtain: Cognitive psychologists want to discover the molecular logic of states of knowledge, where the molecular logic of states of knowledge refers to the set of principles that, in addition to the principles of physics and chemistry, govern the behavior of inanimate matter in knowledge systems. Reference to biological and psychological principles is here omitted, for computers can instantiate knowledge systems; computers need obey no biological or psychological principles.

  The criterion for looking at research would now become: Any behavior that is unaffected by the state of knowledge of the behaving system is of no concern to cognitive science. If you turn off the power in a computer, for example, the consequences will not depend on the state of knowledge of the computer, so they would be of no concern to cognitive scientists. . . .

  I have no desire to dissuade anyone who wants to develop cognitive science along these lines, but neither do I have any desire to join with them. I would prefer to take a different line, defining still another science more narrowly. So I will now take a third step, as follows: Cognitive neuroscientists want to discover the molecular logic of epistemic systems, where the molecular logic in question this time is the principle that, in addition to the principles of physics, chemistry, biology, and psychology, governs the behavior of inanimate matter in epistemic systems. (The term “epistemic system” is negotiable; I use it as a placeholder for something better.) A further substitution is possible: animate for inanimate in the final clause. I am unclear whether it would make any real difference.

  By including the requirement that cognitive neuroscience is concerned only with living, conscious systems, we cut artificial intelligence free to develop in its own way, independent of the solutions that organic evolution happens to have produced. Now our concern is for a subset of conscious systems, and the criterion is whether or not the system’s state of knowledge affects its behavior. . . .

  It should be clear by now that I really don’t have an answer to the question, what do cognitive scientists want to know? But I think that cognitive neuroscientists want to know something that is reasonably interesting, and that there really might be some promise in following up systematically the implications of the definitions that we arrived at by substitution into our biological model.

  Unbelievable as it may seem, I attempted a response. After all, it was spring.

  To: George A. Miller

  From: Michael S. Gazzaniga

  Re: Exemplars of Cognitive Neuroscience

  O.K., your claim is that our task is to understand those processes active in living systems that can exert control over the comings and goings of a variety of mental constituents that make up a cognitive agent. (Put differently, is it also fair to say that the defining qualities of a cognitive system are coincident with an information processing disorder?) Alternatively, it is our task to understand cerebral software, the programming stuff that orchestrates the spatial-temporal patterns of the neural network. First, has your definition of cognitive neuroscience moved the ball down the field? I think it has. Consider what others have said about what cognition is, usually using other terminologies. Sperry, for example, used to argue that consciousness is an emergent property of the spatial-temporal interaction of the neuronal system subserving the phenomenon. He maintained that these emergent mental properties feed back, as it were, and control the activities of the system that produced them. To me this position is a neuroscientist’s way of saying cognitive act. MacKay’s hypothesis on what the cardinal feature of a cognitive system is goes like this: “the direct correlate of conscious experience is the self-evaluating, supervisory or metaorganizing activity of the cerebral system and it is this system that determines norms and priorities and organizes the internal state of readiness to reckon with the sources of sensory stimulation.” That strikes me as a rather passive description of the conscious process, and it takes on more of the character of a “jobber” or “dispatcher.” He does not characterize the system as one that tries to penetrate the organism’s natural tendency to reflexively respond to a command.

  If I am right, your definition has advanced at least my understanding of some issues and has clearly stated that the task is to discover the rules that govern the epistemic system—the one living system that governs the biologic system. When thinking about that, I am maintaining that the epistemic system is supraordinate to the biologic system. Is that what you were driving at?

  At any rate, you have set us to the task of actually trying to figure out the principles of not only how cognitive systems announce their products to consciousness, but also the criterion that a cognitive system is a process that can supersede the cerebral architecture. How else can we illuminate this dynamic other than by studying disruptive brain states? In some sense the cognitive neuroscientist is trying to trick out of the organism insight into that puzzling problem. But before raising some problems from studies on brain-damaged patients, let me make one other observation that I think needs up-front analysis.

  The kind of analysis one would bring to understanding a New Yorker as opposed to understanding New York would be quite different. The kind of analysis one brings to understanding a serial system as opposed to a parallel system also seems to me to be quite different. Before we proceed with an intelligent analysis of cognitive function, do we have to face up to the issue of whether or not the system is in fact competing for the attention of the person? If we agree that this is a reasonable model, crudely put at this point, then it seems to me that how one approaches problems in brain disease that merit consideration for a theory of cognition becomes quite different.

  Let me now consider a brain disease situation that speaks to this notion of what constitutes a cognitive system. There can be in brain disease relatively discrete disruptions of one of the system properties of the cognitive agent. It is common, for example, to study patients with memory dysfunctions. On one level of analysis they are unable to (1) retain new information and (2) combine two new elements into a fresh concept. Looking into the pathophysiology underlying these disorders, one finds that both diffuse and focal disease states correlate with this psychologic disarray. It is only on deeper probing that one begins to see differences at the psychologic level. Patients with focal disease possess a dense inability to transfer information from short-term to long-term memory, although they are lavishly assisted in their recall performance by cueing (e.g., categorical headings embedded in a long word list). On the other hand, patients with diffuse disease are not assisted by this cognitive strategy. Their recall performance stays down on the floor.

  What are we to do with these observations? First of all, are we to dismiss the diffuse disease patients as no longer embodying a cognitive system? Has their agency been lost? If not, what is it about them that characterizes them as members of this species? I don’t have an answer. It seems to me that brain-diseased patients tell us immediately that we must bring more specificity to the definition of “cognitive penetrability” as a criterion for a cognitive system. I have the strong feeling that there is a real insight here, but a nagging feeling that we can too easily dismiss a lot of cognitive agents.

  To which George replied:

  To: Michael S. Gazzaniga

  From: George A. Miller

  Re: There’s a long, long trail a’winding

  Since you accept, at least tentatively, my definition of cognitive neuroscience, our next task is to try to put it to work. I want to restate the definition, but first I want to get rid of “epistemic system.” Let me begin by pointing in the general direction I had in mind.

  Organic Knowledge Systems. A “knowledge base” is any tangible collection of signals that are arranged according to some accepted coding scheme in order to represent a given body of information. A knowledge base coupled with an information processing system for using it (for storing, retrieving, erasing, comparing, searching, etc.) is a “knowledge system.” Obviously, a knowledge base is useless except as part of a knowledge system. An “organic knowledge system” is one that (unlike a library or a computer) is also governed by biological and psychological principles, i.e., a living, animate, agentive knowledge system.
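  Nothing in the memo specifies an implementation, but the distinction lends itself to a toy sketch. The Python fragment below is purely illustrative, with every class and method name hypothetical: a “knowledge base” rendered as a coded store of signals, and a “knowledge system” as that store coupled with processes for storing, retrieving, erasing, comparing, and searching.

# Hypothetical sketch only; the memo defines concepts, not code.

class KnowledgeBase:
    """A tangible collection of signals arranged under some coding scheme."""
    def __init__(self):
        self.signals = {}  # key -> encoded representation


class KnowledgeSystem:
    """A knowledge base plus an information processing system for using it."""
    def __init__(self):
        self.base = KnowledgeBase()

    def store(self, key, representation):
        self.base.signals[key] = representation

    def retrieve(self, key):
        return self.base.signals.get(key)

    def erase(self, key):
        self.base.signals.pop(key, None)

    def compare(self, key_a, key_b):
        return self.base.signals.get(key_a) == self.base.signals.get(key_b)

    def search(self, predicate):
        # Return the keys whose stored representations satisfy the predicate.
        return [k for k, v in self.base.signals.items() if predicate(v)]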

  Definition of Cognitive Neuroscience. Cognitive neuroscientists attempt to discover the molecular logic of organic knowledge systems, i.e., the principles that, in addition to the principles of physics, chemistry, biology, and psychology, govern the behavior of inanimate matter in living knowledge systems.

  The Cognitive Criterion. It follows from this definition that any behavior unaffected by the state of knowledge of the behaving system is of no concern to cognitive neuroscience.

  Implications of Definition. This definition is compatible with various approaches to cognitive neuroscience: (1) Evolution of knowledge systems. For example, the evolutionary shift from genetically stored knowledge to knowledge acquired from experience. (2) Ontogenesis of knowledge systems. For example, the neural basis of personal memory. (3) Psychology of knowledge systems. For example, the effects of attention, as indicated by evoked potentials, perhaps, on knowledge-governed behavior. (4) Neurology of knowledge systems. For example, the correlation of different types of brain disease. And so on. None of these approaches is novel—which means that we could have something to say about each of them.

  A philosophical objection to this approach is that, by introducing successive definitions of biology, psychology, and cognitive neuroscience in this manner, we have made it reductionistic. That is to say, the principles sought by the cognitive neuroscientist are also principles of psychology, and the principles sought by psychologists are also principles of biology. Since I have always thought of scientific psychology as a branch of biology, this objection carries little weight with me. It would carry greater weight, however, with such distinguished scientists as B. F. Skinner or H. A. Simon.

  Implications of Criterion. A central question in your memo of June 1 might be phrased as follows: What are the operational implications of the claim that “any behavior unaffected by the state of knowledge of the behaving system is of no concern to cognitive neuroscience”?

  Several things occur to me when you press this button. First, Zenon Pylyshyn should not have to assume responsibility for this phrasing of the criterion. As I understand his notion of “cognitive penetrability,” it is intended to discriminate between the fixed “architecture” and the modifiable programs for a mental computer. We, on the other hand, are trying to distinguish what cognitive neuroscientists want to know from what they leave to others. It is not clear to me, in my ignorance of Pylyshyn’s ideas, whether these two distinctions coincide, so the only line I can try to develop is our own.

  Second, I see two obvious ways to apply the criterion: (1) Change an organism’s state of knowledge and try to demonstrate a resultant change in its thinking or behaving. Or (2) leave the organism’s knowledge alone, but vary the materials used in a task to see whether thought or behavior changes as a function of their familiarity.

  If I have understood your example, the case of a patient with diffuse brain disease illustrates one of the difficulties of applying the criterion in manner (1); since it is apparently impossible to change such a patient’s state of knowledge, his memory-governed behavior was of no concern to cognitive neuroscience. For such a patient, therefore, it would be necessary to apply the criterion in manner (2)—essentially, to change the contents of the questions asked until we find something the patient does remember. Does this answer the disturbing question raised at the close of your memo?

  Third, I would think of this criterion as something to guide us, as authors, in picking and choosing what studies to write about and how to organize them. I see nothing wrong with confessing that this is the criterion we used (if, indeed, we did), but it does not seem to me to be something that we must rub the reader’s nose in.

  Levels of Description. One of the biggest problems I have in trying to get my thoughts straight about cognitive neuroscience is that different people work at different levels of description, and no one pays attention to how his level is related to descriptions at other levels. I assume this degree of incoherence is possible because the different levels are only loosely related, which, if true, is an interesting observation in its own right.

  The closest discussions I have seen of the level problem have come from the MIT Artificial Intelligence Laboratory, where I assume that Minsky and Marr have been the guiding lights. It is forced on anyone who works with computers, I guess. For example, in P. H. Winston’s Artificial Intelligence (Addison-Wesley, 1977) eight levels of description of the operation of a computer are distinguished: (1) transistors, (2) flip flops and gates, (3) registers and data paths, (4) machine instructions, (5) compiler or interpreter, (6) LISP, (7) embedded pattern matcher, and (8) intelligent programs. D. Marr and T. Poggio (A theory of human stereo vision, Proc. Royal Soc. London, 1977) bring this closer to neurology when they distinguish four levels of description that should apply both to computers and to brains: (1) transistors and diodes, or neurons and synapses, (2) assemblies made from elements at level (1), e.g., memories, adders, multipliers, (3) the algorithm, or scheme for computation, and (4) the theory of the computation.

  Clearly, most neuroscientists today are gung ho for level (1); neurotransmitters are hot stuff. I have also encountered a little work at level (2)—e.g., Mountcastle’s description of columnar assemblies—so I assume there is more that I don’t know about. Level (3) is as abstract as any neuroscientist had dared to dream about—maybe it has been achieved in such cases as Vince Dethier’s analysis of flies. Level (4) has been neglected, and Marr and Poggio propose that it is the responsibility of artificial intelligence to provide general theories by which the necessary structure of computation at level (3) can be defined.

 
