A Woman Looking at Men Looking at Women


by Siri Hustvedt


  When I first ran across claims like this one, which essentially argue that it is the pattern of information, not matter, that defines life, that organization is what counts, not the material of which it is made, I was deeply puzzled. What does it really mean that information, computation, and feedback are more important than molecules, sugars, and lipids, not to speak of bone and muscle and flesh? Are these biological realities incidental to what we think of as life? What is “information” in this context? For the moment, it is enough to say that after much reading, it became clear to me that this “ethereal commodity” has become dogma, at least for some. The word “information” is ubiquitous, and we know it is a valuable commodity, but its meaning changes with use. What is it?

  In her book How We Became Posthuman, N. Katherine Hayles asks, “When and where did information get constructed as a disembodied medium? How were researchers convinced that humans and machines are brothers under the skin?” By tracing the history of cybernetics and the interdisciplinary Macy conferences held between 1946 and 1953, Hayles argues that in the first conference Norbert Wiener and Claude Shannon devised a theory of information that was at once dematerialized and decontextualized: “Shannon and Wiener defined information so that it would be calculated as the same value regardless of the contexts in which it was embedded, which is to say, they divorced it from meaning.”184 She further notes that not everyone at the conferences thought this was the best strategy.
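  Shannon’s move can be made concrete in a few lines. The sketch below (the function name is my own) computes the Shannon entropy of a message from symbol frequencies alone; two sentences with opposite meanings but the same letters receive exactly the same value, which is the sense in which the measure is “divorced from meaning”:

```python
import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over symbol frequencies.
    The value depends only on the statistical pattern of symbols,
    never on what the message means."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Opposite meanings, identical symbol statistics, identical "information":
print(shannon_entropy_bits("dog bites man"))
print(shannon_entropy_bits("man bites dog"))  # same value as above
```

  The measure registers pattern, not sense: scramble the letters of a sentence into gibberish and its entropy is unchanged.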

  Hayles is right that the word “information” took a new turn with Shannon and Wiener, but she does not say that there was a long precedent for what Shannon and Wiener did. Separating symbols from their meanings to find the essential patterns or laws of human reasoning is what logicians have been doing since the Greeks. Galileo and Descartes both sought abstract solutions to the secrets of nature and mind. In his 1847 The Mathematical Analysis of Logic, the innovative logician George Boole wrote, “They who are acquainted with the present state of the theory of Symbolic Algebra, are aware, that the validity of the processes of analysis does not depend upon the interpretation of the symbols which are employed, but solely on their laws of combination.”185 Boole was interested not only in advances in logic but in uncovering the laws of the human mind. As the mathematician Keith Devlin explains in his book Goodbye, Descartes, “Since Boole, logicians have regularly exploited the possibility of working with ‘meaningless’ symbols. By stripping away the meaning, it is possible to ignore much of the complexity of the real world and concentrate on the pure, abstract patterns of logic.”186 Whitehead writes, “The point of mathematics is that in it we have always got rid of the particular instance, and even of any particular sorts of entities.”187 These patterns are the “deep understanding” of existence. With such a theory, one can move easily from organic bodies to machines and back again. Information remains the same no matter what it is made of, and it is not dependent on its material, its context, or its meaning.
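  Boole’s point about “the laws of combination” can be illustrated briefly. The sketch below (the helper name is my own) verifies a distributive law of Boolean algebra over every assignment of 0 and 1, without ever asking what the symbols stand for; the validity rests on the combinations alone:

```python
from itertools import product

def distributive_law_holds() -> bool:
    """Check x·(y + z) = x·y + x·z over {0, 1}, where "+" is OR and
    "·" is AND. The symbols x, y, z are never given an interpretation;
    only their laws of combination are exercised."""
    return all(
        bool(x and (y or z)) == bool((x and y) or (x and z))
        for x, y, z in product([0, 1], repeat=3)
    )

print(distributive_law_holds())  # True for every assignment
```

  Whether x, y, and z are read as propositions, switches, or classes makes no difference to the result, which is precisely Boole’s claim.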

  Norbert Wiener opens a chapter called “Organization as the Message” in The Human Use of Human Beings: Cybernetics and Society (1950) by confessing that what follows will contain an element of “phantasy.” Fiction and dream will invade his science just a bit. Wiener’s main point, which makes the familiar distinction between form and matter, however, is emphatic, and, by his lights, nonfantastic: “The physical identity of an individual does not consist in the matter of which it is made.” It is the pattern or form that counts, he tells us, whether the form is organic or of some other material: “The individuality of a body is that of a flame rather than that of a stone, of a form rather than of a bit of substance. This form can be transmitted or modified and duplicated, although at present we know only how to duplicate it over a short distance.”188 He ends the chapter by arguing that the reason “we cannot telegraph the pattern of a man from one place to another” is “due to technical difficulties.” In other words, soon it will be possible to beam a man from one place to another. His essential point is that “traffic” in the modern world “is overwhelmingly not so much the transmission of human bodies as the transmission of human information.”189 One can certainly argue that Wiener’s statement was right then and is even more right now. We are drowning in information.

  Wiener’s idea is at once simple and troubling. Information becomes wholly independent of its substance. It is the pattern, not the meaning, that counts. The word “information” may be the single most malleable word in contemporary culture. In 1984, A. M. Schrader published a work in which he found seven hundred definitions of “information science” between 1900 and 1981 and described the general state of affairs as one of “conceptual chaos.”190 Depending on what text you are reading, the word can mean, as it does for Wiener and Shannon, the pattern of communication between source and receiver. It can also mean the content of a so-called cognitive state, the meaning of a sentence in linguistics, or a concept in physics that in some way seems to have been naturalized. In this last definition, no eye or ear or body is needed to take in the information and understand it. It is present before any thinker came along. The very arrangement of atoms and molecules is information. In Information and the Internal Structure of the Universe, Tom Stonier writes, “Information exists. It does not need to be perceived to exist . . . It requires no intelligence to interpret it. It does not have to have meaning to exist. It exists.”191 There is no sense of language use in this statement. I confess I think that how one understands the word “information” must enter into this problem, and that without certain turns in the history of science and technology, it might not have occurred to anyone to speak of information as an inherent property of the material world, that such an understanding of “information” has a rhetorical history. If one defines “information” as patterns of reality that have the potential to be read and interpreted, then the world is indeed plump with information of all kinds—both natural and unnatural.

  Pinker’s faith in information as the appropriate concept for “life” extends to “mind” as well. The mind he is referring to emerged through what is now called the cognitive revolution in the 1950s. Without the cognitive revolution, there would be no evolutionary psychology, no claims that our minds are computers. From this point of view, thinking is computation, and minds are symbolic information-processing machines. Computers can therefore simulate our thought processes or disembodied patterns without any reference to particular gels and oozes—the biological material stuff underneath.

  Rom Harré, the philosopher and psychologist, describes this way of thinking about cognition as essentially dualistic. “The way that the architects of the First Cognitive Revolution constrained their model of the human mind quickly developed into the analogy of the computer and the running of its programmes . . . As a psychology, this form of cognitivism had a number of disturbing aspects. It preserved a generally Cartesian picture of ‘the mind’ as some kind of diaphanous mechanism, a mechanism which operated upon such non-material stuff as ‘information.’ ”192 Darwin would have been surprised to find that after his death, his careful observations of plants and animals and his idea of natural selection would be yoked to machine computation, not to speak of Descartes’s schism between mind and body.

  Mind as Literal Computer?

  The idea that the mind literally is a computer fascinates me. Unlike “hard-wiring,” “computation” as a description of mental processes is not used as a metaphor. Pinker, for example, refers to the mind as a “neural computer.” Although the roots of the idea may be traced back to Pythagorean mysticism and mathematics, Greek logic, Galileo, mechanistic philosophers of the seventeenth century, and Newton, who was influenced by them, another more recent antecedent is the mathematician, logician, and philosopher Gottlob Frege (1848–1925), who made further advances in logic after Boole, created a formal notation for the movements of reasoning, and had a shaping influence on Anglo-American analytical philosophy. Frege believed that logic and mathematical truths were not the property of the human mind—he was a fervent opponent of “psychologism,” the idea that logic is a mental product—and argued for “a third realm.” Logic is not rooted in our everyday perceptual knowledge of the world but rather in universal principles, an idea with obvious Platonic and Cartesian resonances—a belief in eternal forms that are wholly unrelated to the experiences of our sensual, material bodies. Truth is out there waiting to be found.

  Without belaboring the ongoing debates about whether logic, mathematics, and information are fallible or absolute, a product of the human mind or a discovery of it, it is vital to understand that a great deal rests on this dispute because it lies at the heart of a definition of mind that has dominated Western philosophy and science for centuries. Without the assumption that in some way the workings of our minds can be reduced to a set of objective, mechanistic, symbolic computational processes that are wholly unrelated to matter, there could be no evolutionary psychology.

  In Evolutionary Psychology: A Primer, Leda Cosmides and John Tooby summarize their view: “The mind is a set of information-processing machines that were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors.” Sociobiology and the computational theory of mind (CTM) are married in this sentence. According to this view, the mind is a conglomeration of specific modular mechanisms—the number of which is unknown. Cosmides and Tooby speculate that there may be “hundreds or thousands” of them. They also state clearly that they reject a hard line between nature and nurture: “A defining characteristic of the field,” they write, “is the explicit rejection of the usual nature/nurture dichotomies . . . What effect the environment will have on an organism depends critically on the details of its evolved cognitive architecture.”193 This sounds eminently reasonable to me.

  For Cosmides and Tooby, however, this mental “architecture,” which, by their own definition, “houses a stone age mind,” is highly specified and mostly fixed by natural selection. Therefore, one can draw a straight line between those male hunters out for prey and 3-D spatial rotation skills without worrying about the many thousands of years between then and now. Despite their rejection of the nature/nurture divide, Cosmides and Tooby promote minds with a hard evolutionary rigidity reminiscent of Galton, mind machines that have “innate psychological mechanisms.” Indeed, if the mind were flexible, its architecture would probably house a more up-to-date or modern mind. The architecture Cosmides and Tooby are referring to is not brain architecture. Gels and oozes do not particularly bother them.

  It is important to mention that the lives of those “hunter-gatherer ancestors” are not open books. We have no access to them because they are long gone. What we know about their lives is based on the hunter-gatherer societies that remain with us on earth, which are not wholly uniform as cultures. Although they all hunt and gather, they are also different from one another. In an essay called “Some Anthropological Objections to Evolutionary Psychology,” the anthropologist C. R. Hallpike writes, “While . . . we are quite well informed about physical conditions in East Africa one or two million years ago, by the standards of ethology and of social anthropology we know virtually nothing about the social relations and organization of our ancestors in those remote epochs, and even less about their mental capacities.”194 Hallpike goes on to say that nobody even knows whether these people had grammatical language, which makes any discussion of evolutionary adaptations extremely difficult. These Stone Age people with their Stone Age minds may be more “real” than Vico’s giants or Bigfoot, but our knowledge of them and the specifics of their lives are cloudy at best.

  Does computation serve as a good literal description of our minds? Those hunter-gatherer Pleistocene ancestors on the African savanna, whom evolutionary psychologists are continually evoking to explain our minds today, knew nothing of the computer, but they are described as having something of the sort up in their Stone Age heads long before the machine was invented. There is nothing wrong with projecting the computer backward several millennia to describe the human mind if, in fact, it does function as one. Although people have “computed” problems for a long time, the computer as a machine is a recent invention dependent on human beings for its existence. Still, the confidence of this characterization never fails to amaze me: the mind, which has long been with us in some form or other, could not have been understood in this way until the machine came into existence. Although it is undoubtedly true that I am “processing information” daily, is my mind really a computational device, one with hundreds or maybe even thousands of problem-solving modules? Descartes would have balked at the idea that the mind, like the body, is a kind of machine. Nevertheless, as Harré noted, there is a strong Cartesian quality to this computing, problem-solving, curiously dematerialized mind.

  The mind as envisioned by Cosmides, Tooby, Pinker, David Buss, and others in the field is characterized by countless naturally selected discrete mechanisms that have evolved to address particular problems. The mind is composed of machine modules. This idea of a “modular mind” comes from the analytical philosopher Jerry Fodor’s influential book The Modularity of Mind (1983). Fodor is a philosopher who believes that all human beings share a mental conceptual structure, a fundamental mode of thought that takes logical form, which he calls “mentalese.” This language of thought is not the same as actual spoken language but lies underneath words as its abstract logic. Fodor’s modular theory of cognition argues that some, not all, psychological processes are isolated and information is encapsulated in its own domain. Perceiving an object, according to this view, may not rely on other aspects of cognition, such as language, but rather can be computed in its own distinct bounded realm.

  In Pinker’s view, each mind module, “whose logic is specified by our genetic program,” has a special task.195 The computer analogy is built into the prose, as it is into many papers in the cognitive sciences and neurosciences. The program is assumed, and it appears to function much like the one Jacob proposed in 1970 when his book was first published in France and the one Dawkins elaborated in The Selfish Gene. Genes are the “program.” But evolutionary psychology further relies on an idea that has been called “massive modularity.” The entire human mind is domain specific. This resembles the extreme locationist views that have come and gone in neurology. The information-processing model, however, as we have seen, is not dependent on real brains.

  The mind is compartmentalized into boxes, each one evolutionarily designed for a special problem, a kind of modern phrenology of mind, not brain. Franz Joseph Gall also proposed modules that could be divined from reading the human skull. The modules of evolutionary psychology are mostly but not entirely inherited, part of an internal human nature or conceptual mental architecture. This does not mean there is no “input” from the environment, but rather that each of these hypothetical mental “modules” carries within it “innate knowledge.” Acquiring language, growing up, our particular personalities, and the psychological differences between the sexes are less about our environments now, although they certainly have an impact, and more about how our minds evolved in relation to the environment over millennia. There is a biological organism, of course, but the psychology of that evolved organism is conceived through discrete machine-mind, quasi-Cartesian modules.

  Why are these people so sure about mental modules? A long history and thousands of questions precede this assumption. “How do I know I am actually here sitting by the fire?” is not even on the horizon. Rather, questions generated from one answer after another have resulted in a truism: we have evolved massively modular computational minds. Jerry Fodor, who might be described as Professor Modularity himself, was critical of what he regarded as Pinker’s ungrounded confidence in a wholly modular mind. He responded to the latter’s How the Mind Works with a book of his own: The Mind Doesn’t Work That Way.196 Fodor, unlike Pinker, does not believe that so-called higher cognitive processes such as analogous thinking are modular. He believes that this kind of thought cannot possibly rely on discrete modules.

  Despite many questions in evolution about which traits are adaptations and which ones aren’t and whether we are still evolving or have stopped, many scholars in many fields accept the central Darwinian principle that we are evolved beings. The neo-Darwinian thought of someone like Dawkins is more controversial. In 2012, the analytical philosopher Thomas Nagel published Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. His critique of neo-Darwinian evolutionary theory prompted an instantaneous and often brutal response. In a tweet, Pinker asked, “What has gotten into Thomas Nagel?” and referred to “the shoddy reasoning of a once-great thinker.”197 Nagel is dissatisfied with reductive materialism and argues it cannot account for conscious subjectivity, a problem he famously described in an essay published in 1974 called “What Is It Like to Be a Bat?”

  In the essay, Nagel argues that the subjective experience of being you, me, or a bat takes place from a particular first-person perspective for you, me, or the bat and that no objective third-person description can fully characterize that reality. He is not arguing against objective positions, but rather that by reducing the subjective to the objective, something goes missing: “Every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.”198 Nagel’s work is a model of lucid philosophical prose. He stands out from many of his peers like a beam from a lighthouse on a foggy night. His style reminds me of Descartes’s in its purity. And like Descartes, Nagel understands that there is something particular about subjective experience.

 
