I Am a Strange Loop

by Douglas R. Hofstadter


  That this flipping-around takes place is not in the least amazing or miraculous; rather, it is a quite unremarkable, indeed trivial, consequence of the being’s ability to perceive. It is no more surprising than the fact that audio feedback can take place or that a TV camera can be pointed at a screen to which its image is being sent. Some people may find the notion of such self-perception peculiar, pointless, or even perverse, but such a prejudice does not make self-perception a complex or subtle idea, let alone paradoxical. After all, in the case of a being struggling to survive, the one thing that is always in its environment is… itself. So why, of all things, should the being be perceptually immune to the most salient item in its world? Now that would seem perverse!

  Such a lacuna would be reminiscent of a language whose vocabulary kept growing and growing yet without ever developing words for such common concepts as are named by the English words “say”, “speak”, “word”, “language”, “understand”, “ask”, “question”, “answer”, “talk”, “converse”, “claim”, “deny”, “argue”, “tell”, “sentence”, “story”, “book”, “read”, “insist”, “describe”, “translate”, “paraphrase”, “repeat”, “lie”, “hedge”, “noun”, “verb”, “tense”, “letter”, “syllable”, “plural”, “meaning”, “grammar”, “emphasize”, “refer”, “pronounce”, “exaggerate”, “bluster”, and so forth. If such a peculiarly self-ignorant language existed, then as it grew in flexibility and sophistication, its speakers would engage ever more in talking, arguing, blustering, and so forth, but without ever referring to these activities, and such entities as questions, answers, and lies would become (even while remaining unnamed) ever more salient and numerous. Like the hobbled formalisms that came out of Bertrand Russell’s timid theory of types, this language would have a gaping hole at its core — the lack of any mechanism for a word or utterance or book (etc.) to refer to itself. Analogously, for a living creature to have evolved rich capabilities of perception and categorization but to be constitutionally incapable of focusing any of that apparatus onto itself would be highly anomalous. Its selective neglect would be pathological, and would threaten its survival.

  Varieties of Looping

  To be sure, the most primitive living creatures have little or no self-perception. By analogy, we can think of a TV camera rigidly bolted on top of a TV set and facing away from the screen, like a flashlight tightly attached to a miner’s helmet, always pointing away from the miner’s eyes, never into them. In such a TV setup, obviously, a self-turned loop is out of the question. No matter how you turn it, the camera and the TV set turn in synchrony, preventing the closing of a loop.

  We next imagine a more “evolved”, hence more flexible, setup; this time the camera, rather than being bolted onto its TV set, is attached to it by a “short leash”. Here, depending on the length and flexibility of the cord, it may be possible for the camera to twist around sufficiently to capture at least part of the TV screen in its viewfinder, giving rise to a truncated corridor. The biological counterpart to feedback of this level of sophistication may be the way our pet animals or even young children are slightly self-aware.

  The next stage, obviously, is where the “leash” is sufficiently long and flexible that the video camera can point straight at the center of the screen. This will allow an endless corridor, which is far richer than a truncated one. Even so, the possibility of closing the self-watching loop does not pin down the system’s richness, because there still are many options open. Can the camera tilt or not, and if so, by how much? Can it zoom in or out? Is its image in color, or just in black and white? Can brightness and contrast be tweaked? What degree of resolution does the image have? What percentage of time is spent in self-observation as opposed to observation of the environment? Is there some way for the video camera itself to appear on the screen? And on and on. There are still many parameters to play with, so the potential loop has many open dimensions of sophistication.
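
  For the computationally inclined reader, the flavor of this loop is easy to capture in a few lines of code. The toy sketch below (Python with NumPy; the zoom factor and offsets are arbitrary choices of mine, not anything dictated by the analogy) treats the screen as a grid of pixels and lets each new frame be the old frame with a shrunken copy of itself pasted inside, which is how the nested corridor arises; fiddling with the parameters corresponds to tilting, zooming, and so forth.

import numpy as np

def feedback_frame(screen, zoom=0.8, dx=5, dy=5):
    """One pass of the camera-points-at-screen loop.

    The 'camera' sees the whole screen; a copy of that view, scaled
    down by `zoom` and shifted by (dx, dy), is pasted back onto the
    screen, just as a monitor would display its own camera's image.
    """
    h, w = screen.shape
    small_h, small_w = int(h * zoom), int(w * zoom)
    # Crude nearest-neighbor downscaling of the current frame.
    rows = np.arange(small_h) * h // small_h
    cols = np.arange(small_w) * w // small_w
    small = screen[rows][:, cols]
    new = screen.copy()
    new[dy:dy + small_h, dx:dx + small_w] = small
    return new

# A 'screen' showing a single bright blob on a dark background.
screen = np.zeros((100, 100))
screen[40:60, 40:60] = 1.0

# Iterating the loop yields the nested, ever-smaller copies of the
# endless corridor; changing zoom, dx, and dy changes its character.
for _ in range(10):
    screen = feedback_frame(screen)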

  Reception versus Perception

  Despite the richness afforded by all these options, a self-watching television system will always lack one crucial aspect: the capacity of perception, as opposed to mere reception, or image-receiving. Perception takes as its starting point some kind of input (possibly but not necessarily a two-dimensional image) composed of a vast number of tiny signals, but then it goes much further, eventually winding up in the selective triggering of a small subset of a large repertoire of dormant symbols — discrete structures that have representational quality. That is to say, a symbol inside a cranium, just like a simmball in the hypothetical careenium, should be thought of as a triggerable physical structure that constitutes the brain’s way of implementing a particular category or concept.

  I should offer a quick caveat concerning the word “symbol” in this new sense, since the word comes laden with many prior associations, some of which I definitely want to avoid. We often refer to written tokens (letters of the alphabet, numerals, musical notes on paper, Chinese characters, and so forth) as “symbols”. That’s not the meaning I have in mind here. We also sometimes talk of objects in a myth, dream, or allegory (for example, a key, a flame, a ring, a sword, an eagle, a cigar, a tunnel) as being “symbols” standing for something else. This is not the meaning I have in mind, either. The idea I want to convey by the phrase “a symbol in the brain” is that some specific structure inside your cranium (or your careenium, depending on what species you belong to) gets activated whenever you think of, say, the Eiffel Tower. That brain structure, whatever it might be, is what I would call your “Eiffel Tower symbol”.

  You also have an “Albert Einstein” symbol, an “Antarctica” symbol, and a “penguin” symbol, the latter being some kind of structure inside your brain that gets triggered when you perceive one or more penguins, or even when you are just thinking about penguins without perceiving any. There are also, in your brain, symbols for action concepts like “kick”, “kiss”, and “kill”, for relational concepts like “before”, “behind”, and “between”, and so on. In this book, then, symbols in a brain are the neurological entities that correspond to concepts, just as genes are the chemical entities that correspond to hereditary traits. Each symbol is dormant most of the time (after all, most of us seldom think about cotton candy, egg-drop soup, St. Thomas Aquinas, Fermat’s last theorem, Jupiter’s Great Red Spot, or dental-floss dispensers), but on the other hand, every symbol in our brain’s repertoire is potentially triggerable at any time.

  The passage leading from vast numbers of received signals to a handful of triggered symbols is a kind of funneling process in which initial input signals are manipulated or “massaged”, the results of which selectively trigger further (i.e., more “internal”) signals, and so forth. This baton-passing by squads of signals traces out an ever-narrowing pathway in the brain, which winds up triggering a small set of symbols whose identities are of course a subtle function of the original input signals.

  Thus, to give a hopefully amusing example, myriads of microscopic olfactory twitchings in the nostrils of a voyager walking down an airport concourse can lead, depending on the voyager’s state of hunger and past experiences, to a joint triggering of the two symbols “sweet” and “smell”, or a triggering of the symbols “gooey” and “fattening”, or of the symbols “Cinnabon” and “nearby”, or of the symbols “wafting”, “advertising”, “subliminal”, “sly”, and “gimmick” — or perhaps a triggering of all eleven of these symbols in the brain, in some sequence or other. Each of these examples of symbol-triggering constitutes an act of perception, as opposed to the mere reception of a gigantic number of microscopic signals arriving from some source, like a million raindrops landing on a roof.
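
  A deliberately crude sketch may help make this funneling vivid. In the toy program below (Python with NumPy), a thousand raw “receptor” signals are massaged through two stages of weighted sums, and only the few symbols pushed past a threshold actually fire; the symbol names, weights, and threshold are all invented for illustration, and real brains of course do nothing so tidy.

import numpy as np

rng = np.random.default_rng(0)

# A large repertoire of dormant symbols (names invented for illustration).
SYMBOLS = ["sweet", "smell", "gooey", "fattening", "Cinnabon",
           "nearby", "wafting", "advertising", "subliminal", "sly",
           "gimmick", "penguin", "Eiffel Tower"]  # ...and thousands more

# Each massaging stage is just a bank of weighted sums: squads of
# signals passing the baton inward along an ever-narrowing pathway.
stage1 = rng.normal(size=(1000, 50))          # 1000 raw signals -> 50
stage2 = rng.normal(size=(50, len(SYMBOLS)))  # 50 -> one line per symbol

def perceive(raw_signals, threshold=2.0):
    """Funnel myriads of tiny signals down to a few triggered symbols."""
    massaged = np.tanh(raw_signals @ stage1)   # first inward squad
    activation = massaged @ stage2             # final symbol activations
    # Only the symbols pushed past the threshold actually fire; the
    # rest of the repertoire stays dormant.
    return [s for s, a in zip(SYMBOLS, activation) if a > threshold]

whiff = rng.normal(size=1000)  # a concourse-full of olfactory twitchings
print(perceive(whiff))         # which handful fires is a subtle
                               # function of the original input signals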

  In the interests of clarity, I have painted too simple a picture of the process of perception, for in reality, there is a great deal of two-way flow. Signals don’t propagate solely from the outside inwards, towards symbols; expectations from past experiences simultaneously give rise to signals propagating outwards from certain symbols. There takes place a kind of negotiation between inward-bound and outward-bound signals, and the result is the locking-in of a pathway connecting raw input to symbolic interpretation. This mixture of directions of flow in the brain makes perception a truly complex process. For the present purposes, though, it suffices to say that perception means that, thanks to a rapid two-way flurry of signal-passing, impinging torrents of input signals wind up triggering a small set of symbols, or in less biological words, activating a few concepts.
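
  The same toy funnel can be given this two-way flavor. In the sketch below (again Python, with every number and name invented for illustration), an outward-bound “expectation” vector repeatedly negotiates with the inward-bound evidence until an interpretation locks in; a voyager primed for pastry settles on different symbols than an unprimed one would.

import numpy as np

rng = np.random.default_rng(1)
SYMBOLS = ["sweet", "gooey", "Cinnabon", "nearby", "penguin"]
W = rng.normal(size=(50, len(SYMBOLS)))  # inward-bound pathway weights

def negotiate(evidence, expectation, rounds=10, mix=0.5):
    """Settle on an interpretation via two-way signal traffic.

    `evidence` flows from the outside inwards; `expectation` (signals
    propagating outwards from already-active symbols) pulls the verdict
    toward what past experience predicts. Repeated mixing locks in a
    pathway from raw input to symbolic interpretation.
    """
    for _ in range(rounds):
        # Inward-bound evidence and outward-bound expectation
        # repeatedly re-shape one another.
        activation = mix * (evidence @ W) + (1 - mix) * expectation
        expectation = np.tanh(activation)   # symbols broadcast back out
    return [s for s, a in zip(SYMBOLS, activation) if a > 1.0]

evidence = rng.normal(size=50)
hungry_prior = np.array([2.0, 2.0, 2.0, 0.5, -2.0])  # primed for pastry
print(negotiate(evidence, hungry_prior))   # the locked-in interpretation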

  In summary, the missing ingredient in a video system, no matter how high its visual fidelity, is a repertoire of symbols that can be selectively triggered. Only if such a repertoire existed and were accessed could we say that the system was actually perceiving anything. Still, nothing prevents us from imagining augmenting a vanilla video system with additional circuitry of great sophistication that supports a cascade of signal-massaging processes that lead toward a repertoire of potentially triggerable symbols. Indeed, thinking about how one might tackle such an engineering challenge is a helpful way of simultaneously envisioning the process of perception in the brain of a living creature and its counterpart in the cognitive system of an artificial mind (or an alien creature, for that matter). However, quite obviously, not all realizations of such an architecture, whether earthbound, alien, or artificial, will possess equally rich repertoires of symbols to be potentially triggered by incoming stimuli. As I have done earlier in this book, I wish once again to consider sliding up the scale of sophistication.

  Mosquito Symbols

  Suppose we begin with a humble mosquito (not that I know any arrogant ones). What kind of representation of the outside world does such a primitive creature have? In other words, what kind of symbol repertoire is housed inside its brain, available for tapping into by perceptual processes? Does a mosquito even know or believe that there are objects “out there”? Suppose the answer is yes, though I am skeptical about that. Does it assign the objects it registers as such to any kind of categories? Do words like “know” or “believe” apply in any sense to a mosquito?

  Let’s be a little more concrete. Does a mosquito (of course without using words) divide the external world up into mental categories like “chair”, “curtain”, “wall”, “ceiling”, “person”, “dog”, “fur”, “leg”, “head”, or “tail”? In other words, does a mosquito’s brain incorporate symbols — discrete triggerable structures — for such relatively high abstractions? This seems pretty unlikely; after all, to do its mosquito thing, a mosquito could do perfectly well without such “intellectual” luxuries. Who cares if I’m biting a dog, a cat, a mouse, or a human — and who cares if it’s an arm, an ear, a tail, or a leg — as long as I’m drawing blood?

  What kinds of categories, then, does a mosquito need to have? Something like “potential source of food” (a “goodie”, for short) and “potential place to land” (a “port”, for short) seem about as rich as I expect its category system to be. It may also be dimly aware of something that we humans would call a “potential threat” — a certain kind of rapidly moving shadow or visual contrast (a “baddie”, for short). But then again, “aware”, even with the modifier “dimly”, may be too strong a word. The key issue here is whether a mosquito has symbols for such categories, or could instead get away with a simpler type of machinery not involving any kind of perceptual cascade of signals that culminates in the triggering of symbols.

  If this talk of bypassing symbols and managing with a very austere substitute for perception strikes you as a bit blurry, then consider the following questions. Is a toilet aware, no matter how slightly, of its water level? Is a thermostat aware, albeit extremely feebly, of the temperature it is controlling? Is a heat-seeking missile aware, be it ever so minimally, of the heat emanating from the airplane that it is pursuing? Is the Exploratorium’s jovially jumping red spot aware, though only terribly rudimentarily, of the people from whom it is forever so gaily darting away? If you answered “no” to these questions, then imagine similarly unaware mechanisms inside a mosquito’s head, enabling it to find blood and to avoid getting bashed, yet to accomplish these feats without using any ideas.
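
  To see how utterly idea-free such machinery can be, consider the following sketch, a caricature of my own devising rather than a model of any real device or insect: a thermostat and a “mosquito reflex” written as bare threshold rules, with no repertoire of symbols anywhere in the loop.

def thermostat(temperature, setpoint=20.0):
    """No symbols, no categories, no ideas: just a comparison.

    The device does not 'know' that it is cold; a number crosses a
    threshold and a switch flips. (The setpoint is an arbitrary
    illustration.)
    """
    return "heater on" if temperature < setpoint else "heater off"

def mosquito_reflex(warmth, co2, looming_shadow):
    """A symbol-free stand-in for mosquito 'perception'.

    Raw sensor values map straight to motor commands; at no point is
    any structure triggered that represents 'goodie', 'baddie', or
    'port'. The loop never passes through a repertoire of concepts.
    """
    if looming_shadow > 0.8:        # rapidly moving contrast: flee
        return "veer away"
    if warmth > 0.5 and co2 > 0.5:  # gradient smells like blood: approach
        return "fly upwind"
    return "wander"

print(thermostat(17.5))                # 'heater on'
print(mosquito_reflex(0.7, 0.9, 0.1))  # 'fly upwind'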

  Mosquito Selves

  Having considered mosquito symbols, we now inch closer to the core of our quest. What is the nature of a mosquito’s interiority? That is, what is a mosquito’s experience of “I”-ness? How rich a sense of self is a mosquito endowed with? These questions are very ambitious, so let’s try something a little simpler. Does a mosquito have a visual image of how it looks? I hope you share my skepticism on this score. Does a mosquito know that it has wings or legs or a head? Where on earth would it get ideas like “wings” or “head”? Does it know that it has eyes or a proboscis? The mere suggestion seems ludicrous. How would it ever find such things out? Let’s instead speculate a bit about our mosquito’s knowledge of its own internal state. Does it have a sense of being hot or cold? Of being tuckered out or full of pep? Hungry or starved? Happy or sad? Hopeful or frightened? I’m sorry, but even these strike me as lying well beyond the pale, for an entity as humble as a mosquito.

  Well then, how about more basic things like “in pain” and “not in pain”? I am still skeptical. On the other hand, I can easily imagine signals sent from a mosquito’s eye to its brain and causing other signals to bounce back to its wings, amounting to a reflex verbalizable to us humans as “Flee threat on left” or simply “Outta here!” — but putting it into telegraphic English words in this fashion still makes the mosquito sound too aware, I am afraid. I would be quite happy to compare a mosquito’s inner life to that of a flush toilet or a thermostat, but that’s about as far as I personally would go. Mosquito behavior strikes me as perfectly comprehensible without recourse to anything that deserves the name “symbol”. In other words, a mosquito’s wordless and conceptless danger-fleeing behavior may be less like perception as we humans know it, and more like the wordless and conceptless hammer-fleeing behavior of your knee when the doctor’s hammer hits it and you reflexively kick. Does a mosquito have more of an inner life than your knee does?

  Does a mosquito have even the tiniest glimmering of itself as being a moving part in a vast world? Once again, I suspect not, because this would require all sorts of abstract symbols to reside in its microscopic brain — symbols for such notions as “big”, “small”, “part”, “place”, “move”, and so on, not to mention “myself”. Why would a mosquito need such luxuries? How would they help it find blood or a mate more efficiently? A hypothetical mosquito that had enough brainpower to house fancy symbols like these would be an egghead with a lot more neurons to carry around than its more streamlined and simpleminded cousins, and it would thereby be heavier and slower than they are, meaning that it wouldn’t be able to compete with them in the quests for blood and reproduction, and so it would lose out in the evolutionary race.

  My intuition, at any rate, is that a mosquito’s very efficient teeny little nervous system lacks perceptual categories (and hence symbols) altogether. If I am not mistaken, this reduces the kind of self-perception loops that can exist in a mosquito’s brain to an exceedingly low level, thus rendering a mosquito a very “small-souled man” indeed. I hope it doesn’t sound too blasphemous or crazy if I suggest that a mosquito’s “soul” might be roughly the same “size” as that of the little red spot of light that bounces around on the wall at the Exploratorium — let’s say, one ten-billionth of one huneker (i.e., roughly one trillionth of a human soul).

  To be sure, I’m being flippant in making this numerical estimate, but I am quite serious in presenting my subjective guess about whether symbols are present or absent in a mosquito’s brain. Nevertheless, it is just a subjective guess, and you may not agree with it, but disputes about such fine points are not germane here. The key point is much simpler and cruder: merely that there is some kind of creature to which essentially this level of complexity, and no greater level, would apply. If you disagree with my judgment, then I invite you to slide up or down the scale of various animal intellects until you feel you have hit the appropriate level.

  One last reflection on all this. Some readers might protest, with what sounds like great sincerity, about all these questions about a mosquito’s-eye view on the world: “How could we ever know? You and I can’t get inside a mosquito’s brain or mind — no one can. For all I know, mosquitoes are every bit as conscious as I am!” Well, I would respectfully suggest that such claims cannot be sincere, because here’s ten bucks that say such readers would swat a mosquito perched on their arm without giving it a second thought. Now if they truly believe that mosquitoes are quite possibly every bit as sentient as themselves, then how come they’re willing to snuff mosquito lives in an instant? Are these people not vile monsters if they are untroubled by executing living creatures who, they claim, may well enjoy just as much consciousness as do humans? I think you have to judge people’s opinions not by their words, but by their deeds.

  An Interlude on Robot Vehicles

  Before moving on to consider higher animal species, I wish to insert a brief discussion of cars that drive themselves down smooth highways or across rocky deserts. Aboard any such vehicle are one or more television cameras (and laser rangefinders and other kinds of sensors) equipped with extra processors that allow the vehicle to make sense of its environment. No amount of simplistic analysis of just the colors or the raw shapes on the screen is going to provide good advice as to how to get around obstacles without toppling or getting stuck. Such a system, in order to drive itself successfully, has to have a nontrivial storehouse of prepackaged knowledge structures that can be selectively triggered by the scene outside. Thus, some knowledge of such abstractions as “road”, “hill”, “gulley”, “mud”, “rock”, “tree”, “sand”, and many others will be needed if the vehicle is going to avoid getting stuck in mud, trapped in a gulley, or wedged between two boulders. The television cameras and the rangefinders (etc.) provide only the simplest initial stages of the vehicle’s “perceptual process”, and the triggering of various knowledge structures of the sort that were just mentioned corresponds to the far end, the symbolic end, of the process.
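
  The far, symbolic end of this pipeline is easy to caricature in code. In the sketch below (Python; all the features, trigger rules, and advice strings are invented for illustration and bear no relation to any real vehicle’s software), a handful of sensor summaries selectively trigger prepackaged knowledge structures, each carrying its own advice about what to do.

# A toy of the symbolic end of a robot vehicle's perception: raw
# sensor summaries selectively trigger prepackaged knowledge
# structures, each of which carries advice. (All names, features,
# and rules are invented for illustration.)

KNOWLEDGE = {
    # knowledge structure: (trigger test on sensor features, advice)
    "mud":    (lambda f: f["wheel_slip"] > 0.6, "slow down, seek firm ground"),
    "gulley": (lambda f: f["depth_drop"] > 1.0, "steer along it, not across"),
    "rock":   (lambda f: f["obstacle_height"] > 0.4, "route around"),
    "road":   (lambda f: f["surface_smoothness"] > 0.8, "follow it"),
}

def perceive_terrain(features):
    """Trigger whichever knowledge structures the scene calls up."""
    return {name: advice
            for name, (test, advice) in KNOWLEDGE.items()
            if test(features)}

# Summaries that the earlier, simpler stages (cameras, rangefinders)
# would have distilled from millions of raw pixels and range points:
scene = {"wheel_slip": 0.7, "depth_drop": 0.2,
         "obstacle_height": 0.5, "surface_smoothness": 0.1}
print(perceive_terrain(scene))   # triggers 'mud' and 'rock' advice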

 
