At the same time as I myself was getting ever more used to the fact that this “I” thing was responsible for what I did, my parents and friends were also becoming more convinced that there was indeed something very real-seeming “in there” (in other words, something very marble-like, something with its unique brands of “hardness” and “resilience” and “shape”), which merited being called “you” or “he” or “Douggie”, and that also merited being called “I” by Douggie — and so once again, the sense of reality of this “I” was being reinforced over and over again, in myriad ways. By the time this brain had lived in this body for a couple of years or so, the “I” notion was locked into it beyond any conceivable hope of reversal.
…But Am I Real?
And yet, was this “I”, for all its tremendous stability and apparent utility, a real thing, or was it just a comfortable myth? I think we need some good old-fashioned analogies here to help out. And so I ask you, dear reader, are temperature and pressure real things, or are they just façons de parler? Is a rainbow a real thing, or is it nonexistent? Perhaps more to the point, was the “marble” that I discovered inside my box of envelopes real?
What if the box had been sealed shut so I had no way of looking at the individual envelopes? What if my knowledge of the box of envelopes necessarily came from dealing with its hundred envelopes as a single whole, so that no shifting back and forth between coarse-grained and fine-grained perspectives was possible? What if I hadn’t even known there were envelopes in the box, but had simply thought that there was a somewhat squeezable, pliable mass of softish stuff that I could grab with my entire hand, and that at this soft mass’s center there was something much more rigid-feeling and undeniably spherical in shape?
If, in addition, it turned out that talking about this supposed marble had enormously useful explanatory power in my life, and if, on top of that, all my friends had similar cardboard boxes and all of them spoke ceaselessly — and wholly unskeptically — about the “marbles” inside their boxes, then it would soon become pretty irresistible to me to accept my own marble as part of the world and to allude to it frequently in my explanations of various phenomena in the world. Indeed, any oddballs who denied the existence of marbles inside their cardboard boxes would be accused of having lost their marbles.
And thus it is with this notion of “I”. Because it encapsulates so neatly and so efficiently for us what we perceive to be truly important aspects of causality in the world, we cannot help attributing reality to our “I” and to those of other people — indeed, the highest possible level of reality.
The Size of the Strange Loop that Constitutes a Self
One more time, let’s go back and talk about mosquitoes and dogs. Do they have anything like an “I” symbol? In Chapter 1, when I spoke of “small souls” and “large souls”, I said that this is not a black-and-white matter but one of degree. We thus have to ask, is there a strange loop — a sophisticated level-crossing feedback loop — inside a mosquito’s head? Does a mosquito have a rich, symbolic representation of itself, including representations of its desires and of entities that threaten those desires, and does it have a representation of itself in comparison with other selves? Could a mosquito think a thought even vaguely reminiscent of “I can smile just like Hopalong Cassidy!” — for example, “I can bite just like Buzzaround Betty!”? I think the answer to these and similar questions is quite obviously, “No way in the world!” (thanks to the incredibly spartan symbol repertoire of a mosquito brain, barely larger than the symbol repertoire of a flush toilet or a thermostat), and accordingly, I have no qualms about dismissing the idea of there being a strange loop of selfhood in as tiny and swattable a brain as that of a mosquito.
On the other hand, where dogs are concerned, I find, not surprisingly, much more reason to think that there are at least the rudiments of such a loop in there. Not only do dogs have brains that house many rather subtle categories (such as “UPS truck” or “things I can pick up in the house and walk around with in my mouth without being punished”), but also they seem to have some rudimentary understanding of their own desires and the desires of others, whether those others are other dogs or human beings. A dog often knows when its master is unhappy with it, and wags its tail in the hopes of restoring good feelings. Nonetheless, a dog, saliently lacking an arbitrarily extensible concept repertoire and therefore possessing only a rudimentary episodic memory (and of course totally lacking any permanent storehouse of imagined future events strung out along a mental timeline, let alone counterfactual scenarios hovering around the past, the present, and even the future), necessarily has a self-representation far simpler than that of an adult human, and for that reason a dog has a far smaller soul.
The Supposed Selves of Robot Vehicles
I was most impressed when I read about “Stanley”, a robot vehicle developed at the Stanford Artificial Intelligence Laboratory that not too long ago drove all by itself across the Nevada desert, relying just on its laser rangefinders, its television camera, and GPS navigation. I could not help asking myself, “How much of an ‘I’ does Stanley have?”
In an interview shortly after the triumphant desert crossing, one gung-ho industrialist, the director of research and development at Intel (keep in mind that Intel manufactured the computer hardware on board Stanley), bluntly proclaimed: “Deep Blue [IBM’s chess machine that defeated world champion Garry Kasparov in 1997] was just processing power. It didn’t think. Stanley thinks.”
Well, with all due respect for the remarkable collective accomplishment that Stanley represents, I can only comment that this remark constitutes shameless, unadulterated, and naïve hype. I see things very differently. If and when Stanley ever acquires the ability to form limitlessly snowballing categories such as those in the list that opened this chapter, then I’ll be happy to say that Stanley thinks. At present, though, its ability to cross a desert without self-destructing strikes me as comparable to an ant’s following a dense pheromone trail across a vacant lot without perishing. Such autonomy on the part of a robot vehicle is hardly to be sneezed at, but it’s a far cry from thinking and a far cry from having an “I”.
At one point, Stanley’s video camera picked up another robot vehicle ahead of it (this was H1, a rival vehicle from Carnegie Mellon University), and eventually Stanley pulled around H1 and left it in its dust. (By the way, I am carefully avoiding the pronoun “he” in this text, although it was par for the course in journalistic references to Stanley, and perhaps at the AI Lab as well, given that the vehicle had been given a human name. Unfortunately, such linguistic sloppiness is the first step down a slippery slope that soon winds up in full-blown anthropomorphism.) One can watch this event take place on the videotape made by that camera, and it is the climax of the whole story. At this crucial moment, did Stanley recognize the other vehicle as being “like me”? Did Stanley think, as it gaily whipped by H1, “There but for the grace of God go I”, or perhaps “Aha, gotcha!”? Come to think of it, why did I write that Stanley “gaily whipped by” H1?
What would it take for a robot vehicle to think such thoughts or have such feelings? Would it suffice for Stanley’s rigidly mounted TV camera to be able to turn around on itself and for Stanley thereby to acquire visual imagery of itself? Of course not. That may be one indispensable move in the long process of acquiring an “I”, but as we know in the case of chickens and cockroaches, perception of a body part does not a self make.
A Counterfactual Stanley
What is lacking in Stanley that would endow it with an “I”, and what does not seem to be part of the research program for developers of self-driving vehicles, is a deep understanding of its place in the world. By this I do not mean, of course, the vehicle’s location on the earth’s surface, which is given to it down to the centimeter by GPS; I mean a rich representation of the vehicle’s own actions and its relations to other vehicles, a rich representation of its goals and its “hopes”. This would require the vehicle to have a full episodic memory of thousands of experiences it had had, as well as an episodic projectory (what it would expect to happen in its “life”, what it would hope for, and what it would fear), as well as an episodic subjunctory, detailing its thoughts about near misses it had had, and what would most likely have happened had things gone some other way.
Thus, Stanley the Robot Steamer would have to be able to think to itself such hypothetical future thoughts as, “Gee, I wonder if H1 will deliberately swerve out in front of me and prevent me from passing it, or even knock me off the road into the ditch down there! That’s what I’d do if I were H1!” Then, moments later, it would have to be able to entertain counterfactual thoughts such as, “Whew! Am I ever glad that H1 wasn’t so clever as I feared — or maybe H1 is just not as competitive as I am!”
An article in Wired magazine described the near-panic in the Stanford development team as the desert challenge was drawing perilously near and they realized something was still very much lacking. It casually stated, “They needed the algorithmic equivalent of self-awareness”, and it then proceeded to say that soon they had indeed achieved this goal (it took them all of three months of work!). Once again, when all due hat-tips have been made toward the team’s great achievement, one still has to realize that there is nothing going on inside Stanley that merits being labeled by the highly loaded, highly anthropomorphic term “self-awareness”.
The feedback loop inside Stanley’s computational machinery is good enough to guide it down a long dusty road punctuated by potholes and lined with scraggly saguaros and tumbleweed plants. I salute it! But if one has set one’s sights not just on driving but on thinking and consciousness, then Stanley’s feedback loop is not strange enough — not anywhere close. Humanity still has a long way to go before it will collectively have wrought an artificial “I”.
CHAPTER 14
Strangeness in the “I” of the Beholder
The Inert Sponges inside our Heads
WHY, you might be wondering, do I call the lifelong loop of a human being’s self-representation, as described in the preceding chapter, a strange loop? You make decisions, take actions, affect the world, receive feedback, incorporate it into your self, then the updated “you” makes more decisions, and so forth, round and round. It’s a loop, no doubt — but where’s the paradoxical quality that I’ve been saying is a sine qua non for strange loopiness? Why is this not just an ordinary feedback loop? What does such a loop have in common with the quintessential strange loop that Kurt Gödel discovered unexpectedly lurking inside Principia Mathematica?
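To see just how unmysterious an ordinary feedback loop is, here is a minimal sketch in Python (a toy invented for this discussion; the names Agent, decide, incorporate, and world_response are illustrative, not drawn from any real system). An agent consults a crude self-model, acts, receives feedback from a toy world, and folds that feedback back into the self-model, round and round:

```python
# A deliberately ordinary feedback loop: decide, act, receive feedback,
# fold the feedback into a crude self-model, and go around again.
# Nothing here is self-referential in the Godelian sense.

class Agent:
    def __init__(self):
        # A laughably crude stand-in for a self-representation.
        self.self_model = {"confidence": 0.6}

    def decide(self):
        # Decisions consult the current self-model.
        return "bold" if self.self_model["confidence"] > 0.5 else "cautious"

    def incorporate(self, feedback):
        # Feedback from the world updates the self-model.
        delta = 0.1 if feedback == "success" else -0.1
        c = self.self_model["confidence"] + delta
        self.self_model["confidence"] = max(0.0, min(1.0, c))

def world_response(action):
    # A toy world: bold actions happen to succeed, cautious ones to fail.
    return "success" if action == "bold" else "failure"

agent = Agent()
for step in range(5):
    action = agent.decide()                   # the updated "you" decides again
    agent.incorporate(world_response(action))
    print(step, action, agent.self_model)
```

Every step in this cycle passes from one ordinary state to the next; at no point does the loop jump between levels of description or refer to itself, and that is exactly what seems to be missing when we compare it with Gödel’s construction.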
For starters, a brain would seem, a priori, just about as unlikely a substrate for self-reference and its rich and counterintuitive consequences as was the extremely austere treatise Principia Mathematica, from which self-reference had been strictly banished. A human brain is just a big spongy bulb of inanimate molecules tightly wedged inside a rock-hard cranium, and there it simply sits, as inert as a lump on a log. Why should self-reference and a self be lurking in such a peculiar medium any more than they lurk in a lump of granite? Where’s the “I”-ness in a brain?
Just as something very strange had to be happening inside the stony fortress of Principia Mathematica to allow the outlawed “I” of Gödelian sentences like “I am not provable” to creep in, something very strange must also take place inside a bony cranium stuffed with inanimate molecules if it is to bring about a soul, a “light on”, a unique human identity, an “I”. And keep in mind that an “I” does not magically pop up in all brains inside all crania, courtesy of “the right stuff” (that is, certain “special” kinds of molecules); it happens only if the proper patterns come to be in that medium. Without such patterns, the system is just as it superficially appears to be: a mere lump of spongy matter, soulless, “I”-less, devoid of any inner light.
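For readers who like to see the outlawed “I” spelled out, the schematic shape of such a Gödelian sentence can be written in one line (the notation, with Prov for PM’s provability predicate and corner-quotes for the Gödel number of a formula, is standard logicians’ shorthand supplied here for illustration):

$$
G \;\longleftrightarrow\; \neg\,\mathrm{Prov}\bigl(\ulcorner G \urcorner\bigr)
$$

In words: the sentence G asserts, via its own Gödel number, precisely its own unprovability.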
Squirting Chemicals
When the first brains came into existence, they were trivial feedback devices, less sophisticated than a toilet’s float-ball mechanism or the thermostat on your wall, and like those devices, they selectively made primitive organisms move towards certain things (food) and away from others (dangers). Evolutionary pressures, however, gradually made brains’ triage of their environments grow more complex and multi-layered, and eventually (here we’re talking hundreds of millions of years), the repertoire of categories that were being responded to grew so rich that the system, like a TV camera on a sufficiently long leash, was capable of “pointing back”, to some extent, at itself. That first tiny glimmer of self was the germ of consciousness and “I”-ness, but there is still a great mystery.
No matter how complicated and sophisticated brains became, they always remained, at bottom, nothing but a set of cells that “squirted chemicals” back and forth among each other (to borrow a phrase from the pioneering roboticist and provocative writer Hans Moravec), a bit like a huge oil refinery in which liquids are endlessly pumped around from one tank to another. How could a system of pumping liquids ever house a locus of upside-down causality, where meanings seem to matter infinitely more than physical objects and their motions? How could joy, sadness, a love for impressionist painting, and an impish sense of humor inhabit such a cold, inanimate system? One might as well look for an “I” inside a stone fortress, a toilet’s tank, a roll of toilet paper, a television, a thermostat, a heat-seeking missile, a heap of beer cans, or an oil refinery.
Some philosophers see our inner lights, our “I” ’s, our humanity, our souls, as emanating from the nature of the substrate itself — that is, from the organic chemistry of carbon. I find that a most peculiar tree on which to hang the bauble of consciousness. Basically, this is a mystical refrain that explains nothing. Why should the chemistry of carbon have some magical property entirely unlike that of any other substance? And what is that magical property? And how does it make us into conscious beings? Why is it that only brains are conscious, and not kneecaps or kidneys, if all it takes is organic chemistry? Why aren’t our carbon-based cousins the mosquitoes just as conscious as we are? Why aren’t cows just as conscious as we are? Doesn’t organization or pattern play any role here? Surely it does. And if it does, why couldn’t it play the whole role?
By focusing on the medium rather than the message, the pottery rather than the pattern, the typeface rather than the tale, philosophers who claim that something ineffable about carbon’s chemistry is indispensable for consciousness miss the boat. As Daniel Dennett once wittily remarked in a rejoinder to John Searle’s tiresome “right-stuff” refrain, “It ain’t the meat, it’s the motion.” (This was a somewhat subtle hat-tip to the title of a somewhat unsubtle, clearly erotic song written in 1951 by Lois Mann and Henry Glover, made famous many years later by singer Maria Muldaur.) And for my money, the magic that happens in the meat of brains makes sense only if you know how to look at the motions that inhabit them.
The Stately Dance of the Symbols
Brains take on a radically different cast if, instead of focusing on their squirting chemicals, you make a level-shift upwards, leaving that low level far behind. To allow us to speak easily of such upward jumps was the reason I dreamt up the allegory of the careenium, and so let me once again remind you of its key imagery. By zooming out from the level of crazily careening simms and by looking instead at the system on a speeded-up time scale whereby the simms’ locally chaotic churning becomes merely a foggy blur, one starts to see other entities coming into focus, entities that formerly were utterly invisible. And at that level, mirabile dictu, meaning emerges.
Simmballs filled with meaning are now seen to be doing a stately dance in a blurry soup that they don’t suspect for a split second consists of small interacting magnetic marbles called “simms”. And the reason I say the simmballs are “filled with meaning” is not, of course, because they are oozing some mystical kind of sticky semantic juice called “meaning” (even though certain meat-infatuated philosophers might go for that idea), but because their stately dance is deeply in synch with events in the world around them.
Simmballs are in synch with the outer world in the same way as in La Femme du boulanger, the straying cat Pomponnette’s return was in synch with the return of the straying wife Aurélie: there was a many-faceted alignment of Situation “P” with Situation “A”. However, this alignment of situations at the film’s climax was just a joke concocted by the screenwriter; no viewer of La Femme du boulanger supposes for a moment that the cat’s escapades will continue to parallel the wife’s escapades (or vice versa) for months on end. We know it was just a coincidence, which is why we find it so humorous.
By contrast, a careenium’s dancing simmballs will continue tracking the world, will stay in phase with it, will remain aligned with it. That (by fiat of the author!) is the very nature of a careenium. Simmballs are systematically in phase with things going on in the world just as, in Gödel’s construction, prim numbers are systematically in phase with PM’s provable formulas. That is the only reason simmballs can be said to have meaning. Meaning, no matter what its substrate might be — Tinkertoys, toilet paper, beer cans, simms, whole numbers, or neurons — is an automatic, unpreventable outcome of reliable, stable alignment; this was the lesson of Chapter 11.
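To make the parallel fully explicit, the Gödelian alignment can be stated in one schematic line (the symbol g, for the Gödel-numbering map, is introduced here merely for convenience): for every formula F of PM,

$$
F \text{ is provable in } PM \;\iff\; g(F) \text{ is a prim number.}
$$

It is the reliable, exceptionless holding of this biconditional, and not any semantic juice hidden inside the numbers, that lets statements about prim numbers mean statements about provability, just as the simmballs’ unfailing in-phaseness with the world is what lets them mean anything at all.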