Your typical human brain, being blissfully ignorant of its minute physical components and their arcanely mathematizable mode of microscopic functioning, and thriving instead at the infinitely remote level of soap operas, spring sales, super skivaganzas, SUV’s, SAT’s, SOB’s, Santa Claus, splashtacular scuba specials, snorkels, snowballs, sex scandals (and let’s not forget sleazeballs), makes up as plausible a story as it can about its own nature, in which the starring role, rather than being played by the cerebral cortex, the hippocampus, the amygdala, the cerebellum, or any other weirdly named and gooey physical structure, is played instead by an anatomically invisible, murky thing called “I”, aided and abetted by other shadowy players known as “ideas”, “thoughts”, “memories”, “beliefs”, “hopes”, “fears”, “intentions”, “desires”, “love”, “hate”, “rivalry”, “jealousy”, “empathy”, “honesty”, and on and on — and in the soft, ethereal, neurology-free world of these players, your typical human brain perceives its very own “I” as a pusher and a mover, never entertaining for a moment the idea that its star player might merely be a useful shorthand standing for a myriad of infinitesimal entities and the invisible chemical transactions taking place among them, by the billions — nay, the millions of billions — every single second.
The human condition is thus profoundly analogous to the Klüdgerotic condition: neither species can see or even imagine the lower levels of a reality that is nonetheless central to its existence.
First Key Ingredient of Strangeness
Why does an “I” symbol never develop in a video feedback system, no matter how swirly or intricate or deeply nested are the shapes that appear on its screen? The answer is simple: a video system, no matter how many pixels or colors it has, develops no symbols at all, because a video system does not perceive anything. Nowhere along the cyclic pathway of a video loop are there any symbols to be triggered — no concepts, no categories, no meanings — not a tad more than in the shrill screech of an audio feedback loop. A video feedback system does not attribute to the strange emergent galactic shapes on its screen any kind of causal power to make anything happen. Indeed, it doesn’t attribute anything to anything, because, lacking all symbols, a video system can’t and doesn’t ever think about anything!
What makes a strange loop appear in a brain and not in a video feedback system, then, is an ability — the ability to think — which is, in effect, a one-syllable word standing for the possession of a sufficiently large repertoire of triggerable symbols. Just as the richness of whole numbers gave PM the power to represent phenomena of unlimited complexity and thus to twist back and engulf itself via Gödel’s construction, so our extensible repertoires of symbols give our brains the power to represent phenomena of unlimited complexity and thus to twist back and to engulf themselves via a strange loop.
Second Key Ingredient of Strangeness
But there is a flip side to all this, a second key ingredient that makes the loop in a human brain qualify as “strange”, makes an “I” come seemingly out of nowhere. This flip side is, ironically, an inability — namely, our Klüdgerotic inability to peer below the level of our symbols. It is our inability to see, feel, or sense in any way the constant, frenetic churning and roiling of micro-stuff, all the unfelt bubbling and boiling that underlies our thinking. This, our innate blindness to the world of the tiny, forces us to hallucinate a profound schism between the goal-lacking material world of balls and sticks and sounds and lights, on the one hand, and a goal-pervaded abstract world of hopes and beliefs and joys and fears, on the other, in which radically different sorts of causality seem to reign.
When we symbol-possessing humans watch a video feedback system, we naturally pay attention to the eye-catching shapes on the screen and are seduced into giving them fanciful labels like “helical corridor” or “galaxy”, but still we know that ultimately they consist of nothing but pixels, and that whatever patterns appear before our eyes do so thanks solely to the local logic of pixels. This simple and clear realization strips those fancy fractalic gestalts of any apparent life or autonomy of their own. We are not tempted to attribute desires or hopes, let alone consciousness, to the screen’s swirly shapes — no more than we are tempted to perceive fluffy cotton-balls in the sky as renditions of an artist’s profile or the stoning of a martyr.
And yet when it comes to perceiving ourselves, we tell a different story. Things are far murkier when we speak of ourselves than when we speak of video feedback, because we have no direct access to any analogue, inside our brains, to pixels and their local logic. Intellectually knowing that our brains are dense networks of neurons doesn’t make us familiar with our brains at that level, any more than knowing that French poems are made of letters of the roman alphabet makes us experts on French poetry. We are creatures that congenitally cannot focus on the micromachinery that makes our minds tick — and unfortunately, we cannot just saunter down to the corner drugstore and pick up a cheap pair of glasses to remedy the defect.
One might expect neuroscientists, as opposed to lay people, to be so familiar with the low-level hardware of the brain that they have come to understand just how to think about such mysteries as consciousness and free will. And yet often it turns out to be quite the opposite: many neuroscientists’ great familiarity with the low-level aspects of the brain makes them skeptical that consciousness and free will could ever be explained in physical terms at all. So baffled are they by what strikes them as an unbridgeable chasm between mind and matter that they abandon all efforts to see how consciousness and selves could come out of physical processes, and instead they throw in the towel and become dualists. It’s a shame to see scientists punt in this fashion, but it happens all too often. The moral of the story is that being a professional neuroscientist is not by any means synonymous with understanding the brain deeply — no more than being a professional physicist is synonymous with understanding hurricanes deeply. Indeed, sometimes being mired down in gobs of detailed knowledge is the exact thing that blocks deep understanding.
Our innate human inability to peer below a certain level inside our cranium makes our inner analogue to the swirling galaxy on a TV screen — the vast swirling galaxy of “I”-ness — strike us as an undeniable locus of causality, rather than a mere passive epiphenomenon coming out of lower levels (such as a video-feedback galaxy). So taken in are we by the perceived hard sphericity of that “marble” in our minds that we attribute to it a reality as great as that of anything we know. And because of the locking-in of the “I”-symbol that inevitably takes place over years and years in the feedback loop of human self-perception, causality gets turned around and “I” seems to be in the driver’s seat.
In summary, the combination of these two ingredients — one an ability and the other an inability — gives rise to the strange loop of selfhood, a trap into which we humans all fall, every last one of us, willy-nilly. Although it begins as innocently as a humble toilet’s float-ball mechanism or an audio or video feedback loop, where no counterintuitive type of causality is posited anywhere, human self-perception inevitably ends up positing an emergent entity that exerts an upside-down causality on the world, leading to the intense reinforcement of and the final, invincible, immutable locking-in of this belief. The end result is often the vehement denial of the possibility of any alternative point of view at all.
Sperry Redux
I just said that we all fall into this “trap”, but I don’t really see things so negatively. Such a “trap” is not harmful if taken with a grain of salt; rather, it is something to rejoice in and cherish, for it is what makes us human. Permit me once more to quote the eloquent words of Roger Sperry:
In the brain model proposed here, the causal potency of an idea, or an ideal, becomes just as real as that of a molecule, a cell, or a nerve impulse. Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and, thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet, including the emergence of the living cell.
When you come down to it, all that Sperry has done here is to go out on a limb and dare to assert, in a serious scientific publication, the ho-hum, run-of-the-mill, commonsensical belief held by the random person on the street that there is a genuine reality (i.e., causal potency) of the thing we call “I”. In the scientific world, such an assertion runs a great risk of being looked upon with skepticism, because it sounds superficially as if it reeks of Cartesian dualism (wonderfully mystical-sounding terms such as élan vital, “life force”, “spirit of the hive”, “entelechy”, and “holons” occasionally spring into my mind when I read this passage).
However, Roger Sperry knew very well that he wasn’t embracing dualism or mysticism of any sort, and he therefore had the courage to take the plunge and make the assertion. His position is a subtle balancing act whose insightfulness will, I am convinced, one day be recognized and celebrated, and it will be seen to be analogous to the subtle balancing act of Kurt Gödel, who demonstrated how high-level, emergent, self-referential meanings in a formal mathematical system can have a causal potency just as real as that of the system’s rigid, frozen, low-level rules of inference.
CHAPTER 15
Entwinement
Multiple Strange Loops in One Brain
TWO chapters back, I declared that there was one strange loop in each human cranium, and that this loop constituted our “I”, but I also mentioned that that was just a crude first stab. Indeed, it is a drastic oversimplification. Since we all perceive and represent hundreds of other human beings at vastly differing levels of detail and fidelity inside our cranium, and since the most important facet of all of those human beings is their own sense of self, we inevitably mirror, and thus house, a large number of other strange loops inside our head. But what exactly does it mean to say that each human head is the locus of a multiplicity of “I” ’s?
Well, I don’t know precisely what it means. I wish I did! And I reckon that if I did, I would be the world’s greatest philosopher and psychologist rolled into one. As best I can guess, from far below such a Parnassus, it means we manufacture an enormously stripped-down version of our own strange loop of selfhood and install it at the core of our symbols for other people, letting that initially crude loopy structure change and grow over time. In the case of the people we know best — our spouse, our parents and siblings, our children, our dearest friends — each of these loops grows over the years to be a very rich structure adorned with many thousands of idiosyncratic ingredients, and each one achieves a great deal of autonomy from the stripped-down “vanilla” strange loop that served as its seed.
Content-free Feedback Loops
More light can be cast on this idea of a “vanilla” strange loop through our old metaphor of the audio feedback loop. Suppose a microphone and a loudspeaker have been connected together so that even a very soft noise will cycle around rapidly, growing louder and louder each pass through the loop, until it becomes a huge ear-piercing shriek. But suppose the room is dead silent at the start. In that case, what happens? What happens is that it remains dead silent. The loop is working just fine, but it is receiving zero noise and outputting zero noise, because zero times anything is still zero. When no signal enters a feedback loop, the loop has no perceptible effect; it might as well not even exist. An audio loop on its own does not a screech make. It takes some non-null input to get things off the ground.
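The arithmetic behind this silent loop can be sketched in a few lines of Python (the gain value and pass count here are purely illustrative, not drawn from any real audio system): each trip around the loop multiplies the signal by the gain, so zero input cycles forever as zero, while any non-null input grows without bound.

```python
def feedback_loop(initial_signal, gain=1.5, passes=10):
    """Cycle a signal through an amplifying loop: each pass
    multiplies the current level by the loop gain."""
    level = initial_signal
    for _ in range(passes):
        level *= gain
    return level

# A dead-silent room: zero input stays zero, no matter the gain.
print(feedback_loop(0.0))      # 0.0
# The faintest noise is amplified toward a shriek.
print(feedback_loop(0.001))    # ≈ 0.0577
```

The loop's structure is identical in both runs; only the presence or absence of a seed signal distinguishes a shriek from silence.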
Let’s now translate this scenario to the world of video feedback. If one points a TV camera at the middle of a blank screen, and if the camera sees only the screen and none of its frame, then despite its loopiness, all that this setup will produce, whether the camera stands still, tilts, turns, or zooms in and out (always without reaching the screen’s edge), is a fixed white image. As before, the fact that the image results from a closed feedback loop makes no difference, because nothing external is serving as the contents of that loop. I’ll refer to such a content-free feedback loop as a “vanilla” loop, and it’s obvious that two vanilla video loops will be indistinguishable — they are just empty shells with no recognizable traits and no “personal identity”.
If, however, the camera turns far enough left or right, or zooms out far enough to take in something external to the blank screen (even just the tiniest patch of color), a bit of the screen will turn non-blank, and then, instantly, that non-blank patch will get sucked into the video loop and cycled around and around, like a tree limb picked up by a tornado. Soon the screen will be populated with many bits of color forming a complex and self-stabilizing pattern. What gives this non-vanilla loop its recognizable identity is not merely the fact that the image contains itself, but just as crucially, the fact that external items in a particular arrangement are part of the image.
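A toy one-dimensional "screen" makes the contrast vivid (the shift-by-one-pixel rule is a crude, invented stand-in for a tilted camera, not a model of real video hardware): with no external content the loop recycles blankness forever, while a single external pixel gets sucked in and propagates across the whole screen.

```python
def video_step(screen, external):
    """One camera pass: the new frame is the old screen shifted
    one pixel (a crude stand-in for camera tilt), overlaid with
    whatever external content the camera sees past the screen."""
    shifted = [0] + screen[:-1]
    return [max(s, e) for s, e in zip(shifted, external)]

blank = [0] * 8
no_content = [0] * 8
patch = [1] + [0] * 7        # a single external pixel at one edge

# A "vanilla" loop: blank in, blank out, forever.
frame = blank
for _ in range(8):
    frame = video_step(frame, no_content)
print(frame)                 # [0, 0, 0, 0, 0, 0, 0, 0]

# One external pixel enters the loop and cycles around,
# eventually populating the entire screen.
frame = blank
for _ in range(8):
    frame = video_step(frame, patch)
print(frame)                 # [1, 1, 1, 1, 1, 1, 1, 1]
```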
If we bring this metaphor back to the context of human identity, we could say that a “bare” strange loop of selfhood does not give rise to a distinct self — it is just a generic, vanilla shell that requires contact with something else in the world in order to start acquiring a distinctive identity, a distinctive “I”. (For those who enjoy the taboo thrills of non-well-founded sets — sets that, contra Russell, may contain themselves as members — I might raise the puzzle of two singleton sets, x and y, each of which contains itself, and only itself, as a member. Are x and y identical entities or different entities? Trying to answer the riddle by defining two sets to be identical if and only if they have the same members leads one instantly into an infinite regress, and thus yields no answer. I prefer to brazenly cut the Gordian knot by declaring the two sets indistinguishable and hence identical.)
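The riddle of the two self-containing singletons can be acted out quite literally in Python, with self-referential lists standing in for the sets (lists, not sets, since Python's own sets refuse unhashable members). Memberwise comparison of two distinct self-containing lists reproduces exactly the infinite regress described above, while comparing such a list with itself succeeds, because CPython short-circuits on object identity, its own way of cutting the Gordian knot.

```python
a = []
a.append(a)    # a "set" whose only member is itself
b = []
b.append(b)    # a second, structurally identical one

# Memberwise equality must compare a[0] with b[0], i.e. a with b
# all over again: the regress never bottoms out.
try:
    a == b
    outcome = "comparison terminated"
except RecursionError:
    outcome = "infinite regress"
print(outcome)     # infinite regress

# Identity short-circuits the regress: a is trivially equal to a.
print(a == a)      # True
```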
Baby Feedback Loops and Baby “I” ’s
Although I just conjured up the notion of a “vanilla” strange loop in a human brain, I certainly did not mean to suggest that a human baby is already at birth endowed with such a “bare” strange loop of selfhood — that is, a fully-realized, though vanilla, shell of pure, distilled “I”-ness — thanks to the mere fact of having human genes. And far less did I mean to suggest that an unborn human embryo acquires a bare loop of selfhood while still in the womb (let alone at the moment of fertilization!). The realization of human selfhood is not nearly so automatic and genetically predetermined as that would suggest.
The closing of the strange loop of human selfhood is deeply dependent upon the level-changing leap that is perception, which means categorization, and therefore, the richer and more powerful an organism’s categorization equipment is, the more realized and rich will be its self. Conversely, the poorer an organism’s repertoire of categories, the more impoverished will be the self, until in the limit there simply is no self at all.
As I’ve stressed many times, mosquitoes have essentially no symbols, hence essentially no selves. There is no strange loop inside a mosquito’s head. What goes for mosquitoes goes also for human babies, and all the more so for human embryos. It’s just that babies and embryos have a fantastic potential, thanks to their human genes, to become homes for huge symbol-repertoires that will grow and grow for many decades, while mosquitoes have no such potential. Mosquitoes, because of the initial impoverishment and the fixed non-extensibility of their symbol systems, are doomed to soullessness (oh, all right — maybe 0.00000001 hunekers’ worth of consciousness — just a hair above the level of a thermostat).
For better or for worse, we humans are born with only the tiniest hints of what our perceptual systems will metamorphose into as we interact with the world over the course of decades. At birth, our repertoire of categories is so minimal that I would call it nil for all practical purposes. Deprived of symbols to trigger, a baby cannot make sense of what William James evocatively called the “big, blooming, buzzing confusion” of its sensory input. The building-up of a self-symbol is still far in the future for a baby, and so in babies there exists no strange loop of selfhood, or nearly none.
To put it bluntly, since its future symbolic machinery is 99 percent missing, a human neonate, devastatingly cute though it might be, simply has no “I” — or, to be more generous, if it does possess some minimal dollop of “I”-ness, perhaps it is one huneker’s worth or thereabouts — and that’s not much to write home about. So we see that a human head can contain less than one strange loop. What about more than one?
Entwined Feedback Loops
To explore in a concrete fashion the idea of two strange loops coexisting in one head, let’s start with a mild variation on our old TV metaphor. Suppose two video cameras and two televisions are set up so that camera A feeds screen A and, far away from it, camera B feeds screen B. Suppose moreover that at all times, camera A picks up all of what is on screen A (plus some nearby stuff, to give the A-loop “content”) and cycles it back onto A, and analogously, camera B picks up all of what is on screen B (plus some external content) and cycles it back onto B. Now since systems A and B are, by stipulation, far apart from each other, it is intuitively clear that A and B constitute separate, disjoint feedback loops. If the local scenes picked up by cameras A and B are different, then screens A and B will have clearly distinguishable patterns on them, so the two systems’ “identities” will be easily told apart. So far, what this metaphor gives us is old hat (in fact, it’s two old hats) — two different heads, each having one loop inside it.
What will happen, however, when systems A and B are gradually brought close enough together to begin interacting with each other? Camera A will then see not only screen A but also screen B, and so loop B will enter into the content of loop A (and vice versa).
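This gradual entwinement can be sketched numerically (all constants here, the decay factor, the local scenery levels, and the coupling strength, are invented for illustration). Each loop's screen re-captures a decayed copy of itself plus its own local scenery; with zero coupling the two loops settle to distinct, independent steady states, while with nonzero coupling each loop's content comes to incorporate the other's, and both identities shift.

```python
def step(a, b, coupling, local_a=0.1, local_b=0.2, decay=0.9):
    """One camera pass for two video loops. Each screen re-captures
    a decayed copy of itself plus its local scenery; with nonzero
    coupling, each camera also picks up a fraction of the OTHER
    system's screen."""
    return (decay * a + local_a + coupling * b,
            decay * b + local_b + coupling * a)

# Far apart (coupling = 0): two disjoint loops, two distinct identities.
a = b = 0.0
for _ in range(500):
    a, b = step(a, b, coupling=0.0)
print(round(a, 3), round(b, 3))    # 1.0 2.0

# Brought close (coupling > 0): each steady state now contains a
# contribution from the other loop, and both identities change.
a = b = 0.0
for _ in range(500):
    a, b = step(a, b, coupling=0.05)
print(round(a, 3), round(b, 3))    # 2.667 3.333
```

Note that once coupled, neither final value can be computed from one loop's parameters alone; each "identity" is now partly constituted by the other.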
I Am a Strange Loop Page 29