The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning

by Bor, Daniel


  LAYERS OF AWARENESS

  A closely related topic is self-awareness. Some theorists claim that we are only truly conscious when we are self-aware, and that our sense of self is the most critical component of any conscious experience. Again it is suggested that evolutionary pressures for a more sophisticated model of oneself were responsible for generating consciousness in the first place.

  Self-awareness is actually a rather confusing term. It has at least three different meanings. The first, less grandiose definition of self-awareness is simply the state in which an animal is aware of itself as distinct from all the animate and inanimate objects around it, and it thinks and acts in accord with this basic assumption. In a sense, all creatures need to make this conceptual distinction, irrespective of consciousness, so any creature that also happens to be conscious will automatically have this version of self-awareness. This may seem a trivial point, but it emphasizes how embedded within our biological makeup this heavy distinction between us and the rest of the universe is. Self-awareness based on this definition may not be a necessary component of consciousness, but instead just an accidental consequence of something being a conscious animal with an evolutionary heritage.

  Indeed, the case of the conjoined twins Tatiana and Krista, who appear to have conscious experiences that are not their own, provides tentative evidence that this accidental combination need not always occur. Occasionally, under special circumstances like this, we might be able to have experiences without being the owner of those experiences.

  Various psychiatric populations also hint at a possible separation between consciousness and the self. For instance, in certain cases of multiple personality disorder, patients may attribute many of their experiences to other personalities within them. One patient reported “Joy is happy and playful, so sometimes when I’m down she becomes me. Sometimes it cheers me up, but sometimes it is only Joy who is happy and I’m still upset.” Here, seemingly as a strategy to improve her mood, this patient lets an alternative identity take over her experiences.

  Although multiple personality disorder is a controversial diagnosis (it has been suggested that all such patients fabricate these extra personalities in a desperate attempt to protect themselves from some past trauma), less controversial is an analogous situation in schizophrenia. One of the hallmarks of schizophrenia is a genuine belief that the voices within your own mind are not your own. In other words, many schizophrenics are convinced that at least part of their own experience belongs to someone else.

  Although rather circumstantial, all these pieces of evidence point to a potential loosening of the glue between experience and a sense of self, reaffirming the possibility that their apparent inseparability might be accidental.

  But many people use the term “self-awareness” in rather more abstract ways. One version involves being aware of ourselves as having this particular body, this particular face, this persona, and so on. The main test for this form of self-awareness is whether you can recognize yourself in the mirror. But one critical question following from this definition is whether self-awareness is the cause of extensive consciousness, or simply a consequence of it.

  For an animal to be able to know that this other animal in the mirror is in fact itself is a tremendously mentally demanding feat. All the animals it has met in its life so far have been other animals, so there is a strong, very natural expectation that this animal in the mirror is another animal as well. For the animal to understand that the reflection is itself, it needs to acknowledge the majority of the following cues: It needs to realize that the other animal’s touch is incongruous, being cold and hard; that the other animal is missing any scent; that the rest of the room in the reflected world is an identical copy of its own room; that every time it makes a movement, the other animal copies it perfectly; and that no other animal in the world could match its own actions so quickly.

  In fact, it gets even more complicated in the lab, where, in addition to all the above, the animal needs to perform a secondary task to prove beyond doubt that it recognizes itself in the mirror: usually a dye is placed on the animal’s head in a location that it can’t normally see, but is easy to spot in a mirror—the forehead is a popular place. For the researcher to gain proof of self-awareness, the animal has to recognize the mirror animal as itself, and it further needs to realize that this spot is a new unnatural addition to its facial makeup, before finally having the motor ability to touch the colored spot on its own body.

  Being a shameless cognitive neuroscientist at heart, I’ve subjected my own baby daughter to this task multiple times as she’s developed, and also regularly encouraged her to play in front of a mirror more generally. As a parent, I had a sense early on that she recognized herself. For instance, for many months, while she was rather wary of other babies, she’d be fearless of the baby in the mirror, approaching it with glee. But this wasn’t proof. Solid evidence of the above kind only came when she was nearly fourteen months old, and she tried to remove the new streak of color on her forehead, via the mirror. It was clear from her less than usually fluid movements that using the visual feedback from the mirror, rather than from her own kinesthetic and more direct visual senses, was a distinctly unnatural and difficult feat in itself, over and above all the other very complex requirements of mirror self-recognition.

  Obviously as a diligent scientist I made sure I could repeat the event multiple times! It was also clear on the first couple of goes that she noticed the change, as she laughed as she looked at the spot, but it didn’t automatically follow that she wanted to remove it—she might even have liked her new facial feature! For the first few trials, only when I was able to cheat and use language, asking her to remove the object on her face, did she actually do it. But then in later trials it became a fun game to use the mirror to remove the color, and I no longer needed to prompt her. This clearly demonstrated to me just how many different ways an animal could fail at the mirror recognition test, even if it clearly had the ability to pass.

  So there is a great multitude of hidden, complex assumptions required to demonstrate that you can recognize yourself in the mirror. Only a considerable intelligence, and a high level of consciousness, with motivation directed in the right way, would be able to pass such a test. Therefore, it seems likely that self-awareness in this sense is a side effect of a powerful intellect and rich conscious life, rather than the cause of either of these.

  The final, to my mind most intriguing, version of self-awareness is where you are aware of your own consciousness. For instance, I might watch my baby daughter sleeping deeply, and not only experience feelings of love and pride, but also become aware that I’m having these emotions and say to myself, “Oh look—right now I’m experiencing feelings of love and pride.” It’s assumed that whenever we use language to communicate our feelings and sensations, we are relying on self-awareness, since we have to probe our own experiences, as if on a higher plane of consciousness, in order to know what they are. Many theorists, especially of a philosophical bent, believe that this “higher order” consciousness is the only kind of consciousness that really matters.

  The theory comes in a multitude of somewhat confusing flavors, but before I discuss it directly, I would first like to digress in order to show that, for certain forms of content, and with a relatively standard reading of the theory, we have this form of awareness far less than we think we do. In fact, there is a striking contrast between our level of consciousness, which is undoubtedly incredibly rich and varied, and our level of insight into our own conscious minds, which is patchy and deeply unreliable.

  We are all in some ways appalling at making decisions, because our more primitive drives heavily bias us to think and act in short-term ways, so that we can survive and reproduce now, today, regardless of the next month, year, or decade. Some of us can also easily lose control in blinding waves of rage or jealousy, and more generally act on an emotion while remaining rather oblivious to it. For instance, we may be so focused on the object of our rage that we have no spare attention left to stop and notice that we are actually angry.

  A lack of insight into one’s own feelings and motivations is also a key trait in almost all mental illnesses. For instance, a schizophrenic doesn’t understand that he is delusional, and a depressive may not realize that she is feeling irritable or subdued until a full depressive episode has completely taken hold of her.

  This is certainly not a pattern of a continuous, proficient facility for self-awareness, at least where emotions are concerned. Instead, we seem to spend surprisingly large amounts of time being unaware of the emotions we are currently experiencing, presumably because we don’t attend to them. After all, emotions are there to guide us toward or away from certain objects or activities, so it is natural to attend to these objects while ignoring our own feelings. It is distinctly unnatural (although often extremely useful) to pause and attend to the emotion itself and the quality of reasoning behind it.

  But what of our senses? Do we also lack insight into our own perceptions? One recent study that helps answer this question involved subjects viewing a set of six striped circles, and then a virtually identical second set. Within either the first or second group of circles was a single striped circle that was a little more vividly striped than the others. Subjects first had to guess which of the two sets had this odd-one-out stimulus, and then they had to rate their confidence in this guess. As the trials went on, the experimental program was continuously changing the detectability of the odd-one-out stimulus based on the subject’s performance, so that accuracy was maintained at close to 71 percent for all participants—therefore well above chance, but still an extremely difficult task. This fixing of performance meant that the only factor that could change between subjects was how well their confidence in their decisions mapped onto their accuracy. So subjects adept at being aware of their own perceptions would almost always rate their confidence as very high when they were correct in spotting the odd-one-out feature and rate their confidence as low if they were wrong. One striking result of this study was that although some subjects were indeed quite proficient at this task, there was a huge variation, and many subjects were very poor at matching their confidence to their accuracy, regularly either being highly confident that they had guessed right when they were wrong or having no confidence in a correct decision.
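  The performance-fixing trick described above is usually achieved with an adaptive staircase. The sketch below is a minimal simulation, assuming a standard one-up/two-down rule (which is known to converge on roughly 71 percent accuracy) and a toy psychometric function; the function, step size, and variable names are illustrative assumptions, not details taken from the study itself:

```python
import random

random.seed(0)

def p_correct(detectability):
    # Toy psychometric function: pure guessing (0.5) when the odd-one-out
    # is invisible, rising toward certainty as it becomes more vivid.
    return 0.5 + 0.5 * min(max(detectability, 0.0), 1.0)

def two_down_one_up(n_trials=20000, step=0.02):
    """One-up/two-down staircase: the stimulus gets harder after two
    consecutive correct answers and easier after any error, so overall
    accuracy converges near 71 percent whatever the subject's ability."""
    d = 0.8            # start with an easy, highly detectable stimulus
    streak = 0
    n_correct = 0
    for _ in range(n_trials):
        correct = random.random() < p_correct(d)
        if correct:
            n_correct += 1
            streak += 1
            if streak == 2:                 # two in a row: make it harder
                d = max(0.0, d - step)
                streak = 0
        else:                               # any miss: make it easier
            d = min(1.0, d + step)
            streak = 0
    return n_correct / n_trials

accuracy = two_down_one_up()
```

  Because performance is pinned in this way, any remaining differences between subjects must lie elsewhere, such as in how well their confidence ratings track their accuracy.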

  When I pored over the details of this study, the first thing I asked Steve Fleming, its main author, was whether the subjects who were weak at reading their own minds were otherwise impaired in any way. It was an obvious thing to look for, and he’d already carried out an extensive analysis on this, but couldn’t seem to find anything at all. Those at the bottom range were just as good at basic perceptual tasks, seemed just as bright, and in all other ways seemed just like the rest of the group, except for having poor insight into their own perceptions.

  Therefore, although we clearly also have the capacity to be aware of the contents of our minds in this higher-order way, that certainly doesn’t mean we’re all fantastic at it. In fact, some of us, who are otherwise quite normal and apparently just as conscious of the world, are very poor at it indeed.

  This observation of the patchiness of self-awareness, in this higher-order way, needn’t be an outright attack on the theory. Although it really does feel to us that we spend every moment of our waking lives conscious, it’s possible that we’re utterly mistaken, and there’s only a far more limited set of moments when we are truly conscious in this higher-order sense.

  Although it may be unintuitive, it is at least technically possible that we’re mistaken when we assume we’re conscious whenever we’re awake. Another feature of the theory, however, seems far more troubling: If I silently stare at the blank wall beside me, with a quiet mind, I’m clearly conscious of the wall, and yet I do not seem to be having any thoughts about my perception. A higher-order consciousness proponent would claim that I necessarily have some higher-level thought or perception about my basic perception of the wall, because I must in order to be conscious of it, even if I don’t realize that this process is going on. But then the theory is in danger of being circular or otherwise empty, and it’s unclear how you could ever verify or falsify such a position with experiments.

  Another issue for the theory is its approach to the utility of consciousness. The leading defender of this theory, David Rosenthal, believes that one consequence of viewing consciousness in this higher-order way is that there is nothing useful or advantageous about being conscious, since our cognitive skills occur at a level below that of consciousness, which observes knowledge acquisition passively from above, as it were. So awareness serves no evolutionary purpose and provides no enhancement to the quality of our learning. There is overwhelming evidence against this position: Consciousness clearly is necessary for any form of complex learning to occur, which in itself is a good reason to reject this higher-order theory of consciousness.

  As far as I know, any detailed discussion of a mechanism for consciousness within this theory simply stops at the suggestion that reflexive cognition—having a thought of a perception, say—is how consciousness comes about. There is virtually no description of what thought or perception means here from the context of standard psychological components or brain processes. There is virtually no detail about how the bridge between higher and lower mental levels might work. There is little or no explanation for how or why full consciousness should be equated with this reflexive step, how it fits in with information processing, what the evolutionary basis for higher-order awareness is, or what the overall purpose of such a conception of consciousness could be.

  A more scientific approach, as I’m describing in these pages, is potentially far more detailed and profound. For instance, although I will describe other components in a moment, I’ve so far outlined in this chapter how attention is one key component of consciousness. Attention is a well-studied process, both psychologically and biologically. It immediately casts doubt on the necessity of some higher-order level for consciousness, and instead suggests that, at least in some situations, consciousness might emerge from the winner-takes-all neuronal battles that occur unconsciously. Attention puts information processing at the heart of consciousness and suggests that consciousness is the end product of an aggressive data-filtering and -boosting process.

  So self-awareness, in any of its guises, appears to be a side product of both a deep intellect and a rich conscious life, rather than a cause of our extensive awareness. Instead, both emotions and, more generally, any information one has about oneself are only special for the biological importance they carry in keeping us alive. From the perspective of consciousness, they are just another kind of information we could be aware of, out of the millions of possible experiences we could have, many having little to do with either our feelings or our sense of self.

  But although I believe that theories defending the primacy of self-awareness, particularly involving higher orders of consciousness, are unhelpful ways of looking at the problem, there is one feature of these positions to which I am very sympathetic. Being aware of oneself, or of one’s own thoughts and sensations, might be an accidental side product of a burgeoning consciousness, but it is nevertheless a profound side product. Such examples join a much wider group of important conscious events that are highly conceptual, sitting at the very top of a mental pyramid of ideas. This general layering of concepts, with consciousness at the top, allows us to experience our surroundings not as a bland sheet of raw data but as a vibrant, immensely patterned picture, utterly pregnant with meaning, which allows us to glide through this landscape with exquisite, effortless control.

  For the remainder of this chapter, as I move from the mechanism that chooses what content to populate consciousness to the contents of awareness themselves, I’ll be repeatedly highlighting the importance to consciousness of building and manipulating these intricate monoliths of knowledge.

  FOUR COMPARTMENTS TO AWARENESS AND NO MORE . . .

  Although so far I’ve talked about how attention acts as a filtering and boosting mechanism, I haven’t yet shown just how aggressively the brain can filter its input, or how intensely it can boost certain input signals. In fact, attention routinely filters the billions of pieces of information streaming into our senses, or bouncing around our unconscious minds, down to a maximum of three or four conscious items. So the filtering process is about as aggressive as one could imagine. But the boosting process can compensate for this limitation just as aggressively: Each of the mere handful of items can be an immensely complex mental object, and although their number is painfully finite, these conscious objects can be assessed, compared, and manipulated in virtually any way imaginable.
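  This filter-then-boost routine can be caricatured in a few lines of code: out of a flood of competing inputs, only the few with the greatest salience survive into awareness. The capacity of four, the random salience scores, and the names below are all illustrative assumptions, a cartoon rather than a brain model:

```python
import heapq
import random

random.seed(1)

WORKING_MEMORY_CAPACITY = 4  # the "three or four conscious items" ceiling

def attend(signals, capacity=WORKING_MEMORY_CAPACITY):
    """Winner-takes-all filter: of all the competing (name, salience)
    inputs, only the handful with the highest salience enter awareness."""
    return heapq.nlargest(capacity, signals, key=lambda s: s[1])

# Thousands of competing inputs, each with a random salience score.
inputs = [(f"signal_{i}", random.random()) for i in range(10000)]
conscious = attend(inputs)
```

  The filtering is merciless (four survivors out of ten thousand), but each survivor could itself stand for an arbitrarily rich mental object, which is where the boosting side of the story comes in.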

  This tiny, yet ever so powerful output store of attention is our “working memory.” Working memory is an inherently conscious short-term memory container where we can remember, rearrange, and evaluate whatever is in this group of items, even if it comes from different senses or categories.

  Over the past twenty years, the most prevalent, popular psychological theory of consciousness has been the “global workspace theory” proposed by Bernard Baars. In many ways, Baars’ ideas resemble mainstream views on the psychology of attention. In the global workspace theory, there is again an unconscious fight for dominance between low-level coalitions of neurons, with a winner-takes-all attitude. The winner filters into consciousness, where again Baars makes more parallels with attention, by talking of a spotlight directed onto only a small portion of a theater stage. This spotlight is the subset of our world that we are actually conscious of, and it broadcasts itself to the whole audience—in other words, making just a small number of items available to much of the brain, potentially for further information combination and comparison. But Baars’ boldest and most interesting claim is that, more or less, consciousness boils down to the information sitting right now in our working memory. He views working memory as existing for a second or two, available to almost every corner of the brain, and there to guide unconscious specialized knowledge regions to help us carry out our most complex tasks, such as language and planning.
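  Baars’ theater metaphor lends itself to a toy sketch: a single workspace whose winning content is posted simultaneously to every specialist module in the “audience.” The class names, module names, and handlers below are purely illustrative inventions, not anything proposed by Baars:

```python
class Module:
    """A specialist 'audience member' that reacts to broadcast content."""
    def __init__(self, name, handler):
        self.name, self.handler = name, handler

    def receive(self, content):
        return self.handler(content)

class GlobalWorkspace:
    """Toy broadcast: whatever wins the workspace is sent to every
    subscribed specialist module at once."""
    def __init__(self):
        self.audience = []

    def subscribe(self, module):
        self.audience.append(module)

    def broadcast(self, content):
        return {m.name: m.receive(content) for m in self.audience}

ws = GlobalWorkspace()
ws.subscribe(Module("language", lambda c: f"describe {c}"))
ws.subscribe(Module("planning", lambda c: f"plan around {c}"))
result = ws.broadcast("red apple")
```

  The point of the caricature is only the shape of the architecture: one small, shared store, many unconscious specialists, and a one-to-many broadcast connecting them.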
