The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning


by Daniel Bor


  Although there are ambiguities in the definition of “working memory,” in the main I firmly agree with Baars that consciousness and working memory are largely synonymous processes, and that attention is the critical means by which items enter into consciousness. But the next key step, from the point of view of consciousness, is to fill in the details, to describe exactly how working memory functions, both psychologically and in the brain. Twenty years after Baars first formulated his global workspace theory, our understanding of working memory and attention is now far more comprehensive. And, with these advances, many mysteries of consciousness are being solved.

  The first striking feature of working memory is how surprisingly limited it is in capacity, comprising a mere handful of conscious objects. Many different experiments have confirmed this constraint on our conscious space, though each study has had to take careful precautions to counteract our prodigious ability to cheat by developing strategies to enhance our capacity, usually by linking current items to our long-term memory store. The standard methods for removing the opportunity for such strategies are either to present stimuli so briefly that our myriad workaround tricks don’t have a chance to form, or to present abstract items that have no relation at all to our preexisting memories.

  For instance, in one landmark early study, George Sperling presented subjects with a grid of 12 letters, in 3 rows of 4, but only for about 50 milliseconds. Subjects then had to report as many of the letters as possible. They would get, on average, about 1.3 letters per row correct, or about 4 items in total (1.3 multiplied by the 3 rows is 3.9). In a fascinating twist, in some trials, Sperling also immediately followed the flash of letters with a cue telling the subjects to give their answers from just a single particular row. Now, very surprisingly, they would generally correctly recall all 4 letters from that row instead of the 1.3 letters per row that they could previously manage, presumably because the immediate instruction enabled their attentional system to focus fully on this one row before the fresh visual information faded. If instead the cue to center on a single row came a second or more after the grid had disappeared, then subjects returned to their previous performance, as if no cue had occurred, and could recall only about 1.3 items from the cued row. Within this single second, their attentional system, not knowing which row to focus on, had applied equal importance to all 12 items as they all faded from their initial fresh visual state, and only the letters from 4 random locations in the entire grid could be preserved in their limited short-term memory store.
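  For readers who want the arithmetic spelled out, here is a minimal sketch of the two calculations above, using the figures just quoted (the 1.3-letters-per-row average and the 3-by-4 grid); the variable names and the little script itself are only my illustration, not part of Sperling’s analysis.

```python
# Whole-report condition: about 1.3 letters recalled per row,
# summed over the 3 rows of the 12-letter grid.
letters_per_row = 1.3
rows, columns = 3, 4

whole_report = letters_per_row * rows          # 3.9, i.e. roughly 4 items in total
print(f"Whole report: about {whole_report:.1f} of {rows * columns} letters")

# Partial-report condition: an immediate cue lets subjects report a whole
# cued row (about 4 letters), implying that, for a fraction of a second,
# most of the grid is still available before it fades.
cued_row_recalled = 4
implied_available = cued_row_recalled * rows   # up to about 12 letters briefly available
print(f"Partial report implies roughly {implied_available} letters briefly available")
```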

  A conscious limit of 4 objects turns up faithfully in almost any kind of experiment one tries. But in real life we do not usually need to remember letters in a grid, so I’ll share another example that will seem more natural. We commonly track multiple moving objects—maybe a group of people on the street that we walk past, or a set of players on a soccer pitch. Animals in the wild may also need to track where a group of other animals is moving. For instance, the members of a chimpanzee tribe may need to monitor the location of each member of a competing tribe that is encroaching on their territory. In an experiment that mirrors these everyday skills, Steven Yantis presented subjects with a set of 10 crosses on a computer screen. A subset of these initially flashed, and subjects had to keep track of them as they moved randomly around the screen and ignore the moving crosses that previously hadn’t flashed. At some point, the moving crosses would become stationary, and subjects had to say which of the crosses were the initially flashing ones. If there were only 3 crosses to keep track of, then subjects found this task relatively easy. When volunteers had to simultaneously track 4 objects, they were somewhat less accurate, but still performed the task competently. But when Yantis increased the number by 1, to 5 moving crosses, so that it exceeded the volunteers’ working-memory capacity by a single item, most subjects found this variant of the task virtually impossible. This experiment is a striking demonstration of how sharp a barrier this capacity of 4 conscious items is.

  Surprisingly, our working memory limit of a handful of items is basically the same as the monkey’s, even though a monkey brain is about one-fifteenth the size of ours. And our closely related skill of being able to recognize the number of items briefly presented to us—about 3 or 4 again, before we need to start approximating—is the same capacity limit that newborns have. In fact, many other species have the same upper bound to immediately counting the number of objects, including the lowly honeybee, which can differentiate patterns containing 2 from 3 items, or 3 from 4, but not 4 from 5 or above. So there may be something fundamentally limited about just how many items all animals can store in short-term memory.19

  . . . BUT EACH CONSCIOUS COMPARTMENT CAN HOLD OBJECTS OF GREAT COMPLEXITY

  That the contents of consciousness, if you discount compensating strategies, are fixed at about four items seems to be a tremendous handicap. But in humans, especially, one should never discount strategies. We use built-in attentional mechanisms as well as the heavy ammunition of our conscious powers of analysis to regularly load huge quantities of data into each conscious compartment, shamelessly cheating our apparent working memory boundaries.

  Turning first to the role that attention plays in boosting our capacity per working memory holder: Once attention has decided to prioritize a given object, whatever it may be, the neuronal war has been won. Activity in much of the brain is then shaped according to this current object and how it relates to us. For basic objects or features in the world, such as the color red as painted on a plain wall, attention boosts the signal by enhancing the readiness to fire of our visual regions, especially those for red. Non-red color-coding neurons may be suppressed, not only in our color-processing centers, but everywhere else as well. Our hearing and taste centers, for instance, may be inhibited. At the same time, all general-purpose regions, especially the prefrontal and parietal cortices, which are closely connected to consciousness, have activity that homes in on this current feature. All of this works well, and does help us spot red in the world, but the effects are not nearly as striking as when the brain has some internal hook to latch onto, so as to enhance the incoming signal.

  If, instead of the red wall, the current object of attention is Angelina Jolie on the big screen in front of me wearing a red dress, then anything around me that’s not Angelina Jolie gets suppressed, and any corner of my brain with any relevant information about Angelina Jolie becomes activated. As soon as I see her, I recognize the features of her face, I know her name, recall how she speaks, have knowledge of her famous husband that I can easily retrieve, remember the other films she’s been in, and so on. And, of course, I can also see that she’s wearing red. These aren’t sets of unrelated facts; they are all bound together as a single, unified, complex object. The previous example of the plain wall as an attended single object effectively had red as the only feature. When I attend to Angelina Jolie, the same piece of information, red, is attached to my conscious representation of her, but this time red is only one of dozens of features connected to this single mental object. This is a fantastic system to have—attention takes this raw input and seamlessly transforms it into a panoply of interconnected facts by the time it reaches consciousness. And yet, because attention has activated and drawn together all the components of this one object, Angelina Jolie, it takes up the same single compartment in my working memory as does the plain red wall.

  In other words, we may only have a few conscious compartments, but each holder can cope equally well with the simplest of objects or the most complex. And the term “working memory objects” in this context generally means just some bound collection of information. It could be a physical object, like Angelina Jolie. But it could equally mean one strand of the plan I devised for this current chapter as I was walking to Grantchester.

  Just how much information can one working memory object support? This is where the concept of “chunking” returns in force. In terms of grand purpose, chunking can be seen as a similar mechanism to attention: Both processes are concerned with compressing an unwieldy dataset into those small nuggets of meaning that are particularly salient. But while chunking is a marvelous complement to attention, it diverges from its counterpart in focusing on the compression of conscious data according to its inherent structure or the way it relates to our preexisting memories.

  One of the most dramatic experiments to demonstrate how chunking can expand what we store in working memory was published in 1980 by K. Anders Ericsson and colleagues. The experiment is beautifully simple: The scientists took one normal undergraduate, with an average memory capacity and IQ for a student, and gave him a basic task—the experimenter read to him a sequence of random digits and he then had to try to say back the digits he’d heard, in the order he’d heard them—just like trying to remember a phone number someone has just said to you. If he recalled the digit sequence correctly, the next trial would be one number longer. If he said it back with any mistakes, the next trial would be one number shorter. This is a very standard test for verbal working memory. However, in this case, there was a big twist—he did this task for an hour a day, for roughly 4 days a week, for nearly two years!
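  For the curious, the adaptive rule at the heart of this task (one digit longer after a perfect recall, one digit shorter after any error) can be sketched in a few lines of Python; the toy participant and all the parameter names here are my own simplifications, not Ericsson’s actual procedure.

```python
import random

def run_digit_span_session(recall_attempt, n_trials=50, start_length=7):
    """Adaptive digit-span staircase: the sequence grows by one digit after a
    perfect recall and shrinks by one after any mistake.

    recall_attempt(sequence) stands in for the participant: it takes the
    spoken list of digits and returns the list they report back."""
    length = start_length
    for _ in range(n_trials):
        sequence = [random.randint(0, 9) for _ in range(length)]
        response = recall_attempt(sequence)
        if response == sequence:
            length += 1                      # correct: next trial is one digit longer
        else:
            length = max(1, length - 1)      # any error: next trial is one digit shorter
    return length

# A toy "participant" who can only hold about 7 digits:
final_span = run_digit_span_session(lambda seq: seq if len(seq) <= 7 else seq[:7])
print(f"Span settles at about {final_span} digits")
```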

  At the start, he was able to remember about 7 numbers in a sequence, which is indeed about average (almost everyone improves on their initial verbal working memory limit of 4 through various rehearsal strategies). But as psychology experiments go, this one could well have won a prize as the most boring in the world, being the same thing day in, day out, for months on end. Perhaps in order to spice things up for himself, the participant seemed determined to improve his performance. And improve he did, until, by the end of the experiment, 20 months later, he could successfully say back a novel sequence that was 80 digits long! In other words, if 7 friends in turn rapidly told him their phone numbers, he could calmly wait until the last digit was spoken and then, from memory, key all 7 friends’ numbers into his phone’s contact list without error.

  On occasion, he was tested after a session to see if he could still recall any of the sequences from earlier on in that session. At the start of the experiment, he was understandably useless, hardly remembering anything of the digit sequences, even though they were only 7 digits long. However, toward the end of the experiment nearly two years later, despite the sequences now being over 10 times longer than when he began the experiment, he could remember the vast majority of the sequences perfectly. So not only could he have immediately recalled 7 combined phone numbers, just after hearing them, but he could also have typed them in without error an hour later! How did he achieve this seemingly superhuman improvement in performance?

  This volunteer happened to be a keen track runner, and so his first thought was to see certain number groups as running times; for instance, 3492 would be transformed into 3 minutes and 49.2 seconds, around the world-record time for running the mile. In other words, he was using his memory of well-known number sequences in athletics to prop up his working memory. This strategy worked very well, and he rapidly more than doubled his working memory capacity to nearly 20 digits. The next breakthrough, some months later, occurred when he realized he could combine each running time into a superstructure of 3 or 4 running times—and then group these superstructures together again. Interestingly, the number of holders he used never went above his initial capacity of just a handful of items. He just learned to cram more and more into each item in a pyramidal way, with digits linked together in 3s or 4s, then those triplets or quadruplets of digits linked together as well in groups of 3, and so on. One item-space, one object in working memory, started out holding a single digit, but after 20 months of practice could contain as many as 24 digits.20
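  The pyramidal arithmetic is worth making concrete. The tiny sketch below assumes, purely for illustration, a fixed group size of 4 and the usual handful of slots; the real groupings varied between 3 and 4 items, but the point is the same: each extra level of grouping multiplies what the same few slots can hold.

```python
def pyramid_capacity(slots, group_size, levels):
    """Digits held when each working-memory slot contains a chunk of chunks:
    every additional level of grouping multiplies capacity by the group size."""
    return slots * group_size ** levels

print(pyramid_capacity(slots=4, group_size=4, levels=0))  # 4: one raw digit per slot
print(pyramid_capacity(slots=4, group_size=4, levels=1))  # 16: each slot holds one ~4-digit running time
print(pyramid_capacity(slots=4, group_size=4, levels=2))  # 64: each slot holds a group of running times,
                                                          # approaching the ~80-digit sequences he achieved
```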

  So, when pushed by challenging tasks, we can use our long-term memory as a crutch to convert the items in working memory into a more efficient form. The task becomes dramatically easier, our performance increases markedly, and the newly chunked information we store is more stable, robust, and efficient.

  But in humans, especially, it’s not just mnemonic tricks and familiarity that can profoundly increase the actual information stored in our working memory. In the example above, the student improved his performance by artificially gluing the novel numbers to his preexisting structured knowledge about running times. He, in effect, forced patterns into unpatterned data. But often there really is a clear structure or pattern to the information streaming in from our senses, and in these situations our consciousness seems particularly alert to its detection—probably because spotting such structure promises a significant gain in how much we can take in—and we can rapidly exploit this newfound knowledge.

  There is good experimental evidence that we spot and successfully use any structure in sequences to aid working memory. For instance, sticking with the task of simply remembering sequences of digits, colleagues and I in Cambridge presented volunteers with novel sequences of four double digits, some of which had a hidden mathematical relationship between them, such as 49, 60, 71, 82 (increasing by 11 each time). Other sequences had a random spacing between items. As you’d expect, participants were considerably better at recalling the structured sequences than the random ones. The task became easier when there were discernible patterns, as if there had been fewer items to remember, precisely because volunteers had found the rule that linked the digits together. Although we didn’t test this, we could have given subjects patterned sequences 300 digits long, and they probably still would have had no trouble recalling them—all they would have needed to remember would have been the first number, the last, and the rule. In contrast, they would have utterly floundered with 300-digit-long random sequences on their first session, and even on their 200th. Importantly, chunking by rules is usually far more effective than chunking by memory alone.
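  To see why a discovered rule behaves like a massive compression of the sequence, consider this minimal sketch using the 49, 60, 71, 82 example above; the encode and decode functions are my own illustration of the principle, not the task we actually gave our volunteers.

```python
def encode_arithmetic(sequence):
    """If the sequence has a constant step, store only (first item, step, length)."""
    steps = {b - a for a, b in zip(sequence, sequence[1:])}
    if len(steps) == 1:
        return sequence[0], steps.pop(), len(sequence)
    return None  # no single rule found: the whole sequence must be held item by item

def decode_arithmetic(first, step, length):
    return [first + step * i for i in range(length)]

structured = [49, 60, 71, 82]
code = encode_arithmetic(structured)       # (49, 11, 4): three numbers instead of four items
print(code, decode_arithmetic(*code))      # reconstructs the full sequence exactly

random_like = [49, 57, 71, 80]
print(encode_arithmetic(random_like))      # None: nothing to compress, so every item counts
```

  The same three numbers would describe a 300-digit patterned sequence just as compactly, which is exactly why rule-based chunking scales so dramatically.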

  One classic area of expertise in working memory is that of chess. We novices may look at a board full of about thirty chess pieces in some complex position and be lucky if we remember a few of those pieces. Chess masters, however, can remember almost the whole board with just a look. How do they do this? Very probably they are using a combination of memory (say, remembering the position of the pawns because pawn structures tend to be diagonal), and logic (such as perceiving doubled rooks as a powerful push up a line toward the opposing king). Chess expertise is a good illustration of how memory, logic, and strategy can sometimes inextricably intertwine, with structured information at the core.

  Indeed, with the contents of working memory allowed to be virtually anything, this conscious playground of ideas is at its most powerful when the contents themselves are goals or strategies: If each mental trick can be treated as a separate building block in consciousness, where it can be combined with others in order to generate novel, more potent strategies, an unrivaled potential for learning and understanding is unleashed.

  So we may be biologically constrained to consciously store only a handful of items for a few seconds, but we are also able to use any trick in the book to dramatically increase the amount of information per item. This may involve employing relatively trivial tactics, such as repeating numbers to ourselves. But we might just as easily use grander strategies, such as linking large amounts of novel information to preestablished memory chunks, or noticing the logical rule that binds many unfamiliar items together into a more coherent single unit.

  BELITTLING THE RICHNESS OF EXPERIENCES?

  This is an appropriate point at which to pause and meet one obvious objection to the thesis that consciousness boils down to an attention-gated working memory, with up to four chunks making up its contents. If our consciousness is really limited to a small handful of highly processed items, then how can we at least appear to see many more objects at once? It certainly seems that if I gaze up at the sky, I can make out more than four objects—maybe hundreds more in one go, and I can see them all clearly.

  But I would argue that in this situation, attention is spread wide and thin, like an overblown balloon. Its thinness means that we are indeed aware of these hundreds of objects, but in a minimal, approximate way. Gazing up at the sky without any knowledge of star charts is akin to seeing the whole collection of stars as one fuzzy, complex object. If we want to remember things better, if we want to start seeing groups of stars, and memorize their relationship to each other, then—guess what?—we develop chunks to help us. I recognize the Plough because it looks like a deep frying pan with a wonky handle. And that constellation is Leo, a proud lion resting on the savannah. Without these chunks of stars, linked to well-known objects in memory, we’d have struggled to recognize any astral features, and historically this would have been a disaster for both navigation and agriculture.

  There is strong experimental confirmation of this sense that the more we see, the less we actually take in of each object. For instance, if we have to identify varying numbers of letters or digits, which flash briefly on the screen, then the greater the number, the less likely we are to identify each one. In fact, however many there are, we’re unlikely ever to remember more than four of them, even if we do get a vague sense of the approximate number of objects we’re seeing. And, as I’ve discussed, in whatever way we are looking at the world, with any sense or stimuli, we are only ever fully aware of about four objects. Any impression that we are aware of more items may simply be an illusion. This illusion is partly a product of our extreme readiness to group items together to take up a single working memory slot—for instance, the collection of all the visible stars that we are currently viewing.

 
