The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning


by Bor, Daniel


  At the extreme end of this continuum of information input and analysis capabilities sits the human brain, with enormous cortical regions capable of very deep analysis. Hence the necessity for highly aggressive attentional filtering and boosting. This filtering can be so intense and focused, for instance, that we can foolishly walk for 30 minutes and hardly notice our surroundings at all.

  Human eyes have around 100 million photoreceptors, each of which can pick up about ten visual events every second, so our eyes are effectively receiving a billion pieces of information each second. If you include the information pouring in from our other senses, that’s a staggering quantity of data for our brains to sift through every moment of our waking lives. This weight of input is hardly unique in the animal kingdom—after all, many mammals have senses at least as acute as ours. But humans come up trumps from the analysis point of view, as, relative to the rest of our brains, we have considerably more general-purpose neural real estate by which to carry out detailed processing on the data we receive. This creates a dizzying potential for learning. And, as our everyday lives illustrate—and the fruits of science and technology reinforce in a muscular way—the deepest, most forensic analyses tend to be the ones that offer the most rewards.
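  As a quick sanity check on the arithmetic above (the figures are the chapter's own rough estimates, not precise measurements):

```python
# Back-of-the-envelope estimate of raw visual input, per eye,
# using the chapter's rough figures.
photoreceptors = 100_000_000   # ~100 million photoreceptors
events_per_second = 10         # each registers ~10 visual events/s

pieces_per_second = photoreceptors * events_per_second
print(f"{pieces_per_second:,}")  # prints 1,000,000,000: a billion pieces/s
```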

  If we had an infinite source of energy by which to crunch the numbers, and an infinitely fast brain by which to make the calculations, then there would be no problem, as we could analyze every scrap of data to its fullest capacity and never miss an opportunity or be caught by a threat. But of course, in reality, it takes time to process anything, and human brains consume a frighteningly large proportion of our body’s total energy resources.14 So we have to be ruthless in what we filter out, and tremendously picky about what data we allow through to the highest levels of processing.

  One key question here is just how you know what is biologically relevant. Imagine that there’s a poisonous snake near a plentiful food supply. An animal walks toward the food, and then quickly, effectively, its attentional system focuses in on the snake and little else. It recognizes the danger and flees. This has potentially saved the animal’s life. But what if the snake were dead, and the animal were starving, with little other food around? If greater analysis could have revealed the snake to be no longer a threat, perhaps by smell, or, in the case of particularly smart animals, a tentative poke with a finger or stick, then the animal’s life might have been saved by this extra understanding, which prevents it from running away and enables it to eat the food in this location.

  This example shows that, ideally, you need an attentional system that is guided closely by various emotional and instinctive signals so that it can automatically—and quickly—focus in on any immediate threat. If threats aren’t present, then attention should choose to center on other vital biological needs: food, procreation, social status, and so on. But for all these drives, there may be a better, slightly more lateral step to achieve them, if only we understood the situation a little more deeply.

  In a broad sense, because of these potential alternate routes to optimization, almost everything could be biologically relevant, potentially, and if you have enough brain resources, why not explore the informational terrain extensively? Humans thus readily attend not only to local external events, such as dead snakes, but to more abstract ones, too, such as the movement of the stars, or the factors that help crops grow, as well as to the internal information stream—for instance, an inner monologue about book compositions. Although such subjects have little to do with direct survival, on the surface, we continue attending just in case a deeper analysis reveals that they may assist us, if only we could reveal some hidden spark of wisdom from them. After all, our universally curious natures occasionally do strike gold, as exemplified by science and all of its technological products, which have made it so much easier for us to meet our basic survival needs.

  NOT SPOTTING THE WOOD, THE TREES, THE BIRDS, THE SOIL, THE FLOWERS, THE . . .

  In my embarrassingly blind walk described at the start of this chapter, it felt to me that attention was the gateway to my awareness—without attention directed toward some feature of the world, I simply would not be aware of it. This intuition has been repeatedly demonstrated in psychological experiments.

  The first, commonly referred to as “change blindness,” is perhaps one of the most striking and important experiments in either attention or consciousness, and reveals almost all key features of both (see Figure 5 for an example). Its standard form, first carried out by Ronald Rensink and colleagues, involves two photos identical except for one feature. These are presented to the volunteer on a computer monitor, with a blank screen sandwiched in between for about 80 milliseconds. When I tried this experiment some years back, in a large seminar at an academic conference, both pictures were of a military plane at an airport. With lots of flicking back and forth between pictures, I was initially convinced that the experimenter had made a mistake and that the two pictures were in fact identical. It didn’t help that the speaker, Bob Desimone, a world leader in the neuroscience of attention, cheekily suggested that the faster you spotted the change, the higher your IQ (to my relief, this isn’t true!). It took me and most of the audience a long time—perhaps 30 seconds—before we noticed that one of the pictures showed the plane missing an entire engine. My attention, heavily influenced by my expectations of what was important in the picture, was moving from interesting feature to interesting feature, such as the national insignia that identified the plane, the soldiers in the foreground, at the plane exit, or coming down the stairway, and so on. It simply didn’t occur to me to direct my attention to the engines, as you don’t expect planes on runways to lack these.15 And because I couldn’t attend to more than a few items in each of the two pictures, I never had a chance to notice the change. This staggering delay in spotting obvious, blatant differences between viewings is common, and is another example of how limited the scope of our attention—and awareness—can be.

  A real-world version of this experiment, by Daniel Simons and Daniel Levin, shows even more dramatic results. Unwitting volunteers walking along paths on the university campus are asked by one experimenter for directions to a certain building. A map is produced, which the volunteer helpfully starts examining to work out the best route to the building. Rather rudely, a door is then briefly passed in between these two people by two other secret experimenters, and the person asking for directions is swapped with someone else. This new person is holding an identical map and continues asking for directions as if nothing has changed. Incredibly, despite the fact that the new person has a different face, voice, clothes, and so on, less than half the volunteers notice that the person has changed.16 They are presumably so busy attending instead to their inner spatial worlds, as they work out the best route to advise the passerby, that they fail to attend properly to the man—or rather men—holding the map. And without attending adequately, they aren’t aware of the change. In some ways this is the experimental crystallization of absentmindedness—a trait I far too readily demonstrate, for instance, when I attempt in vain to go on walks to Byron’s Pool, or that Wiener exhibited by being so distracted as to fail to recognize his own daughter.

  It isn’t merely changes that are missed if our attention is elsewhere, but also obvious anomalies in the world. In another famous attention experiment, known as inattentional blindness, originally carried out again by Daniel Simons, this time with Christopher Chabris, volunteers watch a video of people playing with a basketball, the task being to keep a careful count of the number of passes. At some point in the video, someone slowly walks through the basketball players wearing a full-body gorilla outfit. Amazingly, only 44 percent of people actually notice the gorilla, despite the fact that it was in the video for a good 5 seconds, even pausing in the middle of the shot to face the camera and comically beat its chest.17 Again, because people are deeply attending to something else—in this case the basketball passes, as well as their internal tally of these passes—they don’t attend to, and aren’t in any way aware of, the mock gorilla invading the scene. Such black holes in our awareness are bread and butter for pickpockets and magicians, who are experts at manipulating our minds so that we fail to attend to the loci of their tricks.

  In all these instances, the story is the same: What we attend to equates with what we are aware of, and the boundaries of consciousness are extremely tight compared to the broad scope of stimuli entering our senses. This small subset of the world that we are actually aware of allows us to miss striking changes in a scene, or highly unexpected events occurring right in front of us. But the flipside of this phenomenon is that it helps us understand a few features of the world far more deeply than we would otherwise, and potentially to perform complex tasks in relation to them.

  A BRIGHTER, MORE VIBRANT WORLD

  So there is clear experimental evidence for the suggestion that we engage in tremendously aggressive attentional filtering in order to focus our awareness on a small component of our informational world. But what about empirical support for the second feature of attention, that it also acts to boost information processing?

  If I’m trying to spot my wife in a crowded train station, I know she’s wearing a red sweater and that she has black hair, so I concentrate on scanning all the people for red sweaters and black hair, and then, if I find a partial match, I look for her facial features. When I do this, it feels as if I’m barely aware of the sounds around me, but every red item of clothing or head full of black hair stands out vibrantly. My whole mind seems attuned to redness, as if other colors are only important at that moment in a negative sense, because they are not red.

  There is good experimental evidence to back up my impression that this controlled, directed attention is capable of actually enhancing both information processing and one’s awareness of certain features of the world. This is trivially true when attention causes me to move my eyes toward some object of interest, thus allowing the central part of my visual field, the fovea, to focus on the item. The fovea has a particularly dense collection of photoreceptors and considerably more acute resolution than my peripheral vision. So the simple act of moving my eyes and focusing on an object allows a far higher, richer stream of visual information to be absorbed than if I wasn’t looking directly at the object. Of course, once an animal has oriented toward an object, it tends to stare at it, soaking up the greater stream of input so that it can analyze more carefully what the object is, and thus further boost its processing of the information it is collecting for that object.

  But you don’t have to be looking anywhere near the object for the boost to occur. For instance, consider the following experiment. Say you are required always to stare at a dot in the center of a computer screen. On half the trials, an additional dot will then randomly appear very briefly near one of the four corners, while on the other trials there will be no such peripheral dot. When this dot does appear, you really pay attention to it in your peripheral vision, because you know from the task rules that this corner dot is warning you that a difficult-to-detect, very faint object is about to turn up in that location—and the whole purpose of the experiment is to notice these faint objects. So even though you are constantly staring at the central dot on the screen, as instructed by the experimenter, you somehow shift your attention to this quarter of space, to prepare for the difficult-to-see object that’s about to appear. If you’re ready and attending in advance to this quadrant, then, for this quadrant and this alone, your vision actually improves—you are conscious of fainter targets than normal, objects you wouldn’t normally have seen, and you can also detect targets faster.

  Not only this, but there’s evidence that we actually see things more vibrantly, too, under the amplifying power of attention. If you attend to the location where a clearly visible object is about to appear, that object will be perceived to have more contrast than if you are not attentionally preparing for it to turn up at that location.

  Attention, then, clearly boosts awareness. It allows us to consciously see things that otherwise would have been too faint to detect, and it really does make things appear more vivid.

  THE ATOMS OF THOUGHT

  But how does attention filter and boost the incoming signal and shunt this output into awareness? Although this chapter is primarily a psychological one, insights about the mind can often gain clarity if examined in the context of corresponding brain processes. Attention definitely falls into this category.

  While some key details are yet to be discovered, the generally accepted view of how attention works in the brain involves a certain kind of battle between neurons, with the direction of our attention emerging from the outcome of this collective neuronal war. In this situation, the parallels with biological natural selection are very apparent, although the winners and losers in this particular survival of the fittest aren’t organisms or even neurons—no neurons actually die in these fights. Instead, the clashes emerge a level above that of the neurons, with competing sources of information all jostling for finite attentional resources.

  In order to explain this fierce neuronal competition in greater detail, I need to provide some background about how neurons interact to process information.

  Journalists appear to fall over themselves to cover any result of the formula, “Scientists have discovered the brain region for X.” While mapping brain regions according to function does provide useful clues to how we think, and ultimately how we might be conscious, there is a danger in mistaking what sometimes appears to be little more than stamp collecting for a real model of the mechanism of thought. But if such labels aren’t a sufficient explanation for how our thinking is put together, what is?

  In 1961 the celebrated theoretical physicist Richard Feynman was asked to take over the introductory physics lectures at Caltech. As he was not particularly known for his interest in students, this was a slightly surprising request, but the Caltech physics lectures had become so antiquated and piecemeal, and Feynman’s didactic skills so renowned, that it seemed the natural choice. The result was the most famous student physics lectures—and subsequent textbooks—of all time: The Feynman Lectures on Physics. Toward the beginning of the very first lecture, Feynman told his packed class of fresh-faced undergraduates:

  If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms—little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence, you will see, there is an enormous amount of information about the world, if just a little imagination and thinking are applied.

  In neuroscience, our atomic equivalent is the neuron, and the one neuroscientific sentence we might want to pass on to that next generation of creatures, before we all perished in that cataclysm, might be: All conscious and unconscious mental processing equates to the electrical activity of vast collections of neurons—information-processing brain cells, each of which has a biological version of thousands of input and output wires connected to other neurons, thus allowing each neuron to influence, and be influenced by, the activity of many others.

  A single neuron is essentially a simple node in our biological computational network. It has a multitude of branches surrounding its cell body, each ending in a single wire that receives an input from another connected neuron. The neuron also has a long tail that splits up into many output wires, sometimes counted in the thousands, allowing it to send a signal out to an extensive selection of other neurons. The signal is usually a simple electrical firing at a standard voltage—basically a binary code, with no firing being a 0 and firing being a 1. Almost all other neurons work with the same binary language. One more key fact: The wires between these neurons usually do not actually touch. Instead, when a neuron fires, it releases a small amount of a chemical known as a neurotransmitter into the gap between the wires, and it is this chemical signal that the other neuron picks up. There are many different types of such chemicals in the brain—some are designed to suppress activity, while others may enhance it.
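  The picture above—many binary inputs, one binary output, chemical signals that either enhance or suppress activity—can be sketched as a toy threshold unit. The weights and threshold below are illustrative assumptions for the sketch, not biological measurements:

```python
class ToyNeuron:
    """A crude sketch of the neuron described above: many binary
    input wires, one binary output, firing when the summed input
    drive crosses a threshold. Signed weights stand in for the
    neurotransmitters: positive weights enhance activity, negative
    ones suppress it."""

    def __init__(self, weights, threshold):
        self.weights = weights      # one weight per input wire (illustrative)
        self.threshold = threshold  # illustrative value, not biological

    def fire(self, inputs):
        # inputs is a list of 0s and 1s from connected neurons
        drive = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if drive >= self.threshold else 0

n = ToyNeuron(weights=[1.0, 1.0, -0.5], threshold=1.5)
print(n.fire([1, 1, 0]))  # drive 2.0 >= 1.5, so it fires: 1
print(n.fire([1, 0, 1]))  # drive 0.5 < 1.5, so it stays silent: 0
```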

  Let’s expand on the e-mail analogy of Chapter 1 to illustrate certain features of this neural system: Say I am a manager in a large company, a Fortune 500 conglomerate. Because we’re so cutting-edge, we’ve done away with phones and rely exclusively on e-mail. As a result of this “enlightened” decision, I’m constantly deluged with e-mails. I cannot respond to every single one. So I set up a rule that if and only if I receive more than 100 e-mails on a particular topic (they all have the same topic of “FIRE,” or 1, but we’ll ignore this for the moment), then I will pass it on to everyone in my address book. It happens that I receive a lot of e-mails from one particular person, N. Uron. And I actually find myself frequently writing to him as well—strangely, he seems to be either active or twiddling his thumbs just when I am. It would therefore make sense to prioritize this closely similar employee’s e-mails above the others, and if I get an e-mail from him, class it as the same as 10 other people, so that I need fewer e-mails on a topic, if he’s one of the senders, before I decide to pass it on.

  There are some times when I’m feeling rather less productive than others. If we all have to work nights, then I’m going to feel tired and resentful that I’m stuck in this damn office without overtime. At those times, I simply ignore half the e-mails I receive, so to get me bothered enough to forward some message, I’m going to have to receive it 200 times. And in the middle of the night, everyone else around me is exhausted, despondent, and not exactly glowing with company pride. So everyone becomes quieter at night, and messages tend to die soon after they are sent. On the other hand, in the morning, 20 minutes after the coffee break, we’re all buzzing and almost begging for some activity to latch onto. At these times, just 50 messages on the same topic, on average, will make me forward the information to those around me. And because everyone else is also buzzing, much information flows around the company building at breakneck speed.
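  The whole e-mail analogy can be condensed into a few lines: a weighted sum of messages against a threshold that an "arousal" factor scales up or down. The function name and the arousal parameter are my own labels for the sketch; the numbers are the chapter's illustrative figures (100 e-mails normally, 200 at night, 50 after coffee, and N. Uron counting as 10 ordinary senders):

```python
def forwards(message_counts, weights, base_threshold, arousal=1.0):
    """Does the manager forward the topic to everyone in the address
    book? Each sender's messages count according to a weight (trusted
    N. Uron counts as 10 ordinary senders), and arousal scales the
    effective threshold: a tired night shift (arousal 0.5) doubles it,
    a post-coffee buzz (arousal 2.0) halves it."""
    total = sum(weights.get(sender, 1) * n
                for sender, n in message_counts.items())
    return total >= base_threshold / arousal

weights = {"N. Uron": 10}
print(forwards({"others": 100}, weights, base_threshold=100))               # True
print(forwards({"N. Uron": 10}, weights, base_threshold=100))               # True
print(forwards({"others": 150}, weights, base_threshold=100, arousal=0.5))  # False
print(forwards({"others": 50}, weights, base_threshold=100, arousal=2.0))   # True
```

This is, of course, exactly the toy neuron again: senders are input wires, the prioritized colleague is a strongly weighted connection, and day-versus-night arousal is a global shift in excitability.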

 
