Another lunge to save free will came from the philosopher Daniel Dennett, who claimed that the problem went away once you accepted that we were trying to measure things too precisely. If I were to try to explain where Cambridge University was located, for instance, it would be ludicrous of me to pinpoint it as the stone crown above the front gate of King’s College, even if this is a relatively central spot on one of the grandest university buildings. Instead, the university is better located by pointing to the hundred or so college, department, and administrative buildings strewn around Cambridge city.
Similarly, Dennett argued that all the Libet experiment really showed was that consciousness was smeared across time, perhaps of the order of half a second long, and that it’s entirely invalid to assume it has a single precise temporal location. This is in some ways a very plausible idea—consciousness involves a massive collaboration among a multitude of large brain regions and is undoubtedly a complex, perhaps even slightly lumbering, process. So the suggestion that consciousness has a somewhat nebulous timescale would make perfect sense.
A recent computational model by Stanislav Nikolov and colleagues for how the brain recognizes its own important neural events, over and above the mere random chatter of neurons, provides a detailed justification for Dennett’s position. Nikolov’s model showed that it was actually counterproductive to detect a decision when brain activity for the decision was just on the cusp of rising above the usual random baseline hum. Neural activity is always rising and falling, because the brain is a noisy, semichaotic place. What the brain needs to do is to carry out some solid, cautious statistical tests on its own activity, and only trust that a decision has been made when collective activity is quite high, clearly above chance—and this point is reached considerably later than the point at which this quiet ramping-up process begins. Otherwise, the brain would constantly be misinterpreting chaotic noise as significant, meaningful activity.
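As a rough illustration of this logic, here is a minimal sketch in Python. It is not Nikolov’s actual model, and every parameter value is invented for the example: a noisy baseline signal has a slow decision-related ramp added partway through, and an eager detector that fires at the first sign of above-baseline activity triggers almost immediately on pure noise, while a cautious detector that waits for a high statistical threshold fires well after the ramp has begun, mirroring the gap between the onset of slowly building brain activity and the reported moment of decision.

import random

random.seed(1)

# Hypothetical parameters chosen purely for illustration.
NOISE_SD = 1.0        # standard deviation of the baseline "neural noise"
RAMP_START = 500      # time step at which decision-related activity begins to build
RAMP_SLOPE = 0.01     # how quickly that activity ramps up
THRESHOLD_SDS = 4.0   # cautious criterion: 4 standard errors above baseline
WINDOW = 50           # samples in the running average

# A noisy baseline with a slow decision-related ramp added from RAMP_START onward.
signal = [max(0.0, (t - RAMP_START) * RAMP_SLOPE) + random.gauss(0.0, NOISE_SD)
          for t in range(1000)]

# Eager detector: fires at the first sample above the baseline mean of zero,
# which on noisy data happens almost immediately and is usually a false alarm.
eager = next(t for t, x in enumerate(signal) if x > 0.0)

# Cautious detector: fires only when a running average clearly exceeds
# what baseline noise alone could plausibly produce.
cautious = None
for t in range(WINDOW, len(signal)):
    window_mean = sum(signal[t - WINDOW:t]) / WINDOW
    if window_mean > THRESHOLD_SDS * NOISE_SD / WINDOW ** 0.5:
        cautious = t
        break

print(f"ramp begins at t={RAMP_START}")
print(f"eager detector fires at t={eager} (almost always a false alarm)")
print(f"cautious detector fires at t={cautious} (well after the ramp begins)")

The trade-off is exactly the one the model describes: the cautious criterion sacrifices some detection speed in exchange for almost never mistaking random chatter for a genuine decision.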
Imagine, as an analogy, that you are standing in the crow’s nest of an early nineteenth-century military ship, in the midst of war. It is your task to alert the crew that an enemy vessel is approaching. One strategy might be to get excited by any movement on the horizon, and shout to the captain far below that there may be an enemy approaching. But what if, most of the time after relaying these terrified alerts, you actually found that you’d spotted a bird, or a jumping dolphin, or a particularly high wave, or that there was just a speck in your eye? The whole crew would be pushed to battle stations a hundred times a day, and would be too exhausted to fight when that dangerous frigate did in the end turn up. Instead, holding fast to the lip of your crow’s nest, you set a criterion: You say to yourself that you will only alert the captain when you are reasonably sure that you have spotted an enemy ship. This rule might cost you a few seconds of preparation, but battle stations will most likely need to be set only that one time, a few days later, when they are actually needed, saving much collective energy.
Likewise, returning to the Libet experiment, where you apparently decide to lift your finger considerably after your unconscious brain chooses to, Nikolov’s computational model argues instead that the brain simply cannot tell that a true decision has occurred until activity has passed a high enough threshold, possibly at exactly the time when the subjects say that they decided to lift a finger. In other words, there may be no unconscious decision to speak of—it’s just that the decision only qualifies as a decision when neural activity is sufficiently high.
In a far more recent imaging experiment, now using fMRI, John-Dylan Haynes and colleagues adapted Libet’s experiment to show that brain activity in the front section of the prefrontal cortex could reliably distinguish whether we’ve chosen to initiate a left or a right finger movement. This in itself is exciting enough, but what’s even more remarkable is that this activity was detected up to 10 seconds prior to the conscious decision.
Although, conceivably, the same defense could be applied here—that for most of these 10 seconds the rise in activity is barely distinguishable from the brain’s random fluctuations, and so cannot reliably be called a decision—that now seems quite a stretch. Clearly, more research needs to be carried out in order to untangle this issue. But the possibility that every single one of our apparent conscious decisions was actually previously fixed in our unconscious minds needs to be given some consideration.
Provisionally, though, there are multiple potential routes for consciousness to play a key role in our decisions, particularly if we look not just at simple, arbitrary choices, but at the whole gamut of our decisions. One important distinction, again, is between habitual choices and novel ones. For some routine, repetitive decisions, which seem quite automatic and potentially unconscious, there may well have been some initial careful conscious thought to set them up that is now long forgotten—analogous to the distant memory of learning the specifics of a forehand in tennis. For instance, I give little thought to what I eat for breakfast out of a few choices, but that’s partly because at one point in the past I did give this some conscious assessment and informal experimentation, so that I landed on a set of healthy choices that I enjoy eating. Personally, if some routine of mine had started out as an arbitrary gut instinct, I would be worried about that habit and want to revisit my choices.
But what of novel decisions? It’s almost a waste of resources to apply consciousness to decide when to move your finger in an experiment, for instance. In contrast, there may be many hours of clear conscious thought put into choosing what degree to study, or what career to move into. If these were written in a diary, then you could even clearly see the evidence for consciousness at work: the logical arguments, the weighing of pros and cons, and so on. Given that the unconscious mind cannot operate on these structured, highly refined levels, there is a strong case to be made for consciousness being heavily involved in those important complex choices.
ASPIRING TOWARD FREE WILL
So although the Libet experiment has been classified as the quintessential psychology experiment to refute free will, its implications may not extend much beyond the simplest of mental decisions, and its conclusions may be further weakened by taking into account the cautious statistical approach required to detect neural events in a sea of random noise.
But leaving this debate aside, I see the issue of free will rather simply. If the question is whether we have the freedom to choose, independent of the machinery of our brains as it interacts with the world, then of course we don’t have free will. We are that machine, so how could we be independent of it? If one believes that no machine could ever have free will, then we don’t either, and that’s the end of the debate.
But we underestimate the detail of our neural machinery at our peril. We are unimaginably complex, with around 600 trillion connections inside each of our brains. And this particular machine is very special because it is an immensely powerful information-processing device, with a tremendously rich internal model of the world, along with accurate copies of various events from its past. These fascinating properties give the enticing, persuasive illusion that we can make decisions independent of the outside world, or even outside of the fettered constraints of our neural hardware. For instance, imagine that, by some quirk of fate, I lost all my senses, and medicine kept me alive by various feeding and breathing tubes. I would lose all external input, but I might still have an active conscious life for many years—I might mentally write the odd novel, compose some music, or generate some naive theories about politics and philosophy. What I would do in my own internal world is hard to predict exactly, but there could be any one of a million paths as I creatively explored the knowledge I’d accumulated up to the point where my interaction with the outside world was cut off. No artificial machine could come remotely close to the range of possible, unpredictable activities that might ensue if a human lost input from the outside world.
I’m not arguing here that the sheer complexity of our minds somehow allows free will after all. Instead, I merely want to highlight one reason why the illusion seems so compelling. As it is, there is little we can do to shake the embedded illusion of free will, so we may as well play along. Part of the game we play is to redefine free will more softly in terms of decisions that are consciously, rationally made under normal circumstances. For instance, a person acting on the delusions of his schizophrenic illness would be assumed to have a substantially diminished free will, because his insanity robs him of the chance to make those conscious, rational decisions that we take for granted.
Even on this level, though, there are strong grounds for giving many people, under many circumstances, the benefit of the doubt, when we might initially have assumed they committed some wrong. A substantial proportion of our everyday decisions have an unconscious bias aggressively pushing us toward a selfish, short-termist agenda. And, as neuroscience progresses, we are increasingly realizing that thought patterns and behaviors that previously would have been classified as “personality problems” are actually forms of mental illness, with detectable genetic and neurophysiological roots.13
Unfortunately, as a species, we do have a vast potential for destructive acts. This toxic capacity is partly due to the supercharged conscious component of our brains, which can be engaged ruthlessly and innovatively to achieve our irrational unconscious goals. But at the same time, the enormous analyzing capacity of our consciousness also provides the potential to overcome these default limitations. Our conscious minds are uniquely capable of analyzing the consequences of our choices, even though, by default, many decisions may be made with little conscious input. The trick, somehow, is to wrestle control, as much as possible, into the realms of consciousness.
One strategy may simply be to use our conscious minds more often: Our lives could improve considerably if we tried to ensure that any remotely important choice was given the full force of conscious, rational deliberation. By trusting the sophisticated conscious space we have, we can bring to bear the most refined, capable computational aspect of our brains, understand our own goals, avoid inherent biases, and make far more enlightened decisions.
One aid to this endeavor would be to elucidate the psychological and neural landscape of our conscious minds so that we could better understand the scope and limitations of awareness. Although this chapter has begun to make some headway toward this goal by exploring the distinction between conscious and unconscious processes, in the following two chapters I will more directly elucidate the psychology and neurophysiology of awareness.
4
Pay Attention to That Pattern!
Conscious Contents
DANGEROUS DAYDREAMS
Occasionally, when composing a new piece of writing, I like to go for a stroll, as I find it useful for generating free-wheeling ideas. And so, to help cement a few concepts together for this current chapter, I set off on a leisurely midsummer walk to Byron’s Pool, halfway between my house in South Cambridge and the village of Grantchester. Named after the poet Lord Byron, who was said to frequently bathe at this picturesque little spot when studying at the nearby university, Byron’s Pool has been a favorite, secluded swimming location for many generations of students, including Virginia Woolf and Ludwig Wittgenstein.
On the twisting country road, the signpost for the pool is quite prominent, and it’s not as if I’ve never visited the site before. I know I should hit the turning about 15 minutes into my walk. But after 30 minutes, I somehow found myself at the Green Man pub in the middle of Grantchester, having completely missed the sign many minutes back. In fact, I could hardly remember the visual part of the walk at all—not Canteloupe Farm, whose name always makes me want to eat a melon; not the bridge over a round, widening bubble of the river Cam, surrounded by trees and statues of animals; nor Grantchester Orchard. Though I had been distracted once, very briefly, by a loud car horn on the road beside me, a few seconds later I was back in my own world, concentrating on the psychology of consciousness. What occupied me most of all during this walk was how to piece together the structure for this current chapter, a few linked items at a time.
No matter, I thought, I would be bound to catch the sign on my way back, and at least visit Byron’s Pool on my second attempt, about 15 minutes later. I shouldn’t have been so confident. Thirty minutes after I set off on my return route, I found myself back in the city of Cambridge, near my house. I had succeeded in entirely missing the sign for Byron’s Pool not once, but twice. The chapter structure was shaping up well, though, so I wasn’t too disgruntled—just feeling a tad sheepish.
This wasn’t the first time that I’d been completely oblivious to my surroundings while faithfully attending to the inner cogs of my mind. In fact, I seem to have a disturbing knack for entering my own world and utterly ignoring whatever my senses are brightly picking up right in front of me. When I’m driving on the motorway, or even in the busy city, it doesn’t take much for me to become engrossed in replaying a memory or following a train of thought, while my perception of the road, of moving the steering wheel, of the pedals, and so on, entirely disappears from consciousness. All I’m aware of at these times is the memory of that strange discussion the other day, or the words of a future paragraph I want to write, or the format of a new experiment I want to run. Somehow I do slow down and speed up in response to cars around me, and navigate through traffic lights—but all without the watchful presence of my conscious mind. In fact, I might also have the news on the car radio, or be “listening” to an audiobook, but these may as well be random noises that bash in vain against my ears, failing to penetrate very much further in.
In my defense, it never actually seems dangerous. As soon as anything unexpected happens, such as a car suddenly braking in front of me, I’m back immediately—aware of the car and my surroundings. I swiftly calculate what to do and act on those thoughts to negotiate the problem safely.
I know I’m not alone in my absentmindedness. In fact, there seem to be some people considerably more adept than me at crawling inside their own intellectual shells. The worst case I heard, from the mathematician Ian Stewart, was of the pioneer of cybernetics, Norbert Wiener, who, as well as being a very talented mathematician, was notorious for being distracted by his work and forgetting important details of his own life. When they moved to a new home, his wife, knowing exactly what he was like, wrote the address very carefully on a piece of paper for him and pleaded with him not to mislay it. “Don’t be silly, I’m not going to forget anything as important as that,” he replied, safely putting the paper in his pocket. But later on, Wiener started obsessing over his latest insight, grabbed the nearest, most convenient piece of paper he could find—which happened to be his new address—and furiously scribbled equations all over it. Having found that the new idea had some serious flaw, he threw the piece of paper away in disgust.
When it was time at the end of the day to return to his family, he vaguely remembered something about a new home, but of course had no paper address anymore, and couldn’t find it anywhere. His only recourse was to head back to his old house, and sheepishly see if any neighbors knew where he’d moved to. As he arrived, he noticed a young girl sitting beside the old house and approached her.
“Pardon me, my dear, but do you happen to know where the Wieners have mov—?”
“That’s okay, Daddy. Mummy sent me to fetch you.”
ATTENTION FUNNELING RAW DATA TO BUILD EXPERIENCES
So far in this book we’ve arrived at the position where consciousness is a physical, brain-based process, most effectively investigated by science. While information processing and the management of ideas is at the heart of evolution, the extra, far more capable forms of information processing inside a brain allow consciousness to emerge from this exquisitely designed biological computer. But awareness doesn’t arise in all species, or even at every moment in human life. In those species with the capacity for consciousness, low-level processing or routine actions are carried out unconsciously. Only when our data processing is of a sufficient magnitude and complexity, and of a certain type, does consciousness occur.
This and the next chapter will continue exploring exactly what forms of complexity, and what type of information processing, relate to consciousness, and how attention funnels the raw data we soak up from the world and converts a small portion of our input into the experiences that fill our lives with meaning. In this chapter I’ll be centering on the psychology of consciousness, and in the next on how the brain creates our experiences.
As my stories about absentmindedness illustrate, attention is closely related to awareness: What I attend to is what I’m conscious of, and whatever falls outside of my attention is processed, if at all, by my unconscious mind alone.
But before describing the intricate psychological details of what attention is, and how it relates to awareness, I will pause and ask, from first principles, what the purpose of attention might be.
Attention addresses a basic data-processing issue that almost all types of computers face. Simple information-processing systems, such as plants or bacteria, receive only a faint trickle of information from their senses and have only a rudimentary ability to process that information. On the whole, they process all the information they receive as far as they can. But for more complex systems, where the incoming stream of information overwhelms the system’s ability to process every item fully, there needs to be some decision process about which subset of this mountain of data is most deserving of further analysis, and which subsets are best ignored. This data-filtering and -boosting mechanism is attention.
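To make the filtering idea concrete, here is a toy sketch in Python. It is not a model of any actual neural mechanism, and the inputs and salience scores are invented: the system receives more inputs than it can fully process, so it ranks them by salience and passes only the top few on for deeper analysis, discarding the rest.

from typing import List, Tuple

def attend(inputs: List[Tuple[str, float]], capacity: int) -> List[str]:
    """Return the `capacity` most salient inputs; everything else is ignored."""
    ranked = sorted(inputs, key=lambda item: item[1], reverse=True)
    return [name for name, _salience in ranked[:capacity]]

# Invented sensory stream: (input, salience score).
sensory_stream = [
    ("birdsong", 0.20),
    ("train of thought about this chapter", 0.90),
    ("car horn", 0.95),   # sudden, unexpected events are highly salient
    ("road signs", 0.30),
    ("radio chatter", 0.10),
]

# Only two items fit through the bottleneck for full processing.
print(attend(sensory_stream, capacity=2))
# -> ['car horn', 'train of thought about this chapter']

On a walk to Byron’s Pool, in other words, an absorbing train of thought and the occasional car horn win the competition, while the signpost and the scenery are left to the discarded remainder.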