The Ravenous Brain: How the New Science of Consciousness Explains Our Insatiable Search for Meaning
THE MOST COMPLEX OBJECT IN THE KNOWN UNIVERSE
The answer to this question may seem simple. Our intuition tells us that there cannot be any consciousness or meaning in this special room because the small book is a simple object. How could one slim paperback actually be aware? But the thought experiment’s second slippery trick is to play with the idea that something as incredibly sophisticated and involved as language production could possibly be contained in a few hundred pages. It cannot, and as soon as you start trying to make the thought experiment remotely realistic, the complexity of the book (or any other rule-following device, such as a computer) increases exponentially, along with our belief that the device could, after all, understand the Chinese characters.
Let’s say, for simplicity’s sake, that we limit our book to a vocabulary of 10,000 Mandarin words, and sentences to no longer than 20 words. The book is a simple list of statements of the form: “If the input is sentence X, then the output is sentence Y.” We could be mean here. Let’s assume that the Chinese people outside the room are getting increasingly desperate not to lose their bet. One of them actually thinks he half-spotted the Turing’s Nemesis ringleader slip some kind of book to the guy in the room. Another member of the Chinese group happens to be a technology history buff and has played with a few clever computer simulations of human text chatter from the early twenty-first-century Turing Test competitions.4 He suggests a devious strategy—that they start coming up with any old combination of sequences varying in length from 1 to 20 words, totally ignoring grammar and meaning, to try to trick the person inside the room into silence. How big would the book have to be to cope with all the possibilities? The book would have to contain around 10^80 different pairs of sentences.5 If we assume it’s an old-fashioned paper book, then it would have to be considerably wider than the diameter of our known universe—so fitting it into the room would be quite a tight squeeze! There is also the issue of the physical matter needed to make up this weighty tome. The number of pairs of sentences happens to roughly equal the estimated number of atoms in the observable universe, so the printer of the book would run out of paper very early on, even with the first copy! Obviously, it would be hopelessly unrealistic to make any kind of book that not only contained every possible sequence of up to 20 words, but also connected each sequence as a possible question to another as the designated answer. And even if the book were to be replaced by a computer that also performed this storage and mapping, the computer engineers would find there was simply not enough matter in the universe to build its hard disk.
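For readers who want to check that figure, the arithmetic is straightforward; here is a quick Python sketch, using only the vocabulary size and sentence-length cap assumed above:

```python
# Count every word sequence of length 1 to 20 drawn from a 10,000-word vocabulary.
vocab, max_len = 10_000, 20
total = sum(vocab ** k for k in range(1, max_len + 1))
print(f"{total:.2e}")  # ~1.00e+80, dominated by the 10,000^20 twenty-word sequences
```

Each possible input sequence needs its own stored reply, so the book must hold on the order of 10^80 pairs.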
Let’s try to move toward a more realistic book, or more practically, a computer program, that would employ a swath of extremely useful shortcuts to convert input to coherent output, as we do whenever we chat with each other. In fact, just for kicks, let’s make a truly realistic program, based exactly on the human brain. It might appear that this is overkill, given that we are only interested in the language system, but our ability to communicate linguistically depends on a very broad range of cognitive skills.
Although almost all neuroscientists assume that the brain is a kind of computer, they recognize that it functions in a fundamentally different way from the PCs on our desks. The two main distinctions are whether an event has a single cause and effect (essentially a serial architecture), or many causes and effects (a parallel architecture), and whether an event will necessarily cause another (a deterministic framework), or just make it likely that the next event will happen (a probabilistic framework). A simple illustration of a serial deterministic architecture is a single line of dominoes, all very close together. When you flick the first domino, it is certain that it will push the next domino down, and so on, until all dominoes in the row have fallen. In contrast, for a parallel probabilistic architecture, imagine a huge jumble of vertically placed dominoes on the floor. One domino falling down may cause three others to fall, but some dominoes are spaced such that they will only touch another domino when they drop to the ground, which leaves the next domino tottering, possibly falling down and initiating more dominoes to drop, but not necessarily.
Although modern computers are slowly introducing rudimentary parallel features, traditionally, at least, a PC works almost entirely in a serial manner, with one calculation leading to the next, and so on. In addition, it’s critical that a computer chip functions in a deterministic way—if this happens, then that has to happen. Human brains are strikingly different: Our neurons are wired to function in a massively parallel way. The vast majority of our neurons are also probabilistic: If this neuron sends an output to many other neurons, then that merely makes it more (or sometimes less) likely that these subsequent neurons will be activated, or “fire.”
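To make the contrast concrete, here is a minimal Python sketch of the parallel, probabilistic “domino” picture; the network size, fan-out, and firing probability are invented purely for illustration:

```python
import random

random.seed(1)  # make the toy run reproducible

N = 1_000     # neurons in our toy network (invented size)
K = 50        # each neuron sends output to K random others (invented fan-out)
P_FIRE = 0.3  # an input merely makes the receiver likely to fire, not certain

# A serial deterministic chain would be: domino i always knocks over domino i+1.
# Here, instead, every firing neuron prods K others at once, each stochastically.
targets = [random.sample(range(N), K) for _ in range(N)]

active = {0}  # flick the first domino
for step in range(1, 7):
    active = {t for n in active for t in targets[n] if random.random() < P_FIRE}
    print(f"step {step}: {len(active)} neurons firing")
```

A single seed neuron recruits a growing, noisy cascade within a few steps; no individual signal guarantees anything, yet the population-level pattern unfolds reliably.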
Why have one form of computer architecture over another? Partly because a serial deterministic architecture is so simple and straightforward. A computer can perform billions of very basic calculations in a second, whereas my brain carries out only a few major thoughts at a time, at the most. Consequently, it takes my PC a fraction of a second to calculate the square root of 17,998,163.09274564, whereas most of us would give up on such a fiendish task before we’d even begun. But because of the parallel, probabilistic nature of our brains, our processing is far more fluid and nuanced than that of any silicon computer currently in existence. We are exquisitely subject to biases, influences, and idiosyncrasies. For instance, if you read the following words: “artichoke artichoke artichoke artichoke artichoke,” you will spend the rest of the day (or even somewhat longer) recognizing the word “artichoke” a little bit quicker than before (and you might even be a little more likely to buy one the next time you go to the supermarket). My word-processing program simply produces angry red lines to punish me for my ungrammatical repetitions of “artichoke.” It does not learn to insert those red lines any quicker by the fifth repetition of the word, compared to the first.
This continuous, subtle updating of our inner mental world means that we can also learn virtually anything very effectively. For instance, we would consider the task of distinguishing between a dog and a cat in a picture to be a simple and trivial matter. But computers are still cripplingly impaired at such processes. The reason is that although recognizing different animals appears basic to us, such skills are in fact, behind the veil of consciousness, fiendishly complex, and they ideally require an immensely parallel computational architecture—such as the human brain. Of course, it makes no sense for evolution to have shaped our brains to be highly skilled at accurately calculating square roots. But, from a survival perspective, having a general-purpose information-processing device, which can learn to recognize any single critical danger or benefit in a moment, and then respond appropriately, is highly advantageous.
Therefore, over a few seconds, serial deterministic processing is best suited to performing huge quantities of simple tasks, whereas parallel probabilistic processing is only effective at carrying out a handful of tasks. But these tasks can be very complex indeed.
In order to capture the scale of the challenge ahead for anyone wanting to make a book or computer that could speak the Chinese language, we need to delve further into the details of our human probabilistic parallel computer and understand precisely how it differs from a standard PC. Assume for the moment that a single neuron is capable of one rudimentary calculation. There are roughly 85 billion neurons in a human brain. An average PC processor has around 100 million components, so about 850 times fewer components than a human brain has neurons—an impressive win for humans, but not staggeringly so, and indeed there are some supercomputers today that have more components than a human brain has neurons. But this is only the beginning of the story. There is another critical feature of human brains that, in a race, would leave any computer in the world stumbling along, choking pitiably on the dust of our own supercharged biological computational device. While each component on a central processing unit may be connected to only one or a handful of others, each neuron in the human brain is connected to, on average, 7,000 others. This means there are some 600 trillion connections in the brain, which is about 3,000 times more than the number of stars in our galaxy. In every young adult human brain, these microcables, extended end to end, would run about 165,000 kilometers—enough to wrap around the earth four times over! The complexity of the human brain is utterly staggering.
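The wiring arithmetic is easy to verify; in this quick check, the star count is an assumption (a common rough estimate of 200 billion stars in the Milky Way):

```python
neurons, fan_out = 85_000_000_000, 7_000
connections = neurons * fan_out  # counting each connection once, at its sender
stars = 200_000_000_000          # assumed Milky Way star count (rough estimate)
print(connections)               # 595,000,000,000,000: some 600 trillion
print(connections // stars)      # 2,975: roughly 3,000 times the stars
```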
To begin to understand the sheer vastness of the parallelism of human brain activity, imagine that the human population is around 85 billion people, about 12 times what it is now. You’ve suddenly discovered an earth-shattering revelation, and you simply have to tell everyone. You e-mail every single person in your contact list, all 100 people, and tell them to pass this wondrous insight on to 100 new friends. They do so, then the next group follows the same instructions, and so on. Let’s assume for the moment that there are only a few overlaps in most people’s address books, and, for the sheer, unadulterated genius of the wisdom imparted, that everyone obeys the instruction to forward the e-mail within a few seconds. From the starting point of a single send, it actually only takes six steps for the whole multiplied world of 85 billion people to get the message, and a handful of seconds. Indeed, in the human brain it’s thought that no neuron is more than six steps from any other neuron in the family of 85 billion.
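The six-steps claim follows directly from the fan-out of 100; a quick sketch, ignoring address-book overlaps as assumed above:

```python
# Each forwarding wave multiplies the audience by 100.
population = 85_000_000_000
reached, steps = 1, 0
while reached < population:
    reached *= 100
    steps += 1
print(steps)  # 6: five waves reach only 10 billion people; the sixth reaches a trillion
```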
Now imagine everyone in the world having such a revelation and everyone e-mailing their 100 address-book contacts about the news each time—and everyone doing this about 10 times an hour. If you didn’t turn on some fantastically effective spam filters, your inbox would receive around a thousand e-mails an hour. Everyone in the entire population would receive a thousand e-mails in that single hour, collectively amounting to 85 trillion messages.
But a neuron may fire 10 times a second, instead of per hour, and send its output to 7,000 other neurons, instead of 100. So a nauseatingly dizzying complexity occurs in your brain every single second, with hundreds of trillions of signals competing in a frenzied, seemingly anarchic competition for prominence.
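Scaling the e-mail analogy up to neuronal rates gives a feel for that traffic. This sketch assumes, purely for illustration, that every neuron fires at the 10-per-second ceiling; real average rates are lower, which is why even a fraction of this total still amounts to hundreds of trillions of signals per second:

```python
neurons = 85_000_000_000
fires_per_second = 10  # the illustrative upper rate used in the text
fan_out = 7_000
signals_per_second = neurons * fires_per_second * fan_out
print(f"{signals_per_second:.1e}")  # ~6e+15 if every neuron fired flat out
```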
This massively parallel web of neural activity simply is the propagation of information. In many ways, this spread of data by minuscule parts is unintuitive: A PC stores a single piece of information in only one location, and that location cannot store any other data; in stark contrast, populations of neurons—those, for instance, in the fusiform face area (FFA)—store as an ensemble many different faces, with each neuron only contributing a small fraction of each memory for a particular face, but humbly capable of playing its minute part in supporting hundreds or even thousands of face memories.
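A classic toy model of this kind of distributed storage is a Hopfield network. The Python sketch below is not a model of the FFA, and its sizes are invented, but it shows several memories superimposed across one set of connections, with each connection contributing a little to every memory:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_faces = 200, 5  # toy sizes, chosen only for illustration

# Each "face" is a random +1/-1 activity pattern across the whole population.
faces = rng.choice([-1, 1], size=(n_faces, n_neurons))

# Hopfield-style storage: every connection weight is a small sum of
# contributions from ALL stored faces; no location holds any one face.
W = (faces.T @ faces) / n_neurons
np.fill_diagonal(W, 0)

# Recall: corrupt 20 percent of face 0, then let the network settle.
probe = faces[0].copy()
probe[rng.choice(n_neurons, size=40, replace=False)] *= -1
for _ in range(10):
    probe = np.where(W @ probe >= 0, 1, -1)

print((probe @ faces[0]) / n_neurons)  # ~1.0: the full face re-emerges
```

Deleting neurons or weights in such a network degrades all the stored faces gracefully rather than erasing any single one of them, which is the hallmark of the ensemble coding described above.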
But for all these differences, the fact of the matter is that both brains and standard computers are essentially information-processing machines, and are secretly far closer cousins than at first appears. So whatever algorithm a brain uses to process information could be recreated, in principle, on a PC. Indeed, in neuroscience, there are already prominent computer models closely approximating the biological characteristics of a large population of neurons (in one recent case, a million neurons, with half a billion connections), and these are showing interesting emergent trends between groups of pseudo-neurons, such as clusters of organization and waves of activity.
THE CASE FOR ARTIFICIAL CONSCIOUSNESS
To return to the Chinese Room: If the mind is indeed a program, then it’s clear that this “software” occurs first and foremost at a neuronal level. So, it is simply too great a task for a book to represent all the incredible intricacies of human brain activity—the interacting complexities are just too staggering.
No, if we are to have some artificial device that captures the computational workings of a human brain, it needs to be a computer. And it’s not inconceivable that a computer in four hundred years’ time could be fast enough to run a program to represent our massively parallel brains, with their hundreds of trillions of operations a second. Let’s give this computer a pair of cameras and a robotic arm. The arm can manipulate the pieces of paper fed in from the IN box and write new ones for the OUT box. Now, if this computer was able to communicate effectively with a Chinese person, what does our gut tell us about what’s happening inside? I think it would take a brave person to claim with confidence that this immensely complicated computer, with billions of chips and trillions of connections between them, using the same algorithms that govern our brains, has no consciousness and doesn’t know the meaning of every character it reads or writes.
To reinforce this point, imagine that four centuries into the future, neuroscience is so sophisticated that scientists can perfectly model all the neurons in the human brain.6 The most famous neuroscientist in China, Professor Nao, is dying, but just before his death he is willing to be a guinea pig in a grand experiment. A vast array of wonderfully skilled robotic micro-surgeons opens up Nao’s skull and begins replacing each and every neuron, one by one, with artificial neurons that are the same size and shape as the natural kind. This includes all the connections, which are transformed from flesh to silicon. Each silicon neuron digitally simulates every facet of the complex neuronal machinery. It could just as easily do this via an Internet link with a corresponding computer in a processing farm nearby, such that there are 85 billion small computers in a warehouse a kilometer away, each managing a single neuron. But although it makes little real difference for this argument, let’s assume instead that miniaturization is so advanced in this twenty-fifth-century world that each little silicon neuron embedded within Nao’s brain is quite capable of carrying out all the necessary calculations itself, so that inside Nao’s head, by the end, is a vast interacting collection of micro silicon computers.
Nao is conscious throughout his operation, as many patients are today when brain surgery is conducted (there are no pain receptors in the brain). Eventually, every single neuron is replaced by its artificial counterpart. Now, the processing occurring in Nao’s brain is no longer biological; it is run by a huge bank of tiny PCs inside his head (or, if you prefer, all his thoughts could occur a kilometer away in this bank of 85 billion small yet powerful computers). Is there any stage at which this famous scientist stops grasping meaning, or stops becoming aware? Is it with the first neuronal replacement? Or midway through, when half his thoughts are wetware and half are software? Or when the last artificial PC neuron is in place?
Or, instead, does Nao feel that his consciousness is seamless, despite the fact that a few hours ago his thoughts were entirely biological, and now they are entirely artificial? It is entirely conceivable, I would propose, that as long as the PC versions of his neurons are properly doing their job and running programs that exactly copy what his neurons compute, then his awareness would never waver throughout the process, and he wouldn’t be able to tell the difference in his consciousness between the start and end of surgery.
Now let’s say that Nao goes into the Chinese Room with his new silicon brain intact. Any Chinese person that passes by would have a perfectly normal paper-based conversation with him via the IN and OUT letterboxes, even though his brain is no longer biological. And both the outside conversationalists and the newly cyborg neuroscientist inside the room would assume that he, Professor Nao, was fully conscious, and that he understood every word. Is this not formally identical to the rule book that John Searle had in mind when he originally presented his Chinese Room thought experiment? We could even, for the sake of completeness, swap the book for Nao. We tie Nao’s hands behind his back and bring back the young man who was in the room before, but this time with no book. The young man would duly show Nao the Chinese characters that rain down on the floor from the IN letterbox, and then follow Nao’s instructions for copying out reply characters and posting them in the OUT box. The young man would still have no clue what he was helping to communicate, of course, but something in the room would—namely Nao!
And, of course, if the Turing’s Nemesis gang offered the passing group of Chinese people the bet that any local guy could speak Mandarin, the situation would quickly turn sour: The Chinese group would cry foul as soon as Nao was led into the room instead of their choice of Caucasian subject. Even if it were made perfectly clear that Nao had a silicon brain, the Chinese group would very likely no longer be interested in the bet. Nao would appear fully conscious when tested, both in his mannerisms and conversation—and that would be all the Chinese group would need to steer clear of this silly scam.
I’m willing to confess that my brain-silicon-transplant thought experiment rests on untested intuitions, just as the original thought experiment did. For instance, it may never be practically possible to capture every salient cellular detail of our brains so that they could be exactly simulated within a computer. But at least Nao helps to rebalance things, by suggesting that any appeal to the mysterious, special, noncomputational nature of our minds rests on naive assumptions about how the brain works.
In the end, the most famous attack on the idea that meaning can be found in computer programs, Searle’s Chinese Room, doesn’t seem to be all that convincing, mainly because it implied that the programming required for language communication was grossly simpler than it actually is. Instead, we must at least be open to the idea that our minds really are our brains, which in turn are acting as computers running a certain (parallel) kind of program. Consequently, we could, in principle, be converted into silicon computers at some point, where real meaning and awareness would persist. There is certainly no convincing argument against this view, and I happen to believe it is extremely likely that silicon computers could, in the future, be just as conscious as humans.
The fact of the matter is that books and the big screen deluge us with plausible depictions of conscious robots, and these robots are believable to us because they are shown to have stupendously complex artificial brains. When we watch Data from Star Trek, or many of the characters in Blade Runner, for instance, we have absolutely no trouble entertaining the possibility that artificially created beings could be conscious in very similar ways to us. Indeed, almost all of these characters live inside worlds where a common theme is the unjust lack of rights they receive as machines. As characters, these androids are at least as aware as the humans enforcing their prejudiced rules.