by Daniel Bor
What happens when the chimps, bonobos, and monkeys try this task? Much of the time, even with extra, guided training, they only reach the first developmental stage that the children reach. A minority of the bonobos can at least reach the intermediate, nonhierarchical stage of stacking all three cups correctly, like the human sixteen-month-olds. But virtually none of the animals can grasp the idea of combining two objects together and then moving them, even though the experimenter had just that moment shown them exactly this.
Therefore, young children can outperform adult chimps, bonobos, and monkeys on tasks requiring that they chunk in this hierarchical way. Although these other animals may well be highly conscious, the exceptional richness of human awareness may critically be reflected in our superior ability to find and combine structured information.
Of course, stacking cups, while surprisingly complex, is nevertheless one of the simplest tasks human children perform. As children grow up, the toys they play with rely on an increasing number of levels of meaning as well as more sophisticated relationships between items. And each stage of human play reflects the widening gap in cognition and consciousness between children and primates of the same age.
I described in Chapter 5 how there is nothing exceptional in the quantity of items humans can place in consciousness: Humans, like a wide variety of other species, are limited to only three or four working memory objects. However, human consciousness is so rich, so powerful, because of the extent to which we can manipulate and combine this handful of online items, especially in hierarchical ways. The above experiments demonstrate how this seemingly trivial difference between humans and other species begins to explode conceptually after only a couple of years of life.
INFANT AWARENESS
Does this mean that human infants aren’t conscious until they are around twenty months old and are able to combine objects and actions together hierarchically? Almost certainly not. This milestone merely signifies that a critical stage in a growing consciousness has been reached—a stage where experiences will be far more varied and complex, and where learning can skyrocket.
When, then, do the first seeds of infant awareness start reaching up from the soft soil? Does this occur before birth? A fetus can be remarkably active, kicking and punching away on a regular basis, from surprisingly early in pregnancy—usually by about seven or eight weeks, although the mother won’t feel these movements until a few months later, when the fetus has sufficiently grown. Toward the end of pregnancy, the fetus can also be highly responsive to the outside world, either via pressure on the uterus or muffled sounds filtering in from outside. Does all this signify consciousness? This is unlikely, because both the mother’s placenta and the fetus itself work actively and in concert to keep the fetus under safe sedation while inside the uterus. Effectively, the fetus lives its prebirth existence in a kind of dream state—though with few stimuli so far absorbed, such dreams would be unlike ours, with little, if any, detail. Instead, the fetus only really wakes up in the sudden, shocking moment of birth.
Is this the moment of first consciousness? This is where the behavioral approach fails us, as there are few clues early on that clearly demonstrate awareness. My personal intuition, from watching my own baby daughter develop, is that you can’t help assuming that there is a strong sense of awareness from soon after birth. If consciousness cares about novelty and unexpected events, then my daughter’s profound surprise for much of her first two months whenever she developed the strange sensation of hiccups is one intriguing piece of evidence in support of early consciousness. Then, just shy of her third month, she started laughing at my silly antics. Admittedly, with little science to back this up, I felt that this humorous reaction was the signature sound of conscious surprise, and it left me in little doubt that my daughter was indeed now aware of the world.
Still, another approach that ignores behavior entirely might provide more definitive answers to such questions where they otherwise cannot be found.
MEASURING CONSCIOUSNESS IN ANIMAL BRAINS
Assessing consciousness in other beings via these roundabout behavioral measures is a fascinating thing to do, but there are problems with this approach. The first, which I have already mentioned, is that it is difficult to interpret a positive result. Because the animal cannot tell you that it is conscious, any pass in a test has to be taken with caution as to its implications for the exact nature of the animal’s inner mental world. This is even more clearly true in artificial intelligence, where a robot could be programmed to pass any of the tests I mentioned above, from the mirror-recognition test to the gambling tasks. This signifies very little. The robot would probably fail any other task of awareness, passing only the one it was programmed for, and of course a few lines of code do not equal consciousness.
The second problem is that we cannot rely on a negative result in these tests either. In the novel The Girl with the Dragon Tattoo by Stieg Larsson, the title character, Lisbeth Salander, is quite a wild child. The teachers and authorities investigate her unruly behavior by subjecting her to every psychological and educational test in the book, but her response is to refuse even to lift up her pencil to write her name. Because she effectively fails each of these tests, the authorities assume with surprising dogmatism that she must therefore be mentally retarded, and she is consequently officially classed as such well into adulthood. The truth, it increasingly transpires, is that she is in fact fiercely intelligent, and she is quite capable, when the need arises, of twisting these prominent authority figures around her highly independent finger. What the authorities should have understood when testing this girl was that if someone fails a test, a lack of the required ability is only one of a range of possible explanations.
Similarly, if an animal is not interested in playing along, you have no way of knowing if it is capable of passing the test. Maybe it could easily pass it, but stubbornly refuses to try. In other words, as with so many tests in psychology, a negative result simply cannot be interpreted as an inability of the animal to perform a given skill. Another problem with the behavioral approach is that there is no clear way to gauge how conscious an animal is. Only a few very distinct stages are explored, and there is little scope for a continuum of conscious level, which is probably a more appealing idea than the uglier assumption that you either have consciousness or you don’t.
There is a way to sidestep these traps, and that is by ignoring behavior and merely investigating the structure and function of a brain, or indeed any computational object, for telltale signs of awareness. The first, crudest attempt is simply to rate an animal according to its brain size. If we use such an index as a rough estimate of consciousness—perhaps by virtue of the provisional argument that more neurons mean that more information can be processed in a given brain—then humans come near the top of the table, but are by no means the leaders of the pack. Our brains weigh about 1.3 kilograms, somewhat less than the bottlenose dolphin’s, which weighs in at 1.8 kilograms. The African elephant has a brain that weighs around 6.5 kilograms, five times that of the human. Of all the animals on land or sea, the sperm whale wins comfortably, with a brain that tops 8 kilograms. So if it’s true that brain size alone reflects consciousness, then an intriguing thought is that dolphins, elephants, and some whales have considerably more of it than us.
But in some ways it’s not surprising that a sperm whale has a brain six times the size of our own. After all, sperm whales can weigh nearly a thousand times what we do. Much of that extra brain mass is probably needed to move a body that’s 20 meters long, as well as to keep track of all the other internal states that need to be managed rather more carefully when an animal is larger than most buses. Because of this, most scientists believe that a better comparison to make is the size of a brain compared to the animal’s body. The logic behind this is that if a brain is far larger than you’d expect from the animal’s body, then all those extra neurons must be doing something over and above the standard tasks of making the animal move, regulating its states, and so on—and very probably the extra brain matter relates to more complex processing, including consciousness.
Although calculating this brain-to-body ratio is rather more complex than it at first sounds,28 humans come out on top in the entire animal kingdom, with a considerably larger brain than you’d expect for our size and body type. Dolphins aren’t too far behind, followed by chimps, bonobos, orangutans, and gorillas. It’s broadly assumed that an animal’s position on this line is a reasonable reflection of its ability to learn, but we only have circumstantial evidence that our large brains really do reflect our greater consciousness. We also have no idea how consciousness scales with brains—the threshold for consciousness may be a hair’s breadth under the human brain-to-body ratio, or a billionth of it—if even such a threshold exists. Therefore, this biological comparison can only be seen as a suggestive hint of how conscious an animal is.
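Part of what makes the calculation complex is that brain mass does not scale linearly with body mass. One common way to formalize the comparison, not spelled out in the text itself, is the encephalization quotient (EQ): observed brain mass divided by the mass expected for a typical mammal of that body size. The sketch below assumes Jerison's classic mammalian baseline (expected brain mass of roughly 0.12 times body mass to the two-thirds power, in grams); the species masses are round, illustrative figures rather than precise data.

```python
def expected_brain_mass_g(body_mass_g: float) -> float:
    # Jerison's allometric baseline for mammals: E = 0.12 * P^(2/3)
    return 0.12 * body_mass_g ** (2 / 3)

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    # EQ > 1: a larger brain than expected for the body size; EQ < 1: smaller
    return brain_g / expected_brain_mass_g(body_g)

# Round, illustrative masses in grams (not precise measurements)
human = encephalization_quotient(1_300, 65_000)            # roughly 7
dolphin = encephalization_quotient(1_800, 200_000)         # roughly 4
sperm_whale = encephalization_quotient(8_000, 40_000_000)  # below 1
```

On this measure the sperm whale's enormous brain falls below the mammalian baseline once its forty-tonne body is accounted for, while the human brain sits several times above it, matching the ranking described above.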
CHAUVINISTIC ANATOMICAL BOOTSTRAPPING
A more promising approach is a kind of bootstrap method, where we learn from humans what parts and processes of the brain are important for consciousness and we then assess the level of similarity between these key features in humans and other animals. The thalamus and prefrontal parietal network are crucial for consciousness in humans. So to what extent do other animals share these structures with us? All vertebrates have a thalamus in some form, but not all have any kind of cortex resembling the prefrontal parietal network. From this line of evidence alone, we’d conclude that our great ape cousins, with a prefrontal parietal network not dissimilar to the human model, have the most similar levels of consciousness to us. Other primates, such as monkeys, have prefrontal and parietal structures that we can broadly match with our own, but the anatomical stretch suggests that their conscious capacity is diminished compared to ours. Most mammals at least have a cortex—and some capacity for consciousness, perhaps—while nonmammals, with little hint of a prefrontal parietal network, may not have any consciousness at all.
But this approach, while at least adding more clues to the collection, feels rather circumstantial. It also discounts the possibility that consciousness could arise in animals with a very different brain to ours.
We tend to think of all those animals that never left the oceans as far more mentally simplistic than us, almost certainly not conscious—which is part of the reason why many people still consider themselves vegetarian even though they eat fish. But the octopus’s cognitive skills, if fully known, would raise doubts in many who hold such assumptions. The octopus, although an invertebrate—with no thalamus or cortex to speak of—behaves in ways that utterly belie its primitive label. It has around 500 million neurons, not too far from the numbers in a cat. But the octopus brain is decidedly unusual, with an exceptionally parallel architecture—almost always a positive quality when you are talking about brains. The majority of octopus neurons are to be found not in its brain, but in its arms. In effect, if you include the neuronal bundles in its limbs, the octopus has nine semi-independent brains, making it unique in the animal kingdom. The octopus is also a genius among ocean creatures. It has highly developed memory and attentional systems. In nature, this allows these invertebrates to take on a wide range of shapes to mimic other animals, rocks, or even plants. In the lab, octopuses can distinguish shapes and colors, navigate through a maze, open a jar with a screw-on lid, and even learn by observing the behavior of another octopus—an ability previously thought to exist only in highly social animals.
David Edelman, who studies octopus cognition with Graziano Fiorito, has spoken of his uncanny experiences upon entering the octopus room in the grand pillared basement of their palatial Naples zoological department. All of the octopuses immediately press their faces to the sides of their tanks and carefully, continuously track the movements of this new intruder. Such sustained attention is normally only found in obviously intelligent animals. If octopuses are conscious of their world, then we would simply never realize it from this comparison of brain anatomy, as their brains are utterly unlike human or even mammalian brains.
QUANTIFYING CONSCIOUSNESS
While all these approaches in concert help inform the debate about animal consciousness, one theory stands out in its promise of a definitive solution. Giulio Tononi’s information integration theory is a well-regarded, modern theory of consciousness. It promises a single number for conscious level, calculated from the number of neurons in a brain, how they are wired together, and how they interact. In principle, this could show, to pluck numbers out of the air, that fully awake humans have a consciousness of 100 units, coma patients 2 units, chimps 50, rats 10, and so on.
A clear consequence of this theory is that virtually every animal will have some value for its consciousness level. A honeybee, for instance, with nearly a million neurons, certainly will. Even the simple nematode worm, C. elegans, with its 302 neurons, will have a value for its level of consciousness, although this figure will admittedly be minuscule. It’s even conceivable that a colony of ants would, under this system, be collectively classed as conscious. Some people are uncomfortable with the notion that such lowly creatures could have even a minimal level of consciousness, let alone a group of animals. Although more work needs to be carried out to validate this theory, such skeptical intuitions may turn out to be wrong. It may well be that any kind of brain, however small or simple, will generate some level of consciousness. The fruit fly, for instance, shows signs of a rudimentary attentional system, which is certainly one of the prime mental components of consciousness.
Tononi’s information integration theory is also compatible with the notion that computers or robots will at some point have consciousness—and we could in principle use the mathematics of the model to rate the level of consciousness of some artificial being according to its network equivalent of a brain.
However, this theory—and indeed all current theories linking consciousness with joined-up information in a network—rules out consciousness in bacteria and plants, despite their rudimentary computational processes. There simply is no information network to speak of, nor, returning to my main thesis, is there any capacity for the creature, in the moment, to combine lower-level information to form a more meaningful chunk.
In practice, things aren’t so simple. As the theory stands in its current form, the number of computations required to calculate this consciousness number scales up ferociously with the number of nodes, or neurons, so that even with a simplistic simulation of the humble C. elegans, with its 302 neurons, it would take 5 × 10⁷⁹ years on a standard PC to calculate its level of consciousness—by which point the universe may no longer exist! Whatever the other strengths of this theory, unfortunately it cannot practically be used to measure the conscious levels of any animal, and certainly not of humans.
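To see where this ferocious scaling comes from: the exact cost depends on the version of the theory, but even its cheapest ingredient, examining every way of cutting the network into two parts to see how much information the cut destroys, grows exponentially with the number of nodes. The count below is a standard combinatorial fact; it is an illustration of the scaling, not Tononi's actual algorithm, and it ignores the far costlier calculation required for each individual partition.

```python
def bipartitions(n: int) -> int:
    # Number of ways to split n nodes into two non-empty groups:
    # each node goes in group A or B (2**n assignments), halve for
    # the A/B symmetry, then drop the one case where a group is empty.
    return 2 ** (n - 1) - 1

for n in (10, 50, 302):
    print(f"{n} nodes -> {bipartitions(n):.3e} bipartitions")
# For the 302 neurons of C. elegans, the count already exceeds 10**90,
# before any information measure is computed for even one partition.
```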
However, there are researchers, including my current lab colleagues Adam Barrett and Anil Seth, who are working hard to adapt the theory to make it practically computable for large-scale systems like the human brain. So there may well be effective ways of calculating the level of consciousness of any being within the next decade, at least based on Tononi’s theory. All that would be required would be a mathematical algorithm applied to an approximation of the number of neurons (or artificial nodes) and the connections between them in a given brain—data that are already available for many species.
Rather than waiting for the mathematicians to adapt this measure of consciousness so that the calculations don’t take the age of the universe, Giulio Tononi, along with his colleague Marcello Massimini, has already been developing an intriguing method by which to make a practical rough-and-ready approximation. The experiment uses EEG to record brain waves as well as a transcranial magnetic stimulation (TMS) machine. TMS involves a figure-of-eight device about the size of a small book that is placed on the scalp. This machine is in fact a powerful electromagnet. The magnet is turned on for a fraction of a second, which causes the cortical neurons under the scalp, at the center of the figure-of-eight coil, to fire. All that the subject feels (I’ve experienced TMS myself multiple times) is a kind of tap on the head. In this particular experiment, the only task the volunteers had to perform was to nod off while the TMS pulses were delivered every couple of seconds. If the subjects were awake, the TMS would cause a spike of brain waves that would spread almost all over the brain during the next few hundred milliseconds or so. When the subjects drifted off to a dreamless sleep, although the initial spike of activity was greater, it would die down faster, and would remain only at the local site that was stimulated by the TMS machine. This study shows firsthand how, in wakefulness, information can flow freely across our entire cortical surface, but when we’re asleep, although the neurons are just as capable of firing, they only weakly transmit their information to their nearest neighbors. Massimini and Tononi see this as evidence that our ability to combine information throughout much of the cortex is high when we’re awake, but low when asleep (these data are also applicable to other consciousness theories, however).
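The contrast the experiment relies on can be caricatured in code. The toy measure below is my own illustration, not the researchers' actual analysis: given a simulated channels-by-time array of TMS-evoked responses, it simply counts the fraction of channels that ever show a sizable deflection. That fraction is high for a wake-like response that propagates across the cortex, and low for a sleep-like response that stays strong but local. All of the signal parameters are made up for the sake of the demonstration.

```python
import numpy as np

def spread(evoked: np.ndarray, threshold: float = 1.0) -> float:
    # Fraction of channels whose response ever exceeds the threshold
    return float((np.abs(evoked).max(axis=1) > threshold).mean())

rng = np.random.default_rng(0)
n_channels, n_samples = 60, 300

# Wake-like: the pulse propagates, so nearly every channel deflects
wake = rng.normal(0.0, 0.2, (n_channels, n_samples))
wake[:, 50:150] += 2.0

# Sleep-like: a stronger but purely local response in a few channels
sleep = rng.normal(0.0, 0.2, (n_channels, n_samples))
sleep[:5, 50:80] += 3.0

assert spread(wake) > spread(sleep)
```

The same intuition underlies the real analysis: it is not the strength of the initial response that distinguishes wakefulness from dreamless sleep, but how far across the cortex that response travels.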