From time to time, of course, animals do produce new behaviors. Japanese macaques wash potatoes (at least, some do). Foxes and coyotes kill pets in new subdivisions. Bears rifle garbage cans. Once upon a time in the West, Yvonne and I drove into an isolated campsite on a hilltop. It was high summer, so we were surprised to find it deserted. Then we saw the crushed and twisted garbage cans; some had been chained to steel stakes, but that hadn’t stopped the grizzlies. We were out of there in seconds.
All these were new things, but all were accidental. For instance, bears can smell food a long way off, and American campers are wasteful. After the first few lucky finds, bears began systematically to target campgrounds. It’s quite likely that stone tools started in a similar way. Protohumans bashed rocks, maybe in the course of nut-cracking (chimps do this regularly on the Ivory Coast), maybe as part of a dominance display (chimps do this with tree branches), maybe just for fun. Some of the rocks split and got sharp edges. Some smart ancestor realized you could crack bones with them, even cut stuff with the flakes from the core. So splitting and shaping rocks got to be a tradition, and for two million years paleopeople went on doing it. They got better at it, of course. Some of the pieces had slightly different shapes and sizes and gradually got modified, presumably for different purposes (although experts are often far from agreeing on what those purposes were). But there was a basic form common to all of them—they were all single, stand-alone pieces, longer than they were broad, with one end more (or less) pointed and one end more (or less) rounded. They were, fundamentally, variations on a single theme.
Now look at an Aterian point. Aterian points were made in North Africa starting perhaps as long as ninety thousand years ago. They’re often described as arrowheads, but nowadays most people think it unlikely they were used on arrows, at least not the early ones. Probably at first they were spear points, then points on darts thrown with the aid of an atlatl, or spear-thrower, and only later became arrowheads.
At first sight, an Aterian point may look to you like no more than a downsized version of the old pear-shaped tools. But then you realize it’s not a stand-alone piece. It’s useless by itself. It has to be hafted onto a shaft of some kind, and that’s new. You have to use maybe as many as four different types of material: stone for the point, wood for the shaft, mastic (a sticky resin from a bush that grows round the Mediterranean), and maybe gut or vine to bind point to shaft. You not only have to make the things you need; you can’t make them unless you’ve figured out in advance how they could fit together and work together. You can’t do that for the first time by trial and error, the way previous tools were first made. You have to work it out in your head—imagine it all, before you can start. To do that, you have to have concepts of the things you’re working with and what you’re going to do with them.
Look at the point more closely. Look at its tang. The tang is the part that fits into the shaft. Above it the point flares out with two flanges, almost barbs, before narrowing at the tip. Once the point pierces skin, that broadening will hold it there so the prey animal can’t shake it loose. But the real function of the tang is to provide a firm but narrow base that fits into a socket bored or split into the end of the shaft, packed with mastic, and then bound in place for added security. The whole system, even before the atlatl, required forethought and planning. Forethought and planning in turn demand that you work not with physical objects but with your ideas of those objects—concepts you can move around in your mind to make new patterns and create marvelous and unprecedented things.
Now note precisely where the divide, the discontinuity, the boundary between human and nonhuman falls. Not between human ancestors and apes. It falls between our own species, on the one hand, and on the other, all other species that live or have ever lived, including our own immediate ancestors. Only our own species, it seems, has ever produced artifacts that needed forethought; therefore only our own species has ever practiced offline thinking.
CONCEPTS VERSUS CATEGORIES
Critical here is the difference between a concept and a category. These words are often used very loosely, even treated as if they were interchangeable. In a moment I’m going to try to define them in neurological terms, because that’s how we ought by now to be starting to define all those old-fashioned notions about things in the mind that we’ve been tossing around recklessly since before Plato.
For now, let’s get a loose grip on them by merely saying that a concept is something you can “think about” and “think with,” whereas with categories, all you can do is say whether something belongs in them or not. That’s the difference. The similarity is that both terms refer to some kind of class into which things can be sorted—leopards, or tables, or grandmothers, anything at all. Because of that similarity, categories and concepts are sometimes treated as different names for the same thing. But if we don’t distinguish between them, we’ll never understand why humans differ from nonhumans.
Now look at all this through the lens of evolution. How can a brain best contribute to an animal’s fitness? By telling it what’s out there—what dangers it faces, what opportunities await its grasp. If the brain knows what’s out there, it can tell its owner how to react. It’s an X—eat it! It’s a Y—up the nearest tree! It’s a Z—freeze, and hope it goes away! Most of the time, of course, it’s a W—no problem, go on doing whatever you’re doing. But the brain’s owner has to know. So it comes to divide things into classes—categories—that differ recognizably from one another (if it’s an X, no way it can be a Y or a Z).
Let X be a squash and Y a leopard. Does the animal have two neat little packages in its head, one labeled “squash,” the other “leopard”? Certainly not at first. In the early stages of brain evolution, the brain must have first picked up particular salient details: a kind of rapid movement, an unusual combination of colors. As senses sharpened and the ability grew to distinguish between things, even quite similar things, such details must have multiplied. Now a glimpse of a spotted coat through foliage, a distinctive cough, a particular swirl of movement in long grass, a pungent odor, the sound of paws landing on leaves as their owner sprang from a low branch—any of these or any combination of these could trigger the appropriate set of responses to an imminent leopard attack.
To be more precise, neurons in different regions of the brain, regions that dealt separately with sounds and sights and smells, would change their rate of firing in response to the incoming data, which in turn would trigger other neurons whose job it is to determine what the sensory neurons are talking about and what to do about it. And these decision-making units, if sufficiently excited, would then send signals to neurons in the motor regions that control the animal’s movement, indicating whatever response seemed most appropriate—freeze, flee, fight, climb a tree, or whatever.
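If a crude sketch helps make that cascade concrete, here is one in Python. Everything in it is invented for illustration (the cue names, the weights, the threshold), and it is a cartoon of the logic just described, not a model of real neurons: evidence from several sensory channels accumulates in a decision stage, and once it crosses a threshold a motor response is selected.

```python
# A cartoon of the sensory-to-decision-to-motor cascade described above.
# The cues, weights, and threshold are all made up for illustration.

SENSORY_WEIGHTS = {
    "spotted_coat": 0.6,    # glimpse of a spotted coat through foliage
    "low_cough": 0.5,       # the distinctive cough
    "swirl_in_grass": 0.4,  # movement in long grass
    "pungent_odor": 0.3,
}

ALARM_THRESHOLD = 0.8  # arbitrary: how much excitation the decision stage needs


def decide(observed_cues):
    """Sum the excitation contributed by each observed cue and pick a response."""
    excitation = sum(SENSORY_WEIGHTS.get(cue, 0.0) for cue in observed_cues)
    if excitation >= ALARM_THRESHOLD:
        return "climb_the_nearest_tree"
    return "carry_on_as_before"


print(decide(["spotted_coat", "low_cough"]))  # enough evidence: climb_the_nearest_tree
print(decide(["pungent_odor"]))               # not enough: carry_on_as_before
```

Notice that nothing in the sketch has to stand for a leopard as such; the cues feed straight through to a response.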
Where’s the concept of “leopard”?
You might say, “In the neurons that identify all the sounds, sights, smells, etc. as coming from a leopard.” But do they, or is that just how we’d naturally think of it, since we’re human and have a typically human kind of concept? Might it not equally be the case that the decision neurons are merely identifying “things on sensing which you’d better run up a tree”? And would there need to be any distinction between leopards and anything else that might make you want to run up a tree?
Let’s be generous and allow that some neurons in the brain respond selectively to phenomena produced by leopards and only by leopards. Would they then represent a real equivalent of our concept of “leopard”—a concept that, if we choose, will link with every feature of leopards, their spots, their location, their hunting patterns, and on and on? Or would they represent only an identification—“It’s a leopard!”?
Animals don’t have to think about leopards once a particular leopard has gone away. They don’t have to worry about what might happen when they next meet one, or devise elaborate plans for evading leopards. Remember how the vervet leopard warning means “leopard” only when there’s a leopard there. Well, what I’m claiming here is that their communication directly reflects what goes on in their minds. It’s not what Hurford and many other writers seem to think—that they have a rich mental life but have never found out how to communicate about that life. To the contrary, they can only communicate about the here and now because their minds can only operate in the here and now. They can’t think, as we can, about leopards in the past or in the future or just in our own imagination (“I wonder if I could tame a leopard and have it as a pet?”) because they don’t have any sufficiently abstract mental units with which they could do so.
None of this means that nonhumans don’t have a rich knowledge base, layer upon layer of memories, two or three different kinds of memory if it comes to that. If they didn’t have such a stock to draw on, they wouldn’t be able to function as well as they do. And nothing I have said should suggest that they don’t have full access to these memories. Any of this knowledge base can be tapped, any memories triggered, by events in the world. A memory, once triggered, may trigger another, if that’s relevant to the task at hand. What the animal can’t do is think constructively about leopards when there’s no real-life leopard around. That’s because there’s no neuron or cohort of neurons that works as a pure symbol for “leopard.”
In fact, the difference between human and nonhuman memory resembles in one respect the difference between RAM (random access memory) and CAM (content-addressable memory) in computers. With RAM, the user (read here the environment) supplies a memory address and gets back just the data stored at that address; with CAM, the user supplies a fragment of the content itself, and the memory retrieves relevant data from wherever it happens to be stored. (As you might expect, CAM is more complex and more expensive than RAM.)
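To make the computing analogy concrete, here is a toy sketch in Python (my own illustration, with invented addresses and memories, not a description of real hardware): a lookup by exact key stands in for RAM-style retrieval, and a search by content fragment stands in for CAM-style retrieval.

```python
# Toy contrast between address-based and content-based retrieval.
# The stored "memories" and their addresses are invented for illustration.

memory_store = {
    "addr_1": "leopard: spotted coat glimpsed through foliage",
    "addr_2": "leopard: low cough heard from the long grass",
    "addr_3": "figs: ripe fruit on the lower branches",
}


def ram_lookup(address):
    """RAM-style: supply an exact address, get back only what is filed there."""
    return memory_store.get(address)


def cam_lookup(fragment):
    """CAM-style: supply a piece of the content itself, get back every matching
    item, wherever it happens to be stored."""
    return [item for item in memory_store.values() if fragment in item]


print(ram_lookup("addr_2"))   # one item, and only if you already have its address
print(cam_lookup("leopard"))  # everything leopard-related, pulled from anywhere
```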
So what exactly is there, in a leopard-identifying animal’s brain?
I think there isn’t anything in its brain that relates specifically to leopards in the way that either a thought or a word in the human brain does. All over the brain there are cohorts of neurons that respond directly to all the sights and sounds and smells that come in from the world by changing the rate at which they send out electrical impulses. Among all of these cohorts are neurons responding to sights and sounds and smells that might be made by leopards. When “enough” of these neurons (“enough” being still a black box) are triggered by the appearance of a leopard, the animal goes into high alert, may issue an alarm call, may take appropriate action. But the neurons activated on any given occasion are just one subset of the complete set of potentially leopard-responding neurons. The next appearance of a leopard may trigger a quite different subset, though the result (in terms of the animal’s reactions) may be identical. The bottom line: nowhere is there any fixed, determined set of linked neurons that represents “leopard” and nothing else.
But once you have a word or sign for “leopard,” there has to be such a set—we’ll see why in a moment. There has to be a fixed, permanent set of neurons that represent the sounds or gestures needed to produce the word or sign in question. But for that word or sign to have meaning, this fixed set has to link to all the different representations of leopard-bits on which the original “leopard” category was based.
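One way to caricature the contrast drawn in the last two paragraphs is the sketch below (Python again, with every name and number invented; it illustrates the argument, not any real neural mechanism). In the category case, any sufficiently large subset of leopard-related detectors produces the same reaction and nothing persists afterward; in the word-anchored case, a single permanent node links back to the whole pool and can be activated even when no detector is firing.

```python
# Category vs. word-anchored concept, as a caricature of the argument above.
# Detector names, the threshold, and the class are invented for illustration.

LEOPARD_DETECTORS = {"spots", "cough", "swirl", "odor", "paw_thud"}


def category_response(active_detectors, needed=2):
    """React if enough leopard-related detectors fire; no symbol is involved,
    and different subsets can produce the identical reaction."""
    hits = active_detectors & LEOPARD_DETECTORS
    return "alarm" if len(hits) >= needed else "ignore"


class Concept:
    """A permanent node: it links back to the whole detector pool, but it can
    also be activated offline, with no detector firing at all."""

    def __init__(self, word, linked_detectors):
        self.word = word
        self.linked_detectors = linked_detectors
        self.active = False

    def activate(self):
        self.active = True  # e.g. triggered by another concept, not by a stimulus


# Category: different subsets, same reaction, nothing left behind afterward.
print(category_response({"spots", "cough"}))  # -> alarm
print(category_response({"odor", "swirl"}))   # -> alarm

# Concept: the word node can be switched on by a thought alone.
leopard = Concept("leopard", LEOPARD_DETECTORS)
leopard.activate()
print(leopard.word, leopard.active)           # -> leopard True
```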
In other words, I’m arguing that what started human-type concepts—things that have a permanent residence in the brain, instead of coming and going as and when they are stimulated—was the emergence of words.
Careful here. I’m not saying that “concepts are words,” or “you have to have a word to have a concept.” Least of all am I saying, “You can’t think without words.” Nonhumans do it all the time. They think online, run all kinds of computations on what they’re doing. Imagine an eagle stooping on a running rabbit. While the eagle is in midfall, the rabbit changes course. The eagle has to recompute its trajectory in milliseconds. It may not be consciously aware of what it’s doing, but if that isn’t thinking, what is? You or I couldn’t do it, that’s for sure.
We too can think online; we can even think online and offline at the same time. Working on an assembly line, driving a familiar route, we’re on automatic pilot; we’re thinking offline about things in our personal lives that don’t have any connection with what our hands and feet are up to. The computations of time and speed and relative distance that we run while driving through traffic may be quite unconscious, though they don’t have to be. The difference between online and offline thinking isn’t unconscious versus conscious. The difference is that in online thinking, what’s being thought about is right there in front of you, while in offline thinking it isn’t.
Online thinking can be conscious or unconscious; when you’re assembling a new piece of furniture from a list of printed instructions, it had better be conscious. But offline thinking has to be conscious, because by definition the things you’re thinking about can’t be there. Only the concepts can be there.
Maybe offline thinking is consciousness. But let’s not get into that; we’ve got enough on our plate already. Let’s get back to words. Words are reassuringly concrete, at least relatively so, compared with concepts and consciousness and suchlike, which tend to make you feel dizzy if you focus on them too long.
So all I’m saying is, without words we’d never have gotten into having concepts. Words are simply permanent anchors that most concepts have—a means of pulling together all the sights and sounds and smells, all the varied kinds of knowledge we have about what the concept refers to. But once the brain found the trick of making concepts, it no longer needed a word as the base for a new concept. It just needed some place where all the knowledge could come together and link with other concepts.
Once we had proper words (and I’m jumping the gun here; I still have to tell you how iconic “mammoth” sounds got to be words—I’ll do that in the next chapter), here’s what happened. The word had to have some kind of mental representation. There had to be a bunch of neurons somewhere that, when they fired, would start the motor sequence that would cause the vocal organs to utter “mammoth,” or whatever. And that bunch of neurons had to be permanently accessible, had to be willing and able to fire whenever they were asked to do so.
SUMMING UP
I’d be the last person to pretend to you that the issues we’ve discussed are cut and dried, or that the answers to the questions I’ve raised here are plain and straightforward. In order to make the points I needed to make, I’ve had to simplify many complex things. In order to save this chapter from bogging down in a morass of detail, I’ve had to downplay or ignore topics that many experts in the field concerned will regard as of paramount importance. I still think I took the right course—the only course, if we’re to see the woods and not just the trees, if we’re ever to get a grip on what makes us so different from other species.
The only test of a story is its explanatory power. The best story is the story that explains the most things, that passes the greatest number of tests for what an explanation should accomplish. Before we look in more detail at how language and thought coevolved, I want to summarize where we’re at and give a few compass bearings for where we still have to go.
The main point to be borne in mind is that between humans and nonhumans there are two discontinuities, not just one. We have language and no other species does, and we have seemingly limitless creativity and no other species does. Language and creativity are both, for all practical purposes, infinite; is this mere coincidence? For two independent discontinuities of such size to exist in a single species would be altogether too bizarre in evolutionary terms. So at the very least it’s worth exploring the possibility that the two discontinuities spring from the same source.
Language involves the mind and creativity involves the mind—the mind being no more than the brain at work. So the likeliest cause of such a double discontinuity would seem to lie in a difference between the workings of human and nonhuman brains. One possible difference, one that would seem to give rise to all the phenomena we’ve been looking at, is that nonhumans have categories and humans have concepts.
Categories sort things into classes but can only be evoked by physical evidence that members of those classes are present.
Concepts sort things into classes but can in addition be evoked by other concepts even in the absence of members of any of the classes concerned. Hence they become available for offline thinking.
All the things nonhumans do that make it look as if they had concepts like ours can be explained by feats of memory, specialized and dedicated mechanisms for solving problems posed by niches, stereotypical strategies responding to different threats, and/or other causes or combinations of causes that at no point entail the possession of concepts.
Eventually, language and human cognition did coevolve. But first, the first words had to trigger the first concepts and the brain had to provide those concepts with permanent neural addresses. Only then could the creation of concepts enable the mind to roam freely over past and future, the real and the imaginary, just as we can do nowadays in our talking and writing. In other words, before typically human ways of thinking could grow, language itself had to grow. And in the next chapter, we’ll see how.
11
AN ACORN GROWS TO A SAPLING
THE TRIPLE UNCOUPLING
At the end of chapter 8, I asked how such a small change in the way protohumans communicated—the tiny handful of signals required by recruitment—could have developed into anything as complex as language is today.
I can answer that question in just four words.
With the greatest difficulty.
If you believe that animals have minds with concepts just like ours, it should have been easy. Most people assume, as I did before I really thought about it, that once you realized what a linguistic symbol was, everything would be simple and straightforward. As soon as some kind of protolanguage got started, it would take off. All that would be involved was slapping linguistic labels on an array of concepts that were sitting there waiting for them.