So while, in principle, australopithecines might have engaged in endurance hunting, I think it’s doubtful anyone did until they had some kind of weapon, a spear maybe, that would put them on more equal terms with other carnivores. So far as we know, effective spears go back maybe half a million years, and we’re talking now about two and a half million. A pointed stick, maybe? Come on, this is not the Monty Python self-defense-class skit. This is real life, late Pliocene style.
That left only one alternative: scavenging.
But there were problems with scavenging too.
We tend to think in terms of two classes of carnivore, hunters and scavengers—lions hunt, hyenas scavenge. Wrong: hyenas will pack-hunt, while most of the big cats scavenge when they get the chance. Predators lack any sense of sportsmanship—if they can eat without having to work for it, they’ll do just that. Hunting is energy-expensive; it’s what you do if there’s no reasonably fresh meat already lying around.
So there’s a natural hierarchy of scavengers. The big cats come at the top, naturally, above hyenas, wild dogs, and the like (though if there are only one or two of the big scavengers and plenty of smaller ones, the tables can be turned). Below these come the vultures, who will take care of pretty well anything the four-footed scavengers leave. Where in this hierarchy could the australopithecines enter?
Where do you enter any new enterprise? At the bottom, of course.
But what was left after the vultures had done their stuff? Well, hardly anything but the bones.
If ever a species needed to open up a new niche, Australopithecus garhi was it. And just such a niche was right there waiting for it—inside the bones, you might say. For inside the bones, inaccessible to any species without tools, was one of the richest and most nutritious foods known: bone marrow.
Smaller, more fragile bones can be, and often are, cracked and crushed by scavengers’ teeth. Big ones are too thick, too strong. But a nimble primate with a hammerstone in his hand can break into the biggest bones. What australopithecine Einstein first figured this out, we’ll never know. But sure enough, in recent years, cut marks made by primitive tools have been found on bones (and a few of the tools themselves have turned up) at sites associated with garhi but too early for those tools to have been made by garhi’s successor, Homo habilis.
This has been seen as an embarrassment by some paleontologists, for the fact that Homo habilis—“handy man”—made tools, and the belief that no one else had done so, formed the main basis for making habilis the first identified member of the human family. And to be sure, the tools of habilis—the so-called Oldowan industry—may have been more sophisticated than the garhi tools. Or so the experts tell us. You or I, picking up either one by chance, probably could not tell it from a stone that had been cracked and shaped by natural forces—that’s how primitive both lots were.
What matters, though, is that garhi and habilis faced the same challenge and (regardless of whether or not one was the ancestor of the other) dealt with it in the same way. And, looking at it from a general primate perspective, you could say this was no big deal. Chimpanzees on the Ivory Coast use unmodified (but carefully selected) stones to break open palm nuts. Homology or analogy? Who knows? Maybe the last common ancestor broke stuff open, or maybe it’s just an idea that occurs spontaneously to any animal with a big enough brain when confronted by something hard with edible stuff inside.
But from our ancestors’ perspective, breaking into bones had at least four big advantages going for it:
• Abundance: there were lots of herbivores on the savanna, so there were always plenty of bones around.
• Permanence: bones were not about to go away, like live prey; they would remain accessible long after the demise of their owner.
• Lack of competition: no other animal could utilize this particular food source, so other scavengers would be long gone when our ancestors arrived on the scene.
• High-value product: nothing on the savanna was more nutritious, ounce for ounce, than bone marrow.
So first garhi and then habilis became low-end scavengers. And, lo and behold, brains began to grow.
Primate brains are bigger, relative to overall body size, than those of other mammals, and a necessary condition for that is a rich diet. (It’s not a sufficient one: once you’ve got that bigger brain, you have to find something for it to do if it’s to earn its high-energy upkeep.) Brain size pretty much stabilized throughout australopithecine times, because an omnivorous woodland diet could barely keep level with a fruit-enriched forest diet. Bone marrow set in motion a trend that didn’t reverse until quite recently: a progressive tripling of our ancestors’ brains.
But that wasn’t what started language. A bigger brain would indeed have come in handy once language had gotten started, and language itself would have selected for bigger brains. But for all the claims that “bigger brains made us more intelligent and that’s how we got language,” I’ve never seen even one backed by any kind of explanation of exactly how these developments came about.
For language, what you needed wasn’t brains, wasn’t even intelligence. Just the right kind of niche.
MEAT, GLORIOUS MEAT!
Though bone marrow may have been rich, it wasn’t really plentiful enough. Bones themselves may have been numerous, but the actual quantity of marrow each bone contained was quite small. However, there was another food source on the savanna that, while it might not carry marrow’s nutritional punch, sometimes became available in quantities that stagger the mind.
That source was dead megafauna.
Let’s look at the how and why of it. First of all, why were big animals there at all? The answer is the size niche. At the top of every order of being, there’s a size niche, whether we’re talking of trees (sequoias), ocean dwellers (blue whales), dinosaurs (sauropods), or mammals (mammoths). The size niche exists, permanently, within any order, simply because if you’re bigger than anything else around, you’re virtually invulnerable to attack. Nothing can become indefinitely large—constraints inherent in body plans, gravity, finiteness of food supply, and doubtless other factors prevent this. But in any order some animals will get to be as big as they can; natural selection guarantees it.
I’ve mentioned the large number of predators that roamed the savannas. Until a species arrived that could make weapons, size was the only real protection against these ferocious meat-seekers. So, in the savannas of two million years ago, efficient and widespread predation by carnivores selected for ever greater size among herbivores. Indeed, there were several varieties of large herbivore: mammoths, deinotheriums and other predecessors of modern elephants, and the ancestors of rhinoceroses and hippopotami. To their size they added a further line of defense: thick, leathery hides. These animals were perhaps the only ones that were often allowed the luxury of a natural death. And, even when dead, they enjoyed another day or two of invulnerability. For their skin was so thick and tough that, while a predator’s teeth might be able to pierce it, they could not slice it or tear it open to extract the masses of meat that lay beneath.
Scavengers had to wait—pacing to and fro impatiently, or, more wisely, just lying in the long grass conserving their energy—until the action of bacteria inside the dead flesh released gases, and the gases expanded until they ruptured the dead animal’s hide. Then, and only then, could the scavengers, in their order of precedence, move in for the feast.
That left an open niche, a narrow window of opportunity for any species that could cut the hide and access the meat before natural decay made it available to everyone.
Could our ancestors have opened up that niche?
Nicholas Toth, codirector of the Stone Age Institute in Bloomington, Indiana, and one of the leading authorities on prehistoric tools, set out to answer that question. With his wife, Kathy Schick, also a codirector of the Institute, and their associate Ray Dezzani, he took flint and lava flakes identical with those produced in the course of Oldowan toolmaking, and they set about butchering an elephant that had died of natural causes.
It was a daunting task. “Initially, the sight of a twelve-thousand-pound animal carcass the size of a Winnebago can be quite intimidating—where do you start?” Schick and Toth begin their account. To move it would have required heavy-duty machinery—“You have to play the carcass as it lies.” Schick and Dezzani started cutting anyway and were “amazed . . . as a small lava flake sliced through the steel gray skin, about one inch thick, exposing enormous quantities of rich, red elephant meat.” And since “modern scavengers normally do not eat a dead elephant until it has decomposed for several days, such carcasses may have provided occasional bonanzas for Early Stone Age hominids.”
Sure, but why only “occasional”?
The reason usually given is that very few megafauna remains have been found in what are known as “catchment sites.” To understand what this means, we have to understand the difference between “catchment scavenging” and “territory scavenging.”
Before two million years ago, most prehuman scavenging was catchment scavenging. Findings of artifacts and fossil bones, both prehuman and animal, cluster around particular locations—confluences of streams, rocky outcrops—making it seem that our ancestors used these locations as temporary or even semipermanent bases and scavenged in the area that immediately surrounded them. About two million years ago, a new strategy took over. Prehumans now ranged over broad territories, and instead of taking meat to a catchment site for processing, butchered and consumed it at or near wherever they found it. We’ll see in a moment evidence that a correlated but much more significant change also took place around two million years ago.
So the fact that few remains of large animals are found at catchment sites tells us nothing about how often such animals were found and processed after the date when such sites fell out of use. Since dead megafauna could have cropped up anywhere, we would have to dig up the whole of East Africa to find out. Obviously, that’s out of the question. We can only estimate how often our ancestors might have scavenged dead megafauna by getting help from modern statistics on large-animal populations.
Right now, the African elephant is an endangered species. However, there are still around a half million of them. These occupy a range of just over two million square kilometers, which means that, on average, there’s an elephant for every four square kilometers. Before humans started slaughtering them for their tusks, we may reasonably suppose a density closer to one elephant per square kilometer. Or, in an area of, say, 150 square kilometers, 150 elephants.
An area of 150 square kilometers is a square roughly seven and a half miles on a side. To a group of human ancestors at the center of such an area, a large part of it would have been visible to the naked eye, and any part of it could have been reached on foot in a couple or three hours, without hurrying. So it doesn’t seem unreasonable to suppose that whenever a large animal died within that area, someone in the group would have spotted it pretty quickly.
Modern elephants in the wild live on average from sixty to seventy years. So within our 150 square kilometers, at least two elephants would have died every year. And that’s just elephants. We haven’t considered the ancestors of hippopotami, rhinoceroses, and any other large beasts that might have been around. “Occasional bonanzas” would thus have occurred every couple of months or so.
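Since the whole argument rests on this bit of arithmetic, here is a back-of-envelope sketch of it in Python. Every figure in it (population, range, assumed prehistoric density, lifespan, territory size) is simply one of the assumptions quoted above, not field data:

```python
# Back-of-envelope carcass arithmetic, using the figures assumed in the text.

ELEPHANTS_TODAY = 500_000     # rough modern African elephant population
RANGE_KM2 = 2_000_000         # rough modern range in square kilometers

modern_density = ELEPHANTS_TODAY / RANGE_KM2   # ~0.25 per km^2, i.e. 1 per 4 km^2
prehuman_density = 1.0        # assumed pre-ivory-trade density: 1 per km^2

TERRITORY_KM2 = 150           # a group's scavenging range
LIFESPAN_YEARS = 65           # midpoint of the sixty-to-seventy-year figure

elephants_in_territory = prehuman_density * TERRITORY_KM2    # ~150 animals
deaths_per_year = elephants_in_territory / LIFESPAN_YEARS    # ~2.3 per year

print(f"Modern density: 1 elephant per {1 / modern_density:.0f} km^2")
print(f"Elephant deaths per year in {TERRITORY_KM2} km^2: {deaths_per_year:.1f}")
print(f"Months between elephant carcasses: {12 / deaths_per_year:.1f}")
```

With these numbers an elephant carcass alone turns up roughly every five months; add the other megafauna and the “every couple of months” figure follows.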
All very well, you say, but deaths would hardly have been distributed evenly across an entire landscape. Maybe they all went and died near waterholes. And in any case, elephants don’t stay still; they roam all over the place. For all your statistics, years could roll by without a single elephant dying in any particular 150-square-kilometer range.
That’s very true. But it wouldn’t matter, once territory scavenging had taken the place of catchment scavenging. Suppose hominids took to following megafauna herds, just as human hunter-gatherers in high latitudes would later follow the seasonal migrations of caribou or reindeer. Or suppose a larger group split into smaller groups, vastly extending the scavenging range, a range that could have been extended still farther as scavengers learned to read signs—dung piles and beaten trails, or better still, the circling of distant vultures. Then, far from “occasional bonanzas,” dead megafauna could have provided our ancestors with a very substantial portion of their diet.
However, “could have” is a long way from “did.” Is there any evidence our ancestors did in fact develop and exploit this new and quite unique niche?
CUT MARKS AND OPTIMALITY
The answer is yes. There are two quite separate but mutually reinforcing lines of evidence to indicate that they did.
The first comes from sequences of cut marks on fossil bones, the second from something called optimal foraging theory. Let’s look at each in turn.
When you butcher a carcass with a sharp piece of flint or lava (or anything else for that matter), you inevitably leave cut marks on the animal’s bones. This is true even if you’re not trying to sever them. Bones just get in your way, as everyone who’s ever carved a turkey knows.
In the same way, when a predator chews up a carcass, every now and then its teeth catch on a bone. The big cats of those days had sharp teeth, and those teeth made indentations in bones that are quite different from the cut marks stone tools leave.
Sometimes both animals and human ancestors worked (at different times, presumably) on the same carcass. You can tell when that has happened because one set of marks is superimposed over another. This shows which accessed the carcass first—prehuman or nonhuman.
Up until around two million years ago, wherever such pairs of markings are found, the cut marks of tools are always uppermost. In other words, other carnivores were getting at the carcass before our ancestors had a chance at it. Those ancestors were still entry-level—at the bottom of the scavenging pyramid, breaking bones for the marrow they contained.
Around the two-million-year mark, things change. Now, with increasing frequency, the sequence of marking on bones is reversed. Now it’s the stone-tool cut marks that lie underneath, with animal bites superimposed on them. Our ancestors have somehow managed to move to the top of the scavenging pyramid. They’re getting at the meat before anyone else has a chance at it. And the most likely, perhaps the only, way they could have done this is by accessing megafauna carcasses before anything else had a chance at them—in other words, by cutting through intact hides just like Toth, Schick, and Dezzani did.
Notice the period when the cut-mark sequence changes—it’s around the time that catchment scavenging was replaced by territory scavenging. Could one change be the consequence of the other? It looks as though both were mere aspects of a larger process—the construction by our ancestors of the high-end scavenging niche.
Remember that two-million-year boundary and what lies on either side of it. Before it, there was catchment scavenging, and catchment sites with few if any bones of large animals. Naturally—just imagine slinging a mammoth leg over your shoulder and trotting off with it to home base. And in any case, if your main target was bones, you could afford to limit yourself to a relatively small home range, because bones were plentiful and didn’t wander around.
But then prehumans moved into the high-end scavenging niche. Here, conditions were quite different. Your main targets, whenever you could get them, were megafauna carcasses. These could be lying around anywhere, and you had to go wherever they were: territorial scavenging, in other words. The farther you roamed, the more carcasses you might find. And they’d be too far, in most cases, for you to lug the remains to any kind of refuge. You’d have to sit down and consume them on or relatively near the spot.
Well, you may ask, why go to the trouble? Catchment scavenging had gone on successfully for hundreds of thousands of years. Why change now, even if you had learned how to cut through mammoth hides?
The answer is optimal foraging theory.
Optimal foraging theory was originally developed by the late Robert MacArthur (then at Princeton) and Eric Pianka of the University of Texas, Austin. (In a recent but unrelated development, creationists reported Pianka to the Department of Homeland Security for allegedly claiming that 90 percent of humans should be eliminated; Pianka says he was merely warning that on a currently overcrowded planet, a mutated virus could do just that.) The theory states that any species will choose, out of the available foods, just those that yield the highest caloric intake relative to the energy expended in obtaining them. Since MacArthur and Pianka wrote the first paper on optimal foraging theory more than four decades ago, countless studies of species ranging from gulls to stream insects to white-tailed deer have, with relatively few and usually explicable exceptions, supported it.
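In its classic form (the MacArthur-Pianka “prey model”), the theory even tells you which foods to ignore: rank food types by profitability, that is, energy gained per unit of handling time, and keep adding types to the diet only as long as the next type’s profitability beats the average intake rate of the diet chosen so far. Here is a minimal sketch of that rule in Python; every number in it is invented purely for illustration:

```python
# Minimal sketch of the classic MacArthur-Pianka prey-choice rule.
# All encounter rates, energy values, and handling times are invented.

# Each food type: (name, encounters per search hour, kcal, handling hours)
foods = [
    ("megafauna carcass", 0.005, 200_000, 20.0),
    ("marrow bones",      0.5,     1_500,  1.0),
    ("tubers",            2.0,       300,  0.5),
    ("rodents",           1.0,       100,  0.3),
]

# Rank food types by profitability (kcal per hour of handling), best first.
foods.sort(key=lambda f: f[2] / f[3], reverse=True)

diet = []
gain = 0.0   # expected kcal gained per hour of searching
time = 1.0   # the search hour itself, plus expected handling time
for name, rate, kcal, handling in foods:
    # Add the next-best food only if handling it beats the current intake rate.
    if kcal / handling > gain / time:
        diet.append(name)
        gain += rate * kcal
        time += rate * handling

print("Optimal diet:", diet)
print(f"Net intake rate: {gain / time:.0f} kcal per hour")
```

With these toy numbers the rule keeps carcasses and marrow in the diet and drops tubers and rodents; make carcasses easier to find and even marrow drops out. That is the model’s signature prediction: the easier the best food is to come by, the narrower the diet.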
For modern humans, a supersized McMeal clearly represents the most calories for the least effort (evolution never having even imagined a species some of whose members would have virtually limitless access to food). For our ancestors, however, meat obtained from dead megafauna filled the bill. It was not as nutritionally rich as bone marrow, but unlike bone marrow it was available in vast quantities—in the carcasses of Winnebago-sized monsters on whose flesh you could feast for days on end. And you didn’t have to work for it or hunt it. You just had to keep your eyes open and locate it, so energy expenditure would have been low compared with the caloric yield. For even if you had to look for a long time, there were always bones, plus the odd rodent, the odd tuber, the odd bees’ nest to keep you going.
The only problem was the risk factor.
It wasn’t only human ancestors for whom scavenged megafauna meat made the best nutritional bargain. Optimal foraging theory predicted the same outcome for any savanna-dwelling carnivore—for the big cats, the hyenas, the vultures, each and every one of the other scavengers. And were these animals, old hands at the game, going to let some Johnny-come-lately primate get away with what for millions of years they’d regarded as rightfully theirs?