Out of Our Minds
In a crazed frenzy, villagers of Hautefaye in the Dordogne in 1870 devoured one of their neighbours, whom a mad rumour misidentified as a ‘Prussian’ invader or spy, because nothing save cannibalism could satiate the fury they felt.3 For the Orokaiva people of Papua, cannibalism – until the island authorities banned it in the 1960s – was a way of ‘capturing spirits’ in compensation for lost warriors. The Hua of New Guinea ate their dead to conserve vital fluids they believed to be non-renewable in nature. In the same highlands, Gimi women used to guarantee the renewal of their fertility by consuming dead menfolk. ‘We would not leave a man to rot!’ was their traditional cry. ‘We take pity on him! Come to me, so you shall not rot on the ground: let your body dissolve inside me!’4 They unconsciously echoed the Brahmins who, in an anecdote of Herodotus, defended cannibalism on the grounds that other ways of disposing of the dead were impious.5 Until ‘pacification’ in the 1960s the Amazonian Huari ate their slaughtered enemies in revenge and their close kin in ‘compassion’ – to spare them the indignity of putrefaction. Aztec warriors ingested morsels of the bodies of battle-captives to acquire their virtue and valour.6 The hominids of Atapuerca launched an adventure in thought.7
Theirs was the earliest recoverable idea – recorded deep in layers of cognitive stratigraphy: the idea that thinkers can change themselves, appropriate qualities not their own, become something other than what they are. All subsequent ideas echo from the cave-walls of Atapuerca: we are still trying to refashion ourselves and remake our world.
By about 300,000 years ago – still long before the advent of Homo sapiens – when a landslide sealed the cave-mouth and turned Atapuerca into a kind of time-capsule, the locals were stacking the bones of their dead in recognizable patterns. What the rite meant is unfathomable, but it was a rite. It had meaning. It suggests, at a minimum, another idea – the distinction between life and death – and perhaps a kind of religious sensibility that treats the dead as deserving of the honour or trouble of the living.
Inklings of Afterlife
There is similar but more easily interpreted evidence in burials of some forty thousand years ago: recent enough to coincide with Homo sapiens but actually belonging to the distinct species we call Neanderthals. There is in principle no good reason to suppose that Neanderthals were less imaginative or less productive of ideas than humans of our own species. Redating by uranium–thorium techniques, newly applied to palaeoanthropological materials in 2018, has reassigned to the seventieth millennium bce some of the representational and symbolic cave paintings in northern Spain (such as those examined in the previous section); they include sketches of animals, hand prints, and geometric designs of kinds formerly assigned to Homo sapiens artists of about thirty or forty thousand years ago.8 The same dating techniques have yielded dates of over 115,000 years ago for artefacts made of shell and for pigments.9 If the revisions are valid, the artists responsible were of a species that long predated the earliest known evidence of Homo sapiens in the regions, in a period when Neanderthals were already at home there. The evidence of active imaginations and powerful thinking in Neanderthal graves should not surprise us.
In a grave at La Ferrassie, France, for instance, two adults of different sexes lie curled in the foetal position typical of Neanderthal graves. Three children of between three and five years old and a newborn baby lie nearby among flint tools and fragments of animal bones. The remains of two foetuses were interred with equal dignity. Other dead Neanderthals were graced at burial with even more valuable grave goods: a pair of ibex horns accompanied one youth in death, a sprinkling of ochre bedecked another. At Shanidar, in what is now Iraq, an old man – who had survived in the care of his community for many years after the loss of the use of an arm, severe disablement to both legs, and blindness in one eye – lies with traces of flowers and medicinal herbs. Sceptical scholars have tried to ‘explain away’ these and many other cases of what look like ritual interments as the results of accident or fraud, but there are so many graves that the commonsense consensus acknowledges them as genuine. At the other extreme, irresponsible inferences credit Neanderthals with a broad concept of humanity, a system of social welfare, belief in the immortality of the soul, and a political system dominated by philosopher-gerontocrats.10
The burials do, however, disclose strenuous thinking, not just for such material ends as safeguarding the dead from scavengers, or masking putrescence, but also in order to differentiate life from death. The distinction is subtler than people commonly suppose. Apart from conception, which some people contest, no moment defines itself as the start of life. Impenetrable comas make it hard even today to say when it ends. But thirty or forty thousand years ago, Neanderthals made the same conceptual distinction as we make, marking it by rites of differentiation of the dead. Celebrations of death hallow life. Rites of burial are more than merely instinctive valuing of life. Those who undertake them make a display of their confidence that life deserves reverence, which is the basis of all human moral action.
It is tempting to treat ceremonial burial as evidence that people who practise it believe in an afterlife. But it might be no more than an act of commemoration or a mark of respect. Grave goods may be intended to work propitiatory magic in this world. On the other hand, by thirty-five to forty thousand years ago, complete survival kits – food, clothes, negotiable valuables, and the tools of one’s trade – accompanied the dead all over the inhabited world, as if to equip life in or beyond the grave. Corpses from very modest levels of society had gifts of ochre, at least, in their graves; those of higher rank had tools and decorative objects presumably in accordance with status.
The idea that death could be survived probably came easily to those who first thought of it. The constant transformations we observe in our living bodies never seem to impair our individual identities. We survive puberty, menopause, and traumatic injury without ceasing to be ourselves. Death is just another, albeit the most radical, of such changes. Why expect it to mark our extinction? To judge from goods normally selected for burial, mourners expected the afterworld to resemble life familiar from experience. What mattered was survival of status, rather than of the soul. No change in that principle is detectable until the first millennium bce, and then only in some parts of the world.11
Later refinements, however, modified the afterlife idea: expecting reward or punishment in the next world or imagining an opportunity for reincarnation or renewed presence on Earth. The threat or promise of the afterlife could then become a source of moral influence on this world and, in the right hands, a means of moulding society. If, for instance, it is right to see the Shanidar burial as evidence that a half-blind cripple survived thanks to the nurture of fellow-Neanderthals for years before he died, he belonged in a society that prescribed care for the weak: that implies either that the kind of costly moral code Social Democrats advocate today was already in place, or that his carers were trying to secure access to his wisdom or esoteric knowledge.
The Earliest Ethics
Everybody, at all times, can cite practical reasons for what they do. But why do we have scruples strong enough to override practical considerations? Where does the idea of a moral code – a systematic set of rules for distinguishing good from evil – come from? It is so common that it is likely to be of very great antiquity. In most societies’ origins myths, moral discrimination figures among humans’ earliest discoveries or revelations. In Genesis, it comes third – after acquiring language and society – among Adam’s accomplishments; and ‘knowledge of good and evil’ is his most important step: it dominates the rest of the story.
In an attempt to trace the emergence of morality, we can scour the archaeological record for evidence of apparently disinterested actions. But calculations unrevealed in the record may have been involved; we might pay homage to Neanderthal altruism, for instance, in default of information about the Shanidar carers’ quid pro quo. Many non-human animals, moreover, perform disinterested actions without, as far as
we know, having a code of ethics (although they do sometimes get depressed if their efforts go unrewarded: there is a credible story of earthquake-rescue dogs demoralized after a long period with no one to save, whose keepers had to draft in actors to pretend to be survivors). Altruism may deceive us into inferring ethics: it may be just a survival mechanism that impels us to help each other for the sake of kickback or collaboration. Morals may be a form of self-interest and ‘morality’ a misleadingly high-minded term for covertly calculated advantage. We behave well, perhaps, not because we have thought up morality for ourselves, but because evolutionary determinants have forced it on us. Maybe the ‘selfish gene’ is making us altruistic to conserve our gene pool. Evidence of a distinction between right and wrong is never unequivocal in preliterate sources in any but a practical sense.
We are still equivocal about the difference. Right and wrong, according to a formidable philosophical tradition, are words we give to particular ratios between pleasure and pain. Good and evil, in a sceptical tradition, are elusive notions, both definable as aspects of the pursuit of self-interest. Even philosophers who grant morality the status of a sincerely espoused code sometimes argue that it is a source of weakness, which inhibits people from maximizing their power. More probably, however, goodness is like all great goals: rarely attained, but conducive to striving, discipline, and self-improvement.12 The benefits are incidental: committed citizens in societies that loyalty and self-sacrifice enrich. We can get closer to the earliest identifiable thoughts about good and evil, as about almost everything else, if we turn to the explosion of evidence – or, at least, of evidence that survives – from about 170,000 years ago. It may seem recent by the standards of Atapuerca, but it is far longer ago than previous histories of ideas have dared to go.
Identifying the Early Thoughts of Homo Sapiens
The longer an idea has been around, the more time it has had to modify the world. To identify the most influential ideas, therefore, we have to start with the earliest past we can reconstruct or imagine. Ideas are hard to retrieve from remote antiquity, partly because evidence fades and partly because ideas and instincts are easily confused. Ideas arise in the mind. Instincts are there ‘already’: inborn, implanted, presumably, by evolution; or, according to some theories, occurring first as responses to environmental conditions or to accidental experiences – which is how Charles Lamb imagined the discovery of cooking, as a consequence of a pig partially immolated in a house-fire. Darwin depicted the beginnings of agriculture in the same way, when a ‘wise old savage’ noticed seeds sprouting from a midden (see here).
In these, as in every early instance of emerging ideas, the judgements we make are bound to be imperfectly informed. Do we talk and write, for instance, because the idea of language – of symbols deployed systematically to mean things other than themselves – occurred to some ancestor or ancestors? Or is it because we are ‘hard-wired’ to express ourselves symbolically? Or because symbols are products of a collective human ‘subconscious’?13 Or because language evolved out of a propensity for gesture and grimace?14 Do most of us clothe ourselves and adorn our environments because ancestors imagined a clothed and adorned world? Or is it because animal urges drive seekers of warmth and shelter to expedients of which art is a by-product? Some of our longest-standing notions may have originated ‘naturally’, without human contrivance.
The next tasks for this chapter are therefore to configure means for evaluating the evidence by establishing the antiquity of the earliest records of thinking, the usefulness of artefacts as clues to thought, and the applicability of anthropologists’ recent observations of foragers. We can then go on to enumerate ideas that formed, frozen like icicles, in the depths of the last ice age.
The Clash of Symbols
The claim that we can start so far back may seem surprising, because a lot of people suppose that the obvious starting point for the history of ideas is no deeper in the past than the first millennium bce, in ancient Greece.
Greeks of the eighth to the third centuries bce or thereabouts have, no doubt, exerted influence out of all proportion to their numbers. Like Jews, Britons, Spaniards, at different times, and perhaps the Florentines of the Quattrocento, or nineteenth-century Mancunians, or Chicagoans in the twentieth century, ancient Greeks would probably feature in any historian’s list of the most surprisingly impactful of the world’s peoples. But their contribution came late in the human story. Homo sapiens had been around for nearly 200,000 years before Greeks came on the scene and, of course, a lot of thinking had already happened. Some of the best ideas in the world had appeared many millennia earlier.
Maybe, if we are looking for reliable records, we should start the story with the origins of writing. There are three reasons for doing so. All are bad.
First, people assume that only written evidence can disclose ideas. But in most societies, for most of the past, oral tradition has attracted more admiration than writing. Ideas have been inscribed in other ways. Archaeologists sieve them from fragmentary finds. Psychologists may exhume them from deep in the subconscious stratigraphy – the well-buried layers – of modern minds. Sometimes anthropologists elicit them from among the practices traditional societies have preserved from way back in the past. No other evidence is as good as evidence we can read in explicitly documented form, but most of the past happened without it. To foreclose on so much history would be an unwarrantable sacrifice. At least in patches, we can clarify the opacity of preliterate thinking by careful use of such data as we have.
Second, an impertinent assumption – which almost everyone in the West formerly shared – supposes that ‘primitive’ or ‘savage’ folk are befogged by myth, with few or no ideas worth noting.15 ‘Prelogical’ thought or ‘superstition’ retards them or arrests their development. ‘For the primitive mind’, asserted Lucien Lévy-Bruhl, one of the founders of modern anthropology, in 1910, ‘everything is a miracle, or rather nothing is; and therefore everything is credible and there is nothing either impossible or absurd.’16 But no mind is literally primitive: all human communities have the same mental equipment, accumulated over the same amount of time; they think different things, but all in principle are equally likely to think clearly, to perceive truth, or to fall into error.17 Savagery is not a property of the past – just a defect of some minds that forswear the priorities of the group or the needs or sensibilities of others.
Third, notions of progress can mislead. Even enquiries unprejudiced by contempt for our remote ancestors may be vulnerable to the doctrine that the best thoughts are the most recent, like the latest gizmo or the newest drug, or, at least, that whatever is newest and shiniest is best. Progress, however, even when it happens, does not invalidate all that is old. Knowledge accumulates, to be sure, and can perhaps pile up so high as to break through a formerly respected ceiling to new thoughts in, as it were, a previously unaccessed attic. As we shall see repeatedly in the course of this book, ideas multiply when people exchange views and experiences, so that some periods and places – like ancient Athens, or Renaissance Florence, or Sezessionist Vienna, or any crossroads of culture – are more productive of creativity than others. But it is false to suppose that thinking gets better as time unfolds or that its direction is always towards the future. Fashion gyrates. Revolutions and renaissances look back. Traditions revive. The forgotten is recovered with freshness more surprising than that of genuine innovations. The notion that there was ever a time when nothing was worth recalling is contrary to experience.
In any case, although nothing we can easily recognize as writing survives from the Ice Age, representative symbols do appear with unmistakable clarity in the art of twenty to thirty thousand years ago: a lexicon of human gestures and postures that recur so often that we can be sure that they meant something at the time – in the sense, at least, of evoking uniform responses in beholders.18 Artworks of the time often parade what look like annotations, including dots and notches that suggest numbers, and puzzling but undeniably systematic conventional marks. No one can now read, for instance, a bone fragment from Lortet in France, with neat lozenges engraved over a dashingly executed relief of a reindeer crossing a ford; but perhaps the artist and his or her audience shared an understanding of what the marks once meant. A swirl that resembles a P occurs widely in ‘cave art’, attracting would-be decipherers who have noticed a resemblance to the hooplike curves the artists sketched to evoke the shapes of women’s bodies. It may be fanciful to read the P-form as meaning ‘female’; but to recognize it as a symbol is irresistible.
The idea that one thing can signify something else seems odd (though to us, who are so used to it, the oddness is elusive and demands a moment’s self-transposition to a world without symbols, in which what you see is always exactly what you get and nothing more, like a library to an illiterate or a junkyard full of road signs but without a road). Presumably, the idea of a symbol is a development from association – the discovery that some events cue others, or that some objects advertise the proximity of others. Mental associations are products of thought, noises from rattled chains of ideas. When the idea of symbolic representation first occurred, the devisers of symbols had a means of conveying information and making it available for critical examination: it gave humans an advantage over rival species and, ultimately, a means of broadening communication and protracting some forms of memory.