by Mark Pagel
Competing strategies ranged from those that relied strongly on innovation to those that always copied others. Startlingly, the winning strategy in Laland’s tournament exclusively copied others—it never innovated! By comparison, a strategy that relied almost exclusively on innovation finished ninety-fifth out of one hundred contenders. This is a result that flies in the face of all expectations, but the strategy of always copying works for two very simple but profound reasons. One is that when others around us make decisions and act on them, they have little choice but to demonstrate the best strategy in their repertoire: when you do something, you will typically do what you think is in your best interest. This presents imitators with a set of alternatives from which the truly bad ones have probably already been filtered out. The second is that by virtue of being alive and available to copy, those whom we imitate are survivors, and so what they are doing must be reasonably good.
Remarkably, it matters less exactly whom you copy, or precisely what, than that you copy rather than try to innovate. Laland’s computer tournament, therefore, also lays bare the social implications of learning from others. Our ability to copy and imitate is why our culture can accumulate knowledge and technology. But the winning strategy in the tournament acted like a social parasite, plagiarizing the hard-won knowledge and strategies of others, and thereby avoiding any of the costs of having to try out new ideas on its own. Indeed, its parasitical nature was revealed when Laland ran the winning strategy alone and it performed badly. Just as Alan Rogers’s thought experiment would have led us to expect, if no one is innovating, then copiers will end up copying each other, and this will mean that many bad strategies will be copied and maintained.
We have seen this before: social learning—imitation and copying—is visual theft. It is unavoidably steeped in conflict and cooperation because knowledge itself becomes a valuable commodity that might otherwise grant an advantage to the person you visually steal it from. If I can perform some behavior that you wish to learn, I might wish to hide it or even modify it in your presence, or perhaps trade it for some of your knowledge. For your part, you might wish to conceal your interest, act deceptively or furtively, hoping that I will let down my guard. We see these conflicts of interest—and the deceptions they produce—manifesting themselves in patent applications and patent law, industrial and even national espionage, and outright theft. But we also see them in the reluctance, for example, to share old family recipes, reveal where our favorite fishing spot is, where to find the best mushrooms, or what bait we use to catch fish. Deception, competition, and exploitation are built into us because most of us rely on copying others most of the time.
Even when we have access to the so-called facts, we often misuse them, and this too might be because copying has played an important role throughout our history. We know that we are highly susceptible to contagion, false beliefs, neuroses—especially medical and psychological—and conspiracy theories. That we should be is surprising, because our brains have surely evolved to judge risks, to assess likelihoods or probabilities, to defend our minds against undue worry, and to infer what others are thinking. But our minds probably evolved to make these judgments drawing on the experiences of small groups of people—most probably, throughout our history, the small number of people in our tribe. The trouble is that now we are often confronted with vastly more information about risks, from newspapers and radio or the Internet, and yet we don’t always make the best use of it.
We misuse it because our brains assume that the rate at which these things come to our attention from all over the world is the same as the rate in our local area. It is a case of doing bad mathematics. In the past, my assessment of the risk of being blown up by a terrorist, or of getting swine flu, or of my child being snatched by a pedophile on the way to school, was calculated from averaging the input of information I received mainly from my small local group, because these were the people I spoke to or heard from, and these were the people whose actions affected me. What the Internet does—and what mass communication does more generally—is to sample those inputs from the 6.8 billion people on Earth. But without my being aware of it, my brain is still considering that the inputs arose from my local community, because that is the case its assessment circuits were built for.
The bad mathematics occurs because my brain assumes a small denominator (the bottom number in a fraction, and here that number is the number of people in my village), but it is using the inputs from the whole world as its numerator (the top number of a fraction). The answer it produces to the question of how likely something is to happen is, then, way too big. So, when I hear every day of children being snatched, my brain gives me the wrong answer to the question of risk: it has divided a big number (the children snatched all over the world) by a small number (the tribe). Call this the “Madeleine McCann effect.” We all witnessed months of coverage of this sad case of a kidnapping of a young girl in Portugal that occurred in 2007—as of this writing still unresolved. Although the worry this caused in the rest of us is trivial compared to what the McCanns have suffered, it was probably largely misplaced. But even knowing this, it is hard to shake the feeling that our children are at risk, and this just shows us how deep are the biases in our decision making.
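The wrong-denominator error described above can be made concrete with a small sketch. The numbers here are hypothetical (a notional village of 150 people, a round count of reported incidents); only the world population figure of 6.8 billion comes from the text:

```python
# A minimal sketch of the "wrong denominator" error: the brain divides a
# worldwide numerator (incidents reported via mass media) by a village-sized
# denominator (the small group our risk circuits were calibrated for).

LOCAL_GROUP = 150                  # hypothetical village or tribe size
WORLD_POPULATION = 6_800_000_000   # figure used in the text

def perceived_risk(global_incidents, assumed_group=LOCAL_GROUP):
    """Risk as a brain built for small groups estimates it:
    a worldwide numerator over a local denominator."""
    return global_incidents / assumed_group

def actual_risk(global_incidents, population=WORLD_POPULATION):
    """Risk with numerator and denominator drawn from the same pool."""
    return global_incidents / population

incidents = 300  # hypothetical number of widely reported cases in a year
overestimate = perceived_risk(incidents) / actual_risk(incidents)
# The error factor is WORLD_POPULATION / LOCAL_GROUP (about 45 million):
# the mismatch in denominators, not the incident count, drives the bias.
print(overestimate)
```

Note that the incident count cancels out of the error factor entirely, which is the point: however rare the event, mismatched denominators inflate its apparent likelihood by the same enormous ratio.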
The effects of the bad mathematics don’t stop with judging risks. Doing the mathematics wrong means that contagion can leap across the Internet. Contagion arises when people perceive that the numerator (input from the Internet) grows far more rapidly than the denominator (village or tribe). Our tendency to copy others just reinforces this perception. Once contagion starts on the Internet, everyone’s copying means that the bad mathematics makes it explode. The same happens with conspiracy theories: if it seems everyone is talking about something, it must be true!
But this is just the wrong denominator again, because in fact “most” people are not talking about that thing; it is just that the ones who are choose to appear on the Internet (or radio phone-ins, etc.). Neuroses and false beliefs are buttressed: we all worry about our health, and in the past we would look around us and find that no one else was worrying or ill. But consult the Internet and you might find tens of thousands of people—maybe more—worrying, and they’ve even developed Web sites to talk about their worry. The 2009 swine flu pandemic turned out to be a damp squib, but you wouldn’t have known that from the frenzy at the time.
All of these problems arise because we seldom have access to the truth, and we normally arrive at some guess as to what it is by copying others. The conclusion from tournaments such as Laland’s that the number of innovators can be small might be surprising, but when we look around us, this is indeed what we see: successful inventors and entrepreneurs are rare and efforts to find them in television reality shows or to produce them in the classroom only serve to reinforce the point. And this is because for most of what we do, we cannot simply work out in our minds the best course of action, but social learning can sample (or steal) from others’ good luck or occasional good judgment. We see an awareness of this even at the highest levels of technical competition in, for example, yachting events such as the America’s Cup or racing events such as Formula 1. Boats and cars are often shrouded to conceal, until the very last moment, if ever, the complex shape of a rudder, or the bewildering configuration of airflow across an engine.
A feebleness about knowing what to do has evidently been true throughout our evolution, not just now when we have complicated things like yachts and Formula 1 cars, but also computers and derivative financial products to reach decisions about. What is the best way to shape a hand ax, or to make an arrowhead? And how would you know if you stumbled upon the right answer? You wouldn’t until you or someone else tried it. And the difficulty even now of answering these comparatively simple questions has meant that we have evolved to be good at what we can be good at: to take advantage of a sort of natural tournament of cultural selection played out in front of us every day, and which presents us with good solutions. Most of us are copiers. Natural selection has seized on the power of copying to make our minds very good at working within what cultural systems have to offer.
Part IV
THE MANY AND
THE FEW
* * *
Prologue
TAXIING TO THE terminal at Hong Kong airport, you notice a point across the bay where there is a forest of thin, white structures standing hundreds of feet tall. They have a peculiar monolithic appearance. They do not move or make any sound, and if you arrive at one of those times of the year in Hong Kong when the weather is hot and the air is hazy with pollution and humidity, these white structures look like some giant fungus that has sent up its fruiting bodies from the steaming forest floor, ready to disperse its spores.
But they are not fruiting bodies, at least not of fungi. These stalks have been made by humans. They are high-rise apartment towers, which house tens of thousands of people. And their remarkable feature is that they serve the same purpose as the fungal stalk does for its spores: both are vehicles that carry and promote the survival and reproduction of their inhabitants. But we are not fungi, or even ants, bees, or termites, so how is it that so many of us can live so tightly packed like this, reliant on such a small number of others to govern our lives?
CHAPTER 10
Termite Mounds and the
Exploitation of Our
Social Instincts
That large groups of humans can be led by a small elite
for the same reasons as termites, ants, bees, and wasps
A DILEMMA
TERMITES’ MOUNDS and ants’ nests can house millions of individuals toiling in dark, cramped, and steamy conditions on behalf of a queen who lives a life devoted almost entirely to reproduction. Most of us instinctively recoil from such a scene as not being part of our nature. Yet it was vividly depicted in the dystopian view of the city of Los Angeles in the film Blade Runner, a crowded, teeming, drizzly place full of anonymous strangers. In cities all over the world millions of people live and work side by side ruled by a small elite, and in countries such as China and India over 1 billion people fall under the rule of a few. When we marvel at the purposeful and yet orderly behavior of a colony of ants, scurrying in and out of their nest, some carrying objects, others scouting for prey or invaders, we need not cast our imagination very far to think of construction workers on a large building site or laborers building a pyramid in ancient Egypt. We attend sporting events and musical performances in stadiums at which tens of thousands of us remain for hours only inches from each other, all following the actions of a few on the field or stage. There is something both strange and remarkable about this behavior: hypersocial and hyper-orderly. Apart from the social insects, no other animals can work together in such large numbers. Imagine a construction site or a sports stadium filled with tens of thousands of hyenas, or baboons, or even dogs, a species we have bred in our image.
We are able to live and work among others in our millions. And yet this poses a dilemma for one of the main ideas of this book: nothing in our evolutionary history specifically prepared us for this. If humans evolved a tribal nature that revolves around life in relatively small and exclusive cooperative societies, how do we explain the enormous social groupings of the modern world in which so many can be so willingly led by so few? The growth of human populations happened far too quickly for biological changes to our nature to have kept up. Until perhaps 10,000 years ago, all humans lived in small hunter-gatherer bands. The invention of agriculture changed all that as having the capacity to produce rather than simply gather food meant larger numbers of people could reside in the same place. Small bands of maybe ten to three hundred people gradually came to be replaced by tribes that were effectively bands of bands. Tribes gave way to chiefdoms, in which for the first time in our history societies became centralized. There was stratification by class and the chief sat at the top of a formal hierarchy of authority. Chiefdoms eventually gave way in turn to large city-states such as Jericho (in modern-day Israel) and Çatal Hüyük in Turkey, or the Mesopotamian cities of Ur and Babylon. These were later succeeded by fledgling nation-states.
The forces propelling this growth were many, but mainly of three sorts—protection, economic well-being, and reproductive output. People were, in a word, better off, even if it is by now well established that we were often less healthy in these large groupings. But being better off does not alone tell us why it worked. Were we to provide 10,000 dogs, hyenas, or even apes with unlimited food and protection, we would not get the happy outcome we might have sought. Paul Seabright in The Company of Strangers suggests that human societies have been able to grow large because we have acquired an ability to trust strangers. We pay our taxes to unknown bureaucrats, buy things made in foreign lands and from people we do not know, walk past strangers in the street and even allow them into our homes without fear of being robbed or killed. We are able to do these things because we have evolved rules and dispositions that allow us to exchange goods and services with people we have never met. And indeed, we have seen in earlier chapters how cooperation and trust can arise. The looming shadow of future encounters with the same people softens our tendencies to cheat them; we acquire reputations and learn those of others; and we count on the knowledge that you and those you would do business with bring to every exchange a sense of fairness that protects you and them from exploitation.
But if these rules remind us of anything, it is that we cannot possibly ever know enough about strangers per se to trust them. In fact, we have seen that there is reason to believe we have a hard-wired wariness of strangers. Rather, what we have acquired throughout our brief evolution is a taste for the benefits of cooperation and some rules that can make it work in the right circumstances. Thus, when we do appear to trust strangers, it is probably because they are not really strangers—we know or think we know something about them, the institutions they work for, or we think there are institutions such as the police, banks, or insurance companies ready to protect us from them. When a man knocks at my door asking to read my electricity meter, if I do let him in, it is with a mild apprehension. And even then I only do it because I know that it is my electrical company’s practice to send such people around and that my meter hasn’t been read for a while. If this man knocked late at night, looked threatening, or if it were not my company’s practice to send such people around, it is doubtful I would let him in. Even when I do so, it is only after I have asked for his ID and sized him up, making a quick calculation as to whether I could overpower him should he try to rob me. It will also help if I haven’t heard anything in the local news about thieves or muggers who masquerade as meter-checkers as a way of gaining access to people’s homes.
When the waitress puts my credit card into a restaurant’s electronic scanner, I allow it not because I trust strangers, but because I observe others doing it, or have been told by people I do trust that others have used their credit cards at this establishment, or because I happen to know the restaurant has been there for some time. Still, I often feel a slight anxiety, wondering if some cloning device has been fitted to the scanner and I will receive word the next day that a large loan has been taken out against my card. Reminded that my bank will not charge me for purchases I have not made, I go ahead with the transaction anyway. And when I use the services of taxi drivers, banks, airline pilots, the police, and eBay, it is not that I trust them per se, but that I notice over long periods of time that in general airplanes are flown well, the police are not on the take, taxi drivers don’t take advantage of their passengers (on the whole), the reputation comments on eBay seem helpful, and my bank is fair with my money (or is it?). But even this is only true in parts of the world where these various services do work, or where I am familiar with the local culture. Many cities have “no-go” areas. Until recently, it was common in many parts of Africa to avoid putting your money into a bank—the widespread belief, often confirmed in practice, being that you wouldn’t get it back.
We learn from this that our capacity to live and work in large societies exploits the tactics we have acquired throughout our evolution for making cooperation work, and even then we begin with the most tentative of exchanges. So, if nothing in our evolutionary history specifically prepared us to live in large societies, almost everything about the way culture works does. Mathematicians call a mechanism scale-free if it doesn’t change as the size or scale of the group or phenomenon it is applied to changes. This chapter examines evidence that the large social groupings that began to emerge around 10,000 years ago did so by exploiting key evolved features of our cooperative behavior and psychology that happen to be scale-free. Our language, our diffuse and indirect style of cooperation and exchange based on reputation, our ability to specialize, and even our willingness to suspend disbelief—thereby making it more likely we might accept some chief as God’s representative on Earth—can all act relatively unfettered by the size of the group in which we reside. Having these scale-free cultural mechanisms meant that our societies could automatically grow to a larger size without having to invent new mechanisms beyond those that were in place by perhaps 160,000 to 200,000 years ago when our species arose.
Even these scale-free mechanisms cannot on their own explain why we chose to live in larger societies; they merely made it possible. Instead, we need to look for properties of our societies that make them not only easy but also productive to be part of. Here, it turns out that our larger societies could naturally emerge by taking advantage of three properties they all seem to share: one is that they emerge from local rules; the second is that there can be some surprising efficiencies of larger groupings; and the third is social viscosity, or our tendency to maintain local ties within a larger society. The first of these allows larger societies simply to emerge so long as they pay their way; the second tells us how they pay their way; and the third shows us how our tribal psychology can still operate in a larger society. Revealingly, it is also these three features that oppressive and dictatorial regimes attack or exploit to break down a society and hold it within their grip.