
Wired for Culture: Origins of the Human Social Mind


by Mark Pagel


  There is a growing belief that many of our moral decisions might be made before we become aware of them, as if we have an innate moral sense. One suggestion is that we might have an ancient affective or emotional system in our brains that makes split-second decisions about things that have moral content, and then it presents those decisions to a younger, more recently evolved cognitive or deliberative side of our brains that might have been bolted on in the last 100,000 to 200,000 years of our evolution. The evidence for this is that people presented with moral dilemmas can often quickly tell you how they would behave but often struggle to explain why.

  Here is an example. A runaway train carriage on a rail line is about to run over and kill five people. You can save them by pushing a button that will divert the carriage onto another track. The trouble is there is a person on that track who will be killed by the carriage. Should you push the button? Most people say yes to this question, even if feeling slightly uneasy about doing so. But now consider a different scenario. Once again, a runaway carriage is bearing down on five people. You can save them, but to do so you must push someone else standing nearby into the carriage’s path. Should you do it? Most people say no. When asked to explain the differences, many people are simply dumbfounded—at a loss to provide a rational explanation for what they say they feel instinctively they should do, or why they feel differently about the two situations. Others can eventually provide explanations but much too slowly to have used them to save anyone.

  If there is a strand that links these two scenarios, it is that we have an instinctive reluctance to cause harm to others directly, especially when, as in these examples, they pose no threat to us. It is a disposition that might put a brake on our more violent tendencies, and this is something that should be valuable, at least in most circumstances, in our social groups. It is an instinct that also appears, to a far larger extent than most people would have guessed, to be hard-wired into our brains, and they—our brains—act without directly consulting our conscious minds. It could be that our brains get us to make moral decisions and we don’t really even know how they arrived at them. Structures in our brains that support this moral decision making will have spread in our evolutionary past, so long as the actions they influence serve our well-being. Much of our so-called moral nature might be just this—dispositions and the behaviors they bring about, some acquired or burnished by learning, others part of our genetic makeup—linked by promoting actions that work for us in the peculiar outlines of our social systems.

  Where do these features of our minds leave consciousness, and what is its role? Here is one possibility. Life in a complex social environment such as our own requires us to make decisions about a fluid and constantly shifting situation. The role of a conscious mind might be to weigh up alternative courses of action sent to us from our subconscious minds. This role might be given over to our conscious mind rather than allowing our brains to follow some rule of thumb or algorithm because the social contingencies multiply endlessly—“If I do this, that might occur, and then she might do this or he might do that and that would lead to this… .” If not merely a problem of contingencies, the rapid pace of cultural change means that new situations constantly arise and these require conscious deliberation rather than less flexible subconscious rules. We see this today in the field of the social management of technology: should we allow elderly men and women to have children; should we make it possible for someone to clone themselves, or for that matter someone else? Should a woman be allowed to gestate someone else’s baby for them? It might be that precisely because of the complexity and novelty of the social relations we regularly engage in, we need something that works in real time and is flexible.

  Giving consciousness this role in deliberating and updating us on a real-time basis is a scenario particularly apt for our moral decision making, even if the conscious part appears to happen after the decision has been made by our subconscious minds. After-the-event moral reasoning might help us to understand the connections between external events we can witness, and our own emotions. We can then use these to help us predict others’ feelings and emotions, and, importantly, how they might behave. We can frequently observe the same things as others can; it is just that we cannot have direct access to how they feel about those things. Simulating what might be going on in their minds and then comparing the outcomes of our simulations to their actions might be useful. Is someone likely to be bothered, amused, enraged, or euphoric at some set of external events? Analyzing the links between our own emotions and those same external events may therefore give us a better-developed theory of mind that we can put to use in real time as we encounter others. It might be precisely an inability or an impoverished ability to conduct such simulations that plagues people with autism and its milder manifestation, Asperger’s syndrome: such patients routinely say they don’t know what is going on in the minds of the people around them.

  A simpler explanation for consciousness looks to the properties of successful ideas themselves. The ideas that we carry around in our heads are predominantly those that have in our past been good at getting themselves transmitted from one mind to another. Catchy songs like “Fly Me to the Moon,” or phrases like “Watch out,” or, “Mind your head,” or useful pieces of knowledge, such as “Train conductors stop working after 10 p.m.,” or “The angle of Polaris from the horizon can be used [at least in the northern hemisphere] to work out your latitude,” are more likely to get themselves transmitted than dull or incorrect ones. Countless millions of ideas probably never even see the light of day—or perhaps a better metaphor would be are never given a hearing—because they don’t get us to talk about them. Others do, and they are the ones that are, on balance, more likely to be transmitted. Perhaps, then, our ideas in the form of active memes created consciousness as a way to get us to think about and transmit them! Who knows? Maybe it is even our memes constantly agitating and clamoring to be heard that creates the cacophony of “stimulus-independent thought” we saw at the start of Part III.

  But there is something unsatisfactory about all of these scenarios for consciousness, and it is this: why is it necessary to conduct deliberations “consciously”? Why is it necessary for a meme to “pop into consciousness” for us to tell someone else about it? These explanations for consciousness beg the question they are meant to answer by assuming the value of consciousness they set out to explain. (As an aside, the meaning of “begging the question” has been changing over recent years so that now many people use it to mean “demanding to be answered.” But its original meaning to philosophers was “an answer that assumes in its premise the proposition it sets out to explain.”) They assume that consciousness makes us more likely or better able to think about something or act on it. But why do we think consciousness improves deliberation, decision making, or for that matter the transmission of memes? Maybe it does, but if we are willing to assume this, there is nothing really to explain.

  To see why this assumption is not as obvious as you might think, consider that the game of chess is surely an extreme case of mulling things over before deciding how to act, and of infinite and evanescent possibilities. But this is a game at which computers now routinely beat humans and no one would say that the computers are conscious. When in the 1990s Garry Kasparov played against, and was finally beaten by, the IBM Deep Blue computer, it was his realization that the machine was not conscious that he found most distressing. Kasparov explained that chess is a game of warfare in which terrifying your opponents—striking fear into their hearts—with moves they don’t understand or have not seen coming is a vital part of a winning tactic in a grandmaster’s game. But computers have no fear; they don’t mind losing; and they don’t get tired.

  It might be objected that computers play chess differently from humans, and this might be true. But this still doesn’t tell us why what we think of as our conscious awareness must be conscious to be effective. Here is a suggestion that does not so obviously suffer from begging the question about consciousness, but must be regarded as little more than speculation. Perhaps consciousness arises as a true “sixth sense,” albeit a virtual one (our other five senses conventionally being touch, hearing, smell, sight, and taste). Like the Persian “King’s Eyes,” who were charged with keeping the King informed, perhaps our “consciousness” keeps our hungry-for-knowledge subconscious mind informed of an ever-changing and socially complex outside world that it cannot see. What we perceive as consciousness is just a byproduct of the vast amount of brain activity required to produce this sixth sense, and then manage all the continuous cross-talk between it and our subconscious minds, all the while updating the sixth sense with the new perceptions flowing in. Consciousness, or the “I” we see inside us, might just be an artifact of the “post-processing” step that tries to summarize and make sense of the material flowing in, and manage the disagreements between it and what is “downstairs.”

  For example, our social world changes continually, so that a former ally might have just a moment ago become a competitor. When we search our subconscious mind for how to accommodate these changed circumstances, it might get the wrong item off the memory shelf. We have to send it back, updating it with the new information. Listen to St. Augustine musing in his Confessions in the fourth century AD about what he called the “palaces of my memory”:

  I come to the fields and spacious palaces of my memory, where are the treasures of innumerable images, brought into it from things of all sorts perceived by the senses. There is stored up, whatsoever besides we think, either by enlarging or diminishing, or any other way varying those things which the sense hath come to; and whatever else hath been committed and laid up, which forgetfulness hath not yet swallowed up and buried. When I enter there, I require what I will to be brought forth, and something instantly comes; others must be longer sought after, which are fetched, as it were, out of some inner receptacle; others rush out in troops, and while one thing is desired and required, they start forth, as who should say, “Is it perchance I?” These I drive away with the hand of my heart, from the face of my remembrance; until what I wish for be unveiled, and appear in sight, out of its secret place. Other things come up readily, in unbroken order, as they are called for; those in front making way for the following; and as they make way, they are hidden from sight, ready to come when I will. All which takes place when I repeat a thing by heart.

  Students of the Great Apes might complain that this explanation for our consciousness could equally apply to the apes’ complex social circumstances, and that we should also grant them consciousness. Perhaps we should, but even so, there might be two differences between us and the Great Apes that challenge this objection. One is that our social world is even more complex than that of a Great Ape, including social exchange and the extended forms of cooperation we have seen in earlier chapters. A computational state that keeps the “I” center stage might be particularly valuable for reminding our subconscious minds to put our social system to best use. But the other is even more fundamental: our minds have discovered language. We alone have a symbolic code for translating our subconscious thoughts from whatever form they might take into the same audible (or tactile) language that we use to communicate with others. It might not be an accident that for most of us consciousness is expressed in our native language. Perhaps it is this aspect of our virtual sixth sense that tips our awareness over into something we can label as “I” or “me.”

  TRUTH AND THE DIFFICULTY OF KNOWING WHAT TO DO

  THE WORD “truth” is heavily laden with difficult philosophical baggage, but as a shorthand we can take it colloquially to mean knowing the right answer, or knowing what really happened in some situation, or knowing the best course of action, or the best solution to some problem. If we take this as a working definition of truth, then we probably have precious little access to it. The American baseball player and coach Casey Stengel famously advised: “Never make predictions, especially about the future.” It is good advice. In the 1950s, the president of IBM, Thomas Watson, Jr., is reported to have said, “I think there’s a world market for about five computers.” Ken Olson, president of Digital Equipment Corporation in 1977, believed that “There is no reason anyone would want a computer in their home.” It is rumored that one publishing executive returned a manuscript to J. K. Rowling, saying that “children just aren’t interested in witches and wizards anymore,” and that an MGM internal memo about The Wizard of Oz said, “That rainbow song’s no good. Take it out.”

  For most of us, much of everyday life is a series of easy decisions that we think we know how to make. But for many of the most important things we do, and most of the important decisions we need to make, we don’t have and might not even be able to acquire the information we need to be confident of making the right or best decision. It might also be that our best action depends on what others do. Should I fight those people who live in the next valley and who keep stealing my sheep? What lure should I use on my fishing line in this stream? Is that snake poisonous or is it one of those that just looks like a poisonous one? Is that berry edible? How much should I offer for a house I am thinking of buying? Should I pay more or less than I am into my pension fund? What is the best car for me? Which computer should I buy? How strict should I be with my children? Should I invest in that stock or buy a government bond? Which is the best airline? Should I marry this person?

  An amusing but potentially serious manifestation of not knowing what to do or how to behave is called “collective ignorance.” You are in a crowded elevator that comes to a halt between floors. Maybe it is just a temporary problem, but maybe not. What should you do? Not wishing to appear foolish or anxious, you look to others for clues. But of course the others are in the same position as you and they are looking to you for the same clues. The result is that everyone inadvertently sends the message to do nothing and you all stand there in silence. It is the position we all occasionally find ourselves in when a fire alarm goes off, a subway train comes grinding to a halt between stations, or, in a big city, we pass someone lying in the street. Should we help them, or are they just some drunk passed out from their own exuberance? But collective ignorance is also why stock markets can rise and fall with such exciting or jarring urgency—few investors know what to do so they just follow what others are doing.

  These are questions about whether we should copy others or try to figure out best solutions on our own. As the most intelligent species on the planet, we might think that not only can we work out good solutions, but that doing so rather than relying on others is our best strategy. A simple thought experiment posed by Alan Rogers leads to a different and surprising conclusion. Rogers asks us to imagine a group of people who live in a constantly changing environment such that new problems continually arise that require new solutions. Over time, these people—we can call them innovators—work out solutions for surviving and reproducing on their own. This takes time and effort, and they occasionally make mistakes. But they can be expected to maintain a more or less steady level of health and well-being as their innovations just keep up with changes to their environment.

  But now imagine that someone is introduced to this group who merely imitates or copies these innovators. This imitator or social learner would not have to spend the time and energy trying to work out solutions to problems posed by the environment, and would not suffer the inevitable losses of making the occasional error. This tells us that a social learner who copies what others do, introduced into an environment of innovators, would survive and prosper better than the innovators. Over time, the imitators will therefore increase in number until at some point the population of people is made up mostly of them. But now consider what happens. Once imitators become common, they will frequently copy each other. This is fine so long as it works, but mistakes in copying will creep in, and the imitators will have no way to correct them. The environment will also continue to change. So now the imitators will begin to suffer losses and ill-health because they are employing obsolete solutions.
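  Rogers’ argument lends itself to a small simulation. In the sketch below, all of the specific values (the payoffs, the rate of environmental change, the mutation rate) are illustrative assumptions, not figures from Rogers’ model: innovators always track the current environment but pay a learning cost, while imitators copy the belief of a random member of the previous generation, so their information goes stale whenever the environment moves on.

```python
import random

random.seed(0)

# Illustrative parameters, chosen only to show the qualitative pattern;
# none of these values come from Rogers' original model.
POP_SIZE = 1000
GENERATIONS = 500
CHANGE_RATE = 0.1        # chance per generation that the environment shifts
INNOVATION_COST = 0.2    # payoff innovators give up to trial-and-error learning
STALE_PAYOFF = 0.4       # payoff for acting on an out-of-date belief
MUTATION = 0.01          # strategies occasionally flip, so neither dies out

env = 0
# each agent is a (strategy, believed_state_of_the_environment) pair
pop = [(random.choice(["innovator", "imitator"]), env) for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    if random.random() < CHANGE_RATE:
        env += 1                              # the world moves on
    prev = pop                                # everyone observes the last generation
    scored = []
    for strategy, _ in pop:
        if strategy == "innovator":
            belief = env                      # always current, but costly
            payoff = 1.0 - INNOVATION_COST
        else:
            belief = random.choice(prev)[1]   # copy someone; the copy may be stale
            payoff = 1.0 if belief == env else STALE_PAYOFF
        scored.append((strategy, belief, payoff))
    # offspring in proportion to payoff, with occasional strategy flips
    children = random.choices(scored, weights=[p for _, _, p in scored], k=POP_SIZE)
    pop = []
    for s, b, _ in children:
        if random.random() < MUTATION:
            s = "imitator" if s == "innovator" else "innovator"
        pop.append((s, b))

frac_imitators = sum(s == "imitator" for s, _ in pop) / POP_SIZE
print(f"imitator share after {GENERATIONS} generations: {frac_imitators:.2f}")
```

  Run over many generations, neither strategy sweeps the population: imitators spread while the environment is stable and lose ground after each change, which is the qualitative point that neither pure innovation nor pure copying is stable on its own.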

  We learn from this that neither all innovation nor all copying will ever take over in society: in the language of Chapter 3, neither innovation nor copying is an evolutionarily stable strategy. There is also a hint of something we have seen before: that only a handful of innovators is needed. But if this is true, and most of us do copy others, whom should we copy, and how much? Perhaps your neighbor has just purchased a new car; you have been thinking of buying one as well. Should you get the same one? Kevin Laland posed these questions more formally in a computer tournament organized to understand social learning. Elizabeth Pennisi in Science describes how people were asked: “Suppose you find yourself in an unfamiliar environment where you don’t know how to get food, avoid predators, or travel from A to B. Would you invest time working out what to do on your own, or observe other individuals and copy them? If you copy, who would you copy? The first individual you see? The most common behavior? Do you always copy, or do so selectively? What would you do?”

  A young boy I put this question to replied by saying he would copy the most overweight people. There is something to this, especially in the evolutionary setting of being a hunter-gatherer. If body weight is an indication that you have been good at getting food, maybe you are doing something right. It was just this logic that we used to speculate on the meaning of the Venus statues. But Laland wanted to know how we decide whom to copy when we only have access to what others are doing. Entrants to his tournament had to write a computer program that would somehow juggle the alternatives of someone trying to innovate or work out for themselves the best course of action, versus copying or imitating others, and if the latter, whom to imitate. The computer programs operated in a kind of in silico social environment in which they could “see” the choices that other programs had made, and thus what behaviors they were displaying. These programs then competed against each other inside a large supercomputer.
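  A stripped-down version of such a tournament can be sketched as follows. Everything specific here is an illustrative assumption rather than the rules or entries of Laland’s actual competition: agents hold a repertoire of behaviors whose payoffs drift over time, and each round they either innovate (sample a new behavior for themselves), observe (copy a behavior someone else just displayed), or exploit their best-known behavior. Two toy strategies, one biased toward copying and one toward innovating, are pitted against each other.

```python
import random

random.seed(7)

# Toy copy-versus-innovate tournament; all rules and values are
# illustrative assumptions, not Laland's actual specification.
N_BEHAVIORS = 100
ROUNDS = 300
DRIFT = 0.05   # chance per round that a behavior's payoff is re-drawn

payoffs = {b: random.randint(1, 20) for b in range(N_BEHAVIORS)}

def make_agent():
    return {"known": {}, "score": 0, "just_exploited": None}

def step(agent, mode, observable):
    agent["just_exploited"] = None
    if mode == "INNOVATE":                        # costly individual learning
        b = random.randrange(N_BEHAVIORS)
        agent["known"][b] = payoffs[b]
    elif mode == "OBSERVE" and observable:        # free-ride on a demonstrator
        b, p = random.choice(observable)
        agent["known"][b] = p
    elif agent["known"]:                          # EXPLOIT (or fall back to it)
        b = max(agent["known"], key=agent["known"].get)
        agent["score"] += payoffs[b]              # the payoff may have drifted!
        agent["just_exploited"] = (b, payoffs[b])

def choose_mode(agent, copy_bias):
    if not agent["known"] or random.random() < 0.2:   # keep some learning going
        return "OBSERVE" if random.random() < copy_bias else "INNOVATE"
    return "EXPLOIT"

agents = [(make_agent(), 0.9) for _ in range(20)]   # mostly copiers
agents += [(make_agent(), 0.1) for _ in range(20)]  # mostly innovators

for _ in range(ROUNDS):
    # behaviors displayed last round are visible to observers this round
    observable = [a["just_exploited"] for a, _ in agents if a["just_exploited"]]
    for b in payoffs:
        if random.random() < DRIFT:
            payoffs[b] = random.randint(1, 20)      # the environment moves on
    for agent, copy_bias in agents:
        step(agent, choose_mode(agent, copy_bias), observable)

copier_score = sum(a["score"] for a, bias in agents if bias == 0.9) / 20
innovator_score = sum(a["score"] for a, bias in agents if bias == 0.1) / 20
print(f"mean score: copiers {copier_score:.0f}, innovators {innovator_score:.0f}")
```

  The key design feature, shared with the real tournament, is that copiers can only see what others are actually doing, not why, and the information they pick up can already be out of date when they come to use it.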

 
