The most successful titans of industry share the overconfidence bias. People like James Cameron, the mastermind filmmaker behind Titanic and Avatar, are extremely confident—overconfident, by the account of many who have worked with them. Cameron’s production of Titanic went so far over budget that the studio executives stopped paying bills and demanded that he accede to their desires. Rather than heed their demands, though, Cameron refused to negotiate. Instead, he used his own money to pay for the movie, which cost a whopping $270 million—more than any previous movie had ever cost. In the end, Cameron’s confidence was vindicated. The movie went on to rake in $2.1 billion worldwide, making it the highest-grossing film at the time, since topped only by Cameron’s own Avatar, which grossed $2.7 billion.
And despite his hobo appearance and pungent scent, Steve Jobs didn’t do too shabbily either. Jobs was so convinced of his own ideas that he didn’t see any reason not to follow his gut. But not everyone shared his confidence. Jobs initially convinced Ron Wayne, an engineer at Atari, to become the third partner with him and Steve Wozniak in the proposed Apple company. After watching the overconfident Jobs borrow start-up money, however, Wayne got cold feet and sold his 10 percent share in the new company back to Jobs and Wozniak for $800. Jobs’s persistent self-confidence paid off, and Apple became the single most valuable company in world history. If Ron Wayne had shared Jobs’s confidence, the stake he sold back for $800 would have been worth $2.6 billion in 2010.
BORN TO BE BIASED
“Bias” is often seen as a dirty little word. We are taught that we should avoid bias and instead strive to be accurate, rational, and smart. Yet the reality is that our minds evolved to be biased—to predictably make specific types of errors and decisions that appear irrational. As we’ve been discovering, what seems foolish and even delusional from a traditional perspective can be smart from an evolutionary vantage point.
Whether we are conscious of it or not, our brains have been designed to do whatever it takes to solve perennial evolutionary challenges. When it comes to danger and disease, our minds are set to be overly sensitive to strange outsiders and to the smell and sight of sickness. This produces occasional paranoia or hypochondria, but it beats the alternative—being naively inattentive to people who might beat us up or sneeze a lungful of deadly viruses all over us. Similarly, when a man’s mate-acquisition subself is at the helm, his mind is biased to be hypersensitive to any remote signs of interest from desirable women. And when a woman’s ovulatory phase triggers her mate-acquisition subself, her mind is biased to think that a big handsome Lothario will finally settle down with her. Both sexes occasionally make fools of themselves, but if we always played the statistics, we’d remain celibate. By occasionally warping our perception of reality, our brains are better able to accomplish their job—to keep us alive, reproducing, and solving perennial evolutionary challenges.
Consider what might be the grandest of all delusions. If you are a typical American, you are perfectly aware that about 50 percent of marriages end in divorce; yet 86 percent of people believe that their own marriage will last forever (of 1,000 newlyweds, then, at least 360 of the 500 whose marriages eventually end will have made an overconfident prediction). These individuals might be foolish and irrational, but without this grand delusion veiled as love, they might never get married. And here is where our evolutionary tendencies can be seen pulling our emotional strings from behind the curtain of consciousness. Marriage is of course a powerful predictor of having children. People who get married tend to have kids soon after, often producing several offspring within only a few years of saying “I do.” This means that any delusion that motivates people to get married is a bias that enhances the likelihood that their genes will replicate. So yes, love does make people irrational when it comes to the accuracy of their judgments, but the underlying system is based on a deeply rational foundation.
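That parenthetical arithmetic is easy to miss, so here is a minimal sketch of it in Python, an illustration added for clarity rather than a calculation from the original text, using the 86 percent and 50 percent figures above:

```python
# Back-of-the-envelope check of the newlywed numbers in the text.
newlyweds = 1000
believers = round(0.86 * newlyweds)  # 860 are sure their marriage will last forever
doubters = newlyweds - believers     # 140 are not
divorces = round(0.50 * newlyweds)   # 500 marriages eventually end

# Even in the most charitable case, where every single doubter ends up
# among the divorces, the remaining divorces must come from believers.
overconfident_divorces_at_least = divorces - doubters

print(overconfident_divorces_at_least)  # 360
```

Even if every one of the 140 doubters ends up divorced, at least 360 of the 500 failed marriages must have begun with a confident prediction of forever.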
EACH OF OUR subselves is biased to make occasional errors (like ignoring the base rates on divorce when we exchange wedding bands), but those same biases often lead us to avoid big evolutionary mistakes (like failing to replicate our genes). Does this mean that our deep-seated biases always incline us to make smart decisions? No. In fact, sometimes they produce serious problems, because those biases, though well fitted to an ancestral village, are mismatched with a modern world full of strangers, skyscrapers, and SAT exams. We next examine how a better understanding of this mismatch can dramatically improve our decisions, starting with an investigation of a curious question: Why do uneducated tribespeople from deep in the Amazonian jungle easily solve logical problems that stump sophisticated students at Harvard?
5
Modern Cavemen
DEEP IN THE AMAZON RAIN FOREST, on the shores of treacherous rivers between Ecuador and Peru, live the Shiwiar. Tucked away in the middle of nowhere, the Shiwiar people have had almost no contact with outsiders, and their way of life mirrors many aspects of our evolutionary past. They gather nuts and fruits, spear fish, and hunt a range of animals from armadillos to toucans, often taking down their prey with a blowgun. The Shiwiar are wary of outsiders and interact mostly with close relatives. Daily life entails a smidge of gossip, a dab of witchcraft, and the ever-present threat of disease and death. Venomous snakebites and insect stings pose ubiquitous dangers, as do malaria and other infections. And if the parasites don’t get you, your skull might be impaled by a jaguar—a local carnivore who likes to bite prey through the head to deliver a fatal blow directly to the brain.
Larry Sugiyama, an anthropologist who studies the Shiwiar, is interested in human reasoning and cognition. He’s the kind of guy who enjoys giving tricky logic tests to see who’s smart enough to get the correct answer. Sugiyama had just finished giving a test to students at Harvard when he began to wonder how the Shiwiar would do on the same mental challenge.
“It is difficult to imagine two populations that differ more than Shiwiar villagers and Harvard students,” Sugiyama explains. The Harvard students, selected for possessing the brightest minds in the world, have been exposed to over twelve years of Western schooling and have consequently absorbed an immense amount of humanity’s accumulated wisdom. The Shiwiar, by contrast, are illiterate, with no formal schooling at all. While the Harvard kids grew up with Baby Einstein brain puzzles to exercise their minds, Shiwiar children were given machetes and sent into the jungle to bring home dinner. Given the educational gap, it hardly seems fair to compare the Shiwiar and Harvard students’ performance on logic tests.
But just how big is the intellectual gap between inhabitants of an ancestral-style society and those living in the ultrasophisticated modern world anyway? Well, when Sugiyama conducted his reasoning test in the Amazon, he discovered something astonishing. The Shiwiar aced the tough problems. They not only matched the performance of the budding brainiacs who make the pages of the Harvard Crimson, they actually did a little bit better. How is it that illiterate jungle dwellers could outmatch the reasoning abilities of students at one of the most prestigious academic institutions in the world?
To understand this puzzle, we first need to know more about the ancestral nature of the human mind. In this leg of our tour, we’ll rummage through our evolutionary past, exploring the surprising implications of the idea that our modern skulls house Stone Age brains. Indeed, there is an emerging body of evidence to suggest that we are all modern cavemen, approaching the problems of our complex contemporary world using brains that evolved to confront ancestral problems.
By better understanding how the ancestral mind works, though, researchers like Sugiyama have discovered that many of our highly touted deficiencies in decision making are less the fault of the test takers and more the fault of the tests and the test makers. And by applying insights from our evolutionary past, Sugiyama and his colleagues have found that small alterations to complex questions can instantly transform test takers from seemingly dim-witted dodos into deeply rational savants.
LOGICALLY DEFICIENT MINDS
Unless you decided to skip the first half of this book to read about why illiterate jungle dwellers outperform Harvard students, you have by now picked up a central theme of our argument: that many of our seemingly irrational biases in judgment and decision making turn out to be pretty smart on closer examination. Seemingly irrational biases like loss aversion, overconfidence, and men’s overperception of female sexual interest seem a bit more rational when viewed through an evolutionary lens. On average, those biases ultimately produced wise choices for our ancestors.
But not all errors mask an underlying wisdom. Sometimes an error is just an error—and many human errors are quite tragic. In the United States, about sixty thousand people die each year from mistakes in medical decision making. Medical errors are in fact the sixth-most-common cause of death in the United States, more likely to cause death than Alzheimer’s disease, breast cancer, suicide, or homicide. These are not small errors that help us avoid big mistakes—these errors are the big mistakes, and they cost thousands of lives each month.
Studies find that people are particularly error-prone when it comes to problems that require a little bit of math and logical reasoning. Consider the classic “Linda problem” below, developed by pioneering behavioral economists Daniel Kahneman and Amos Tversky:
Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations.
Which is more probable?
A. Linda is a bank teller.
B. Linda is a bank teller and is active in the feminist movement.
The correct answer is A. It’s a better bet to guess that Linda is simply a bank teller. Yet Kahneman and Tversky found that almost 90 percent of people incorrectly answer B. Most people believe that it’s more probable that Linda is not just a bank teller but also an active feminist. This answer is a bad bet because the probability of two events occurring together can never be greater than the probability of either one occurring alone. To see why, look at Figure 5.1. Both option A and option B describe Linda as a bank teller; the only difference is that option B tacks on a second requirement. Option A covers the entire circle on the left (all bank tellers), whereas option B covers only the small overlap between the two circles (bank tellers who are also feminists). However likely it is that someone like Linda is a feminist, that overlap can never be bigger than the full bank-teller circle. Mathematically, there are simply fewer women who are bank tellers and feminists than there are women who are bank tellers, so the odds are strongly stacked against you if you choose option B.
Figure 5.1. A visual depiction of the Linda problem
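For readers who like to see the rule in action, here is a minimal sketch of the conjunction rule in Python; the probabilities are invented purely for illustration and are not from Kahneman and Tversky’s study:

```python
# Made-up illustrative probabilities for someone fitting Linda's description.
p_bank_teller = 0.05            # chance Linda is a bank teller
p_feminist_given_teller = 0.95  # chance she is a feminist, given she is a teller

# Option A: Linda is a bank teller.
p_option_a = p_bank_teller

# Option B: Linda is a bank teller AND an active feminist.
# The conjunction multiplies in a factor that is at most 1,
# so it can never exceed the probability of option A alone.
p_option_b = p_bank_teller * p_feminist_given_teller

print(f"P(A) = {p_option_a:.4f}")  # 0.0500
print(f"P(B) = {p_option_b:.4f}")  # 0.0475, never more than P(A)
assert p_option_b <= p_option_a
```

Whatever numbers you plug in, multiplying by a probability of at most 1 can only shrink the result, which is why option B can never be the better bet.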
Behavioral economists and psychologists find that people get stumped by all sorts of questions that aren’t really all that difficult (you don’t need any knowledge of trigonometry, calculus, linear algebra, or differential equations to solve the Linda problem). Yet hundreds of studies have exposed the limitations of the human mind. The common explanation for such poor performance is that people are generally inept when it comes to math, logic, and reasoning. In that view, the way to fix this deficiency is to provide more formal training in these areas. If only people had more education and practice, the argument goes, they would surely come to the correct conclusion.
But there is a problem with this explanation for people’s poor performance. It turns out that very smart people, including those with college degrees from Harvard, Yale, and Princeton, make the same errors at almost the same high rates. Despite many years of formal education, including classes in math, logic, and reasoning, most people nevertheless have difficulty solving basic problems like the Linda problem. And what about those sixty thousand deaths each year that stem from medical errors? These are errors made by university-educated experts with years of training in life-and-death decision making. Yet they too have trouble making good decisions.
On closer inspection, there is something strikingly peculiar about the kinds of questions that give people so much trouble. As with the Linda problem, once people have the logic explained to them (once they see the picture with the two circles), the answer makes perfect sense—the question is not that difficult. Yet that simple answer was almost impossible to see when the question was asked in its original form: the obvious was somehow obscured. This pattern provides an important clue about what could be producing so many errors in decision making. Our errors might have less to do with our lack of mental ability and more to do with the way the questions are asked.
TALKING VERSUS WRITING
Some things in life are easy. Consider learning how to talk. Most children start talking by age two. By age three the average kid knows hundreds of words and can often use a new word after hearing it only once. And by the ripe old age of four, kids from Cambridge to Cambodia are already master communicators, expounding on which toys they want, which foods they hate, and which parent they love bossing around more. The ability to talk seems to bloom independently of how much it is nurtured by the kid’s parents. Some parents begin a continual stream of chitchat with their tots from day one. Others find it a waste of time to have extended monologues with nonverbal newborns. It doesn’t much matter. As long as they overhear an occasional conversation, children learn to talk. You need only turn on Sesame Street and let kids listen to Elmo, Big Bird, Kermit, Miss Piggy, and Oscar the Grouch, or let them crawl around the kitchen listening to their parents and grandparents gossiping, and they will eventually start talking spontaneously. If only everything in life were as easy as learning to talk.
Other things in life are difficult—like mastering the written as opposed to the spoken word. Reading and writing (whether with a pen, a typewriter, or a cell phone) are hard. You can give your baby girl as many crayons and Sesame Street books as you like, but she is not going to spontaneously start expressing her thoughts in comprehensible prose. Without a parent or a teacher providing years of incentives and instruction—connecting sounds to letters, letters to words, and punctuation to spoken pauses and emphases—the likelihood that a child would ever write a coherent sentence is approximately zero. Reading and writing are so difficult that modern schooling devotes year after year of formal instruction to developing these skills. We practice and memorize, and then practice and memorize some more.
Yet, despite all that practice, the same kids who are eloquent speakers (perhaps even poetic rappers) suddenly become deficient communicators if asked to express their ideas in writing. Many people in the world never learn to write at all, including the Shiwiar tribespeople in the jungles of the Amazon. And even among the educated urbanites who are drilled in writing for years, most will never feel completely comfortable communicating through this channel. As university professors, we hear nonstop complaints about how poorly college students write—and they have been explicitly taught how to write in school for over twelve years!
Why is learning to talk easy while learning to write is difficult? The answer lies in our evolutionary history. Our ancestors have been talking for hundreds of thousands of years. The ability to talk gave such an advantage that humans across the world have been naturally selected for being good talkers. Talking is like walking. We don’t need to sign up for a walking class—we just do it. But writing is very different. The written word is the new evolutionary kid on the block. Against the roughly 2-million-year history of our hominid lineage, we have been writing for only a brief moment. And most of the writing in the last few thousand years has been done by a tiny number of select individuals. Even in the modern world, a great many people remain illiterate—they speak but don’t read or write. Whereas talking is like walking, writing is like doing ballet. If you buy your child a pair of ballet slippers, it’s highly unlikely she will spontaneously start doing triple pirouettes. You’d be fortunate if the kid merely starts hopping clumsily into the air without hitting her head on the floor. If you want anything even vaguely resembling Swan Lake, better sign her up for a few years of ballet classes.
Many things in life are like talking—evolutionarily old and easy. We don’t have to try hard to learn how to see, breathe, eat, or run. But many other things in the modern world are like writing—evolutionarily novel and difficult. This includes most of the skills we need to read, write, play the violin, perform brain surgery, and do rocket science. And when it comes to making decision errors, many of them stem from something we’ve all spent over a dozen years and thousands of hours desperately trying to learn: math.
WHY CAN’T JOHNNY DO THE MATH?
Imagine you are a woman at your gynecologist’s office. Your doctor suggests a mammogram to screen for breast cancer, and your worst fear comes true: it comes back positive. But you’ve heard that these kinds of tests aren’t always right, so you ask your doctor, “Does this really mean I have breast cancer? What’s the likelihood?”
It would certainly make a huge difference whether your chance of having cancer is 1 percent or 90 percent. And many people assume that every physician knows the precise answer. But we shouldn’t be so sure. In a recent study, 160 respected doctors were provided with the relevant statistical information needed to calculate the chances that a woman with a positive test actually has the disease. Here is everything you need to know (a sketch of the arithmetic follows the list):
• The probability that a woman has breast cancer is 1 percent.
• If a woman has breast cancer, the probability that she will test positive is 90 percent.
• If a woman does not have breast cancer, the probability that she will nevertheless test positive is 9 percent.
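As a quick sketch of what these three numbers imply (an illustration added here for clarity, not the authors’ own walkthrough), Bayes’ rule gives the following in Python:

```python
# Bayes' rule applied to the three bullet points above.
p_cancer = 0.01              # base rate: 1 percent of women have breast cancer
p_pos_given_cancer = 0.90    # true-positive rate of the mammogram
p_pos_given_healthy = 0.09   # false-positive rate of the mammogram

# Overall probability of a positive mammogram, with or without cancer.
p_positive = (p_cancer * p_pos_given_cancer
              + (1 - p_cancer) * p_pos_given_healthy)

# Probability of actually having cancer, given a positive result.
p_cancer_given_positive = (p_cancer * p_pos_given_cancer) / p_positive

print(f"{p_cancer_given_positive:.1%}")  # roughly 9 percent
```

In other words, even after a positive mammogram, the chance of actually having cancer is only around 9 percent, because the 9 percent false-positive rate applied to the 99 percent of healthy women produces far more false alarms than true positives from the 1 percent who are ill.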