The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home


by Dan Ariely


  “Please, I will do anything. Just stop!” I begged, but I had no say in the matter. They held me even tighter. “Wait, wait,” I tried, for the last time, but the doctor proceeded to make cuts in each of my fingers. All the while I counted backward, shouting every time I reached ten. I counted over and over until he finally stopped cutting. My hand was unbelievably sensitive and the pain was endless, but I was still conscious and alive. Bleeding and crying, I was left to rest.

  AT THE TIME, I didn’t understand the importance of this operation, nor how counting can help a person who is under duress.* The surgeon who operated on my arm was trying valiantly to save it, against the advice of some other physicians. He also caused me great suffering that day, the memory of which lasted for years. But his efforts were successful.

  SEVERAL MONTHS LATER, a new assembly of doctors told me that my painfully rescued arm was not doing very well and that it would be best to amputate it below the elbow. I reacted to the whole idea with revulsion, but they put their cold, rational case before me: Replacing my arm with a hook would dramatically reduce my daily pain, they said. It would cut down on the number of operations I would have to undergo. The hook would be relatively comfortable and, once I’d adapted to it, more functional than my injured hand. They also told me that I could choose a prosthetic arm that would make me look less like Captain Hook, though this type of prosthesis would be less functional.

  This was a very difficult decision. Despite the lack of functionality and pain I endured every day, I was loath to lose my arm. I just could not see how I would ever live without it, nor how I could possibly adapt to using a hook or a piece of flesh-colored plastic. In the end, I decided to hold on to my poor, limited, eviscerated limb and make the best of things.

  Fast-forward to 2010. Over the last twenty-plus years I’ve produced a lot of written material, mostly in the form of academic papers, but I can’t physically type for very long. I can type perhaps a page a day and answer a few e-mails by pecking short sentences, but if I try to do more, I feel deep pain in my hand that lasts hours or days. I can’t lift or straighten my fingers; when I try, it feels as if the joints are being pulled from their sockets. On a more positive note, I’ve learned to rely heavily on the help of able assistants and a little on voice recognition software, and I have also figured out, at least to some degree, how to live with daily pain.

  IT’S DIFFICULT, FROM my current standpoint, to say whether I made the right decision about keeping my arm. Given the arm’s limited functionality, the pain I experienced and am still experiencing, and what I now know about flawed decision making, I suspect that keeping my arm was, in a cost/benefit sense, a mistake.

  Let’s look at the biases that affected me. First, it was difficult for me to accept the doctors’ recommendation because of two related psychological forces we call the endowment effect and loss aversion. Under the influence of these biases, we commonly overvalue what we have and we consider giving it up to be a loss. Losses are psychologically painful, and, accordingly, we need a lot of extra motivation to be willing to give something up. The endowment effect made me overvalue my arm, because it was mine and I was attached to it, while loss aversion made it difficult for me to give it up, even when doing so might have made sense.

  A second irrational influence is known as the status quo bias. Generally speaking, we tend to want to keep things as they are; change is difficult and painful, and we’d rather not change anything if we can help it. In my particular case, I preferred not to take any action (partly because I feared that I would regret a decision to make a change) and live with my arm, however damaged.

  A third human quirk had to do with the irreversibility of the decision. As it turns out, making regular choices is hard enough, but making irreversible decisions is especially difficult. We think long and hard about buying a house or choosing a career because we don’t have much data about what the future holds for us. But what if you knew that your decision would be etched in stone and that you could never change your job or house? It’s pretty scary to make any choice when you have to live with the result for the rest of your life. In my case, I had trouble with the idea that once the surgery was done, my hand would be gone forever.

  Finally, when I thought about the prospect of losing my forearm and hand, I wondered about whether I could ever adapt. What would it feel like to use a hook or a prosthesis? How would people look at me? What would it be like when I wanted to shake someone’s hand, write a note, or make love?

  Now, if I had been a perfectly rational, calculating being who lacked any trace of emotional attachment to my arm, I would not have been bothered by the endowment effect, loss aversion, the status quo bias, or the irreversibility of my decision. I would have been able to accurately predict what the future with an artificial arm would hold for me, and as a consequence I would probably have been able to see my situation the way my doctors did. If I were that rational, I might very well have chosen to follow their advice, and most likely I would have eventually adapted to the new apparatus (as we learned in chapter 6, “On Adaptation”). But I was not so rational, and I kept my arm—resulting in more operations, reduced flexibility, and frequent pain.

  ALL OF THIS sounds like the stories old people tell (try this with a slow, Eastern European accent: “If I’d only known then what I know now, life would have been different”). You might also be asking the obvious question: if the decision was wrong, why not have the amputation done now?

  Again, there are a few irrational reasons for this. First, the mere idea of going back to the hospital for any treatment or operation makes me deeply depressed. In fact, even now, whenever I visit someone in hospital, the smells bring back memories of my experience and with them comes a heavy emotional burden. (As you can probably guess, one of the things that worries me the most is the prospect of being hospitalized for a prolonged period of time.) Second, despite the fact that I understand and can analyze some of my decision biases, I still experience them. They never completely cease to influence me (this is something to keep in mind as you attempt to become a better decision maker). Third, after investing years of time and effort into making my hand function as best it can, living with the daily pain, and figuring out how to work with these limitations, I’m a victim of what we call the sunk cost fallacy. Looking back at all my efforts, I’m very reluctant to write them off and change my decision.

  A fourth reason is that, twenty-some years after the injury, I have been able to rationalize my choice somewhat. As I’ve noted, people are fantastic rationalizing machines, and in my case I have been able to tell myself many stories about why my decision was the right one. For example, I feel a deep tickling sensation when someone touches my right arm, and I have been able to convince myself that this unique sensation gives me a wonderful way to experience the world of touch.

  Finally, there is also a rational reason for keeping my arm: over the years many things have changed, including me. As a teenager, before the accident, I could have taken many different roads. As a result of my injuries, I’ve followed particular personal, romantic, and professional paths that more or less fit with my limitations and abilities, and I have figured out ways to function this way. If, as an eighteen-year old, I’d decided to replace my arm with a hook, my limitations and abilities would have been different. For example, maybe I could have operated a microscope and as a consequence might have become a biologist. But now, as I approach middle age and given my particular investment in organizing my life just so, it is much harder to make substantial changes.

  The moral of this story? It is very difficult to make really big, important, life-changing decisions because we are all susceptible to a formidable array of decision biases. There are more of them than we realize, and they come to visit us more often than we like to admit.

  Lessons from the Bible and Leeches

  In the preceding chapters, we have seen how irrationality plays out in different areas of our lives: in our habits, our dating choices, our motivations at work, the way we donate money, our attachments to things and ideas, our ability to adapt, and our desire for revenge. I think we can summarize our wide range of irrational behaviors with two general lessons and one conclusion:

  1. We have many irrational tendencies.

  2. We are often unaware of how these irrationalities influence us, which means that we don’t fully understand what drives our behavior.

  Ergo, We—and by that I mean You, Me, Companies, and Policy Makers—need to doubt our intuitions. If we keep following our gut and common wisdom or doing what is easiest or most habitual just because “well, things have always been done that way,” we will continue to make mistakes—resulting in a lot of time, effort, heartbreak, and money going down the same old (often wrong) rabbit holes. But if we learn to question ourselves and test our beliefs, we might actually discover when and how we are wrong and improve the ways we love, live, work, innovate, manage, and govern.

  So how can we go about testing our intuitions? We have one old and tried method for this—a method whose roots are as old as the Bible. In chapter 6 of the Book of Judges, we find a guy named Gideon having a little conversation with God. Gideon, being a skeptical fellow, is not sure if it’s really God he’s talking to or an imagined voice in his head. So he asks the Unseen to sprinkle a little water on a fleece. “If You will save Israel by my hand, as You have said,” he says to the Voice, “look, I will put a fleece of wool on the threshing floor; if there be dew on the fleece only, and it be dry upon all the ground, then shall I know that You will save Israel by my hand, as You have said.”

  What Gideon is proposing here is a test: If this is indeed God he’s talking with, He (or She) should be able to make the fleece wet, while keeping the rest of the ground dry. What happens? Gideon gets up the next morning, discovers that the fleece is wet, and squeezes a whole bowlful of water out of it. But Gideon is a clever experimentalist. He is not certain if what happened was just by chance, whether this pattern of wetness occurs often, or whether it happens every time he leaves a fleece on the ground overnight. What Gideon needs is a control condition. So he asks God to indulge him again, only this time he runs his experiment a different way: “And Gideon said to God: ‘Do not be angry with me, and I will speak just this once: let me try just once more, I ask You, with the fleece; let it now be dry only upon the fleece, and upon all the ground let there be dew.’ ” Gideon’s control condition turns out to be successful. Lo and behold, the rest of the ground is covered with dew and the fleece is dry. Gideon has all the proof he needs, and he has learned a very important research skill.
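  Gideon’s two rounds, first a treatment condition (wet fleece, dry ground) and then a control (dry fleece, wet ground), have the same structure as a modern randomized experiment: compare outcomes under the intervention against outcomes without it. A minimal sketch of that logic in Python, where the improvement rates are invented numbers used purely for illustration:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def run_trial(treated: bool) -> bool:
    """Hypothetical outcome model (made-up rates): the 'treatment'
    raises the chance of improvement from 30% to 60%."""
    p = 0.6 if treated else 0.3
    return random.random() < p

# Assign subjects at random to a treatment group and a control group,
# just as Gideon ran both a test condition and a control condition.
n = 1000
treatment = [run_trial(True) for _ in range(n)]
control = [run_trial(False) for _ in range(n)]

rate_t = sum(treatment) / n
rate_c = sum(control) / n
print(f"treatment: {rate_t:.2f}, control: {rate_c:.2f}")
```

  Without the control group, a 60 percent improvement rate in the treated group tells us nothing; only the comparison between the two conditions reveals whether the treatment itself did any work, which is precisely what Gideon’s second night established.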

  IN CONTRAST TO Gideon’s careful experiment, consider the way medicine was practiced for thousands of years. Medicine has long been a profession of received wisdom; early practitioners in ancient days worked according to their own intuitions, combined with handed-down wisdom. These early physicians then passed on their accumulated knowledge to future generations. Doctors were not trained to doubt their intuitions nor to do experiments; they relied heavily on their teachers. Once their term of learning was complete, they were supremely confident in their knowledge (and many physicians continue with this practice). So they kept doing the same thing over and over again, even in the face of questionable evidence.*

  For one instance of received medical wisdom gone awry, take the medicinal use of leeches. For hundreds of years, leeches were used for bloodletting—a procedure that, it was believed, helped rebalance the four humors (blood, phlegm, black bile, and yellow bile). Accordingly, the application of bloodsucking, sluglike creatures was thought to cure everything from headaches to obesity, from hemorrhoids to laryngitis, from eye disorders to mental illness. By the nineteenth century, the leech trade was booming; during the Napoleonic Wars, France imported millions upon millions of the critters. In fact, the demand for the medicinal leech was so high that the animal nearly became extinct.

  Now, if you are a nineteenth-century French doctor just beginning your practice, you “know” that leeches work because, well, they’ve been used “successfully” for centuries. Your knowledge has been reinforced by another doctor who already “knows” that leeches work—either from his own experience or from received wisdom. Your first patient arrives—say, a man with a pain in his knee. You drape a slimy leech onto the man’s thigh, just above the knee, to relieve the pressure. The leech sucks the man’s blood, draining the pressure above the joint (or so you think). Once the procedure is over, you send the man home and tell him to rest for a week. If the man stops complaining, you assume that the leech treatment worked.

  Unfortunately for both of you, you didn’t have the benefit of modern technology back then, so you couldn’t know that a tear in the cartilage was the real culprit. Nor was there much research on the effectiveness of rest, the influence of attention from a person wearing a white coat, or the many other forms of the placebo effect (about which I wrote at some length in Predictably Irrational). Of course, physicians are not bad people; on the contrary, they are good and caring. The reason that most of them picked their profession is to make people healthy and happy. Ironically, it is their goodness and their desire to help each and every one of their patients that makes it so difficult for them to sacrifice some of their patients’ well-being for the sake of an experiment.

  Imagine, for example, that you are a nineteenth-century physician who truly believes that the leech technology works. Would you do an experiment to test your belief? What would the cost of such an experiment be in terms of human suffering? For the sake of a well-controlled experiment, you would have to divert a large group of your patients from the leech treatment into a control condition (maybe using something that looked like leeches and hurt like leeches but didn’t suck any blood). What kind of doctor would assign some patients to the control group and by doing so deprive them of this useful treatment? Even worse, what kind of doctor would design a control condition that included all the suffering associated with the treatment but omitted the part that was supposed to help—just for the sake of finding out whether the treatment was as effective as he thought?

  The point is this: it’s very unnatural for people—even people who are trained in a field like medicine—to take on the cost associated with running experiments, particularly when they have a strong gut feeling that what they are doing or proposing is beneficial. This is where the Food and Drug Administration (FDA) comes in. The FDA requires evidence that medications are both safe and effective. As cumbersome, expensive, and complex as the process is, the FDA remains the only agency that requires the organizations dealing with it to perform experiments to prove the efficacy and safety of proposed treatments. Thanks to such experiments, we now know that some children’s cough medicines carry more risks than benefits, that surgeries for lower back pain are largely useless, that heart angioplasties and stents don’t really prolong the lives of patients, and that statins, while indeed reducing cholesterol, don’t effectively prevent heart diseases. And we are becoming aware of many more examples of treatments that don’t work as well as originally hoped.* Certainly, people can, and do, complain about the FDA; but the accumulating evidence shows that we are far better off when we are forced to carry out controlled experiments.

  THE IMPORTANCE OF experiments as one of the best ways to learn what really works and what does not seems uncontroversial. I don’t see anyone wanting to abolish scientific experiments in favor of relying more heavily on gut feelings and intuitions. But I’m surprised that the importance of experiments isn’t recognized more broadly, especially when it comes to important decisions in business or public policy. Frankly, I am often amazed by the audacity of the assumptions that businesspeople and politicians make, coupled with their seemingly unlimited conviction that their intuition is correct.

  But politicians and businesspeople are just people, with the same decision biases we all have, and the types of decisions they make are just as susceptible to errors in judgment as medical decisions. So shouldn’t it be clear that the need for systematic experiments in business and policy is just as great?

  Certainly, if I were going to invest in a company, I’d rather pick one that systematically tested its basic assumptions. Imagine how much more profitable a firm might be if, for example, its leaders truly understood the anger of customers and how a sincere apology can ease frustration (as we saw in chapter 5, “The Case for Revenge”). How much more productive might employees be if senior managers understood the importance of taking pride in one’s work (as we saw in chapter 2, “The Meaning of Labor”)? And imagine how much more efficient companies could be (not to mention the great PR benefits) if they stopped paying executives exorbitant bonuses and more seriously considered the relationship between payment and performance (as we saw in chapter 1, “Paying More for Less”).

  Taking a more experimental approach also has implications for government policies. It seems that the government often applies blanket policies to everything from bank bailouts to home weatherization programs, from agribusiness to education, without doing much experimentation. Is a $700 billion bank bailout the best way to support a faltering economy? Is paying students for good grades, showing up to class, and good behavior in classrooms the right way to motivate long-term learning? Does posting calorie counts on menus help people make healthier choices (so far the data suggest that it doesn’t)?

  The answers aren’t clear. Wouldn’t it be nice if we realized that, despite all our confidence and faith in our own judgments, our intuitions are just intuitions? That we need to collect more empirical data about how people actually behave if we want to improve our public policies and institutions? It seems to me that before spending billions on programs of unknown efficacy, it would be much smarter to run a few small experiments first and, if we have the time, maybe a few large ones as well.

 
