
The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home


by Dan Ariely


  Why, you may ask, do my colleagues and I put so much time, money, and energy into experiments? For social scientists, experiments are like microscopes or strobe lights, magnifying and illuminating the complex, multiple forces that simultaneously exert their influences on us. They help us slow human behavior to a frame-by-frame narration of events, isolate individual forces, and examine them carefully and in more detail. They let us test directly and unambiguously what makes human beings tick and provide a deeper understanding of the features and nuances of our own biases.*

  There is one other point I want to emphasize: if the lessons learned in any experiment were limited to the constrained environment of that particular study, their value would be limited. Instead, I invite you to think about experiments as an illustration of general principles, providing insight into how we think and how we make decisions in life’s various situations. My hope is that once you understand the way our human nature truly operates, you can decide how to apply that knowledge to your professional and personal life.

  In each chapter I have also tried to extrapolate some possible implications for life, business, and public policy—focusing on what we can do to overcome our irrational blind spots. Of course, the implications I have sketched are only partial. To get real value from this book and from social science in general, it is important that you, the reader, spend some time thinking about how the principles of human behavior apply to your life and consider what you might do differently, given your new understanding of human nature. That is where the real adventure lies.

  READERS FAMILIAR WITH Predictably Irrational might want to know how this book differs from its predecessor. In Predictably Irrational, we examined a number of biases that lead us—particularly as consumers—into making unwise decisions. The book you hold in your hands is different in three ways.

  First—and most obviously—this book differs in its title. Like its predecessor, it’s based on experiments that examine how we make decisions, but its take on irrationality is somewhat different. In most cases, the word “irrationality” has a negative connotation, implying anything from mistakenness to madness. If we were in charge of designing human beings, we would probably work as hard as we could to leave irrationality out of the formula; in Predictably Irrational, I explored the downside of our human biases. But there is a flip side to irrationality, one that is actually quite positive. Sometimes we are fortunate in our irrational abilities because, among other things, they allow us to adapt to new environments, trust other people, enjoy expending effort, and love our kids. These kinds of forces are part and parcel of our wonderful, surprising, innate—albeit irrational—human nature (indeed, people who lack the ability to adapt, trust, or enjoy their work can be very unhappy). These irrational forces help us achieve great things and live well in a social structure. The title The Upside of Irrationality is an attempt to capture the complexity of our irrationalities—the parts that we would rather live without and the parts that we would want to keep if we were the designers of human nature. I believe that it is important to understand both our beneficial and our disadvantageous quirks, because only by doing so can we begin to eliminate the bad and build on the good.

  Second, you will notice that this book is divided into two distinct parts. In the first part, we’ll look more closely at our behavior in the world of work, where we spend much of our waking lives. We’ll question our relationships—not just with other people but with our environments and ourselves. What is our relationship with our salaries, our bosses, the things we produce, our ideas, and our feelings when we’ve been wronged? What really motivates us to perform well? What gives us a sense of meaning? Why does the “Not-Invented-Here” bias have such a foothold in the workplace? Why do we react so strongly in the face of injustice and unfairness?

  In the second part, we’ll move beyond the world of work to investigate how we behave in our interpersonal relations. What is our relationship to our surroundings and our bodies? How do we relate to the people we meet, those we love, and faraway strangers who need our help? And what is our relationship to our emotions? We’ll examine the ways we adapt to new conditions, environments, and lovers; how the world of online dating works (and doesn’t); what forces dictate our response to human tragedies; and how our reactions to emotions in a given moment can influence patterns of behavior long into the future.

  The Upside of Irrationality is also very different from Predictably Irrational because it is highly personal. Though my colleagues and I try to do our best to be as objective as possible in running and analyzing our experiments, much of this book (particularly the second part) draws on some of my difficult experiences as a burn patient. My injury, like all severe injuries, was very traumatic, but it also very quickly shifted my outlook on many aspects of life. My journey provided me with some unique perspectives on human behavior. It presented me with questions that I might not have otherwise considered but, because of my injury, became central to my life and the focus of my research. Far beyond that, and perhaps more important, it led me to study how my own biases work. In describing my personal experiences and biases, I hope to shed some light on the thought process that has led me to my particular interest and viewpoints and illustrate some of the essential ingredients of our common human nature—yours and mine.

  AND NOW FOR the journey. . .

  Part I

  The Unexpected Ways

  We Defy Logic at Work

  Chapter 1

  Paying More for Less

  Why Big Bonuses Don’t Always Work

  Imagine that you are a plump, happy laboratory rat. One day, a gloved human hand carefully picks you out of the comfy box you call home and places you into a different, less comfy box that contains a maze. Since you are naturally curious, you begin to wander around, whiskers twitching along the way. You quickly notice that some parts of the maze are black and others are white. You follow your nose into a white section. Nothing happens. Then you take a left turn into a black section. As soon as you enter, you feel a very nasty shock surge through your paws.

  Every day for a week, you are placed in a different maze. The dangerous and safe places change daily, as do the colors of the walls and the strength of the shocks. Sometimes the sections that deliver a mild shock are colored red. Other times, the parts that deliver a particularly nasty shock are marked by polka dots. Sometimes the safe parts are covered with black-and-white checks. Each day, your job is to learn to navigate the maze by choosing the safest paths and avoiding the shocks (your reward for learning how to safely navigate the maze is that you aren’t shocked). How well do you do?

  More than a century ago, psychologists Robert Yerkes and John Dodson* performed different versions of this basic experiment in an effort to find out two things about rats: how fast they could learn and, more important, what intensity of electric shocks would motivate them to learn fastest. We could easily assume that as the intensity of the shocks increased, so would the rats’ motivation to learn. When the shocks were very mild, the rats would simply mosey along, unmotivated by the occasional painless jolt. But as the intensity of the shocks and discomfort increased, the scientists thought, the rats would feel as though they were under enemy fire and would therefore be more motivated to learn more quickly. Following this logic, we would assume that when the rats really wanted to avoid the most intense shocks, they would learn the fastest.

  We are usually quick to assume that there is a link between the magnitude of the incentive and the ability to perform better. It seems reasonable that the more motivated we are to achieve something, the harder we will work to reach our goal, and that this increased effort will ultimately move us closer to our objective. This, after all, is part of the rationale behind paying stockbrokers and CEOs sky-high bonuses: offer people a very large bonus, and they will be motivated to work and perform at very high levels.

  SOMETIMES OUR INTUITIONS about the links between motivation and performance (and, more generally, our behavior) are accurate; at other times, reality and intuition just don’t jibe. In Yerkes and Dodson’s case, some of the results aligned with what most of us might expect, while others did not. When the shocks were very weak, the rats were not very motivated, and, as a consequence, they learned slowly. When the shocks were of medium intensity, the rats were more motivated to quickly figure out the rules of the cage, and they learned faster. Up to this point, the results fit with our intuitions about the relationship between motivation and performance.

  But here was the catch: when the shock intensity was very high, the rats performed worse! Admittedly, it is difficult to get inside a rat’s mind, but it seemed that when the intensity of the shocks was at its highest, the rats could not focus on anything other than their fear of the shock. Paralyzed by terror, they had trouble remembering which parts of the cage were safe and which were not and, so, were unable to figure out how their environment was structured.

  * * *

  The graph below shows three possible relationships between incentive (payment, shocks) and performance. The light gray line represents a simple relationship, where higher incentives always contribute in the same way to performance. The dashed gray line represents a diminishing-returns relationship between incentives and performance.

  The solid dark line represents Yerkes and Dodson’s results. At lower levels of motivation, adding incentives helps to increase performance. But as the level of the base motivation increases, adding incentives can backfire and reduce performance, creating what psychologists often call an “inverse-U relationship.”

  * * *
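  For readers who want to see these shapes concretely, here is a rough sketch of the three curves just described, written in Python. It is purely illustrative: the functional forms (a straight line, a square root, and a downward-opening parabola) are stand-ins chosen to mirror the verbal description above, not Yerkes and Dodson’s actual data.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative functional forms only; not Yerkes and Dodson's data.
incentive = np.linspace(0.0, 10.0, 200)
simple = incentive                                # light gray line: every extra unit of incentive helps equally
diminishing = np.sqrt(10.0 * incentive)           # dashed gray line: extra incentives keep helping, but less and less
inverse_u = 0.4 * incentive * (10.0 - incentive)  # solid dark line: helps up to a point, then backfires

plt.plot(incentive, simple, color="lightgray", label="simple relationship")
plt.plot(incentive, diminishing, color="gray", linestyle="--", label="diminishing returns")
plt.plot(incentive, inverse_u, color="black", label="inverse-U (Yerkes and Dodson)")
plt.xlabel("incentive (payment, shock intensity)")
plt.ylabel("performance")
plt.legend()
plt.show()

  The only feature of the sketch that matters is that the solid curve rises and then falls; that turning point is what makes very large incentives potentially counterproductive.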

  Yerkes and Dodson’s experiment should make us wonder about the real relationship between payment, motivation, and performance in the labor market. After all, their experiment clearly showed that incentives can be a double-edged sword. Up to a certain point, they motivate us to learn and perform well. But beyond that point, motivational pressure can be so high that it actually distracts an individual from concentrating on and carrying out a task—an undesirable outcome for anyone.

  Of course, electric shocks are not very common incentive mechanisms in the real world, but this kind of relationship between motivation and performance might also apply to other types of motivation, whether the reward is avoiding an electric shock or making a large amount of money. Let’s imagine how Yerkes and Dodson’s results would look if they had used money instead of shocks (assuming that the rats actually wanted money). At small bonus levels, the rats would not care much and would not perform very well. At medium bonus levels, the rats would care more and perform better. But at very high bonus levels, they would be “overmotivated.” They would find it hard to concentrate, and, as a consequence, their performance would be worse than if they were working for a smaller bonus.

  So, would we see this inverse-U relationship between motivation and performance if we did an experiment using people instead of rats and used money as the motivator? Or, thinking about it from a more pragmatic angle, would it be financially efficient to pay people very high bonuses in order to get them to perform well?

  The Bonus Bonanza

  In light of the financial crisis of 2008 and the subsequent outrage over the continuing bonuses paid to many of those deemed responsible for it, many people wonder how incentives really affect CEOs and Wall Street executives. Corporate boards generally assume that very large performance-based bonuses will motivate CEOs to invest more effort in their jobs and that the increased effort will result in higher-quality output.* But is this really the case? Before you make up your mind, let’s see what the empirical evidence shows.

  To test the effectiveness of financial incentives as a device for enhancing performance, Nina Mazar (a professor at the University of Toronto), Uri Gneezy (a professor at the University of California at San Diego), George Loewenstein (a professor at Carnegie Mellon University), and I set up an experiment. We varied the amount of financial bonuses participants could receive if they performed well and measured the effect that the different incentive levels had on performance. In particular, we wanted to see whether offering very large bonuses would increase performance, as we usually expect, or decrease performance, analogous to Yerkes and Dodson’s experiment with rats.

  We decided to offer some participants the opportunity to earn a relatively small bonus (equivalent to about one day’s pay at their regular pay rate). Others would have a chance to earn a medium-sized bonus (equivalent to about two weeks’ pay at their regular rate). The fortunate few, and the most important group for our purposes, could earn a very large bonus, equal to about five months of their regular pay. By comparing the performances of these three groups, we hoped to get a better idea of how effective the bonuses were in improving performance.

  I know you are thinking “Where can I sign up for this experiment?” But before you make extravagant assumptions about my research budget, let me tell you that we did what many companies are doing these days—we outsourced the operation to rural India, where the average person’s monthly spending was about 500 rupees (approximately $11). This allowed us to offer bonuses that were very meaningful to our participants without raising the eyebrows and ire of the university’s accounting system.

  Once we decided where to run our experiments, we had to select the tasks themselves. We thought about using tasks that were based on pure effort, such as running, doing squats, or lifting weights, but since CEOs and other executives don’t earn their money by doing those kinds of things, we decided to focus on tasks that required creativity, concentration, memory, and problem-solving skills. After trying out a whole range of tasks on ourselves and on some students, the six tasks we selected were:

  1. Packing Quarters: In this spatial puzzle, the participant had to fit nine quarter-circle wedges into a square. Fitting eight of them is simple, but fitting all nine is nearly impossible.

  2. Simon: A bold-colored relic of the 1980s, this is (or was) a common electronic memory game requiring the participant to repeat increasingly long sequences of lit-up colored buttons without error.

  3. Recall Last Three Numbers: Just as it sounds, this is a simple game in which we read a sequence of numbers (23, 7, 65, 4, and so on) and stopped at a random moment. Participants had to repeat the last three numbers.

  4. Labyrinth: A game in which the participant used two levers to control the angle of a playing surface covered with a maze and riddled with holes. The goal was to advance a small ball along a path and avoid the holes.

  5. Dart Ball: A game much like darts but played with tennis balls covered with the looped side of Velcro and a target covered with the hooked side so that the balls would stick to it.

  6. Roll-up: A game in which the participant moved two rods apart in order to move a small ball as far up an inclined slope as possible.

  * * *

  A graphic illustration of the six games used in the experiment in India

  * * *

  Having chosen the games, we packed six of each type into a large box and shipped them to India. For some mysterious reason, the people at customs in India were not too happy with the battery-powered Simon games, but after we paid a 250 percent import tax, the games were released and we were ready to start our experiment.

  We hired five graduate students in economics from Narayanan College in the southern Indian city of Madurai and asked them to go to a few of the local villages. In each of these, the students had to find a central public space, such as a small hospital or a meeting room, where they could set up shop and recruit participants for our experiment.

  One of the locations was a community center, where Ramesh, a second-year master’s student, got to work. The community center was not fully finished, with no tiles on the floors and unpainted walls, but it was fully functional and, most important, it provided protection from wind, rain, and heat.

  Ramesh positioned the six games around the room and then went outside to hail his first participant. Soon a man walked by, and Ramesh immediately tried to interest him in the experiment. “We have a few fun tasks here,” he explained to the man. “Would you be interested in participating in an experiment?” The deal sounded suspiciously like a government-sponsored activity to the passerby, so it wasn’t surprising that the fellow just shook his head and continued to walk on. But Ramesh persisted: “You can make some money in this experiment, and it’s sponsored by the university.” And so our first participant, whose name was Nitin, turned around and followed Ramesh into the community center.

  Ramesh showed Nitin all the tasks that were set up around the room. “These are the games we will play today,” he told Nitin. “They should take about an hour. Before we start, let’s find out how much you could get paid.” Ramesh then rolled a die. It landed on 4, which according to our randomization process placed Nitin in the medium-level bonus condition, which meant that the total bonus he could make from all six games was 240 rupees—or about two weeks’ worth of pay for the average person in this part of rural India.

  Next, Ramesh explained the instructions to Nitin. “For each of the six games,” he said, “we have a medium level of performance we call good and a high level of performance we call very good. For each game in which you reach the good level of performance, you will get twenty rupees, and for each game in which you reach the very good level of performance you will get forty rupees. In games in which you don’t even reach the good level, you will get nothing. This means that your payment will be somewhere between zero rupees and two hundred forty rupees, depending on your performance.”
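  To make the arithmetic of this payment rule explicit, here is a minimal sketch of the medium-bonus condition as Ramesh described it. The code is hypothetical, written only to restate the rule; it was not part of the study materials.

# Medium-bonus condition: 20 rupees per game at the "good" level,
# 40 rupees per game at the "very good" level, and nothing otherwise.
PER_GAME_RUPEES = {"none": 0, "good": 20, "very good": 40}

def total_bonus(levels):
    """levels: the performance level reached on each of the six games."""
    return sum(PER_GAME_RUPEES[level] for level in levels)

# Best case: "very good" on all six games, 6 * 40 = 240 rupees,
# about two weeks' pay for the average person in this part of rural India.
print(total_bonus(["very good"] * 6))                                      # 240
print(total_bonus(["good", "none", "very good", "good", "none", "good"]))  # 100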

 
