Smart Choices


by Howard Raiffa


  Clearly, much depends on how you ask the question. Psychologists have even shown that when the same question is framed two different ways—ways that are objectively equivalent—people choose differently. Why? Because each framing makes different objectives more salient.

  Decision researchers have documented two types of frames that distort decision making with particular frequency.

  Framing as gains versus losses. In one experiment, patterned after a classic study by decision researchers Daniel Kahneman and Amos Tversky, we explored the impact of framing by posing the following problem to a group of experienced insurance professionals:

  You are a marine property adjuster charged with minimizing the loss of cargo on three insured barges that sank yesterday off Alaska. Each barge holds $200,000 worth of cargo, which will be lost if not salvaged within 72 hours. The owner of a local marine salvage company gives you two options, both of which will cost the same:

  Plan A: This plan will save the cargo of one of the three barges, worth $200,000.

  Plan B: This plan has a one-third probability of saving the cargo on all the barges, worth $600,000, but has a two-thirds probability of saving nothing.

  Which plan would you choose?

  If you’re like 71 percent of the respondents in the study, you chose the “less risky” plan A, which will save one barge for sure. Another group in the study, however, chose between alternatives C and D:

  Plan C: This plan will result in the loss of two of the three cargoes, worth $400,000.

  Plan D: This plan has a two-thirds probability of resulting in the loss of all three cargoes and the entire $600,000, but has a one-third probability of losing no cargo.

  Faced with this choice, 80 percent of respondents preferred plan D.

  The pairs of alternatives are, of course, equivalent—plan A is the same as plan C, and plan B is the same as plan D—they’ve just been framed in different ways. The strikingly different responses reveal that people are risk-averse when a problem is posed in terms of gains (barges saved) but risk-seeking when a problem is posed in terms of avoiding losses (barges lost). Furthermore, they tend to adopt the frame as it is presented to them rather than restating the problem their own way.
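
  For readers who want to verify the equivalence, here is a minimal sketch in Python (our own illustration, not part of the original study; the dollar figures come from the problem statement) showing that the two pairs of plans describe identical stakes:

```python
from fractions import Fraction

CARGO = 200_000        # value of cargo per barge
TOTAL = 3 * CARGO      # $600,000 at risk in all

# Plan A / Plan C: saving one cargo for sure is losing two cargoes for sure.
saved_a = CARGO
lost_c = 2 * CARGO
assert saved_a == TOTAL - lost_c

# Plan B / Plan D: the same gamble, framed as gains versus losses.
# Fractions keep the one-third and two-thirds probabilities exact.
expected_saved_b = Fraction(1, 3) * TOTAL   # $200,000
expected_lost_d = Fraction(2, 3) * TOTAL    # $400,000
assert expected_saved_b == TOTAL - expected_lost_d

print(saved_a, expected_saved_b)   # 200000 200000 -- identical expected outcomes
```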

  Framing with different reference points. The same problem can also elicit very different responses when frames use different reference points. Let’s say you have $2,000 in your checking account, and you are asked the following question:

  Would you accept a 50-50 chance that offered the possibility of either losing $300 or winning $500?

  What if you were asked this question:

  Would you prefer keeping your current checking account balance of $2,000 to accepting a 50-50 chance that would result in your having either $1,700 or $2,500 in your account?

  Once again, the two questions pose the same problem. Although your answers to both questions should, rationally speaking, be the same, studies have shown that many people would refuse the 50-50 chance in the first question but accept it in the second. Their different reactions result from the different reference points of the two frames. The first frame, with its reference point of 0, emphasizes incremental gains and losses, and the thought of losing triggers a conservative response in many people’s minds. The second frame, with its reference point of $2,000, puts things into perspective by emphasizing the broader financial impact of the decision.

  What can you do about it? A poorly framed problem can undermine even the best-considered decision. But the effect of improper framing can be limited by imposing discipline on the decision-making process:

  •Remind yourself of your fundamental objectives, and make sure that the way you frame your problem advances them.

  •Don’t automatically accept the initial frame, whether it was formulated by you or by someone else. Always try to reframe the problem in different ways. Look for distortions caused by the frames.

  •Try posing problems in a neutral, redundant way that combines gains and losses or embraces different reference points. For example:

  Would you accept a 50-50 chance that offered the possibility of losing $300, resulting in a bank balance of $1,700, or winning $500, resulting in a bank balance of $2,500?

  •Think hard throughout your decision-making process about the framing of the problem. At points throughout the process, particularly near the end, ask yourself how your thinking might change if the framing changed.

  •When your subordinates at work recommend decisions, examine the way they framed the problem. Challenge them with different frames.

  Being Too Sure of Yourself: The Overconfidence Trap

  What’s your forecast for the average temperature in your city tomorrow? How sure are you about your estimate? Now predict a high value, one that you think the actual average temperature has only a 1 percent chance of exceeding, and a low value, one that you think the actual average has only a 1 percent chance of falling below. In other words, set a range such that there is a 98 percent chance that the actual average temperature will fall between your low and high figures.

  If you make many, many estimates of this sort and your self-appraisal of your estimating skills is good, statistically you should expect that only about 2 percent of the time would the actual value fall outside your assessed ranges. Unfortunately, that’s not what hundreds of experiments have shown. Typically, the actual value falls outside the range 20 to 30 percent of the time, not 2 percent! Overly confident about the accuracy of their prediction, people set too narrow a range of possibilities.
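
  A small simulation makes the point concrete. The sketch below is our own illustration: it assumes the quantity being forecast is normally distributed and that an overconfident forecaster sets a range only half as wide as a genuinely calibrated 98 percent interval.

```python
import random

random.seed(1)
TRIALS = 100_000
TRUE_SD = 10.0      # assumed spread of the quantity being forecast
Z_98 = 2.33         # z-value enclosing the central 98% of a normal curve

misses = 0
for _ in range(TRIALS):
    actual = random.gauss(0.0, TRUE_SD)
    # An overconfident forecaster sets a range only half as wide as a
    # genuinely calibrated 98 percent interval would be.
    half_width = 0.5 * Z_98 * TRUE_SD
    if abs(actual) > half_width:
        misses += 1

print(f"Actual fell outside the range {misses / TRIALS:.1%} of the time")
# Roughly 24% here, versus the 2% a well-calibrated forecaster would see.
```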

  Think of the implications. If you underestimate the high end or overestimate the low end of a range of values for a crucial variable (such as potential sales) and you act accordingly, you may expose yourself to far greater risk than you realize—or you may miss out on wonderful opportunities.

  A major cause of overconfidence is anchoring. When you make an initial estimate about a variable, you naturally focus on midrange possibilities. This thinking then anchors your subsequent thinking about the variable, leading you to estimate an overly narrow range of possible values.

  What can you do about it? To reduce the effects of overconfidence:

  •Avoid being anchored by an initial estimate. Consider the extremes (low and high) first when making a forecast or judging probabilities.

  •Actively challenge your own extreme figures. Try hard to imagine circumstances in which the actual figure would fall below your low or above your high, and adjust your range accordingly. For example, if your forecast is 80 degrees and your high figure is 88, ask yourself how the actual temperature might turn out to be 95.

  •Challenge any expert’s or advisor’s estimates in a similar fashion. They’re as susceptible as anyone to this bias. Suppose you’re the president of a company considering the launch of a new product, and your marketing manager says that there’s only a 1 percent chance that you will sell less than 35,000 units of the product next year. You might ask, “What if it sells only 20,000? What could have happened?” His response: “A competitor might have come out with an improved version of its product.” You then ask, “What’s the chance of that occurring?” He says, “Oh, about 10 percent.” If there is a 10 percent chance of selling around 20,000 units, there is certainly more than a 1 percent chance of selling less than 35,000. Your marketing manager anchored on business as usual, meaning no new competitive products, in making his original estimates.
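
  The inconsistency can be checked with a single comparison: selling around 20,000 units is one particular way of selling fewer than 35,000, so the 10 percent scenario alone puts a floor under the probability the manager claimed was 1 percent. A minimal sketch, using the figures from the dialogue:

```python
# Figures from the dialogue above.
p_below_35k_claimed = 0.01    # manager: "only a 1 percent chance" of < 35,000
p_rival_scenario = 0.10       # manager: ~10% chance of the rival scenario (~20,000 sold)

# The rival scenario is a subset of "fewer than 35,000 sold", so its
# probability is a lower bound on the probability of that whole event.
assert p_rival_scenario > p_below_35k_claimed
print(f"Claimed chance: {p_below_35k_claimed:.0%}; implied floor: {p_rival_scenario:.0%}")
```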

  Focusing on Dramatic Events: The Recallability Trap

  What’s the probability of a randomly selected jet flight on a major U.S. airline ending in a fatal crash?

  What’s your answer? If you’re like most people, you will have overestimated the probability. The actual chance of such a crash? According to statistics provided by researchers at MIT, it is only about one in 10,000,000!

  Because human beings infer the chances of events from experience, from what we can remember, we can be overly influenced by dramatic events—those that leave a strong impression on our memory. We all, for example, exaggerate the probability of rare but catastrophic occurrences, such as plane crashes, because they get disproportionate attention in the media. A dramatic or traumatic event in your own life can also distort your thinking. You will assign a higher probability to traffic accidents if you’ve passed one on the way to work, and you will assign a higher chance to someone’s dying of cancer if a close family member or friend has died of the disease.

  In fact, anything that distorts your ability to recall events in a balanced way will distort your probability assessments or estimates. In one experiment, lists of well-known men and women were read to different groups of people. Each list had an equal number of men and women, but on some lists the men were more famous than the women while on others the women were the more famous. Afterward, the participants were asked to estimate the percentage of men and women on the list they had heard. Those who had heard the list with the more famous men thought there were more men on the list, while those who had heard the list with the more famous women thought there were more women.

  What can you do about it? To minimize this type of error:

  •Each time you make a forecast or estimate, examine your assumptions so that you are not being unduly swayed by memorability distortions.

  •Where possible, try to get statistics. Don’t rely on your memory if you don’t have to.

  •When you don’t have direct statistics, take apart the event you’re trying to assess and build up an assessment piece by piece, as in the sketch below. For example, to estimate the likelihood that a scheduled airline flight will result in a fatality, combine a statistic for the average number of fatal airline crashes per year in the United States with a rough estimate (derived from an Internet reservations system, perhaps) of the number of flights per year. The resulting probability may not be as accurate as that of the MIT study, but it’s better than relying on your unaided memory.
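
  A sketch of that piece-by-piece calculation, with placeholder inputs (both figures below are rough assumptions of ours; in practice you would look them up):

```python
# Both inputs are rough placeholder figures -- in practice you would look
# them up -- used only to show the mechanics of the decomposition.
fatal_crashes_per_year = 1        # assumed average, major U.S. carriers
flights_per_year = 10_000_000     # assumed annual number of flights (rough)

p_fatal = fatal_crashes_per_year / flights_per_year
print(f"Roughly 1 in {flights_per_year // fatal_crashes_per_year:,} per flight")
# The same order of magnitude as the one-in-10,000,000 figure quoted earlier.
```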

  Neglecting Relevant Information: The Base-Rate Trap

  Donald Jones is either a librarian or a salesman. His personality can best be described as retiring. What are the odds that he is a librarian?

  When we use this little problem in seminars, the typical response goes something like this: “Oh, it’s pretty clear that he’s a librarian. It’s much more likely that a librarian will be retiring; salesmen usually have outgoing personalities. The odds that he’s a librarian must be at least 90 percent.” Sounds good, but it’s totally wrong.

  The trouble with this logic is that it neglects to consider that there are far more salesmen than male librarians. In fact, in the United States, salesmen outnumber male librarians 100 to 1. Before you even considered the fact that Donald Jones is “retiring,” therefore, you should have assigned only a 1 percent chance that Jones is a librarian. That is the base rate.

  Now, consider the characteristic “retiring.” Suppose half of all male librarians are retiring, whereas only 5 percent of salesmen are. That works out to 10 retiring salesmen for every retiring librarian—making the odds that Jones is a librarian closer to 10 percent than to 90 percent. Ignoring the base rate can lead you wildly astray.
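
  The same arithmetic, written out as an explicit application of Bayes’ rule (the 100-to-1 ratio and the 50 percent and 5 percent figures come from the text; exact fractions avoid rounding):

```python
from fractions import Fraction

p_librarian = Fraction(1, 101)    # base rate: 1 male librarian per 100 salesmen
p_salesman = Fraction(100, 101)
p_retiring_given_librarian = Fraction(1, 2)    # half of male librarians
p_retiring_given_salesman = Fraction(1, 20)    # 5 percent of salesmen

# Bayes' rule: P(librarian | retiring)
posterior = (p_retiring_given_librarian * p_librarian) / (
    p_retiring_given_librarian * p_librarian
    + p_retiring_given_salesman * p_salesman
)
print(posterior, float(posterior))   # 1/11, about 0.09 -- near 10%, not 90%
```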

  What can you do about it? Analyze your thinking about decision problems carefully to identify any hidden or unacknowledged assumptions you may have made. Use these suggestions as guides:

  •Don’t ignore relevant data; make a point of considering base rates explicitly in your assessment.

  •Don’t mix up one type of probability statement with another. (Don’t mix up the probability that a librarian will be retiring with the probability that a retiring person is a librarian.)

  Slanting Probabilities and Estimates: The Prudence Trap

  You are a researcher on a team designing a medical program to respond to a potential cancer-causing agent. Having reviewed the empirical data and relevant literature, you think that the probability that the potential carcinogen actually leads to cancer is on the order of 1 in 100, but you don’t know for sure. What probability should you give?

  Many people in this situation might think it prudent to slant the probability from 1 in 100 to, say, 1 in 20, just to be “safe.” But if several such judgments are to be made and if they are all similarly slanted and then cascaded together, all in the spirit of prudence, the result may be a hopelessly distorted understanding of the seriousness of the problem. The recommended response will most likely be far more costly or drastic than is warranted.
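
  A sketch of how such slanting compounds. The scenario is our own construction: assume the final risk figure is the product of four independent component probabilities, each honestly about 1 in 100 but each reported as 1 in 20 to be safe.

```python
honest = 0.01      # each analyst's true best estimate: 1 in 100
prudent = 0.05     # the 'safe' figure each reports instead: 1 in 20
n_judgments = 4    # assumed number of cascaded judgments

honest_joint = honest ** n_judgments
prudent_joint = prudent ** n_judgments

print(f"Honest joint estimate:  {honest_joint:.1e}")    # about 1.0e-08
print(f"Slanted joint estimate: {prudent_joint:.1e}")   # about 6.3e-06
print(f"Overstatement: {prudent_joint / honest_joint:.0f}x")  # 625x
```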

  As this example shows, even one of our best decision-making impulses—caution—can lead us into error. Consider the methodology of “worst-case analysis,” which was once popular in the design of weapons systems and is still used in certain engineering and regulatory settings. Using this approach, weapons were designed to operate under the worst possible circumstances, even though the odds of those circumstances actually coming to pass were infinitesimal. Worst-case analysis added huge costs with no practical benefit, proving that too much prudence can lead to inappropriate decisions.

  In business, the cascading nature of the prudence trap can be disastrous. A number of years ago, for example, one of the Big Three U.S. automakers was deciding how many of a new-model car to produce in anticipation of its busiest sales season. The market planning department, responsible for the decision, asked other departments to supply forecasts of key variables such as anticipated sales, dealer inventories, competitor actions, and costs. Knowing the purpose of the estimates, each department slanted its forecast to favor building more cars—“just to be safe.” But the market planners took the numbers at face value and then made their own “just-to-be-safe” adjustments. Not surprisingly, the number of cars produced far exceeded demand, and the company took six months to sell off the surplus, resorting in the end to deep price cuts.

  What can you do about it? For sound decision making, honesty is the best policy.

  •State your probabilities and give your estimates honestly. In communicating to others, state that your figures are not adjusted for prudence or for any other reason.

  •Document the information and reasoning used in arriving at your estimates, so others can understand them better.

  •Emphasize to anyone supplying you with information the need for honest input.

  •Vary each of the estimates over a range to assess its impact on the final decision, as in the sketch below. Think twice about the more sensitive estimates.
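
  Here is a minimal sensitivity sketch for that last suggestion. The one-line profit model and every number in it are hypothetical, chosen only to show the mechanics of sweeping each estimate across its range:

```python
def profit(units_sold, unit_margin, fixed_cost):
    return units_sold * unit_margin - fixed_cost

base = {"units_sold": 50_000, "unit_margin": 12.0, "fixed_cost": 400_000}
ranges = {
    "units_sold": (35_000, 65_000),
    "unit_margin": (10.0, 14.0),
    "fixed_cost": (350_000, 450_000),
}

print(f"Base-case profit: {profit(**base):,.0f}")
for name, (low, high) in ranges.items():
    # Vary one estimate at a time, holding the others at their base values.
    outcomes = [profit(**{**base, name: value}) for value in (low, high)]
    print(f"{name:>11}: {min(outcomes):>9,.0f} to {max(outcomes):>9,.0f}")
# Whichever estimate produces the widest swing deserves the hardest look.
```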

  Seeing Patterns Where None Exist: The Outguessing Randomness Trap

  At the gaming table, the dice seem to be running hot. The last four rolls produced four sevens in a row. Is this the time to bet heavily on seven? Or, perhaps, after four straight sevens, does it make sense to bet heavily against seven?

  Your lucky cousin selects a number for you to bet in your state lottery. Does this increase your chances of winning?

  The answer to these questions—and many like them—is a resounding “No!”

  Despite our innate desire to see patterns, random phenomena remain just that—random. Dice and lotteries have neither memory nor conscience—every roll, every number choice is a new and different event, uninfluenced by all previous events. If a run of sevens affected the next throw of the dice in a predictable way, casinos would go broke.

  What can you do about it? To avoid distortions in your thinking, you must curb your natural tendency to see patterns in random events. Be disciplined in your assessments of probability.

  •Don’t try to outguess purely random phenomena. It can’t be done.

  •If you think you see patterns, check out your theory in a setting where the consequences aren’t too significant. If you think you have a system to beat the gaming tables or the stock market by capitalizing on past results, try it out with fake money. Use your system on a long record of past data, wagering your hypothetical stake. (A computer buff familiar with simulation techniques would be a big help; a small sketch of such a test follows.) The exercise will save you lots of real money!
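
  Here is one way such a fake-money test might look, applied to the dice example that opened this section. The sketch is our own illustration (the million-roll sample size is arbitrary); it asks whether a run of four sevens tells you anything about the fifth roll:

```python
import random

random.seed(7)
N_ROLLS = 1_000_000   # arbitrary sample size for the fake-money test

rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(N_ROLLS)]
is_seven = [r == 7 for r in rolls]

streak_bets = streak_wins = 0
for i in range(4, N_ROLLS):
    if all(is_seven[i - 4:i]):        # the system triggers: four sevens in a row
        streak_bets += 1
        streak_wins += is_seven[i]    # bet that the streak continues

print(f"Win rate after a hot streak: {streak_wins / streak_bets:.3f}")
print(f"Unconditional chance of a seven: {6 / 36:.3f}")
# Both come out near 0.167 -- the streak carries no information.
```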

  Going Mystical about Coincidences: The Surprised-by-Surprises Trap

  John Riley is a legend. On two separate occasions he has won a one-in-a-million lottery. The chance of that happening is so rare—1 in 1 trillion—that some people attribute it to divine intervention. Others conclude that perhaps the lottery is rigged. What should we think about such events? What do they say about logic and the laws of probability?

  Just how unlikely is it that someone who has won a one-in-a-million lottery will win it a second time? Well, let’s suppose that 1,000 people have won such a lottery and that each of them tries 100 times to repeat the “miracle.” That adds up to 100,000 tries at a one-in-a-million event, which works out to about a 1-in-10 chance that someone, somewhere, will repeat. Not only is it not a miracle, it’s not even a rare event.
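
  The same arithmetic, spelled out (the 1,000 winners and 100 attempts apiece are the assumptions stated above; the exact at-least-once probability comes out near the 1-in-10 figure):

```python
p_win = 1 / 1_000_000       # one-in-a-million lottery
attempts = 1_000 * 100      # 1,000 past winners, 100 tries each

# Chance that at least one of those 100,000 attempts produces a repeat winner:
p_someone_repeats = 1 - (1 - p_win) ** attempts
print(f"Chance someone repeats: {p_someone_repeats:.3f}")   # about 0.095, roughly 1 in 10

# Chance that one particular, named person wins twice:
print(f"Chance for a named individual: {p_win ** 2:.0e}")   # 1e-12, 1 in a trillion
```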

  As with the outguessing randomness trap, the surprised-by-surprises trap results from a failure or an unwillingness to give reality its sometimes surprising due. Many people think themselves truly anointed or gifted because they’ve won a succession of bets (or made a series of very successful investments). But we should not be impressed by these seemingly dramatic occurrences. Just by chance, someone will be lucky. The chance that it will be you in particular may be minuscule, but the chance that it will be someone in some context may not be all that small. Some wealthy people out there may not be winners because of their business acumen but because of sheer luck. But take heart: some unfortunates may not be losers because of their stupidity or ineptness—they may just be the unlucky ones.

 
