The Undoing Project


by Michael Lewis


  The article was called “Judgment Under Uncertainty: Heuristics and Biases.” It was in equal parts familiar and strange—what the hell was a “heuristic”? Redelmeier was seventeen years old, and some of the jargon was beyond him. But the article described three ways in which people made judgments when they didn’t know the answer for sure. The names the authors had given these—representativeness, availability, anchoring—were at once weird and seductive. They made the phenomenon they described feel like secret knowledge. And yet what they were saying struck Redelmeier as the simple truth—mainly because he was fooled by the questions they put to the reader. He, too, guessed that the guy they named “Dick” and described so blandly was equally likely to be a lawyer or an engineer, even though he came from a pool that was mostly lawyers. He, too, made a different prediction when he was given worthless evidence than when he was given no evidence at all. He, too, thought that there were more words in a typical passage of English prose that started with K than had K in the third position, because the words that began with K were easier to recall. He, too, made predictions about people from mere descriptions of them with a degree of confidence that was totally unjustified—even uncertain Don Redelmeier fell prey to overconfidence! And when asked quickly to guess the product of 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8, he saw how he, too, would think it less than the product of 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1.
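
  Both orderings multiply out to the same number, 40,320; what differs is the anchor a hurried estimator extrapolates from. A minimal sketch in Python (my illustration, not anything from the article) makes the asymmetry of the early partial products visible:

```python
from itertools import accumulate
from operator import mul

ascending = [1, 2, 3, 4, 5, 6, 7, 8]
descending = ascending[::-1]

# The running products are the natural anchors for a quick estimate.
print(list(accumulate(ascending, mul)))
# [1, 2, 6, 24, 120, 720, 5040, 40320]
print(list(accumulate(descending, mul)))
# [8, 56, 336, 1680, 6720, 20160, 40320, 40320]
```

  Start from 1 × 2 × 3 and you anchor on tiny partial products and adjust upward too little; start from 8 × 7 × 6 and you anchor high. In Kahneman and Tversky’s study, the median estimate for the ascending sequence was 512 and for the descending sequence 2,250, both wildly short of the true 40,320.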

  What struck Redelmeier wasn’t the idea that people made mistakes. Of course people made mistakes! What was so compelling was that the mistakes were predictable and systematic. They seemed ingrained in human nature. Reading the article in Science reminded Redelmeier of all the times he had made what seemed in retrospect to be an obvious mistake on a math problem—because it was so much like the other mistakes he and others had made. One passage in particular stuck with him—it was in the section on this thing they called “availability.” It talked about the role of the imagination in human error. “The risk involved in an adventurous expedition, for example, is evaluated by imagining contingencies with which the expedition is not equipped to cope,” the authors wrote. “If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, the risk involved in an undertaking may be grossly underestimated if some possible dangers are either difficult to conceive of, or simply do not come to mind.”

  This wasn’t just about how many words in the English language started with the letter K. This was about life and death. “That article was more thrilling than a movie to me,” said Redelmeier. “And I love movies.”

  Redelmeier had never heard of the authors—Daniel Kahneman and Amos Tversky—though at the bottom of the page it said that they were members of the Department of Psychology at Hebrew University in Jerusalem. To him it was more important that his older brothers had never heard of them, either. Aha, finally. I know something more than my brothers! he thought. Kahneman and Tversky offered what felt like a private glimpse of the act of thinking. Reading their article was like getting a peek behind the magician’s curtain.

  Redelmeier didn’t have much trouble figuring out what he wanted to do with his life. As a kid he’d fallen in love with the doctors on television—Leonard McCoy on Star Trek and, especially, Hawkeye Pierce on M*A*S*H. “I sort of wanted to be heroic,” he said. “I would never cut it in sports. I would never cut it in politics. I would never make it in the movies. Medicine was a path. A way to have a truly heroic life.” He felt the pull so strongly that he applied to medical school at the age of nineteen, during his second year of college. Just after his twentieth birthday he was training, at the University of Toronto, to become a doctor.

  And that’s where the problems started: The professors didn’t have much in common with Leonard McCoy or Hawkeye Pierce. A lot of them were self-important and even a bit pompous. Something about them, and what they were saying, led Redelmeier to seditious thoughts. “Early on in medical school there are a whole bunch of professors who are saying things that are wrong,” he recalled. “I don’t dare say anything about it.” They repeated common superstitions as if they were eternal truths. (“Bad things come in threes.”) Specialists from different fields of medicine faced with the same disease offered contradictory diagnoses. His professor of urology told students that blood in the urine suggested a high chance of kidney cancer, while his professor of nephrology said that blood in the urine indicated a high chance of glomerulonephritis—kidney inflammation. “Both had exaggerated confidence based on their expert experience,” said Redelmeier, and both mainly saw only what they had been trained to see.

  The problem was not what they knew, or didn’t know. It was their need for certainty or, at least, the appearance of certainty. Standing beside the slide projector, many of them did not so much teach as preach. “There was a generalized mood of arrogance,” said Redelmeier. “ ‘What do you mean you didn’t give steroids!!????’” To Redelmeier the very idea that there was a great deal of uncertainty in medicine went largely unacknowledged by its authorities.

  There was a reason for this: To acknowledge uncertainty was to admit the possibility of error. The entire profession had arranged itself as if to confirm the wisdom of its decisions. Whenever a patient recovered, for instance, the doctor typically attributed the recovery to the treatment he had prescribed, without any solid evidence that the treatment was responsible. Just because the patient is better after I treated him doesn’t mean he got better because I treated him, Redelmeier thought. “So many diseases are self-limiting,” he said. “They will cure themselves. People who are in distress seek care. When they seek care, physicians feel the need to do something. You put leeches on; the condition improves. And that can propel a lifetime of leeches. A lifetime of overprescribing antibiotics. A lifetime of giving tonsillectomies to people with ear infections. You try it and they get better the next day and it is so compelling. You go to see a psychiatrist and your depression improves—you are convinced of the efficacy of psychiatry.”

  Redelmeier noticed other problems, too. For example, his medical school professors took at face value data that should have been inspected more closely. An old man would come into the hospital suffering from pneumonia. They’d check his heart rate and find it to be a reassuringly normal seventy-five beats per minute . . . and just move on. But the reason pneumonia killed so many old people was its power to spread infection. An immune system responding as it should generated fever, coughs, chills, sputum—and a faster than normal heartbeat. A body fighting an infection required blood to be pumped through it at a faster than normal rate. “The heart rate of an old man with pneumonia is not supposed to be normal!” said Redelmeier. “It’s supposed to be ripping along!” An old man with pneumonia whose heart rate appears normal is an old man whose heart may well have a serious problem. But the normal reading on the heart rate monitor created a false sense in doctors’ minds that all was well. And it was precisely when all seemed well that medical experts “failed to check themselves.”
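
  Redelmeier’s point can be phrased as a simple consistency check: in context, a “normal” reading can itself be the anomaly. The sketch below is purely illustrative; the function and its thresholds are hypothetical, taken neither from the book nor from any clinical guideline:

```python
def check_heart_rate(age: int, diagnosis: str, heart_rate_bpm: int) -> str:
    """Flag a 'reassuringly normal' vital sign that context says should be high."""
    fighting_infection = diagnosis == "pneumonia"
    if fighting_infection and age >= 65 and heart_rate_bpm <= 90:
        # A body fighting infection should be pumping blood faster than
        # normal, so a normal rate hints at a heart that cannot speed up.
        return "warning: heart rate not elevated; consider a cardiac problem"
    return "no flag"

# The case Redelmeier describes: an old man with pneumonia at 75 bpm.
print(check_heart_rate(age=80, diagnosis="pneumonia", heart_rate_bpm=75))
```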

  As it happens, a movement was taking shape right then and there in Toronto that came to be called “evidence-based medicine.” The core idea of evidence-based medicine was to test the intuition of medical experts—to check the thinking of doctors against hard data. When subjected to scientific investigation, some of what passed for medical wisdom turned out to be shockingly wrong-headed. When Redelmeier entered medical school in 1980, for instance, the conventional wisdom held that if a heart attack victim suffered from some subsequent arrhythmia, you gave him drugs to suppress it. By the end of Redelmeier’s medical training, seven years later, researchers had shown that heart attack patients whose arrhythmia was suppressed died more often than the ones whose condition went untreated. No one explained why doctors, for years, had opted for a treatment that systematically killed patients—though proponents of evidence-based medicine were beginning to look to the work of Kahneman and Tversky for possible explanations. But it was clear that the intuitive judgments of doctors could be gravely flawed: The evidence of the medical trials now could not be ignored. And Redelmeier was alive to the evidence. “I became very aware of the buried analysis—that a lot of the probabilities were being made up by expert opinion,” said Redelmeier. “I saw error in the way people think that was being transmitted to patients. And people had no recognition of the mistakes that they were making. I had a little unhappiness, a little dissatisfaction, a sense that all was not right in the state of Denmark.”

  Toward the end of their article in Science, Daniel Kahneman and Amos Tversky had pointed out that, while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error. As they put it, “their intuitive judgments are liable to similar fallacies in more intricate and less transparent problems.” That, the young Redelmeier realized, was a “fantastic rationale why brilliant physicians were not immune to these fallibilities.” He thought back to the errors he had made while trying to solve math problems. “The same problem solving exists in medicine,” he said. “In math you always check your work. In medicine, no. And if we are fallible in algebra, where the answers are clear, how much more fallible must we be in a world where the answers are much less clear?” Error wasn’t necessarily shameful; it was merely human. “They provided a language and a logic for articulating some of the pitfalls people encounter when they think. Now these mistakes could be communicated. It was the recognition of human error. Not its denial. Not its demonization. Just the understanding that they are part of human nature.”

  But Redelmeier kept to himself any heretical thoughts he harbored as a young medical student. He had never felt the impulse to question authority or flout convention, and had no talent for either. “I was never shocked and disappointed before in my life,” he said. “I was always very obedient. Law-abiding. I vote in all elections. I show up at every university staff meeting. I’ve never had an altercation with the police.”

  In 1985, he was accepted as a medical resident at the Stanford University hospital. At Stanford he began, haltingly, to voice his professional skepticism. One night during his second year, he was manning the intensive care unit and was assigned to keep a young man alive long enough to harvest his organs. (The American euphemism—“harvesting”—sounded strange to his ears. In Canada they called it “organ retrieval.”) His brain-dead patient was a twenty-one-year-old who had wrapped his motorcycle around a tree.

  It was the first time Redelmeier had been confronted with the dying body of a person younger than himself, and it bothered him, in a way that the deaths of older people he had witnessed had not. “It was such a loss of so many life years,” he said. “It was such a preventable case. And the guy hadn’t been wearing a helmet.” Redelmeier was newly struck by the inability of human beings to judge risks, even when their misjudgment might kill them. When making judgments, people obviously could use help—say, by requiring all motorcyclists to wear helmets. Later Redelmeier said as much to one of his fellow students, an American. What is it with you freedom-loving Americans? he asked. Live free or die. I don’t get it. I say, “Regulate me gently. I’d rather live.” His fellow student replied, Not only do a lot of Americans not share your view; other physicians don’t share your view. Redelmeier’s fellow student told him about Stanford’s famous head of cardiac surgery, Norm Shumway, who had actively lobbied against the creation of a law that would require motorcyclists to wear helmets. “It dropped my jaw,” said Redelmeier. “How could a guy so smart be so stupid about that? We’re definitely capable of errors. And human fallibility should be paid attention to.”

  At the age of twenty-seven, as he finished his Stanford residency, Redelmeier was creating the beginnings of a worldview that internalized the article by the two Israeli psychologists that he had read as a teenager. Where this worldview would lead he did not know. He still thought it possible that, upon his return to Canada, he might just move back up to northern Labrador, where he had spent one summer during medical school delivering health care to a village of five hundred people. “I didn’t have great memory skills or great dexterity,” he said. “I was afraid I wouldn’t be a great doctor. And if I wasn’t going to be great, I might as well go to serve someplace that was underserved, where I was needed and wanted.” Redelmeier actually still believed that he might wind up practicing medicine in a conventional manner. But then he met Amos Tversky.

  * * *

  Redelmeier had long made a habit of anticipating his own mental errors and correcting for them. Alive to the fallibility of his memory, he carried a notepad wherever he went and wrote down thoughts and problems as they occurred to him. When awakened late at night by a phone call from the hospital, he always lied and told the fast-talking resident on the other end of the line that they had a bad connection, so that the resident would have to repeat everything he had just said. “You can’t tell a resident he is speaking too quickly. You blame yourself—and it facilitates not only his thinking but my own.” When a visitor turned up in Redelmeier’s office between rounds, he would set a kitchen timer to make sure he didn’t get lost in conversation and wind up late for his patients. “Redelmeier loses track of time when he is having fun,” said Redelmeier. In advance of any social situation, he went to unusual lengths to correct whatever he imagined might go wrong. When he gave a talk—still a massive challenge for him, with his stammer—he cased the lecture hall and simulated his entire performance.

  And so, in the spring of 1988, it felt perfectly normal to Redelmeier, two days before his first lunch with Amos Tversky, to walk through the Stanford Faculty Club dining room where they were scheduled to meet. On the day of the lunch, he moved his hospital tour of patients from 6:30 in the morning to 4:30, to reduce the risk that anyone’s medical problems would interfere with his meeting. He didn’t usually eat breakfast, but on this day he did, so that he wouldn’t be distracted by hunger during lunch. As was also his habit, he jotted down in advance little notes—potential topics of discussion—“for fear of blanking.” Not that he intended to say much. Hal Sox, Redelmeier’s superior at Stanford, who would be joining them, had told Redelmeier, “Don’t talk. Don’t say anything. Don’t interrupt. Just sit and listen.” Meeting with Amos Tversky, Hal Sox said, was “like brainstorming with Albert Einstein. He is one for the ages—there won’t ever be anyone else like him.”

  Hal Sox happened to have coauthored the first article Amos ever wrote about medicine. Their paper had sprung from a question Amos had put to Sox: How did a tendency people exhibited when faced with financial gambles play itself out in the minds of doctors and patients? Specifically, given a choice between a sure gain and a bet with the same expected value (say, $100 for sure or a 50-50 shot at winning $200), Amos had explained to Hal Sox, people tended to take the sure thing. A bird in the hand. But, given the choice between a sure loss of $100 and a 50-50 shot at losing $200, they took the risk. With Amos’s help, Sox and two other medical researchers designed experiments to show how differently both doctors and patients made choices when those choices were framed in terms of losses rather than gains.
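
  The asymmetry Amos described is easy to state numerically: in each pair below the expected values are identical, and only the framing flips from gain to loss. This is my own minimal sketch of the arithmetic, not anything from the Sox paper:

```python
def expected_value(gamble):
    """gamble: a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in gamble)

gambles = {
    "sure gain":  [(1.0, 100)],
    "risky gain": [(0.5, 200), (0.5, 0)],
    "sure loss":  [(1.0, -100)],
    "risky loss": [(0.5, -200), (0.5, 0)],
}

for name, gamble in gambles.items():
    print(f"{name:10s} EV = {expected_value(gamble):+.0f}")
# Both gain options work out to +100 and both loss options to -100, yet
# people tend to take the sure thing for gains and the gamble for losses.
```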

  Lung cancer proved to be a handy example. Lung cancer doctors and patients in the early 1980s faced two unequally unpleasant options: surgery or radiation. Surgery was more likely to extend your life, but, unlike radiation, it came with the small risk of instant death. When you told people that they had a 90 percent chance of surviving surgery, 82 percent of patients opted for surgery. But when you told them that they had a 10 percent chance of dying from the surgery—which was of course just a different way of putting the same odds—only 54 percent chose the surgery. People facing a life-and-death decision responded not to the odds but to the way the odds were described to them. And not just patients; doctors did it, too. Working with Amos, Sox said, had altered his view of his own profession. “The cognitive aspects are not at all understood in medicine,” he said. Among other things, he could not help but wonder how many surgeons, consciously or unconsciously, had told some patient that he had a 90 percent chance of surviving a surgery, rather than a 10 percent chance of dying from it, simply because it was in his interest to perform the surgery.
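
  The two framings encode exactly the same distribution; survival and mortality probabilities are complements. A toy restatement of the numbers quoted above (the uptake figures come from the text; the code itself is only illustration):

```python
p_survive = 0.90
p_die = 1 - p_survive  # 0.10: the identical odds, reworded

# Share of patients choosing surgery under each frame, per the study above.
uptake = {
    f"{p_survive:.0%} chance of surviving surgery": 0.82,
    f"{p_die:.0%} chance of dying from surgery": 0.54,
}

for frame, share in uptake.items():
    print(f"{frame} -> {share:.0%} chose surgery")
print(f"swing from wording alone: {0.82 - 0.54:.0%}")
```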

  At that first lunch, Redelmeier mainly just watched as Sox and Amos talked. Still, he noticed some things. Amos’s pale blue eyes darted around, and he had a slight speech impediment. His English was fluent but spoken with a thick Israeli accent. “He was a little bit hypervigilant,” said Redelmeier. “He was bouncy. Energetic. He had none of the lassitude of some of the tenured faculty. He did 90 percent of the talking. Every word was worth listening to. I was surprised by how little medicine he knew, because he was already having a big effect on medical decision making.” Amos had all sorts of questions for the two doctors; most of them had to do with probing for illogic in medical behavior. After watching Hal Sox answer or try to answer Amos’s questions, Redelmeier realized that he was learning more about his superior in a single lunch than he’d gathered from the previous three years. “Amos knew exactly what questions to ask,” said Redelmeier. “There were no awkward silences.”

  At the end of the lunch, Amos invited Redelmeier to visit him in his office. It didn’t take long before Amos was bouncing ideas about the human mind off Redelmeier, as he had bounced them off Hal Sox, to listen for an echo in medicine. The Samuelson bet, for instance. The Samuelson bet was named for Paul Samuelson, the economist who had cooked it up. As Amos explained it, people offered a single bet in which they had a 50-50 chance either to win $150 or lose $100 usually declined it. But if you offered those same people the chance to make the same bet one hundred times over, most of them accepted it. Why did they make the expected value calculation—and respond to the odds being in their favor—when they were allowed to make the bet a hundred times, but not when they were offered a single bet? The answer was not entirely obvious. Yes, the more times you play a game with the odds in your favor, the less likely you are to lose; but the more times you play, the greater the total sum of money you stand to lose. Anyway, after Amos finished explaining the paradox, “He said, ‘Okay, Redelmeier, find me the medical analogy to that!’”
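
  The arithmetic behind the paradox is worth making explicit. The sketch below, my own and not Amos’s, computes the expected values and the actual chance of finishing behind after one hundred plays:

```python
from math import comb

win, loss, n = 150, -100, 100

ev_single = 0.5 * win + 0.5 * loss  # +25: the odds favor you on every play
ev_total = n * ev_single            # +2500 expected over 100 plays

# You finish behind overall when the number of wins w satisfies
# 150*w - 100*(n - w) < 0, i.e. w <= 39 out of 100 fair coin flips.
p_net_loss = sum(comb(n, w) for w in range(40)) / 2 ** n

print(f"EV of a single bet: {ev_single:+.0f}")         # +25
print(f"EV of {n} bets: {ev_total:+.0f}")              # +2500
print(f"P(net loss over {n} bets): {p_net_loss:.2%}")  # about 1.8%
print(f"worst case over {n} bets: {n * loss}")         # -10000
```

  A single play risks $100 with an even chance of losing it; a hundred plays put $10,000 at risk in the worst case but carry less than a 2 percent chance of any net loss at all. The repeated bet is the one the expected value calculation favors, and it is also, in probability terms, the safer one.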

 
