
Attending


by Ronald Epstein


  The child was warm to the touch and had a fever of 39.5°C (about 103°F). Protocol in these situations called for a careful physical examination focused on finding an explanation for the fever and, if no explanation could be found, a lumbar puncture—a spinal tap. His eardrums were a bit red, likely from crying and not from infection, and he had a cough but no signs of pneumonia. The examination was otherwise unrevealing.

  Then I examined the child’s neck to assess whether it was stiff or supple. He did not like being taken from his mother’s arms, and as I flexed his neck gently, he cried and reached for her. I tried it again, with him in his mother’s lap, and it went a bit better. His neck didn’t feel stiff—or at least not stiff enough to be called “stiff.” But it wasn’t really supple either. I flexed his neck again, noting that he didn’t flex his hips; if he had, that would have been a classic sign of meningitis. I was a bit relieved, but not completely so; still, I needed to convince myself that the child would be okay. I drew some blood and sent it off to the lab. A few minutes later the results came back; the white count was normal, and that tipped the balance. I could now justify not obtaining spinal fluid. I sent the child home and told his parents to bring him back to the clinic in the morning, just a few hours hence.

  I went back to the on-call room, but I didn’t sleep well, a telling sign that something still felt unresolved. In retrospect I knew that I had convinced myself that things were more “all right” than they might have been. That this all happened at four in the morning changed everything. To do a lumbar puncture, I’d have needed to wake my supervising resident. He was somewhat disagreeable. He’d say that he didn’t “mind” being woken at night, but I and others knew otherwise. So I didn’t wake him. Had it been at four in the afternoon rather than four in the morning, I would likely have done the lumbar puncture. Fortunately, within a few hours the child had improved. But not doing the lumbar puncture simply because I didn’t want to face the possibility of humiliation was still the wrong choice. This time I was lucky, and so was the child; another child with the exact same symptoms and lab tests could have had meningitis. A near miss. Next time I might not be so lucky.

  To put all of this into perspective, I’m not alone. A recent article by oncologist Ranjana Srivastava chronicled how she didn’t speak up about the safety of a planned surgery for a lung tumor;19 her gut told her to contact the surgeon, but she didn’t because she assumed that the surgeon—who was well-known and well-respected—knew what he was doing. She was right and the surgeon was wrong; the patient died. Three other physicians later said that they too had had doubts, but were afraid that they might have been wrong and so didn’t speak up. The surgeon was horrified that others perceived him to be so intimidating.

  It is not just in the emergency room, in cancer surgery, or in terminal illness that complex decisions arise. In primary care, I often have to assess which patients with uncontrolled type 2 diabetes should start insulin injections rather than continue with their oral medications. On the surface this might seem a simple problem—there are clinical practice guidelines about this issue. However, the real wisdom of clinical practice is to know when to break the rules. In the past couple of years I’ve broken the diabetes rule several times—with an obese man who was successfully losing weight and whose diabetes would likely improve if he continued, with a woman with progressing metastatic breast cancer, with a homeless young man with a history of suicide attempts who had panic attacks at the thought of a needle, with a frail eighty-year-old woman who wouldn’t live long enough to suffer the long-term consequences of diabetes, with a woman who lived alone in a remote location making it harder to get help if she had an insulin reaction. Each of these situations required phronesis based on an appreciation of the particulars; amassing more facts and calculating probabilities would amount to what internist Faith Fitzgerald called “the punctilious quantification of the amorphous,”20 trying to divide a raw egg by slicing it with a sharp knife. No matter how sharp the knife, you end up with a runny mess. The knife is a good tool, just not the right one.

  Nonetheless, physicians are now judged by that punctilious quantification. If I don’t prescribe insulin and the patient’s blood sugar becomes slightly out of control, my care will be considered inadequate even if tighter control might actually have made the patient’s health worse;21 the patient is now considered to have “uncontrolled diabetes,” the medical assistants in my office flag the chart for special attention, and insurers consider it a blemish that justifies denying financial bonuses based on “quality.”

  Glouberman and Zimmerman point out that addressing complex problems is like raising a child. The goal is a healthy outcome, not necessarily a predictable or identical one. If you have more than one child, each turns out differently, even if you’ve provided the same love, nurturance, guidance, caring, and patience. If you’re fortunate, each child will live a fulfilling life, but in different and unpredictable ways. Parenting books and advice are helpful but only up to a point. Then you have to muddle through. As soon as you feel a sense of mastery, though, a new phase arises for which you are again unprepared. There is no clear path and sometimes no map at all.

  DECISION SCIENCE

  In medicine, sometime in the 1960s, decision making was elevated from the realm of intuition and experience to what is now called decision science. By the 1970s, experts proposed that medical decisions be made based on clinical evidence.22 Clinical questions would be translated into quantitative terms, such as “Consider a hundred patients with atrial fibrillation [an abnormal heart rhythm that can lead to blood clots and strokes]. Assuming that their risk of stroke is about 5 percent per year, how many patients would you have to treat with blood thinners to reduce the risk of stroke to 2 percent?” No one wants to have a stroke, but blood thinners have their downsides too—not only hemorrhage, but also the annoyance of getting blood tests every week or two to adjust the dose of the medication. It’s a balance. To address the balance, researchers developed the concept of “utilities” to quantify the degree to which a life with a small stroke would be worth living, and the degree to which taking a blood thinner, going for frequent blood draws, and the possibility of a hemorrhage would diminish quality of life. This approach was often displayed as decision trees with multiple branches, each branch ultimately leading to the “right” decision in a specific circumstance, depending on the individual’s level of risk.
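  To put rough numbers on that question (a back-of-the-envelope illustration using the standard “number needed to treat” arithmetic, not a figure quoted in the text): if blood thinners lower the annual risk of stroke from about 5 percent to about 2 percent, the absolute risk reduction is 3 percentage points, so

\[ \text{NNT} = \frac{1}{0.05 - 0.02} \approx 33, \]

meaning that on the order of thirty-three patients would need to be treated for a year to prevent one stroke. Whether that trade-off is worth the hemorrhage risk and the frequent blood draws is exactly the kind of question that “utilities” were meant to answer.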

  Proponents of decision algorithms were puzzled when the algorithms were infrequently adopted. They learned that clinicians and patients just didn’t think that way, nor did they want to. The idea that every patient would have the same “utilities” seemed particularly presumptuous; individual patients might have different values and preferences than the group as a whole,23 and modeling the sometimes-unstable preferences of individuals faced with complex decisions quickly becomes a statistical nightmare. These models tended to assume that humans are rational decision makers, a proposition that is both attractive and ludicrous—attractive because it improves the likelihood that decisions reflect our underlying values, and ludicrous because so many nonrational factors influence the choices that we make. Psychologists Daniel Kahneman and Amos Tversky, for example, described how the same decision framed as avoiding a loss leads to different choices than when it is framed as an opportunity for gain.24 Their experiments showed that humans are not entirely rational even when they seem to be acting logically, and that the biases and heuristics that drive decisions are often below the level of awareness. If a close friend had a complication from a particular medication, wouldn’t you feel a bit more reluctant to take it—even if you knew that complication was a one-in-a-million event? And if you were that friend’s physician, wouldn’t you also be more reluctant to prescribe it even knowing that the chance of the same event happening twice is vanishingly small? Recognizing that patients and physicians are just as vulnerable to unconscious biases as anyone else, decision scientists embraced models of “bounded rationality.”25 But if we’re not entirely rational and biases are below the level of awareness, how do we monitor them? Here’s where metacognition comes in—literally thinking about your own thinking (and feelings too), regulating your inner operating system, or, in emergency-room physician Patrick Croskerry’s words, regulating your own “mindware.”26

  As Croskerry points out, good decisions require a kind of education that most clinicians don’t yet receive—an education about how their own minds work so that they can more readily assess their biases and engage strategies to correct them. This “education” is more than reading a book about neuropsychology. He adds that de-biasing involves uncoupling one’s observations from expectations, interpretations, and premature judgments. Good decisions also require mental stability, affect regulation, and clarity of purpose—in a word, mindfulness.

  Research now demonstrates how attention training and compassion training can increase awareness of our own biases and thus reduce their influence on decision making.27 Through training, implicit mental processes—including biases—are more accessible to awareness. You hesitate for a moment before reacting; you become more discerning and you uncouple expectations (what you think you’ll see) from observations (what you do see). You reconsider.

  INTUITION

  In medical school we were taught a protocol for diagnosing skin conditions. We’d describe the location of the lesions, their color, whether their margins were distinct or fuzzy, whether their surface was smooth or scaly, and whether they were raised or flat. Based on those features we were taught to propose a list of possible diagnoses, rank them in order of likelihood, and select the most likely. This seems simple enough, but it’s not the way that experienced dermatologists actually diagnose skin lesions. When experienced dermatologists try to follow the medical student protocol, they fall down; instead they rely on first impressions.28 Similarly, experienced doctors know when a patient’s story just doesn’t “add up” or something doesn’t quite “look right.”

  At first, decision scientists tried to debunk intuition. This is understandable. They were trying to correct the excesses of a prior generation of physicians who, in their arrogance, believed that the care they delivered was better than their colleagues’ and relied only on anecdotes to support their views. In their zeal, though, decision scientists were blind to the opposite problem, that when decision making is totally dominated by analytic thinking, doctors get into a different kind of trouble. The challenge is to cultivate an informed intuition that can guide but not completely dominate the decision-making process.

  Intuition is murky, visceral, impressionistic, and irrational, making it difficult to describe and study. However, intuition is vital for making sense of complex situations. The current generation of decision scientists takes intuition seriously but still has difficulty describing what it is and how it works. That is, in part, because intuition isn’t just one thing, and it goes by different names—gut feelings, fast thinking, fuzzy traces, Type 1 processing—each of which is slightly different in its emphasis on thoughts, emotions, visceral sensations, and memories.29 Some types of intuition may be employed when encountering familiar problems, such as the dermatologists’ pattern recognition; other types act as a guide in novel situations, helping the decision maker see similarity or analogy to prior situations; still others involve emotional or social intelligence—how to know and interact with others.

  In the last decades of the twentieth century, groundbreaking work by neurologist Antonio Damasio suggested that the ability to make wise choices depends on awareness of one’s own emotions and those of others—a view that had previously been considered radical by research psychologists and cognitive scientists. In his book Descartes’ Error, Damasio discusses the case of Phineas Gage, an unfortunate man who, in 1848, sustained a severe brain injury that damaged part of his prefrontal cortex, the part of the brain that confers awareness and regulates emotions. Mr. Gage survived the injury, and the rest of his brain was remarkably intact. His memory and abstract logic were good and he could carry on normal conversations. He was able to hold a job, at least for a while. Yet, according to his physician at the time, Gage’s personality changed; he was “manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires, at times pertinaciously obstinate, yet capricious and vacillating, devising many plans of future operations, which are no sooner arranged than they are abandoned in turn for others appearing more feasible.”30 With the loss of his prefrontal cortex, he had lost the ability to recognize and regulate his emotions, anticipate the consequences of his actions, or learn from his mistakes; as a result many of his personal decisions were disastrous.

  It makes sense that emotions and intuitions would be important in making life choices. Imagine if this weren’t so. Few people would choose a life partner by starting with a list of attributes, then going down a checklist with each potential candidate until the one with the most points wins. Most of us would consider the potential partner’s list of attributes while also listening with the heart and the gut—messages from the body that only later can we frame as part of the story of falling in love. This may be as true with complex medical decisions; clinical evidence can only take us so far, and the heart and the gut must also speak.31

  If emotions, social cognition, and intuition are essential to decision making, how can they be cultivated? The clinician who is willing to engage in the inner work that it requires has two main tasks: moving from fragmented mind to whole mind, and from individual mind to shared mind. Evidence suggests that the potential for whole mind and shared mind is innate. But physicians’ professional training undernourishes these capacities.

  SAYING NO

  A colleague, Stu Farber, died of acute myelogenous leukemia at age sixty-seven. Stu was a palliative care physician and was dealt a difficult and rare diagnosis, one with which only a quarter of patients survive five years and even fewer are cured. In an article written shortly before his death,32 Stu recounts that he was hospitalized with pneumonia. The chemotherapy had predictably suppressed his immune system. The infectious disease consultant explained that while Stu’s pneumonia was likely due to a virus and would likely resolve on its own, it might be a more lethal infection, Pneumocystis, the agent responsible for most of the early deaths from AIDS. To tell the difference between a virus and Pneumocystis would require an invasive lung biopsy under sedation, but given how ill Stu was, the risk of complications from the procedure was greater than usual, and at best it would take him a few days to recover.

  Stu knew that he was dying, but if this was Pneumocystis his life might be prolonged with treatment. His doctors implied that not to do the procedure would be unthinkable, but the pulmonologist still asked, “What do you want to do?” Stu was caught between two worlds—his familiar world as a clinician and the new, and strange, world of being a patient. Fortunately, Stu had the presence of mind to say that he was now comfortable and didn’t want to rock the boat; he didn’t want the biopsy and would take his chances. To his surprise, the physician then said, “That’s the same choice that I would make.”

  What is stunning about this story and many others like it is that physicians can make very different choices when it comes to their own care than what they might recommend to their patients.33 Physicians tend to recommend the most aggressive care for their patients, even when that care is likely to cause further discomfort and disability. Yet, when physicians themselves (reluctantly) join the “tribe of the sick,” they may want fewer aggressive life-prolonging measures, instead focusing to a greater degree on their comfort and dignity.

  How could this be? Even in his final weeks of life, Stu was astute, seeing that while his physicians couldn’t easily place themselves in his shoes, they might be able to move beyond protocol and cold logic and appreciate the patient’s perspective. But it took prompting by a particularly clear-sighted and knowledgeable patient. Here, Stu’s physician, like many others, breathed a sigh of relief when Stu chose to forgo aggressive treatment and to focus on comfort—it relieved the physician of having to make the decision. But few patients have the knowledge and the clarity to question physicians’ recommendations; most would benefit from physicians’ efforts to apply a beginner’s mind to decisions that initially seem self-evident.

  BEING A DOCTOR BEING A PATIENT

  As I was recovering from an attack of kidney stones, my primary care physician ordered an ultrasound of my kidneys to determine if there were any residual stones. I had had stones before, and a number of ultrasounds and CT scan reports were in my medical records. The ultrasonographer was friendly and chatty, and I enjoyed being able to make out a few anatomical structures on the screen as she scanned my left kidney (where the stone had been), then my right. She seemed to be taking a particularly long time on the right side and taking a lot of snapshots on the computer. Then she went quiet. I noticed that she was lingering quite a bit north of kidneys, ureters, and bladder; we were in liver territory. Perhaps she had found a gallstone, I thought.

  I asked what was going on. Generally, technicians are not supposed to reveal findings until they are confirmed by the attending radiologist, but she knew that I was a doctor and had also been looking at the screen. She was reticent at first, then said, “There’s something interesting I need to check out with the radiologist.” In medicalese, interesting is never good. The radiologist later explained that the ultrasonographer had not only looked at the urinary tract but had also noticed something going on in the liver. She had found an incidentaloma—a baseball-size something-or-other that apparently no one had seen before. I had no interest in having it be my incidentaloma; it was an unwelcome guest.

  Over the past twenty years I have become personally acquainted with several incidentalomas. In medical slang, an incidentaloma is a surprise, a mass or lesion unexpectedly identified during a routine examination or imaging procedure—X-ray, scan, or ultrasound—or during surgery. Originally referring to benign tumors of the adrenal glands that were completely harmless, the term is now applied to any mass that you’re not looking for. Incidentalomas make simple situations complex. While an incidentaloma is usually of no clinical significance, it isn’t always. Occasionally, it might be cancer.

 
