SuperFreakonomics


by Steven D. Levitt & Stephen J. Dubner


  Here’s the answer, based on the likelihood of a patient dying within twelve months:*

  Shortness of breath is by far the most common high-risk condition. (It is usually notated as “SOB,” so if someday you see that abbreviation on your chart, don’t think the doctor hates you.) To many patients, SOB might seem less scary than something like chest pains. But here’s what the data say:

  So a patient with chest pains is no more likely than the average ER patient to die within a year, whereas shortness of breath more than doubles the death risk. Similarly, roughly 1 in 10 patients who show up with a clot, a fever, or an infection will be dead within a year; but if a patient is dizzy, is numb, or has a psychiatric condition, the risk of dying is only one-third as high.

  With all this in mind, let’s get back to the question at hand: given all these data, how do we measure the efficacy of each doctor?

  The most obvious course would be to simply look at the raw data for differences in patient outcomes across doctors. Indeed, this method would show radical differences among doctors. If these results were trustworthy, there would be few factors in your life as important as the identity of the doctor who happens to draw your case when you show up at the ER.

  But for the same reasons you shouldn’t put much faith in doctor report cards, a comparison like this is highly deceptive. Two doctors in the same ER are likely to treat very different pools of patients. The average patient at noon, for instance, is about ten years older than one who comes in the middle of the night. Even two doctors working the same shift might see very different patients, based on their skills and interests. It is the triage nurse’s job to match patients and doctors as well as possible. One doc may therefore get all the psychiatric cases on a shift, or all the elderly patients. Because an old person with shortness of breath is much more likely to die than a thirty-year-old with the same condition, we have to be careful not to penalize the doctor who happens to be good with old people.

  What you’d really like to do is run a randomized, controlled trial so that when patients arrive they are randomly assigned to a doctor, even if that doctor is overwhelmed with other patients or not well equipped to handle a particular ailment.

  But we are dealing with one set of real, live human beings who are trying to keep another set of real, live human beings from dying, so this kind of experiment isn’t going to happen, and for good reason.

  Since we can’t do a true randomization, and since simply looking at patient outcomes in the raw data would be misleading, what’s the best way to measure doctor skill?

  Thanks to the nature of the emergency room, there is another sort of de facto, accidental randomization that can lead us to the truth. The key is that patients generally have no idea which doctors will be working when they arrive at the ER. Therefore, the patients who show up between 2:00 and 3:00 P.M. on one Thursday in October are, on average, likely to be similar to the patients who show up the following Thursday, or the Thursday after that. But the doctors working on those three Thursdays will probably be different. So if the patients who came on the first Thursday have worse outcomes than the patients who came on the second or third Thursday, one likely explanation is that the doctors on that shift weren’t as good. (In this ER, there were usually two or three doctors per shift.)

  There could be other explanations, of course, like bad luck or bad weather or an E. coli outbreak. But if you look at a particular doctor’s record across hundreds of shifts and see that the patients on those shifts have worse outcomes than is typical, you have a pretty strong indication that the doctor is at the root of the problem.

  One last note on methodology: while we exploit information about which doctors are working on a shift, we don’t factor in which doctor actually treats a particular patient. Why? Because we know that the triage nurse’s job is to match patients with doctors, which makes the selection far from random. It might seem counterintuitive—wasteful, even—to ignore the specific doctor-patient match in our analysis. But in scenarios where selection is a problem, the only way to get a true answer is, paradoxically, to throw away what at first seems to be valuable information.
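
  To make the shift-level logic concrete, here is a minimal sketch in Python of how such an analysis might be organized. The tables, column names, and toy numbers below are illustrative assumptions, not drawn from the actual WHC data set, and a real analysis would span hundreds of shifts per doctor and adjust for patient mix.

```python
# A minimal sketch of the shift-level comparison described above.
# The data are hypothetical; the real analysis covers hundreds of shifts
# per doctor and adjusts for patient characteristics.
import pandas as pd

# Each row is one ER visit; died_within_12mo is 1 if the patient died
# within twelve months of the visit, 0 otherwise.
visits = pd.DataFrame({
    "shift_id":         [1, 1, 1, 2, 2, 3, 3, 3],
    "died_within_12mo": [0, 1, 0, 0, 0, 1, 0, 0],
})

# Which doctors were on duty for each shift (two or three per shift here).
staffing = pd.DataFrame({
    "shift_id":  [1, 1, 2, 2, 3, 3],
    "doctor_id": ["A", "B", "B", "C", "A", "C"],
})

# Step 1: the twelve-month death rate of everyone who happened to arrive
# during each shift.
shift_outcomes = (
    visits.groupby("shift_id")["died_within_12mo"]
          .mean()
          .rename("shift_death_rate")
          .reset_index()
)

# Step 2: credit (or blame) every doctor on duty for the whole shift's
# outcome, deliberately ignoring who actually treated whom -- that match
# runs through the triage nurse and is far from random.
doctor_scores = (
    staffing.merge(shift_outcomes, on="shift_id")
            .groupby("doctor_id")["shift_death_rate"]
            .mean()
            .sort_values()
)
print(doctor_scores)
```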

  So, applying this approach to Craig Feied’s massively informative data set, what can we learn about doctor skill?

  Or, put another way: if you land in an emergency room with a serious condition, how much does your survival depend on the particular doctor you draw?

  The short answer is…not all that much. Most of what looks like doctor skill in the raw data is in fact the luck of the draw, the result of some doctors getting more patients with less-threatening ailments.

  This isn’t to say there’s no difference between the best and worst doctors in the ER. (And no, we’re not going to name them.) In a given year, an excellent ER doctor’s patients will have a twelve-month death rate that is nearly 10 percent lower than the average. This may not sound like much, but in a busy ER with tens of thousands of patients, an excellent doctor might save six or seven lives a year relative to the worst doctor.
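
  As a rough back-of-the-envelope check on that “six or seven lives” figure, suppose a doctor personally sees about 1,750 ER patients a year, the average twelve-month death rate is around 2 percent, and the best doctor sits roughly 10 percent below average while the worst sits roughly 10 percent above. Those inputs are illustrative assumptions, not numbers from the book, but they show how a modest relative difference adds up:

```python
# Back-of-envelope check on the "six or seven lives" figure.
# All three inputs are illustrative assumptions, not figures from the book.
patients_per_doctor_per_year = 1_750   # assumed annual caseload per doctor
avg_death_rate = 0.02                  # assumed average 12-month death rate
best_vs_worst_gap = 0.20               # ~10% below average vs. ~10% above

lives_saved = patients_per_doctor_per_year * avg_death_rate * best_vs_worst_gap
print(f"Best vs. worst doctor, lives saved per year: {lives_saved:.0f}")  # ~7
```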

  Interestingly, health outcomes are largely uncorrelated with spending. This means the best doctors don’t spend any more money—for tests, hospital admittance, and so on—than the lesser doctors. This is worth pondering in an era when higher health-care spending is widely thought to produce better health-care outcomes. In the United States, the health-care sector accounts for more than 16 percent of GDP, up from 5 percent in 1960, and is projected to reach 20 percent by 2015.

  So what are the characteristics of the best doctors?

  For the most part, our findings aren’t very surprising. An excellent doctor is disproportionately likely to have attended a top-ranked medical school and served a residency at a prestigious hospital. More experience is also valuable: an extra ten years on the job yields the same benefit as having served a residency at a top hospital.

  And oh yes: you also want your ER doctor to be a woman. It may have been bad for America’s schoolchildren when so many smart women passed up teaching jobs to go to medical school, but it’s good to know that, in our analysis at least, such women are slightly better than their male counterparts at keeping people alive.

  One factor that doesn’t seem to matter is whether a doctor is highly rated by his or her colleagues. We asked Feied and the other head physicians at WHC to name the best docs in the ER. The ones they chose turned out to be no better than average at lowering death rates. They were, however, good at spending less money per patient.

  So the particular doctor you draw in the ER does matter—but, in the broader scheme of things, not nearly as much as other factors: your ailment, your gender (women are much less likely than men to die within a year of visiting the ER), or your income level (poor patients are much more likely to die than rich ones).

  The best news is that most people who are rushed to the ER and think they are going to die are in little danger of dying at all, at least not any time soon.

  In fact, they might have been better off if they simply stayed at home. Consider the evidence from a series of widespread doctor strikes in Los Angeles, Israel, and Colombia. It turns out that the death rate dropped significantly in those places, anywhere from 18 percent to 50 percent, when the doctors stopped working!

  This effect might be partially explained by patients’ putting off elective surgery during the strike. That’s what Craig Feied first thought when he read the literature. But he had a chance to observe a similar phenomenon firsthand when a lot of Washington doctors left town at the same time for a medical convention. The result: an across-the-board drop in mortality.

  “When there are too many physician-patient interactions, the amplitude gets turned up on everything,” he says. “More people with nonfatal problems are taking more medications and having more procedures, many of which are not really helpful and a few of which are harmful, while the people with really fatal illnesses are rarely cured and ultimately die anyway.”

  So it may be that going to the hospital slightly increases your odds of surviving if you’ve got a serious problem but increases your odds of dying if you don’t. Such are the vagaries of life.

  Meanwhile, there are some ways to extend your life span that have nothing to do with going to the hospital. You could, for instance, win a Nobel Prize. An analysis covering fifty years of the Nobels in chemistry and physics found that the winners lived longer than those who were merely nominated. (So much for the Hollywood wisdom of “It’s an honor just to be nominated.”) Nor was the winners’ longevity a function of the Nobel Prize money. “Status seems to work a kind of health-giving magic,” says Andrew Oswald, one of the study’s authors. “Walking across that platform in Stockholm apparently adds about two years to a scientist’s life span.”

  You could also get elected to the Baseball Hall of Fame. A similar analysis shows that men who are voted into the Hall outlive those who are narrowly omitted.

  But what about those of us who aren’t exceptional at science or sport? Well, you could purchase an annuity, a contract that pays off a set amount of income each year but only as long as you stay alive. People who buy annuities, it turns out, live longer than people who don’t, and not because the people who buy annuities are healthier to start with. The evidence suggests that an annuity’s steady payout provides a little extra incentive to keep chugging along.

  Religion also seems to help. A study of more than 2,800 elderly Christians and Jews found that they were more likely to die in the thirty days after their respective major holidays than in the thirty days before. (One piece of evidence suggesting a causal link: Jews had no aversion to dying in the thirty days before a Christian holiday, nor did Christians disproportionately outlast the Jewish holidays.) In a similar vein, longtime friends and rivals Thomas Jefferson and John Adams each valiantly struggled to forestall death until they’d reached an important landmark. They expired within fifteen hours of each other on July 4, 1826, the fiftieth anniversary of the signing of the Declaration of Independence.

  Holding off death by even a single day can sometimes be worth millions of dollars. Consider the estate tax, which is imposed on the taxable estate of a person upon his or her death. In the United States, the rate in recent years was 45 percent, with an exemption for the first $2 million. In 2009, however, the exemption jumped to $3.5 million—which meant that the heirs of a rich, dying parent had about 1.5 million reasons to console themselves if said parent died on the first day of 2009 rather than the last day of 2008. With this incentive, it’s not hard to imagine such heirs giving their parent the best medical care money could buy, at least through the end of the year. Indeed, two Australian scholars found that when their nation abolished its inheritance tax in 1979, a disproportionately high number of people died in the week after the abolition as compared with the week before.

  For a time, it looked as if the U.S. estate tax would be temporarily abolished for one year, in 2010. (This was the product of a bipartisan hissy fit in Washington, which, as of this writing, appears to have been resolved.) If the tax had been suspended, a parent worth $100 million who died in 2010 could have passed along all $100 million to his or her heirs. But, with a scheduled resumption of the tax in 2011, such heirs would have surrendered more than $40 million if their parent had the temerity to die even one day too late. Perhaps the bickering politicians decided to smooth out the tax law when they realized how many assisted suicides they might have been responsible for during the waning weeks of 2010.
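
  A quick sanity check on that “more than $40 million” figure, using the 45 percent rate and $3.5 million exemption cited above (the rules actually scheduled to return in 2011 differed somewhat, so this is only an approximation):

```python
# Rough check of the "more than $40 million" figure, using the 45 percent
# rate and $3.5 million exemption cited above; the rules scheduled for 2011
# were somewhat different, so treat this as an approximation.
estate = 100_000_000
exemption = 3_500_000
rate = 0.45

tax_owed = rate * (estate - exemption)
print(f"${tax_owed:,.0f}")  # -> $43,425,000
```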

  Most people want to fend off death no matter the cost. More than $40 billion is spent worldwide each year on cancer drugs. In the United States, they constitute the second-largest category of pharmaceutical sales, after heart drugs, and are growing twice as fast as the rest of the market. The bulk of this spending goes to chemotherapy, which is used in a variety of ways and has proven effective on some cancers, including leukemia, lymphoma, Hodgkin’s disease, and testicular cancer, especially if these cancers are detected early.

  But in most other cases, chemotherapy is remarkably ineffective. An exhaustive analysis of cancer treatment in the United States and Australia showed that the five-year survival rate for all patients was about 63 percent but that chemotherapy contributed barely 2 percent to this result. There is a long list of cancers for which chemotherapy had zero discernible effect, including multiple myeloma, soft-tissue sarcoma, melanoma of the skin, and cancers of the pancreas, uterus, prostate, bladder, and kidney.

  Consider lung cancer, by far the most prevalent fatal cancer, killing more than 150,000 people a year in the United States. A typical chemotherapy regimen for non-small-cell lung cancer costs more than $40,000 but helps extend a patient’s life by an average of just two months. Thomas J. Smith, a highly regarded oncology researcher and clinician at Virginia Commonwealth University, examined a promising new chemotherapy treatment for metastasized breast cancer and found that each additional year of healthy life gained from it costs $360,000—if such a gain could actually be had. Unfortunately, it couldn’t: the new treatment typically extended a patient’s life by less than two months.
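
  The implied price of a year of life under that lung-cancer regimen follows directly from the two numbers above: roughly $40,000 buys about two extra months, so a full year would run on the order of $240,000.

```python
# Cost per additional year of life implied by the lung-cancer figures above:
# roughly $40,000 for an average gain of about two months.
regimen_cost = 40_000    # dollars, "more than $40,000"
months_gained = 2        # average life extension cited above

cost_per_life_year = regimen_cost / (months_gained / 12)
print(f"${cost_per_life_year:,.0f} per additional year of life")  # -> $240,000
```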

  Costs like these put a tremendous strain on the entire health-care system. Smith points out that cancer patients make up 20 percent of Medicare cases but consume 40 percent of the Medicare drug budget.

  Some oncologists argue that the benefits of chemotherapy aren’t necessarily captured in the mortality data, and that while chemotherapy may not help nine out of ten patients, it may do wonders for the tenth. Still, considering its expense, its frequent lack of efficacy, and its toxicity—nearly 30 percent of the lung-cancer patients on one protocol stopped treatment rather than live with its brutal side effects—why is chemotherapy so widely administered?

  The profit motive is certainly a factor. Doctors are, after all, human beings who respond to incentives. Oncologists are among the highest-paid doctors, their salaries increasing faster than any other specialists’, and they typically derive more than half of their income from selling and administering chemotherapy drugs. Chemotherapy can also help oncologists inflate their survival-rate data. It may not seem all that valuable to give a late-stage victim of lung cancer an extra two months to live, but perhaps the patient was only expected to live four months anyway. On paper, this will look like an impressive feat: the doctor extended the patient’s remaining life by 50 percent.

  Tom Smith doesn’t discount either of these reasons, but he provides two more.

  It is tempting, he says, for oncologists to overstate—or perhaps over-believe in—the efficacy of chemotherapy. “If your slogan is ‘We’re winning the war on cancer,’ that gets you press and charitable donations and money from Congress,” he says. “If your slogan is ‘We’re still getting our butts kicked by cancer but not as bad as we used to,’ that’s a different sell. The reality is that for most people with solid tumors—brain, breast, prostate, lung—we aren’t getting our butts kicked as badly, but we haven’t made much progress.”

  There’s also the fact that oncologists are, once again, human beings who have to tell other human beings they are dying and that, sadly, there isn’t much to be done about it. “Doctors like me find it incredibly hard to tell people the very bad news,” Smith says, “and how ineffective our medicines sometimes are.”

  If this task is so hard for doctors, surely it must also be hard for the politicians and insurance executives who subsidize the widespread use of chemotherapy. Despite the mountain of negative evidence, chemotherapy seems to afford cancer patients their last, best hope to nurse what Smith calls “the deep and abiding desire not to be dead.” Still, it is easy to envision a point in the future, perhaps fifty years from now, when we collectively look back at the early twenty-first century’s cutting-edge cancer treatments and say: We were giving our patients what?

  The age-adjusted mortality rate for cancer is essentially unchanged over the past half-century, at about 200 deaths per 100,000 people. This is despite President Nixon’s declaration of a “war on cancer” more than thirty years ago, which led to a dramatic increase in funding and public awareness.

  Believe it or not, this flat mortality rate actually hides some good news. Over the same period, age-adjusted mortality from cardiovascular disease has plummeted, from nearly 600 people per 100,000 to well beneath 300. What does this mean?

  Many people who in previous generations would have died from heart disease are now living long enough to die from cancer instead. Indeed, nearly 90 percent of newly diagnosed lung-cancer victims are fifty-five or older; the median age is seventy-one.

  The flat cancer death rate obscures another hopeful trend. For people twenty and younger, mortality has fallen by more than 50 percent, while people aged twenty to forty have seen a decline of 20 percent. These gains are real and heartening—all the more so because the incidence of cancer among those age groups has been increasing. (The reasons for this increase aren’t yet clear, but among the suspects are diet, behaviors, and environmental factors.)

  With cancer killing fewer people under forty, fighting two wars must surely be driving the death toll higher for young people, no?

  From 2002 to 2008, the United States was fighting bloody wars in Afghanistan and Iraq; among active military personnel, there were an average of 1,643 fatalities per year. But over the same stretch of time in the early 1980s, with the United States fighting no major wars, there were more than 2,100 military deaths per year. How can this possibly be?

  For one, the military used to be much larger: 2.1 million on active duty in 1988 versus 1.4 million in 2008. But even the rate of death in 2008 was lower than in certain peacetime years. Some of this improvement is likely due to better medical care. But a surprising fact is that the accidental death rate for soldiers in the early 1980s was higher than the death rate by hostile fire for every year the United States has been fighting in Afghanistan and Iraq. It seems that practicing to fight a war can be just about as dangerous as really fighting one.
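
  Putting those raw counts on a per-capita footing with the figures above (note that the force sizes are single-year snapshots while the death counts are period averages, so the rates are only rough):

```python
# Deaths per 100,000 active-duty personnel, from the figures above.
# Force sizes are single-year snapshots (1988 and 2008) while the death
# counts are period averages, so these rates are only approximate.
deaths_early_1980s, force_1988 = 2_100, 2_100_000
deaths_2002_2008_avg, force_2008 = 1_643, 1_400_000

rate_1980s = deaths_early_1980s / force_1988 * 100_000
rate_wartime = deaths_2002_2008_avg / force_2008 * 100_000
print(f"early 1980s (peacetime): ~{rate_1980s:.0f} per 100,000")   # ~100
print(f"2002-2008 (wartime):     ~{rate_wartime:.0f} per 100,000") # ~117
```

  The two rates land in the same ballpark, which is the point: a large peacetime force training for war lost people at roughly the same rate as a smaller force actually fighting one.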

 
