Such changes, by the beginning of the twentieth century, came to be associated with the notion of “scientific medicine,” wherein the “scientific” and “objective” treatment of patients—treatment based on data gathered in exams, from machines, and from laboratory tests, and, later on, from randomized controlled studies and evidence-based medicine—became the respected and prevalent mode for the practicing physician.*
“As the scientific mode of gathering information, reaching a diagnosis, and planning a treatment increasingly took center stage in the clinical world,” Jackson writes, “a humanistic mode of knowing patients, relating to them personally, and working with them as suffering persons often became less valued.”*
American medical schools, in fact, still seem dominated by reforms recommended nearly one hundred years ago by Abraham Flexner in the report of 1910 that bears his name. After reviewing ways doctors were educated in European universities, Flexner recommended that education in American medical schools start with a strong foundation in basic sciences, followed by the study of clinical medicine in a hospital environment that encouraged critical thinking, and, especially, research.
The appearance, rise, and hegemony of clinical academic departments between the two world wars—a development that began in the late nineteenth century, that came about in large part because of the Flexner Report, and that accelerated after World War II—resulted in the emergence of what we know as “clinical science,” wherein experimentation on patients or laboratory animals derived directly from the problems doctors encountered at the patient’s bedside.
“In effect,” David Weatherall comments, this “set the scene for the appearance of modern high-technology medical practice.”*
“Those who criticize modern methods of teaching doctors—” he continues, “—in particular, [a] Cartesian approach to the study of human biology and disease—believe that the organization of clinical departments along Flexner’s lines may have done much to concentrate their minds on diseases rather than on those who suffer from them.”
In Time to Heal, the second in his two-volume history of American medical education, Kenneth Ludmerer notes how the ascendancy of molecular biology in the 1970s and 1980s transformed biomedical research, especially in the fields of molecular biology and molecular disease, cell biology, immunobiology, and neuroscience, and created “a new theoretical underpinning of medical knowledge” wherein “the gaze of investigators focused on ever smaller particles, such as genes, proteins, viruses, antibodies, and membrane receptors.”* Although the results were often “gratifying in terms of medical discovery,” Ludmerer writes, “for the first time a conspicuous separation of functions occurred between clinical research on one hand and patient care and clinical education on the other.”
Many clinical departments established discrete faculty tracks: an academic track, pursued by “physician-scientists” (formerly called “clinical investigators”), and a “clinician-teacher,” or “clinical-scholar” track, pursued by those whose interests lay primarily in teaching and patient care. The result, according to Ludmerer: “the growing estrangement between medical science and medical practice.”
In addition, he submits, the premium put on speed and high productivity in academic hospitals (called “throughput”), a direct result of fiscal measures derived from managed-care policies, “carried negative implications” for the education of medical students.* “Habits of thoroughness, attentiveness to detail, questioning, listening, thinking, and caring were difficult if not impossible to instill when both patient care and teaching were conducted in an eight- or ten-minute office visit,” Ludmerer explains. Few medical students “were likely to conclude that these sacrosanct qualities were important when they failed to observe them in their teachers and role models.”
In addition, Ludmerer shows, medical education began “to revert to the corporate form it had occupied before the Flexnerian revolution” and “a money standard [started] to replace a university standard.”* The greatest difficulties medical schools experienced in the 1990s were in receiving payment for time, yet “time remained the most fundamental ingredient of the rich educational environment that academic health centers had always been expected to provide,” Ludmerer explains. “Without time, instructors could not properly teach, students and residents could not effectively learn, and investigators could not study problems.”
“Medicine is still a guild really,” Jerry says, “and it possesses many aspects of a guild. Mentoring and the passing down of traditions and expertise from one generation to the next are central because it’s the way you become socialized into the profession. Unfortunately, we don’t have enough strong role models these days to exemplify the best traditions in medicine, and this is due in large part, it seems to me, to the fact that so much of medical education is now dominated by basic science and new technologies.”
Like Rich and Phil, Jerry believes that many of the deficiencies in the practice of medicine today derive from the kind of education common to most medical schools—two or three years of basic sciences, followed by pathology, and a further few years in which students acquire clinical skills by working on the wards of large teaching hospitals.
“In the early stages of medical education, you learn the bricks and mortar of physiology, anatomy, biochemistry, and microbiology,” Jerry says, “and all that is very, very important. Given the paths most of our careers will take, however, we probably learn much more than we need to know—and to the neglect of other essential elements of our profession.”
“Is this the best way to train a doctor?” Weatherall asks in his study of medical education, and he focuses on the questions my friends ask: “Do two years spent in the company of cadavers provide the best introduction to a professional lifetime spent communicating with sick people and their families? Does a long course of pathology, with its emphasis on diseased organs, and exposure to the esoteric diseases that fill the wards of many of our teaching hospitals, prepare students for the very different spectrum of illness they will encounter in the real world?* And is the protracted study of the ‘harder’ basic biological sciences, to the detriment of topics like psychology and sociology, the best way to introduce a future doctor to human aspects of clinical practice?”
“Because most of us were trained since World War II in an era of antibiotics and other interventions,” Jerry explains, “most doctors have come to believe they can cure most diseases. Certainly we were taught to believe that about my own specialty, infectious disease, where we saw that the administration of an appropriately chosen antibiotic could remarkably reverse the course of a virulent illness.
“But it turns out that most serious illness now is the result of chronic and not acute disease, and therefore not amenable to technical interventions.”
Numerous studies validate Jerry’s statement. Most of the conditions that afflict us as we age—heart disease, cancer, diabetes, depression, arthritis, stroke, Alzheimer’s, and so on—are chronic conditions that require long-term, often lifetime management. But though we now live in an era of chronic disease, our system of medical education, as well as our system of health-care financing and delivery, continues to be based upon an acute disease model, and this fact, I begin to understand, is at the core of many of our health-care problems.
“The contemporary disarray in health affairs in the United States,” Daniel Fox, president of the Milbank Memorial Fund, a foundation that engages in analysis, study, and research on issues in health policy, argues, “is a result of history.* It is the cumulative result of inattention to chronic disabling illness.
“Contrary to what most people—even most experts—believe,” he continues, “deaths from chronic disease began to exceed deaths from acute infections [more than] three-quarters of a century ago. But U.S. policy, and therefore the institutions of the health sector, failed to respond adequately to that increasing burden.”
Fox explains: “Leaders in government, business, and health affairs remain committed to policy priorities that have long been obsolete. Many of our most vexing problems in health care—soaring hospital and medical costs; limited insurance coverage, or no coverage at all, for managing chronic conditions; and the scarcity of primary care relative to specialized medical services—are the result of this failure to confront unpleasant facts.”
According to a report issued by the Robert Wood Johnson Foundation (Chronic Care in America: A 21st Century Challenge), approximately 105 million Americans now suffer from chronic conditions, and by the year 2030, largely because of the aging of our population, this number will rise to nearly 150 million, 42 million of whom will be limited in their ability to go to school, to work, or to live independently.
The report also notes that the question of how to provide adequately for people with chronic conditions has significant implications not just for our general well-being, but for national healthcare expenditures. We currently spend $470 billion (calculated in 1990 dollars) on the direct costs of medical services for people with chronic conditions; by 2030 it is estimated we will be spending $798 billion.* (In 2001, the Institute of Medicine reported that 46 percent of the U.S. population had one or more chronic illnesses, and that 75 percent of direct medical expenses went for the care of patients with chronic illnesses.)
These figures, however, represent only medical services, whereas treatment and care for people with chronic conditions require a multitude of non-medical services, from installing bathtub railings and finding supportive housing, to helping with basic activities such as shopping, cleaning, and cooking. In addition, the report emphasizes, “the best ways to provide these services often are not by medical specialists or in medical institutions. In fact, the services that keep people independent for as long as possible are frequently those that emphasize assistance and caring, not curing.”
For the millions of people who require help with everyday activities, the assistance of family and friends is indispensable. In 1990, for example, 83 percent of persons under age sixty-five with chronic disabilities, and 73 percent of disabled persons over sixty-five, relied exclusively on these informal caregivers. Yet even as the number of people with chronic conditions is rising, the number of caregivers is falling. Whereas in 1970 there were twenty-one “potential caregivers” (defined as people age fifty to sixty-four) for each very elderly person (age eighty-five or older) and in 1990 eleven potential caregivers for each very elderly person, by 2030 there will be only six such potential caregivers for each very elderly person.
Moreover, most doctors work in community settings, not in hospitals or clinics, and in helping their patients manage chronic conditions they rely on the knowledge and experience they acquired in medical school. Yet their medical school experience has taken place almost entirely in hospitals, and has consisted largely of work with patients who suffer from acute conditions.
Add to this the fact that prevention—crucial to lessening the burden of chronic disease—is barely taught in medical schools and underfunded in both the private and public sector, and we see more clearly the magnitude of the problem, and the reasons for its tenacity and persistence.
“The resistance to prevention among decision makers in the private and public sectors has a long history,” Daniel Fox writes.* Since the end of the nineteenth century, “experts and advocates in health affairs promised that increasing the supply of facilities, professionals, and research would lead first to more successful and available technology for diagnosis and treatment and then to better health for Americans. Preventive services that could be delivered by injections or in tablet form fulfilled this promise. Prevention that required people to change their behavior was, however, outside the conditions of the promise. The promise of better health through procedures administered by professionals was central to policy to support medical education, to define health insurance benefits, and to establish priorities for research.”
Despite the alarming situation that exists with respect to chronic conditions, “we still seem to believe,” Jerry comments, “that the goal should be to cure, the way it was with infectious diseases—so that anything less becomes a failure.
“But AIDS, a disease of long duration and considerable cost—a chronic infectious disease—assaults this belief,” he explains, “because it is all around us, and will be with us for the rest of human history, and because it creates an uncomfortable feeling of inadequacy and failure in physicians.
“AIDS forces us, especially if we’re physicians, to confront our own vulnerability and inability to substantially alter the power and force of natural events. So that even though the goal of curing remains paramount, the parallel ethic of preventing disease, prolonging life, improving the quality of life, and alleviating suffering is more realistic, and more appropriate.
“Central to this ethic, of course, is compassion, and compassion for the sick doesn’t just mean feeling for them—it means providing competent medical care, and I’m talking about the most comprehensive and technically superior care that’s available.
“Personal compassion toward AIDS patients, especially by individual health-care workers, can only exist and be maintained within a framework of competence that exists within a system that provides both the necessary resources and an appropriate environment for such care. In this kind of setting, individual acts of compassion can flourish.” Jerry stops, shrugs. “So the question’s always with us, you see: What does the world do in a time of plague?”
Sometimes, it seems, we are so beguiled by our new technologies, and by all the hype from drug companies and the popular media about them, that we come to believe about our technologies what we used to believe about infectious diseases—that every human ailment has a singular, specific cause and is therefore susceptible to a single and specific remedy.* If a laboratory test shows we have disease A, condition B, or illness C, then a doctor—or computer—will, with such knowledge, automatically prescribe medication D or treatment E or procedure F, and all will be well.
But diseases themselves are biologically variable, and make their homes in each of us in variable ways. “It is the sheer interactive complexity and unpredictability of the behavior of living organisms that sets the limits of the medical sciences,” David Weatherall explains, “regardless of whether they involve highly sophisticated molecular technology or the simplest observational studies.”*
Although our new technologies, like heart scans and brain scans, can be marvelously helpful, they are only as good as the doctor who makes use of them. The more diagnostic testing mechanisms we have, and the more sophisticated they become, the more the judgment and diagnostic skills of the doctor are needed. As Sherwin Nuland notes, “it is not information that leads to the best medical care, but judgment.”*
Under most recent managed-care guidelines, however (and managed care, now the dominant form of medical care in the United States, is itself variable—a generic term for a variety of approaches to financing and delivering medical care), not only are doctors encouraged to limit the amount of time they spend with individual patients, but patients often see one doctor on one visit, another doctor on the next visit, and so on. More: in an increasingly mobile society, each time one of us changes jobs, or moves (according to the Census Bureau, nearly 45 million Americans move in any one year, and approximately 40 percent of these moves are to different counties and states), we usually change health plans and doctors.*
Thus, my friends lament, we frequently become strangers to our doctors and they become strangers to us, a condition profoundly inimical to the practice of medicine. The more this is so—the more we transfer from health plan to health plan (if we’re insured and have a health plan; at this writing, approximately 41 million Americans lack health insurance, and the United States remains virtually the only industrialized nation without universal health care), and the more we transfer individual responsibility and accountability to machines—the less often and the less well can doctors make informed judgments that inspire trust, and that are worthy of trust.* And without trust, the quality of medical care, and of our well-being, is dangerously compromised.
As my recovery proceeds, so that I do not even think of myself as being in recovery, I realize more and more just how lucky I have been, not only to have survived the blockages in my arteries, but to have had full access to friends whose thoughtfulness, knowledge, and judgment both saved me and sustained me. My talks with them, and the reading this stimulates, continue to persuade me of the validity of the old truism: that the secret of the care of the patient is in caring for the patient.
“Clinical research never produces definitive conclusions,” Richard Horton explains, “for the simple reason that it depends on human beings, maddeningly variable and contrary subjects.* Although medical science is reported as a series of discontinuous events—a new gene for this, a fresh cure for that—in truth it is nothing more than a continuous many-sided conversation whose progress is marked not by the discovery of a single correct answer but by the refinement of precision around a tendency, a trend, a probability.”
Horton articulates what Rich said to me within a day or two of my surgery: “Advances in diagnosis and treatment depend on averaging the results from many thousands of people who take part in clinical trials. The paradoxical difficulty is that these averages, although valid statistically, tell us very little about what is likely to take place in a single person. Reading the findings of medical research and combining their deceptively exact numbers with the complexities of a patient’s circumstances is more of an interpretative than an evidence-based process. The aim is to shave off sharp corners of uncertainty, not to search for a perfect sphere of indisputable truth that does not and never could exist.”