The Scientific Attitude

by Lee McIntyre


  The theory was impressive and the explanatory potential intriguing. Since disease was thought to be due to an imbalance in the humors—colds were due to phlegm, vomiting was due to bile, and so on—health could be maintained by keeping them in balance, for instance through the practice of bloodletting.8

  Though bloodletting was invented by Hippocrates, Galen brought it to a high art, and for more than a thousand years thereafter (right through the nineteenth century) it was seen as a therapeutic treatment. Having written four separate books on the pulse, Galen thought that bloodletting allowed the healer to capitalize on the example of nature, where the removal of excess fluid—such as during menstruation—prevented disease.9 As Porter puts it, “whatever the disorder—even blood loss—Galen judged bleeding proper.”10 As such, he often bled his patients to unconsciousness (which sometimes resulted in their death).

  This is not the only ancient medical practice that we would judge barbarous from today’s point of view; there were also skull drilling, the use of leeches, the swallowing of mercury, the application of animal dung, and more. What is shocking, however, is the extent to which such ignorance went unchallenged down through the centuries, with the result that until quite recently in the history of medicine, patients often had just as much to fear from their doctor as they did from whatever disease they were trying to cure.

  It is not just that the theories in circulation at the time were wrong—for, as we have seen, many scientific theories will turn out to be mistaken—but that most of these ideas were not even based on any sort of evidence or experiment in the first place. Medicine did not yet have the scientific attitude.

  The Dawn of Scientific Medicine

  The transition out of this nonempirical phase of medicine was remarkably slow. The Scholastic tradition lingered in medicine long after it had been abandoned by astronomy and physics, with the result that even two hundred years after the scientific revolution had begun in the seventeenth century, medical questions were customarily settled by theory and argument—to the extent that they were settled at all—rather than controlled experiment.11 Both the empirical and clinical practices of medicine remained quite backward until the middle of the nineteenth century. Even to the extent that a fledgling science of medicine began to emerge during the Renaissance, it had more of an effect on knowledge than on health.12 Despite the great breakthroughs in anatomy and physiology in early-modern times (e.g., Harvey’s seventeenth-century work on the circulation of blood), “[medicine’s] achievements proved more impressive on paper than in bedside practice.”13 In fact, even the one unambiguous improvement in medical care in the eighteenth century—the vaccine against smallpox—is seen by some not as the result of science so much as embracing popular folk wisdom.14 This “nonscientific” outlook persisted in medicine straight through the eighteenth century (which was known as the “age of quackery”); it was not until the middle of the nineteenth century that modern medicine truly began to arise.15

  In his delightful memoir, The Youngest Science, Lewis Thomas contrasts the kind of scientific medicine practiced today with

  the kind of medicine taught and practiced in the early part of the nineteenth century, when anything that happened to pop into the doctor’s mind was tried out for the treatment of illness. The medical literature of those years makes horrifying reading today: paper after learned paper recounts the benefits of bleeding, cupping, violent purging, the raising of blisters by vesicant ointments, the immersion of the body in either ice water or intolerably hot water, endless lists of botanical extracts cooked up and mixed together under the influence of nothing more than pure whim. … Most of the remedies in common use were more likely to do harm than good.16

  Bloodletting in particular remained popular, owing in part to its enthusiastic support by the prominent physician Benjamin Rush. A holdover from pre-medieval times, bloodletting was thought to have enormous health benefits and constituted one of the “extreme interventions” considered necessary for healing. In his important book Seeking the Cure, Dr. Ira Rutkow writes that

  citizens suffered needlessly as a result of Rush’s egotism and lack of scientific methodologies. In an age when no one understood what it meant to measure blood pressure and body temperature, and physicians were first determining the importance of heart and breathing rates, America’s doctors had no parameters to prevent them from harming patients.17

  Yet harm them they did.

  Doctors bled some patients sixteen ounces a day up to fourteen days in succession. … Present-day blood donors, by comparison, are allowed to give one pint (sixteen ounces) per session, with a minimum of two months between each donation. Nineteenth-century physicians bragged about the totality of their bleeding triumphs as if it were a career-defining statistic. … The regard for bloodletting was so deep-seated that even the frequent complications and outright failures to cure did not negate the sway of Rush’s work.18

  Fittingly, Rush himself died in 1813 as a result of treatment by bloodletting for typhus fever.19 Yet this was not the only horrific practice of the time.

  Medicine was performed literally in the dark. Electricity was newfangled and unpopular. Almost every act a doctor performed—invasive examinations, elaborate surgeries, complicated births—had to be done by sun or lamplight. Basics of modern medicine, such as the infectiousness of diseases, were still under heavy dispute. Causes of even common diseases were confusing to doctors. Benjamin Rush thought yellow fever came from bad coffee. Tetanus was widely thought to be a reflex irritation. Appendicitis was called peritonitis, and its victims were simply left to die. The role that doctors—and their unwashed hands and tools—played in the spread of disease was not understood. “The grim spectre of sepsis” was ever present. It was absolutely expected that wounds would eventually fester with pus, so much so that classifications of pus were developed. … Medicine was not standardized, so accidental poisoning was common. Even “professionally” made drugs were often bulky and nauseating. Bleeding the ill was still a widespread practice, and frighteningly large doses of purgatives were given by even the most conservative men. To treat a fever with a cold bath would have been “regarded as murder.” There was no anesthesia—neither general nor local. Alcohol was commonly used when it came to enduring painful treatments … and pure opium [was] sometimes available too. If you came to a doctor for a compound fracture, you had only a fifty percent chance of survival. Surgery on brains and lungs was attempted only in accident cases. Bleeding during operations was often outrageously profuse, but, as comfortingly described by one doctor, “not unusually fatal.”20

  It is important to contrast what was occurring in the United States (which was fairly late to the scientific party in medicine) with progress that was already underway in Europe. In the early nineteenth century, Paris in particular was a center for many advances in medical understanding and practice, owing perhaps to the revolutionary outlook that had seen hospitals move out of the hands of the church to become nationalized.21 In Paris, a more empirical outlook prevailed. Autopsies were used to corroborate bedside diagnosis. Doctors appreciated the benefits of palpation, percussion, and auscultation (listening), and René Laennec invented the stethoscope. A more naturalistic outlook in general brought medical students to learn at their patients’ bedsides in hospitals. Following these developments, the rise of scientific labs in Germany, and appreciation for the microscope by Rudolf Virchow and others, led to more advances in basic science.22 All of this activity attracted medical students from around the world—particularly the United States—to come to France and Germany for a more scientifically based medical education. Even so, the benefits of this for direct medical practice were slow to arise on either side of the Atlantic. As we have seen, Semmelweis was one of the earliest physicians to try to bring a more scientific attitude to patient care in Europe at roughly the same time the miracle of anesthesia was being demonstrated in Boston. Yet even these advances met with resistance.23

  The real breakthrough arrived in the 1860s with the discovery of the germ theory of disease. Pasteur’s early work was on fermentation and the vexed question of “spontaneous generation.” In his experiments, Pasteur sought to show that life could not arise out of mere matter. But why, then, would a flask of broth that had been left open to the air “go bad” and produce organisms?24

  Pasteur devised an elegant sequence of experiments. He passed air through a plug of gun-cotton inserted into a glass tube open to the atmosphere outside his laboratory. The gun-cotton was then dissolved and microscopic organisms identical to those present in fermenting liquids were found in the sediment. Evidently the air contained the relevant organisms.25

  In later experiments, Pasteur demonstrated that these organisms could be killed by heat. It took a few more years for Pasteur to complete further experiments involving differently shaped flasks placed in different locations, but by February 1878 he was ready to make the definitive case for the germ theory of infection before the French Academy of Medicine, which he followed with a paper arguing that microorganisms were responsible for disease.26 The science of bacteriology had been born.

  Lister’s work on antisepsis—which aimed at preventing these microorganisms from infecting the wounds caused during surgery—grew directly out of Pasteur’s success.27 Lister had been an early adopter of Pasteur’s ideas, which were still resisted and misunderstood throughout the 1870s.28 Indeed, when US president James Garfield was felled by an assassin’s bullet in 1881, he died more than two months later not, many felt, as a result of the bullet that was still lodged in his body, but of the probing of the wound site with dirty fingers and instruments by some of the most prominent physicians of the time. At trial, Garfield’s assassin even tried to defend himself by saying that the president had died not of being shot but of medical malpractice.29

  Robert Koch’s laboratory work in Germany during the 1880s took things to the next step. Skeptics had always asked about germs, “Where are the little beasts?”—not accepting the reality of something they could not see.30 Koch finally answered this with his microscopic work, in which he was able not only to establish the physical basis for the germ theory of disease but even to identify the microorganisms responsible for specific diseases.31 This led to the “golden years” of bacteriology (1879–1900), when “the micro-organisms responsible for major diseases were being discovered at the phenomenal rate of one a year.”32

  All of this success, though, may now compel us to ask a skeptical question. If this was such a golden age of discovery in bacteriology—one that provided such a stunning demonstration of the power of careful empirical work and experiment in medicine—why did it not have a more immediate effect on patient care? To be sure, there were some timely benefits (for instance, Pasteur’s work on rabies and anthrax), but one nonetheless feels a lack of proportion between the good science being done and its effect on treatment. If we here see the beginning of respect for empirical evidence in medical research, why was there such a lag before its fruits were realized in clinical science?

  As of the late nineteenth century the few medicines that were effective included mercury for syphilis and ringworm, digitalis to strengthen the heart, amyl nitrate to dilate the arteries in angina, quinine for malaria, colchicum for gout—and little else. … Blood-letting, sweating, purging, vomiting and other ways of expelling bad humours had a hold upon the popular imagination and reflected medical confidence in such matters. Blood-letting gradually lost favour, but it was hardly superseded by anything better.33

  Perhaps part of the reason was that during this period experiment and practice tended to be done by different people. A social bifurcation within the medical community enforced an unwritten rule whereby researchers didn’t practice and practitioners didn’t do research. As Bynum says in The Western Medical Tradition:

  “Science” had different meanings for different factions within the wider medical community. … “Clinical science” and “experimental medicine” sometimes had little to say to each other. … These two pursuits were practised increasingly by separate professional groups. … Those who produced knowledge were not necessarily those who used it.34

  Thus the scientific revolution that had occurred in other fields (physics, chemistry, astronomy) two hundred years earlier—along with consequent debates about methodology and the proper relationship between theory and practice—did not have much of an effect on medicine.35 Even after the “bacteriological revolution,” when the knowledge base of medicine began to improve, the lag in clinical practice was profound.

  Debates about the methodology of natural philosophy [science] touched medicine only obliquely. Medicine continued to rely on its own canonical texts, and had its own procedures and sites for pursuing knowledge: the bedside and the anatomy theatre. Most doctors swore by the tacit knowledge at their fingers’ ends.36

  For all its progress, medicine was not yet a science. Even if all of this hard-won new understanding was now available to those who cared to pursue it, a systemic problem remained: how to bridge the gap between knowledge and healing among those who were practicing medicine, and how to ensure that this new knowledge could be put into the hands of the students who were the profession’s future.37

  Medicine at this point faced an organizational problem concerning not just the creation but also the transmission of new knowledge. Instead of a well-oiled machine for getting scientific breakthroughs into the hands of those who could make best use of them, medicine at this time was a “contested field occupied by rivals.”38 Although some of the leading lights in medicine had already embraced the scientific attitude, the field as a whole still awaited its revolution.

  The Long Transition to Clinical Practice

  For whatever reason—ideological resistance, ignorance, poor education, lack of professional standards, distance between experimenters and practitioners—it was a long transition to the use of the science of medicine in clinical practice. Indeed, some of the horror stories of untested medical treatments and remedies from the nineteenth century carried over well into the early part of the twentieth century. In the United States, there continued to be poor physician education and a lack of professional standards by which practitioners could be held accountable for their sometimes-shoddy practices.

  At the dawn of the twentieth century, physicians still did not know enough to cure most diseases, even if they were now better at identifying them. With the acceptance of the germ theory of disease, physicians and surgeons were now perhaps not as likely to kill their patients through misguided interventions as they had been in earlier centuries, but there was still not much that the scientific breakthroughs of the 1860s and 1870s could do for direct patient care. Lewis Thomas writes that

  explanation was the real business of medicine. What the ill patient and his family wanted most was to know the name of the illness, and then, if possible, what had caused it, and finally, most important of all, how it was likely to turn out. … For all its facade as a learned profession, [medicine] was in real life a profoundly ignorant profession. … I can recall only three or four patients for whom the diagnosis resulted in the possibility of doing something to change the course of the illness. … For most of the infectious diseases on the wards of the Boston City Hospital in 1937, there was nothing to be done beyond bed rest and good nursing care.39

  James Gleick paints a similarly grim portrait of medical practice in this era:

  Twentieth-century medicine was struggling for the scientific footing that physics began to achieve in the seventeenth century. Its practitioners wielded the authority granted to healers throughout human history; they spoke a specialized language and wore the mantle of professional schools and societies; but their knowledge was a pastiche of folk wisdom and quasi-scientific fads. Few medical researchers understood the rudiments of controlled statistical experimentation. Authorities argued for or against particular therapies roughly the way theologians argued for or against their theories, by employing a combination of personal experience, abstract reason, and aesthetic judgment.40

  But change was coming, and a good deal of it was social. With advances in anesthesia and antisepsis, joined by the bacteriological discoveries of Pasteur and Koch, by the early twentieth century clinical medicine was ripe to emerge from its unenlightened past. As word of these discoveries spread, the diversity of treatments and practices in clinical medicine suddenly became an embarrassment. Of course, as Kuhn has taught us, one of the most powerful forces at work in any paradigm shift is that the holdouts die and take their old ideas with them, while new ideas are embraced by younger practitioners. Surely some of this occurred (Charles Meigs, one of the staunchest resisters of Semmelweis’s theory on childbed fever and anesthesia, died in 1869), but social forces were likely even more influential.

  Modern medicine is not just a science but also a social institution. And it is important to realize that even before it was truly scientific, social forces shaped how medicine was viewed and practiced, and had a hand in turning it into the science that it is today. Entire books have been written on the social history of medicine; here I will have the chance to tell only part of the story.

  In The Social Transformation of American Medicine, Paul Starr argues that the democratic principles of early America conflicted with the idea that medical knowledge was somehow privileged and that the practice of medicine should be left to an elite.41 In its earliest days, “all manner of people took up medicine in the colonies and appropriated the title of doctor.”42 As medical schools began to open, and those who had received more formal medical training sought to distance themselves from the “quacks” by forming medical societies and pursuing licensing, one might think that this would have been welcomed by a populace eager for better medical care. But this did not occur. Folk medicine and lay healing continued to be popular, as many viewed the professionalization movement in American medicine as an attempt to grab power and authority. As Starr writes,

  Popular resistance to professional medicine has sometimes been portrayed as hostility to science and modernity. But given what we now know about the objective ineffectiveness of early nineteenth-century therapeutics, popular skepticism was hardly unreasonable. Moreover, by the nineteenth century in America, popular belief reflected an extreme form of rationalism that demanded science be democratic.43

 
