The essay was well received, helping the British public understand the purpose and potential impact of the European science.
• • •
THE LABORATORY IS A LONG WAY FROM THE DOCTOR’S OFFICE. This was true in the late nineteenth century, and it is true in the early twenty-first. It is the rare physician who can keep track of the torrent of new medical research—and it’s even less common to find one who manages to integrate that information into the care he delivers in his office, day in, day out.
Recent research bears this out. In 2000, two data scientists from the University of Missouri, E. Andrew Balas and Suzanne A. Boren, measured the lag from lab to doctor’s office by analyzing a handful of common clinical procedures such as mammograms, cholesterol screenings, and flu vaccinations. They tracked when each breakthrough was first made and then worked their way forward, through peer review, publication, into textbooks, and on to widespread implementation. The question was, how long did it take? Two years? Five years? A decade?
In fact, Balas and Boren found an average of a seventeen-year delay between the original research establishing a best practice and the time when patients would be routinely treated accordingly. A report issued by the august Institute of Medicine the following year was downright despondent. “Scientific knowledge about best care is not applied systematically or expeditiously to clinical practice. It now takes an average of 17 years for new knowledge generated by randomized controlled trials to be incorporated into practice, and even then application is highly uneven.” Seventeen years—it was an astounding finding, one that shook the perception that doctors were practicing scientists fully versed and up-to-date in their disciplines.
The challenge here, what’s known as translational medicine, is that clinical doctors must accept the findings of research scientists, a different breed altogether. This requires an idea in one discipline to flow into a related but distinct field, what the theorist Everett Rogers called the “diffusion of innovations.” Rogers’s book of that title (published, as it happens, in 1962, the same year as Kuhn’s The Structure of Scientific Revolutions) offers a tidy model for the process. As Rogers describes it, a new innovation doesn’t simply erupt into widespread use all at once. Instead, it gradually spreads through a professional class, as some adopt the new tool with enthusiasm, while more reluctant actors sit back and let their peers take the risks and work out the kinks. It’s a steady process, but it can be a slow one.
Rogers developed his model by studying how a new weed killer spread among a community of farmers in Story County, Iowa, in the 1950s. Developed by British scientists during World War II to boost production of food crops, the herbicide, known as 2,4-D, is a synthetic plant hormone that causes weeds to, in effect, grow themselves to death. After the war, the chemical made its way to the American Grain Belt, and to Iowa, where Rogers’s story starts. First, one farmer (known as Farmer A) heard about the spray through an agricultural researcher. A science enthusiast, Farmer A decided to give it a try, and that year he used it on his corn crop. It worked well for him, and his yield increased substantially—a result keenly observed by Farmer A’s neighbor, Farmer B. Within two years, Farmer B had begun using the weed killer, too. Word of the success of Farmer B, who had more stature among the local farmers than Farmer A, spread quickly. The next year, four neighboring farmers adopted the herbicide, and by the following year, five others had as well. After eight years, the last holdouts had signed up to use the spray, and all thirteen farmers in the area had adopted the herbicide.
Though one may not associate 1950s Iowa farmers with the vanguard of technology, their example perfectly illustrates not only how quickly an innovation spreads, but also the elements necessary for its diffusion. Rogers stipulates that four distinct components have to be in place. First is the new technology itself: the new herbicide. The technology is all well and good, Rogers argues, but it is inconsequential without the second component: communication and transmission of its virtues among a possible community of users. In the case of the herbicide, this was, first, the relationship between the scientist and Farmer A, and then, even more significantly in terms of adoption, the relationship between Farmers A and B, who had some influence in the community. Rogers’s third element is time, which in this case meant the annual adoption, season by season, by one farmer after another, as word spread and the weed killer’s effectiveness proved itself. Finally, Rogers notes that an innovation must diffuse through a coherent social system, a group aligned toward a common goal; in this case, that goal was a good corn crop, this season and the next.
The practicality of the herbicide meant that word spread through the network of Iowa farmers rather quickly, all things considered. Within eight years, the entire local community of thirteen farmers was on board with the new spray. Even the laggards (Rogers’s term for the latecomers) couldn’t deny the success of the herbicide after observing their neighbors’ bounty over the previous seven years. Indeed, 2,4-D would turn out to be one of the essential agents for the so-called Green Revolution, the massive upswing in farm productivity and food production worldwide made possible by the widespread use of chemical fertilizers and herbicides.
Taken together, Kuhn and Rogers offer a framework for the consecutive hoops that a radical idea such as the germ theory would have had to jump through on its way toward general recognition: first acceptance among scientists, then among practicing physicians, and finally by society at large.
Doctors in the late 1800s weren’t so different from those today. The germ theory may have gotten scientists talking, but it had few practical implications, aside from smallpox and anthrax vaccines. Until new discoveries were joined by new treatments, doctors could justifiably sit on the fence. As an 1893 discussion of the germ theory in The Westminster Review put it, “the evidence that it is so is powerful and has been accepted by many eminent men of science. The working medical profession as a body have I believe been slower of conviction.”
One snag for the germ theory was that, despite the widespread discussion of Koch’s and Pasteur’s research in esteemed journals, practitioners stubbornly preferred to trust their own clinical experience. Rogers called this homophily—the tendency we all have to seek out like-minded folk. (Its opposite, the exchange of ideas between people who differ, he called heterophily.) This created an additional burden for a novel idea such as the germ theory: Enough opinion leaders and change agents would have to accept the idea and promulgate it before the most determined opponents were convinced of its worth. Or one could simply wait for the old guard to pass and for a new, more accepting generation to come along.
As it happened, the timing was just about perfect for the latter: It had been fourteen years exactly since Lister began publishing on carbolic acid and antiseptic surgery, and twenty since Pasteur’s 1861 work debunking spontaneous generation. The time was right for the shift from the theoretical to the practical.
• • •
IN SOUTHSEA, CONAN DOYLE CONTINUED TO BUILD HIS MEAGER practice. He arranged for his younger brother, Innes, to join him. He had secured “as nicely furnished a little consulting room as a man need wish to have” and was working on the waiting room. To support himself and his brother, Conan Doyle took on the role of medical examiner for the local Gresham Life Assurance Society, performing autopsies upon request. And he wrote copious letters home to his mother—preferring to use the back side of patients’ notes, “as they serve the double purpose of saving paper and of letting you see that business is stirring.” He wrote an article on rheumatism for The Lancet, though it was never published. For an added bit of cash, he wrote the copy for an advertisement for the Gresham insurance agency, a dour bit of verse, his mind still clearly on germs and infection:
When pestilence comes from the pest-ridden South,
And no quarter of safety the searcher can find,
When one is afraid e’en to open one’s mouth,
For the germs of infection are borne on the wind.
When fruit it is cheap, and when coffins are dear;
Ah then, my dear friends, ’tis a comfort to know
That whatever betide, we have by our side,
A policy good for a thousand or so.
When the winter comes down with its escort of ills,
Lumbago and pleurisy, toothache and cold;
When the doctor can scarce with his potions and pills
Keep the life in the young, or death from the old;
When rheumatic winds from each cranny and chink
Seize hold of our joints in their fingers of snow;
Still whatever betide, we retain by our side
That policy good for a thousand or so.
Considering that a physician wrote it, the verse holds out little promise for medicine. But then, the sad truth was that England in the 1880s would have been a lonely place to aspire to be a man of medicine, regardless.
Consider, first, the state of the art, or what passed for it. Practically speaking, medicine in the 1880s didn’t work that well, and routine practices tended to have the opposite of their intended effect. (The one consolation was that most patients’ expectations tended to be rather modest.) Even royalty was vulnerable to the perils of disease, particularly infectious disease. In 1861, Prince Albert, the husband of Queen Victoria, died of typhoid, and a decade later the same disease brought his son the Prince of Wales (later King Edward VII) to the brink of death.
Humoral medicine was still common. The 1862 International Exhibition in London, the greatest display of forward-looking technology in human history up to that time, amazed visitors with stereoscope photography, ice-making machines, and Charles Babbage’s “difference engine.” But the display of medical equipment featured the latest in leech tubes, lancets, “scarificators,” and other bloodletting apparatuses. These instruments looked impressive and modern, in a grisly sort of way, but they were more connected with the Middle Ages, centuries prior, than with the revolutionary research under way in Europe.
And bloodletting was what the real doctors were offering. There was also a booming industry in flimflam, with healers peddling bogus remedies and ineffective cures. Hydrotherapy was in vogue, consisting of various baths at various temperatures for various conditions. With legitimate pharmacology scarce, hucksters made fortunes selling vegetable pills, Valentine’s Meat-Juice, and other “secret” medicines. Homeopathy, the sham practice of using that which causes disease to cure it, was also booming in the 1800s. Despite the quackery, Britain held off from regulating its medical profession until 1858. (France, by contrast, had regulated the practice of medicine since 1794, and Germany, which had long tightly regulated medical practitioners, took the opposite step, in 1869, of loosening the government’s hold on practice in order to increase access to care.)
The Lancet and the BMJ were both created, in equal parts, to take on the quacks and to undercut the elitism of the Royal College of Physicians. The Lancet came first, in 1823, when Thomas Wakley announced the creation of a journal that would hold medicine to high ideals and privilege the pursuit of scientific knowledge. Wakley imagined a new sort of communication channel, both to and from doctors. The Provincial Medical and Surgical Journal followed in 1840—it was renamed the British Medical Journal in 1857—and likewise endeavored to bring physicians the latest reports and research of medical science. Together with a tide of smaller publications, these journals brought science to an audience hungry for exactly that kind of communication. This new wave promised to wash away inherited practices from the medical profession and allow for an approach steeped in rigor and science.
It’s at this point, when new ideas were uprooting the old, that Conan Doyle entered the field—and happily for him, in a place with a focus on the future rather than the past. Edinburgh was a global center of medicine, and a hub of the new scientific medicine in particular. It was in Edinburgh that the discovery of chloroform, a powerful anesthetic, had helped initiate the era of modern surgery in 1847. Edinburgh was home to James Syme, the brilliant Scottish surgeon who had led the cause of medical reform in the 1850s that culminated in the Medical Act of 1858. In 1869, Joseph Lister, then just a few years into his work on carbolic acid, succeeded Syme as chair of clinical surgery. His work on antiseptic surgery was already stirring excitement (and some antipathy) among his peers, and he quickly brought Edinburgh’s wards around to his methods. As Conan Doyle described it later, the operations at Edinburgh were “conducted amid clouds of carbolic steam.”
Still, inside the lecture halls, there was conflict. The wizened professors scoffed at the germ theory, just as they had disparaged Virchow’s cellular pathology a few years earlier. Conan Doyle described these men as “the cold-water school,” which regarded “the whole germ theory as an enormous fad.” One miffed professor of surgery openly mocked the idea of germs. “Please shut the door, or the germs will be getting in,” he jeered at his students.
Conan Doyle quickly aligned himself with the new guard. His most celebrated teacher was Joseph Bell, a professor of surgery who practiced alongside Lister during his stint as chief of surgery in Edinburgh. Bell’s approach to germs was more pragmatic than Lister’s, abstaining from the philosophical debate but granting the validity of antiseptic methods. In an 1887 book on surgery, Bell made a thoroughly rational assessment: “These germs, they give us anxiety enough, and the practical question we have to settle is, How best to prevent them getting into a wound or harming it after they have got in? We need not greatly care to discuss what is called the germ theory, which some believe in, and some do not; but we may accept it as a valuable idea to keep always before our eyes—the prevention of all infection of a wound.”
Bell recruited Conan Doyle as an assistant, a post that gave him a firsthand look at Bell’s keen powers of observation. “His strong point was diagnosis, not only of disease, but of occupation and character,” Conan Doyle would recall. “I had to array his outpatients, make simple notes of their cases, and then show them in, one by one, to the large room in which Bell sat in state surrounded by his dressers and students. Then I had ample chance of studying his methods and of noticing that he often learned more of the patient by a few quick glances than I had done by my questions.” Dr. Bell and his talents would make a lifelong impression on Conan Doyle and turn out to be very useful indeed.
And then it was on to Southsea.
In many respects, Conan Doyle and Koch were on parallel paths: both had made their way from anonymous middle-class circumstances to a local university that, by good grace, was staffed by some of the leading scientists of the day. But after graduation, each had been dispatched to the hinterlands, left to make his own way. And each yearned to get back to a place where new ideas were being pursued.
“Let me once get my footing in a good hospital and my game is clear,” Conan Doyle wrote his mother during his medical training. “Observe cases minutely, improve my profession, write to the Lancet, supplement my income by literature, make friends and conciliate everyone I meet, wait ten years if need be, and then when my chance comes be prompt and decisive in stepping into an honorary surgeonship.” As life plans go, it was perhaps not the most ambitious. But Conan Doyle’s plan was practical, reasonable, and within reach. As it would happen, it would be both too ambitious in the short term and altogether irrelevant in the long term.
• • •
THOUGH THE FIELD OF MEDICINE WAS FAST SHIFTING TOWARD science, the public remained distrustful of the profession. This stemmed not just from ineffective practices but also from ones considered downright depraved. It started with human autopsies.
Cadavers were hard to come by in nineteenth-century England. Until the 1832 Anatomy Act, it was illegal to conduct an autopsy except on a criminal who had been explicitly sentenced to one, following execution. This provided a mere handful of legal corpses annually for several thousand medical students, physicians, and scientists. Given the law of supply and demand, the result, inevitably, was an active black market in bodies. Body snatching became a regular occurrence at English graveyards. After the burial of a loved one, family members routinely took turns at the grave, standing as sentries for several days to keep the body at peace.
Many physicians were willing to ignore the provenance of the corpses they studied—at least until the infamous case of William Burke and William Hare. Sometime in the 1820s, these two Irishmen began supplying fresh corpses to Dr. Robert Knox, a professor at Edinburgh medical school, who needed them to instruct his students, though Knox chose not to ask where the bodies were coming from. Burke and Hare found their supply among the local prostitutes and lodgers in Edinburgh’s slums, murdering them for an eight- or ten-pound bounty. By the time they were caught, they had procured at least seventeen corpses for Dr. Knox, who was not prosecuted but fled the city in disgrace, his house burned to the ground. Hare turned king’s evidence, and Burke was hanged, but such was the disgust at the case that his body was publicly anatomized and flayed, the skin tanned and sold by the strip. Outrage at the case spurred passage of the Anatomy Act, which required anatomists to obtain a license but allowed them to use any criminal’s corpse and allowed individuals to donate a corpse in exchange for the cost of burial.
Even after the Anatomy Act, though, human cadavers remained scarce, and doctors began turning to animals. The use of animals for medical research dates at least as far back as Galen in AD 200. (He based his medical writings on necropsies of apes and dogs, not humans, a difference that caused centuries of misunderstanding about human anatomy.) But the use of animals took off in the 1840s, with the discovery of cells and the emergence of physiology. As the humoral theory faded, scientists were eager to learn about the function of individual organs, blood vessels and nerves, and other anatomical systems. By the 1870s, the combination of Listerism and anesthesia had made surgery a safer, more viable treatment, increasing the demand for animal stand-ins. And animals (in particular, dogs) were the most available subjects to study systematically.